Physical interaction

This topic is very close to HRI, but here we focus on the physical part of the interaction.

Grasping

Many real-world tasks require the robot to grasp an object and manipulate it. This is something we humans are very good at, but for robots it is still very complicated, for several reasons. First, today's robot hands are not as flexible or durable, and often not as strong, as human hands. Second, such a basic thing as determining where and how to grasp an object is non-trivial. In some applications, where only a limited set of objects is to be handled, one can use custom-made grippers and hard-coded grasping configurations, but in the general case of unknown objects the robot has to determine on its own how to grasp an object, taking into account what the objective or task is. For example, grasping a mug to move it or to help someone drink from it may require rather different grasps. Once a good grasp position has been identified, the robot has to plan how to achieve the grasp while respecting many constraints, such as kinematic limits and potential collisions with the environment.

Until recently, common strategies included sampling many candidate grasps and evaluating them according to some grasp quality metric, identifying parts of objects for which good grasp positions are already known, or simplifying the geometry of the object so that only a few plausible grasps need to be generated.
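To make the sampling-and-scoring idea concrete, here is a minimal sketch for a two-finger (parallel jaw) gripper: candidate grasps are pairs of surface points from the object's point cloud, and each pair is scored with a simple antipodal heuristic. The function name, gripper width and scoring rule are illustrative assumptions, not any particular published pipeline.

```python
import numpy as np

def sample_grasps(points, normals, max_width=0.08, n_samples=500, seed=0):
    """Sample candidate two-finger grasps as pairs of surface points and
    score them with a simple antipodal quality heuristic."""
    rng = np.random.default_rng(seed)
    candidates = []
    for _ in range(n_samples):
        i, j = rng.choice(len(points), size=2, replace=False)
        axis = points[j] - points[i]
        width = np.linalg.norm(axis)
        if width < 1e-6 or width > max_width:   # must fit inside the gripper opening
            continue
        axis = axis / width
        # Antipodal heuristic: the outward surface normals at the two contacts
        # should be roughly opposite to each other and aligned with the
        # closing direction of the gripper.
        score = 0.5 * (np.dot(normals[i], -axis) + np.dot(normals[j], axis))
        candidates.append((float(score), int(i), int(j)))
    return sorted(candidates, reverse=True)      # best-scoring grasps first

# Usage: given an (N, 3) point cloud and matching outward unit normals,
# grasps = sample_grasps(points, normals); the first entry is the best pair.
```

In a real system each top-ranked pair would still have to be turned into a full gripper pose and checked for reachability and collisions; the heuristic above only ranks contact-point pairs.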

Today, much of the work is based on machine learning techniques. One of the most talked-about experiments was performed by Google, where 14 robot arms were used in parallel to collect the data needed to learn how to grasp many different objects.

Large-scale data collection with an array of robots

The following video shows a more recent example:

Dex-Net 4.0


How to grasp, and the ability to grasp at all, depends heavily on what you grasp with, i.e. the gripper. In industry, the vast majority of robots are equipped with parallel jaw grippers, often customised for the object to be grasped.

Robohand G100 Parallel Gripper

More advanced hands do exist, but so far their use is largely confined to research:

Shadow Dexterous Hand

Getting a good trade-off between strength and the ability to grasp objects robustly and easily is important, and this is something the Pisa/IIT SoftHand is explicitly optimised for:

Adaptive Synergies for the Design and Control of the Pisa/IIT SoftHand
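The core idea of synergies can be illustrated with a toy example: a single actuator variable drives all finger joints through a fixed synergy vector, so the whole hand closes in a coordinated way from one control input. The joint layout and numbers below are made up for illustration, not the actual SoftHand parameters; the real hand additionally adapts passively to the object shape through its compliant mechanics.

```python
import numpy as np

# Toy synergy mapping: one actuator value sigma in [0, 1] drives all finger
# joints through a fixed synergy vector S (joint angles in radians at full
# closure). The numbers are purely illustrative.
S = np.array([0.6, 1.2, 0.9,    # thumb joints
              0.8, 1.4, 1.1,    # index finger joints
              0.8, 1.4, 1.1])   # middle finger joints, and so on

def hand_posture(sigma):
    """Joint angles commanded for a given closure level sigma of the single actuator."""
    return np.clip(sigma, 0.0, 1.0) * S

print(hand_posture(0.5))   # a half-closed, coordinated hand posture
```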


Why not make your own gripper?

DIY Soft Robotic Gripper


Some references


Physical Human-Robot Interaction (pHRI)

Thanks to Christian Smith at RPL@KTH for this material

Safety:

The most important task in physical human-robot interaction is to guarantee the safety of the human partner. Mostly, this is done by simply not letting the robot and the human occupy the same space. The following video shows some illustrations of the potential dangers of industrial robots, and explains why humans and robots are usually kept separated for safety.

Human-Robot Collision Study

Note (starting at 1:01) the robot passing through a singularity as it collides. The contact forces in this particular experiment were measured at up to 2000 N, for a robot rated for a 14 kg payload (roughly 140 N, i.e. the impact forces exceeded the rated payload force by more than a factor of ten).


The following video demonstrates how impedance control can be used to reduce injuries:

Safe Human-Robot Interaction
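The idea behind impedance control is to make the robot behave like a mass-spring-damper around a desired pose instead of rigidly tracking it, so that contact forces stay bounded when something (or someone) is in the way. Below is a minimal Cartesian sketch; the gains are made up, and forward_kinematics, jacobian and gravity are assumed robot-specific helpers, so this is not the controller shown in the video.

```python
import numpy as np

# Cartesian impedance control: command joint torques so that the end effector
# behaves like a spring-damper attached to the desired position x_des.
K = np.diag([300.0, 300.0, 300.0])   # stiffness [N/m], kept low for soft contacts
D = np.diag([30.0, 30.0, 30.0])      # damping [N s/m]

def impedance_torques(q, dq, x_des, forward_kinematics, jacobian, gravity):
    """q, dq: joint positions/velocities. Returns a joint torque command.
    forward_kinematics, jacobian and gravity are assumed helper functions."""
    x = forward_kinematics(q)          # current end-effector position, shape (3,)
    J = jacobian(q)                    # 3 x n_joints position Jacobian
    dx = J @ dq                        # end-effector velocity
    f = K @ (x_des - x) - D @ dx       # virtual spring-damper force
    return J.T @ f + gravity(q)        # map to joint space, add gravity compensation
```

Lowering the stiffness K trades tracking accuracy for softer, safer contacts, which is the basic safety trade-off demonstrated in the video.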


Details are published in the following two papers:

The video attachment for the second paper can be found here:
https://ieeexplore.ieee.org/ielx5/4534525/4543169/4543389/1175.zip?tp=&arnumber=4543389


Collaboration:

Assuming that safety has been taken care of, another major problem to solve in pHRI is making the robot understand what the human wants to do, so that the two of them can perform meaningful tasks together. Here is a video illustrating a collaborative task, where the robot follows the human lead in moving a table (as sensed through force sensors), while at the same time performing the secondary vision-based task of keeping a ball from rolling off the table:

Collaborative Human-Robot Ball-on-table Carrying
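A common way to implement this kind of "follow the human lead" behaviour is admittance control: the force the human applies, measured by the wrist force sensor, is passed through virtual mass-damper dynamics to produce a compliant reference velocity for the arm. Here is a minimal self-contained sketch with made-up parameters (the ball-balancing vision task from the video is left out):

```python
import numpy as np

# Admittance control: the measured interaction force f is fed through virtual
# mass-damper dynamics  M * dv/dt = f - D * v  to get a compliant velocity v
# that the arm is then commanded to follow.
M, D, dt = 8.0, 25.0, 0.01    # virtual mass [kg], damping [N s/m], control period [s]

def admittance_step(v, f):
    """One control step: update the commanded velocity v from the measured force f."""
    return v + (dt / M) * (f - D * v)

# Tiny simulation: the human pushes with 10 N along x for one second, then lets go.
v = np.zeros(3)
for k in range(200):
    f = np.array([10.0, 0.0, 0.0]) if k < 100 else np.zeros(3)
    v = admittance_step(v, f)
print(v)   # the velocity has decayed back toward zero after the push ends
```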


A prerequisite for performing collaboration is that the robot has some understanding of the human. The following video demonstrates how tactile sensing can be used to estimate the kinematic constraints that a human imposes on a jointly manipulated object:

https://ieeexplore.ieee.org/ielx7/6679723/6696319/6697059/0576.MM.zip?tp=&arnumber=6697059

which accompanies the following paper:

"Online kinematics estimation for active human-robot manipulation of jointly held objects", Yiannis Karayiannidis, Christian Smith, Francisco E. Viña and Danica Kragic, 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, (DiVa Links to an external site. Links to an external site.ieeexplore link) Links to an external site.

Exoskeletons

A special case of physical interaction between humans and robots is the exoskeleton. How to share control between the human and the machine is still an open problem, and research is ongoing. The current state of the art is summarised here:

https://ieeexplore.ieee.org/abstract/document/7393837

Another problem with exoskeletons is that they are often bulky and rigid. One attempt to solve this with soft components is presented here:

https://wyss.harvard.edu/technology/soft-exosuit