A team of bioengineers has developed a hand that can ‘see’ and reach out to hold objects automatically, operating faster than existing prosthetics. The invention offers renewed hope for amputees. The findings are published in the Journal of Neural Engineering.
The hand that ‘sees’, part of a new line of prosthetic limbs, operates much like a real hand: its wearer can reach out for objects automatically, without thinking and without complicated manoeuvres. It does not literally have eyes; rather, it is fitted with a camera (the ‘eyes’) that photographs an object, assesses its shape and size, and then triggers the appropriate chain of movements in the hand. This contrasts with current prosthetic systems, which require the user to look at an object and then physically trigger arm muscles to produce the desired movement.
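The camera-then-grip loop described above can be sketched in a few lines. This is purely illustrative: the function and class names (`classify_grasp`, `Hand`, `reach_and_grasp`) and the grasp labels are assumptions for the sketch, not the team’s actual software, and the vision model is stubbed out.

```python
# Hypothetical sketch of the sense-classify-actuate loop: a camera frame
# is mapped to a grasp type, which drives the hand into a matching posture.
# All names and labels here are illustrative, not the team's real API.

GRASP_TYPES = ["pinch", "tripod", "palmar_wrist_neutral", "palmar_wrist_pronated"]

def classify_grasp(frame):
    """Map a camera frame to one of a few grasp types.

    Stand-in for the trained vision model; here we simply read a
    precomputed label attached to the stubbed frame.
    """
    return frame["label"]

class Hand:
    """Minimal stand-in for the prosthetic hand controller."""
    def __init__(self):
        self.current_grasp = None

    def preshape(self, grasp):
        # Drive the fingers into the posture for this grasp type.
        assert grasp in GRASP_TYPES
        self.current_grasp = grasp
        return grasp

def reach_and_grasp(frame, hand):
    # One pass of the loop: look, decide, move.
    return hand.preshape(classify_grasp(frame))

hand = Hand()
frame = {"label": "tripod"}  # a camera frame, stubbed for the sketch
print(reach_and_grasp(frame, hand))  # → tripod
```

The point of the sketch is the ordering: the decision about how to grip happens automatically from the image, before any muscle signal is needed, which is what distinguishes this design from conventional prosthetics.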
The bionic hand is the invention of a team of biomedical engineers from Newcastle University, UK, working in collaboration with researchers from Tyne Hospitals NHS Foundation Trust. The new technology has been trialled with a small group of amputees as part of the team’s effort to give patients “hands with eyes”.
Until now, the development of prosthetic limbs has remained largely static for around a century: while design and materials have improved, the mode of operation has not changed. The new hand works in a completely different way. Co-author Dr Kianoush Nazarpour explains that, using computer vision, the bionic hand has been made to respond automatically, much as a real hand does: the user needs only a quick glance in the direction of the desired object, and can then reach out and pick it up. This is a major step in the field of prosthetics, where poor responsiveness has long made devices slow and cumbersome; the new bionic hand ‘sees’ and responds in a fast, fluid manner.
Dr Nazarpour thus describes the hand as “intuitive”.
How did the team make the hand so intelligent? Lead author Ghazal Ghazaei explains that they used neural networks, feeding a computer many images of the same objects to teach it to identify the type of grip required for different kinds of object.
“We would show the computer a picture of, for example, a stick. But not just one picture, many images of the same stick from different angles and orientations, even in different light and against different backgrounds and eventually the computer learns what grasp it needs to pick that stick up,” says Ghazaei.
In this way, the computer learns to recognise objects and categorise them by “grasp type”. This is what allows the bionic hand to accurately evaluate objects it has never seen before.
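The training idea Ghazaei describes can be illustrated with a toy example: many views of each object, each labelled with the grasp it needs, from which the model learns a mapping it can apply to unseen objects. The real system uses a deep neural network on camera images; the sketch below substitutes a nearest-centroid classifier over invented 2-D “shape features” (elongation, apparent size) purely to show the principle, and all numbers and labels are made up.

```python
# Toy illustration of learning grasp types from many labelled views.
# NOT the team's method: a nearest-centroid classifier on invented
# features stands in for their deep network over camera images.
from collections import defaultdict
import math

# Training data: (elongation, apparent size) from several views of each
# object, labelled with the required grasp. All values are invented.
training = [
    ((0.90, 0.20), "pinch"),   # a stick seen side-on
    ((0.80, 0.30), "pinch"),   # the same stick, another angle
    ((0.85, 0.25), "pinch"),   # different lighting
    ((0.20, 0.80), "palmar"),  # a mug seen from the front
    ((0.30, 0.70), "palmar"),  # the mug from above
    ((0.25, 0.75), "palmar"),  # against a different background
]

def centroids(samples):
    """Average the feature vectors for each grasp label."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for (x, y), label in samples:
        s = sums[label]
        s[0] += x; s[1] += y; s[2] += 1
    return {lab: (sx / n, sy / n) for lab, (sx, sy, n) in sums.items()}

def predict(model, feature):
    # Pick the grasp whose training centroid is closest to this view.
    return min(model, key=lambda lab: math.dist(model[lab], feature))

model = centroids(training)
# A new, never-seen object that is long and thin, like the stick:
print(predict(model, (0.95, 0.15)))  # → pinch
```

Because the model has seen each object from many angles, lightings, and backgrounds, it learns what about the object’s appearance matters for gripping it, rather than memorising individual pictures; that is what lets it generalise to novel objects.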
But does this mean that the hand must be fed images of every object it is to grasp? No, explains Dr Nazarpour: the system is flexible enough for the hand to pick up new objects, which makes it very practical for daily life.