Artificial Intelligence (AI) and Prosthetics: The New Frontier

Russell Yearwood
6 min read · Mar 13, 2021
Nerve-controlled prosthetic arm (image)

Hi there and welcome to my first ever post!

If you’ve found your way to this little part of the internet — then you obviously either love robotics, have played way too much Deus Ex: Human Revolution, or are just an avid technophile like myself who is fascinated by science, computers and how they are forging new paths toward a bionic future.

Personally, I have always been intrigued by the possibility of a completely integrated neural prosthetic, one that could move and operate as effortlessly as a natural limb in real time. Unfortunately, we aren't quite there yet, but with increasingly efficient Machine Learning (ML) algorithms and Deep Learning (DL) neural networks, that dream is now more within our grasp than ever before! Sit down, buckle up, and let's take a deep (possibly neural?) dive into how Machine Learning has led to some truly monumental breakthroughs in the field of Robotic Prosthetics.

Robotic Prosthetics as They Currently Stand

Modern bionics currently use electromyography (EMG) signals as input for the prosthetic; these signals are digitized and processed into user-directed movement.
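
To make that pipeline concrete, here is a minimal Python sketch of the EMG-to-movement idea: a digitized signal comes in, a few classic time-domain features are extracted, and a stand-in classifier turns them into a movement command. The sampling rate, window size, feature choices and threshold are all illustrative assumptions, not the spec of any real device.

```python
# A minimal sketch of the EMG-to-movement pipeline described above.
# All signal parameters (sampling rate, window size) and the classifier
# are illustrative assumptions, not the spec of any real device.
import numpy as np

SAMPLE_RATE_HZ = 1000          # assumed EMG sampling rate
WINDOW_MS = 200                # assumed analysis window


def extract_features(emg_window: np.ndarray) -> np.ndarray:
    """Classic time-domain EMG features for one analysis window."""
    mav = np.mean(np.abs(emg_window))                        # mean absolute value
    rms = np.sqrt(np.mean(emg_window ** 2))                  # root mean square
    zero_crossings = np.sum(np.diff(np.sign(emg_window)) != 0)
    return np.array([mav, rms, zero_crossings], dtype=float)


def classify_movement(features: np.ndarray) -> str:
    """Stand-in for a trained model that maps features to a movement class."""
    # Purely illustrative threshold; a real prosthetic would use a trained network.
    return "grip" if features[0] > 0.5 else "rest"


# Simulate one window of raw EMG and run it through the pipeline.
window_samples = int(SAMPLE_RATE_HZ * WINDOW_MS / 1000)
raw_emg = np.random.randn(window_samples) * 0.8              # fake digitized signal
command = classify_movement(extract_features(raw_emg))
print(f"Decoded movement command: {command}")
```

In a real device, the stand-in classifier above would be the trained neural network discussed in the sections that follow.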

While this is already an amazing achievement, a monumental obstacle exists that prevents precise movements in real-time. Current EMG interface technology is unable to capture either the scale or sheer volume of nerve signals involved in general muscle movements. As such, a vast amount of data is lost from input, inevitably limiting the capabilities of the prosthetic for precise movement and reaction speeds.

This single problem is the focus of mounting research, as solving it would mean a major breakthrough: tangible improvements in the quality of life of the more than 40 million amputees worldwide.

Machine Learning Applications in Modern Prosthetics

As outlined above, the development of fully functional, nerve-integrated prosthetics has plateaued because the nerve signals sent from the brain cannot yet be captured efficiently and translated into data that an Artificial Intelligence (AI) engine can use accurately.

At this point, you are probably wondering how this is an ML problem and not a hardware (signal reception) problem. And you are absolutely correct, and absolutely wrong.

Data Scientists typically see a problem and believe that the answer can be found with more data. This has been the mindset applied to building a Prosthetic using Artificial Intelligence and it has led us this far:

Better EMG signal = better output, right? … Right?

To better understand the nature of the obstacle, I will first explain how Neural Networks operate the prosthetic, then we will see the new advances made towards more dynamic breakthroughs in the field.

An Artificial Neural Network (ANN) is needed to process the complex information required to move the prosthetic. ANNs draw on two broad subsets of machine learning: supervised learning (the model knows exactly what output it should produce and adjusts its parameters to get as close as possible to that goal) and unsupervised learning (the model does not know the correct output, but can group the input into generally distinct motions). This combination enables the model to mimic an actual human brain, with both static and dynamic memory components.
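
As a toy illustration of those two learning styles, the sketch below trains a supervised classifier on labeled (fabricated) EMG feature vectors and, separately, lets an unsupervised clustering algorithm group the same data without labels. The data, labels and library choice (scikit-learn) are assumptions made purely for demonstration.

```python
# A minimal sketch contrasting supervised and unsupervised learning on
# synthetic EMG feature vectors. The data and labels are fabricated for
# illustration; no real prosthetic dataset is used here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Fake feature vectors for two motions: "rest" (low activity) and "grip" (high).
rest = rng.normal(loc=0.2, scale=0.05, size=(100, 3))
grip = rng.normal(loc=0.8, scale=0.05, size=(100, 3))
X = np.vstack([rest, grip])
y = np.array([0] * 100 + [1] * 100)   # 0 = rest, 1 = grip

# Supervised: the model knows the correct motion label for each example.
clf = LogisticRegression().fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: the model only groups the data into distinct motions.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(clusters))
```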

This allows the ANN to make sense of the data using pattern recognition, matching, clustering and classification techniques. A basic diagram of how an ANN receives input and generates output can be seen below.

(Diagram: a basic feed-forward ANN with an input layer, hidden layers, and an output layer)
  • The Input layer in the above model represents the EMG signals recorded from the user's residual muscles, which fire in response to commands from the brain
  • This data is then fed into a machine learning algorithm to be processed/encoded and passed into the 1st and 2nd Hidden Layers
  • It is inside these two (or more) hidden layers that the ANN learns the patterns needed for actions such as gripping an object or turning a doorknob. This is where the EMG signal loss becomes a major hindrance: the ANN does not have enough reference data to make highly precise decisions. A minimal code sketch of this layered flow follows below.
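
Here is that layered flow in code, using a small scikit-learn MLP with two hidden layers as a stand-in for the prosthetic's ANN. The number of EMG channels, the layer sizes and the motion classes are all invented for illustration.

```python
# A minimal sketch of the layered structure in the bullets above: EMG-derived
# features go into an input layer, pass through two hidden layers, and come
# out as a movement class. Data shapes and layer sizes are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# 8 EMG channels summarized into feature vectors, labeled with 3 motions:
# 0 = rest, 1 = grip, 2 = turn doorknob (synthetic labels for illustration).
X = rng.normal(size=(300, 8))
y = rng.integers(0, 3, size=300)

ann = MLPClassifier(
    hidden_layer_sizes=(32, 16),   # the "1st and 2nd hidden layers"
    activation="relu",
    max_iter=500,
    random_state=0,
)
ann.fit(X, y)

# Decode a new window of EMG features into a movement command.
new_features = rng.normal(size=(1, 8))
print("predicted motion class:", ann.predict(new_features)[0])
```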

Once again, this is a very broad look at how AI and ML are applied in the prosthetics currently available on the market. But back to our original question:

How do we tackle the EMG signal loss problem from a Data Science perspective?

Dawn of the New Age: CNN Applications

In 2018, a talented and clever pair of students brought forward an award-winning idea for a prosthetic arm with a visual sensor built directly into the hand itself. Instead of relying solely on EMG signals, which lack the input data needed for fine motor skills such as controlling grip strength, their model uses image recognition to identify an object or task and automatically determine the level of precision required. This simple but spectacularly effective idea won the SmartArm team their well-deserved grand prize at Microsoft's Imagine Cup competition.

Not only is the idea financially feasible from an engineering standpoint, but it is also easily implemented within existing ANNs by adding a CNN (Convolutional Neural Network, another type of neural network commonly used for image recognition) alongside the existing network.
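
Here is a rough PyTorch sketch of how such a fusion could look: a small CNN branch for the palm camera image, a small dense branch for the EMG features, and a head that combines both to pick a grip type. The architecture, image size and grip classes are my assumptions for illustration, not the SmartArm team's actual design.

```python
# A minimal PyTorch sketch of the SmartArm-style idea: a small CNN looks at the
# camera image in the hand while an MLP reads EMG features, and their outputs
# are combined to choose a grip. Layer sizes, image size and grip classes are
# all illustrative assumptions.
import torch
import torch.nn as nn


class EmgVisionFusion(nn.Module):
    def __init__(self, n_emg_features: int = 8, n_grips: int = 4):
        super().__init__()
        # CNN branch: processes the image from the palm camera.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, 32), nn.ReLU(),
        )
        # MLP branch: processes the EMG feature vector.
        self.emg = nn.Sequential(nn.Linear(n_emg_features, 32), nn.ReLU())
        # Fusion head: combines both branches to pick a grip type.
        self.head = nn.Linear(32 + 32, n_grips)

    def forward(self, image: torch.Tensor, emg: torch.Tensor) -> torch.Tensor:
        return self.head(torch.cat([self.cnn(image), self.emg(emg)], dim=1))


# Fake inputs: one 64x64 RGB camera frame and one 8-dimensional EMG feature vector.
model = EmgVisionFusion()
image = torch.randn(1, 3, 64, 64)
emg = torch.randn(1, 8)
grip_scores = model(image, emg)
print("chosen grip class:", grip_scores.argmax(dim=1).item())
```

The appeal of this design is that the vision branch can be trained and improved independently, then simply concatenated with the EMG branch, so the existing control network does not need to be rebuilt from scratch.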

AI and CNNs have now made their homes in the minds of talented prosthetics researchers across the globe.
The industry has changed drastically in a very short time. Where there was once a single understood method of controlling an artificial limb through EMG reception, there now exist multiple new, untrodden paths waiting to be explored as Deep Neural Network development advances, much of it driven by young, eager Data Scientists around the world. Advancements have even begun to address the signal reception issue through a direct brain-interface chip that picks up the raw motor signals at their source.

While this is considered cutting edge and the initial tests have shown mind-blowingly promising results, there are many caveats to this kind of interface. One is that it requires complicated surgery to implant and connect the chip. Another concerns the ethical and moral dilemmas surrounding the modification of the human body with technologically advanced equipment.

The New Frontier

All in all, this was but a brief overview of the field of prosthetics and how AI and Machine Learning are driving positive change for those less fortunate.

An increasingly prominent branch of ML known as Reinforcement Learning (RL) is already on course to rock the industry. It is considered state-of-the-art for its ability to train an AI in real time in an ever-changing environment by implementing a reward/punishment system. This encourages the AI to make the correct adjustments consistently over time, steadily improving its performance, and it is quickly becoming the focal point of many robotics ventures.
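
To give a flavor of that reward/punishment loop, here is a tiny tabular Q-learning sketch in which an agent learns to adjust grip force toward a target level. The toy environment, the states, the reward values and the learning rates are invented purely for illustration.

```python
# A minimal tabular Q-learning sketch of the reward/punishment idea: an agent
# learns to adjust grip force toward a target level. The toy environment,
# states and rewards are invented for illustration only.
import numpy as np

N_LEVELS = 5           # discrete grip-force levels 0..4
TARGET = 3             # the level that holds the object without crushing it
ACTIONS = [-1, 0, +1]  # loosen, hold, tighten

rng = np.random.default_rng(2)
q_table = np.zeros((N_LEVELS, len(ACTIONS)))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(500):
    state = rng.integers(N_LEVELS)            # start at a random grip level
    for _ in range(20):
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = rng.integers(len(ACTIONS)) if rng.random() < epsilon else int(np.argmax(q_table[state]))
        next_state = int(np.clip(state + ACTIONS[a], 0, N_LEVELS - 1))
        # Reward for hitting the target level, small punishment for missing it.
        reward = 1.0 if next_state == TARGET else -abs(next_state - TARGET) * 0.1
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        q_table[state, a] += alpha * (reward + gamma * q_table[next_state].max() - q_table[state, a])
        state = next_state

print("learned best action per grip level:", [ACTIONS[int(a)] for a in q_table.argmax(axis=1)])
```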

For me, this is one of the most exciting fields for Machine Learning Applications and I can’t wait to see where we will be in a few years. It was a very short time ago that we were, by comparison, crawling forward in research. Now, we are full-tilt sprinting towards a future, a place I like to imagine as the New Frontier — a place where Technology, Data and Science all come together to create the once unimaginable.

What an exciting and wonderful time to be alive!

Thank you for reading and if you enjoyed it, please leave a comment and share something that you are passionate about as well!


Russell Yearwood

Robotics Enthusiast, Technophile, Snowboarding Addict and Data Scientist