In recent years, the field of machine learning has seen a resurgence in popularity and application. This
has led to classifiers with state-of-the-art performance across various related fields, such as
facial expression analysis, skeleton-based human action recognition, and sign language gesture recognition.
In conjunction, the advancement of GPUs has made it possible to vastly increase data throughput, which is required to train AI models on large quantities of data in a shorter period of time and to do inference in real time.
Machine learning models have been applied to EMG signals in the past, most recently deep learning architectures. While state-of-the-art models show great performance, their inference is done on high-end GPUs, which is not ideal for prosthetics or assistive-robotics applications. This project takes a step further by replicating and optimizing a neural network architecture from a research paper and successfully implementing it on a GPU-based embedded platform, the Jetson TX2. This requires designing the architecture and its different layers, deploying the model onto the system, testing it against the requirements, and defining how the data is processed. Ultimately, this is a test of whether the inference capability of a neural network on an embedded platform can meet the real-time requirements and high accuracy levels of other machine learning algorithms. The motivation behind this project is to advance pattern recognition in EMG-based systems.
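As a rough illustration of the kind of layer design involved, the sketch below implements a naive 1D convolution over a multi-channel EMG window in plain numpy. This is an assumption about the general layer type, not the exact architecture from the replicated paper; all shapes and values are placeholders.

```python
import numpy as np

def conv1d(x, kernels, bias):
    """Naive 1D convolution: x is (channels, time), kernels is
    (out_ch, in_ch, k). Returns (out_ch, time - k + 1)."""
    out_ch, in_ch, k = kernels.shape
    t_out = x.shape[1] - k + 1
    y = np.zeros((out_ch, t_out))
    for o in range(out_ch):
        for t in range(t_out):
            # Correlate every input channel against this output filter.
            y[o, t] = np.sum(kernels[o] * x[:, t:t + k]) + bias[o]
    return y

# Toy EMG window: 8 electrode channels, 64 samples (random stand-in data).
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 64))
w = rng.standard_normal((16, 8, 5))   # 16 filters of width 5
b = np.zeros(16)
y = np.maximum(conv1d(x, w, b), 0.0)  # ReLU activation
print(y.shape)  # (16, 60)
```

In a real deployment this loop would be replaced by the framework's optimized convolution, but the shape bookkeeping is the same.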
The data pipeline design, training, and testing are done on a host computer, an ICE lab computer with an Nvidia RTX 2080 graphics card, using the csl-hdemg dataset. Once the AI model meets viable accuracy levels, it is deployed onto the Jetson platform to test the real-time requirement. This is done via TensorRT, Nvidia's software for deep learning hardware acceleration. The data pipeline is as follows:
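Assuming the pipeline follows the usual sliding-window pattern for EMG classification (the function names and parameters here are illustrative, not taken from the project code), the host-side stages might be sketched as:

```python
import numpy as np

def window(signal, size=150, step=50):
    """Slice a (time, channels) recording into overlapping windows."""
    return np.stack([signal[i:i + size]
                     for i in range(0, len(signal) - size + 1, step)])

def normalize(w):
    """Per-window zero mean, unit variance."""
    return (w - w.mean()) / (w.std() + 1e-8)

def infer(w):
    """Stand-in for the trained model's inference call."""
    return int(np.argmax(w.mean(axis=0)))  # dummy: pick loudest channel

# Toy recording: 1000 samples, 8 channels of random stand-in data.
rec = np.random.default_rng(1).standard_normal((1000, 8))
preds = [infer(normalize(w)) for w in window(rec)]
print(len(preds))  # 18 windows
```

On the Jetson, only the `infer` step would run through the TensorRT engine; windowing and normalization stay on the CPU.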
There are certain constraints for this project.
Experiments and tests were conducted at different stages of this project:
Results: Overall, a successful implementation was achieved: the real-time constraint was met with no noticeable delay, and accuracy was high (comparable to current papers). Total time from data acquisition to gesture inference: ~75 ms. In addition, parallelizing the data preprocessing and training reduced their combined time from approximately 1.5-2 hours to 30 minutes.
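The preprocessing speed-up is consistent with distributing independent recordings across CPU cores. A minimal sketch, assuming each recording can be processed independently (the per-recording step here is a placeholder moving-average filter, not the project's actual preprocessing):

```python
import numpy as np
from multiprocessing import Pool

def preprocess(recording):
    """Placeholder per-recording step: 5-sample moving average."""
    kernel = np.ones(5) / 5.0
    return np.convolve(recording, kernel, mode="valid")

if __name__ == "__main__":
    recordings = [np.arange(100, dtype=float) for _ in range(8)]
    # Serial baseline.
    serial = [preprocess(r) for r in recordings]
    # Parallel: each recording is independent, so Pool.map applies cleanly.
    with Pool(processes=4) as pool:
        parallel = pool.map(preprocess, recordings)
    assert all(np.allclose(s, p) for s, p in zip(serial, parallel))
    print("ok")
```

Because the recordings share no state, the speed-up scales roughly with core count until disk I/O dominates.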