
Preprocessing and classifying EMG signals, using TensorFlow and TensorFlow Lite to deploy an AI model on an ESP32-C3


kaviles22/EMG_SignalClassification


EMG Signal Classification

➡️ When using this resource, please cite the original publication:

✅ Abstract:

About 8% of the Ecuadorian population suffers some type of amputation of upper or lower limbs. Due to the high cost of a prosthesis and the fact that the salary of an average worker in the country reached 248 USD in August 2021, they experience a great labor disadvantage and only 17% of them are employed. Thanks to advances in 3D printing and the accessibility of bioelectric sensors, it is now possible to create economically accessible proposals. This work proposes the design of a hand prosthesis that uses electromyography (EMG) signals and neural networks for real-time control. The integrated system has a mechanical and an electronic design, and the latter integrates artificial intelligence for control. To train the algorithm, an experimental methodology was developed to record the muscle activity in upper extremities associated with specific tasks, using three EMG surface sensors. These data were used to train a five-layer neural network. The trained model was compressed and exported using TensorFlow Lite. The prosthesis consists of a gripper and a pivot base, which were designed in Fusion 360 considering the movement restrictions and the maximum loads. It was actuated in real time thanks to the design of an electronic circuit based on an ESP32 development board, which was responsible for recording, processing, and classifying the EMG signals associated with a motor intention, and for actuating the hand prosthesis. As a result of this work, a database with 60 electromyographic activity records from three tasks was released. The classification algorithm was able to detect the three muscle tasks with an accuracy of 78.67% and a response time of 80 ms. Finally, the 3D-printed prosthesis was able to support a weight of 500 g with a safety factor equal to 15.

Introduction

This project aims to preprocess EMG signals and classify them into three classes using AI techniques, in order to control a 3D-printed prosthesis.


Data collection

src/data_collection

For this stage, a computer running a Python script and an ESP32-C3 communicated over Bluetooth Low Energy (BLE). The computer was in charge of indicating to the test subject when to start performing a certain muscular action. The ESP32-C3 was in charge of recording the EMG data using 3 MyoWare sensors, labeling it, and saving it on an SD card.

Note: ideally the data would have been sent via Bluetooth to the computer; however, we were unable to find a way to do this quickly enough without affecting the data recording process, which ran at a frequency of 1 kHz.

Screen visuals

The Python script shows visual cues on the screen, one representing each of the actions. Between recording intervals, a resting sign appears, which means the test subject can stop performing the previous action.
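The cue loop can be sketched as follows. This is a minimal illustration, not the project's actual script: the action names, timings, and the use of `print` instead of on-screen images are all assumptions.

```python
import time

def run_cues(actions, record_s=3.0, rest_s=2.0, show=print):
    """Sketch of the visual-cue loop.

    For each action, a cue is shown for `record_s` seconds while the
    subject performs it, followed by a resting sign for `rest_s`
    seconds. In the real script each cue would be an on-screen image.
    """
    for action in actions:
        show(f"Perform: {action}")  # subject performs the muscular action
        time.sleep(record_s)
        show("Rest")                # subject relaxes between recordings
        time.sleep(rest_s)

# Example (hypothetical action names):
# run_cues(["close_hand", "open_hand", "pinch"])
```

Injecting `show` makes the loop easy to swap between console output and a GUI toolkit.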

Preprocessing

Preprocessing data is an important step, especially when working with biological signals.

Filtering

To filter out the noise we used the RMS envelope technique, computed over a 50 ms wide window.
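A sketch of the RMS envelope: at the 1 kHz sampling rate used here, a 50 ms window corresponds to 50 samples. The function name and the use of a centered moving window are assumptions.

```python
import numpy as np

def rms_envelope(signal, fs=1000, window_ms=50):
    """Sketch: RMS envelope of an EMG signal.

    A window of `window_ms` milliseconds (50 samples at 1 kHz) slides
    over the squared signal; the square root of the windowed mean gives
    the envelope.
    """
    window = int(fs * window_ms / 1000)
    squared = np.asarray(signal, dtype=float) ** 2
    kernel = np.ones(window) / window          # moving-average kernel
    return np.sqrt(np.convolve(squared, kernel, mode="same"))
```

`mode="same"` keeps the envelope the same length as the input; the first and last few samples are attenuated by the implicit zero padding.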

Normalization

The peak dynamic method was used, which consists of representing each sample as a ratio of the peak value of its time window, so values are kept within [0, 1]:

`x_norm[i] = x[i] / max(x)` (computed per window)
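A minimal sketch of peak dynamic normalization, assuming the input is a non-negative RMS envelope segment (the function name is an assumption):

```python
import numpy as np

def peak_normalize(window):
    """Sketch: express each sample as a ratio of the window's peak
    absolute value, so the result stays within [0, 1]."""
    window = np.asarray(window, dtype=float)
    peak = np.max(np.abs(window))
    # Guard against an all-zero window to avoid dividing by zero
    return window / peak if peak > 0 else window
```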

Feature extraction

src/feature_extraction/feature_extraction.ipynb
Two approaches were analyzed:

  1. Statistical features
  2. RMS in time windows: extracting the RMS value in subwindows.
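The two approaches can be sketched as below. The exact statistical feature set and the number of subwindows are assumptions; see the notebook for the features actually used.

```python
import numpy as np

def statistical_features(window):
    """Sketch of approach 1: simple statistical descriptors of the
    window (this particular set is an assumption)."""
    w = np.asarray(window, dtype=float)
    return np.array([w.mean(), w.std(), w.min(), w.max()])

def rms_subwindows(window, n_sub=5):
    """Sketch of approach 2: split the window into `n_sub` subwindows
    and take the RMS value of each one."""
    parts = np.array_split(np.asarray(window, dtype=float), n_sub)
    return np.array([np.sqrt(np.mean(p ** 2)) for p in parts])
```

With three sensors, concatenating the per-sensor feature vectors would give the input to the classifier.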

Training and converting the TF model to a TF Lite model

src/train_model/train_test_model.py
A TensorFlow model was trained and then converted into a TF Lite model.
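The train-then-convert flow can be sketched with the standard `tf.lite.TFLiteConverter` API. The layer sizes and input width below are assumptions (the paper only specifies a five-layer network with three output classes); see `src/train_model/train_test_model.py` for the real architecture.

```python
import tensorflow as tf

# Hypothetical stand-in architecture: 15 inputs (e.g. 3 sensors x 5
# RMS subwindows), 3 output classes. Not the project's exact model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(15,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(X_train, y_train, ...)  # training data omitted here

# Convert the trained Keras model to a compact TF Lite flatbuffer
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional compression
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting `.tflite` bytes are what get embedded in the microcontroller firmware (typically as a C array).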

Deploying the model and testing it in real time

The whole pipeline was deployed on an ESP32-C3 development board. The code was written in C++, and the model output was used to activate 2 servo motors to move the 3D-printed prosthesis.
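Before flashing the firmware, the converted model can be sanity-checked on the desktop with the TF Lite interpreter. A tiny stand-in model is built inline here so the snippet is self-contained; in practice the model from the training step would be loaded instead.

```python
import numpy as np
import tensorflow as tf

# Stand-in model (3 output classes, 15 assumed input features) so this
# sketch runs on its own; substitute the project's converted model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(15,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Run one inference the same way the firmware would: set the input
# tensor with a feature vector, invoke, read the class probabilities.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

features = np.zeros(inp["shape"], dtype=np.float32)  # placeholder features
interpreter.set_tensor(inp["index"], features)
interpreter.invoke()
probs = interpreter.get_tensor(out["index"])[0]
predicted_class = int(np.argmax(probs))  # 0, 1, or 2: one of the three tasks
```

On the ESP32-C3 the equivalent calls go through TensorFlow Lite for Microcontrollers in C++, with the predicted class mapped to servo commands.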