
Activity_Recognition

The code will be updated soon; it needs better abstraction for reading and processing data.

Activity recognition has been studied using human pose estimation. Details of the code and how to run it have been added to the respective READMEs.

The first approach uses the Lightweight Pose Estimation model by Edvard Hua. Details of the pose estimation can be found here.

The poses extracted by the pose-estimation model are fed into an LSTM network with 2 stacked cells.

The details and code are available in the PoseToAction directory.
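Below is a minimal sketch of such a 2-cell LSTM classifier. The layer sizes, sequence length, and per-frame feature count are illustrative assumptions, not the repository's exact configuration; it assumes each frame is a flattened vector of 2D keypoints from the pose estimator.

```python
# Sketch of a 2-cell (stacked) LSTM classifier over pose sequences.
# All hyperparameters below are assumptions for illustration.
import tensorflow as tf

SEQ_LEN = 32        # assumed number of frames per clip
N_FEATURES = 28     # assumed: 14 keypoints x (x, y) coordinates
N_CLASSES = 4       # throw, kick, jump, salute

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(64, return_sequences=True),  # first LSTM cell
    tf.keras.layers.LSTM(64),                         # second LSTM cell
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```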

These are the current results of the model.

Results

After working with the Berkeley MHAD and DRONES Lab datasets, we realized that the pose estimation being used was not very accurate and needed to be tested on a cleaner dataset.

The NTU RGB-D dataset provided ample training data, recorded in color in a controlled, well-lit environment.

Pose estimation was run on this data, and the keypoint values were saved as JSON files. For starters, we only considered 4 classes: throw, kick, jump, and salute. The results for the same can be found here.
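Below is a minimal sketch of loading the saved keypoints into fixed-length sequences for the LSTM. The JSON schema (a list of per-frame keypoint lists) is an assumption, not taken from the repository.

```python
# Sketch: read one clip's saved keypoints and build an LSTM-ready array.
import json
import numpy as np

CLASSES = ["throw", "kick", "jump", "salute"]
SEQ_LEN = 32  # assumed frames per clip; clips are padded or truncated

def load_sequence(path):
    """Return a (SEQ_LEN, n_features) float array for one clip."""
    with open(path) as f:
        frames = json.load(f)  # assumed: list of per-frame keypoint lists
    seq = np.array([np.ravel(frame) for frame in frames], dtype=np.float32)
    if len(seq) < SEQ_LEN:
        # Zero-pad short clips so every sequence has the same length.
        pad = np.zeros((SEQ_LEN - len(seq), seq.shape[1]), dtype=np.float32)
        seq = np.vstack([seq, pad])
    return seq[:SEQ_LEN]
```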

Following is the link to the YouTube video.

Model Performance

Following is the confusion matrix: [image]

Following is the training curve: [image]

In total, 40 subjects were part of the dataset creation. To make sure testing was done on completely unseen data, 10 subjects were separated out; the rest of the dataset was used for training and validation.
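Below is a minimal sketch of this subject-wise split. NTU RGB-D file names encode the performer as "Pxxx" (e.g., S001C002P003R002A013); which 10 subject IDs are held out here is an assumption, not the repository's actual list.

```python
# Sketch: split clips by subject so test subjects never appear in training.
import re

HELD_OUT_SUBJECTS = set(range(31, 41))  # assumed: last 10 of the 40 subjects

def subject_id(filename):
    """Extract the performer number from an NTU RGB-D style file name."""
    return int(re.search(r"P(\d{3})", filename).group(1))

def split_by_subject(filenames):
    """Return (train_val_files, test_files) with no subject overlap."""
    train_val, test = [], []
    for name in filenames:
        (test if subject_id(name) in HELD_OUT_SUBJECTS else train_val).append(name)
    return train_val, test
```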
