Action Recognition Using CNN + Bidirectional RNN


Given a video, we can analyze it to recognize what action occurs in the clip. By nature, a video is a sequence of frames, so action recognition on video amounts to processing spatio-temporal data. This project uses the HMDB51 dataset, which consists of over 6,000 clips spanning 51 action classes and ships with three separate train/test splits. For simplicity, the first training split serves as the training set, the second testing split as the validation set, and the third testing split as the testing set. As for the model, a CNN is customarily adopted to extract spatial information; here, the MnasNet architecture is used as the feature extractor. To handle the temporal information, a bidirectional RNN is employed. In short, the action recognition model in this project is a composition of a CNN and a bidirectional RNN.
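To make the split scheme concrete, here is a minimal sketch using torchvision's `HMDB51` dataset, whose `fold` and `train` arguments select the official splits. The directory paths and the `frames_per_clip` value are placeholders, not the project's actual settings:

```python
import torchvision

# Fold-1 train split -> training set, fold-2 test split -> validation set,
# fold-3 test split -> testing set. Paths and frames_per_clip are placeholders.
train_set = torchvision.datasets.HMDB51("video_dir/", "split_dir/", frames_per_clip=16, fold=1, train=True)
val_set = torchvision.datasets.HMDB51("video_dir/", "split_dir/", frames_per_clip=16, fold=2, train=False)
test_set = torchvision.datasets.HMDB51("video_dir/", "split_dir/", frames_per_clip=16, fold=3, train=False)
```

Likewise, here is a minimal sketch of the CNN + bidirectional RNN composition in PyTorch. The MnasNet backbone comes from torchvision; the choice of GRU, the hidden size, and the temporal averaging are illustrative assumptions rather than the project's exact configuration:

```python
import torch
import torch.nn as nn
import torchvision


class CNNBiRNN(nn.Module):
    def __init__(self, num_classes=51, hidden_size=256):
        super().__init__()
        # MnasNet backbone extracts spatial features from each frame.
        backbone = torchvision.models.mnasnet1_0(weights="DEFAULT")
        self.cnn = backbone.layers  # (N, 3, H, W) -> (N, 1280, H/32, W/32)
        # A bidirectional GRU (an assumption; any RNN variant fits the idea)
        # aggregates the per-frame features across time.
        self.rnn = nn.GRU(1280, hidden_size, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, clips):  # clips: (batch, time, channels, height, width)
        b, t = clips.shape[:2]
        x = self.cnn(clips.flatten(0, 1))  # fold time into the batch dimension
        x = x.mean(dim=[2, 3])             # global average pool -> (b*t, 1280)
        x = x.view(b, t, -1)               # restore the time dimension
        out, _ = self.rnn(x)               # (b, t, 2 * hidden_size)
        return self.fc(out.mean(dim=1))    # average over time, then classify


logits = CNNBiRNN()(torch.randn(2, 16, 3, 224, 224))  # -> shape (2, 51)
```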

Experiment

Please take a look at this notebook to see the recognition in action.

Result

Quantitative Result

The following table conveys the quantitative performance of the model.

| Test Metric | Score  |
| ----------- | ------ |
| Loss        | 0.753  |
| Accuracy    | 88.39% |

Loss and Accuracy Curve

loss_curve
The loss curve of the CNN + Bidirectional RNN model on the training set (the first training split) and the validation set (the second testing split).

acc_curve
The accuracy curve of the CNN + Bidirectional RNN model on the training set (the first training split) and the validation set (the second testing split).

Qualitative Result

Here is a compilation of several video clips, each with an in-frame caption showing the model's predicted action alongside the ground-truth action.

qualitative
The action recognition results of the CNN + Bidirectional RNN model. Several actions are shown in the compilation video: brush hair, throw, dive, ride bike, and swing baseball.

Credit