3huo/MVDI


Action recognition for depth video using multi-view dynamic images

This is the implementation of the paper accepted by the Information Sciences journal.

The code consists of three parts:

  1. multi-view dynamic images generation.
  2. multi-view CNN training.
  3. Faster R-CNN based human motion detection.

Requirements

Data Download

Download the dataset (e.g., NTU RGB+D) and the pretrained model imagenet-vgg-f.

Environment

LIBLINEAR (the LIBLINEAR library for MATLAB) is used to generate the dynamic images.

matconvnet-1.0-beta23: MatConvNet is used for the CNN training stage.

Caffe build for Faster R-CNN. If you are using Windows, you can download a compiled mex file by running fetch_data/fetch_caffe_mex_windows_vs2013_cuda65.m.

Usage

Multi-View Dynamic Images Generation

Prepare your data path, then run the function file 'MVDI_Generation/dynamic_mutil_general_NTU.m'. All the involved sub-functions are contained in the same folder.
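For orientation, the core idea behind a dynamic image, collapsing a video clip into a single image via approximate rank pooling, can be sketched in a few lines of NumPy. The repository itself is MATLAB; the function below is an illustrative sketch using the common linear weighting, not code from this repo:

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a clip (T, H, W) into one dynamic image via approximate
    rank pooling: a weighted temporal sum in which later frames receive
    larger weights, so the result encodes the motion's temporal order."""
    T = frames.shape[0]
    t = np.arange(1, T + 1)
    alpha = 2.0 * t - T - 1                  # linear rank-pooling weights
    di = np.tensordot(alpha, frames.astype(np.float64), axes=1)
    rng = di.max() - di.min()
    if rng > 0:                              # rescale to [0, 255] for a pretrained CNN
        di = (di - di.min()) / rng * 255.0
    return di.astype(np.uint8)
```

Pixels whose depth values increase over time end up bright, and pixels whose values decrease end up dark, which is what lets a single image carry the clip's motion.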

CNN Training

After the MVDI data is generated by the previous step, feed it to the CNN. 'View_Shared_CNN_Training/train_depth_share_ntu_view_DMM/cnn_dicnn.m' should run directly once your data paths are set correctly.
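Since the CNN is shared across views, the per-view predictions have to be combined at test time. A minimal sketch of one plausible late-fusion rule, averaging class scores across views; whether the paper uses this exact rule is an assumption:

```python
import numpy as np

def fuse_view_scores(view_scores):
    """view_scores: (V, C) array of class scores, one row per view.
    Average the scores across views and return the winning class index."""
    mean = np.asarray(view_scores, dtype=float).mean(axis=0)
    return int(mean.argmax())
```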

Faster-RCNN based human detection

For human detection, we construct the training samples from the human skeleton information, as described in the sub-function 'get_bounding_box_skelen.m'.
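The idea of deriving a detection box from skeleton joints can be sketched as follows. The margin value and frame size below are illustrative assumptions, not values taken from 'get_bounding_box_skelen.m':

```python
import numpy as np

def bbox_from_skeleton(joints, margin=0.1, img_w=512, img_h=424):
    """joints: (N, 2) array of 2D joint positions in the depth map.
    Returns the tight box around the joints, grown by a relative
    margin and clamped to the frame bounds."""
    x_min, y_min = joints.min(axis=0)
    x_max, y_max = joints.max(axis=0)
    mx = (x_max - x_min) * margin
    my = (y_max - y_min) * margin
    x1 = max(0.0, x_min - mx)
    y1 = max(0.0, y_min - my)
    x2 = min(img_w - 1.0, x_max + mx)
    y2 = min(img_h - 1.0, y_max + my)
    return x1, y1, x2, y2
```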

'create_output.m' and 'create_train_test.m' are used to create the train and test samples for training.
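For NTU RGB+D, a train/test split is typically made by the performer ID encoded in each sample's file name (the cross-subject protocol). A sketch of that routing, assuming NTU-style names like S001C002P003R001A004 and the standard cross-subject training-subject list; this is an illustration, not the contents of 'create_train_test.m':

```python
import re

# The 20 training-subject IDs of the NTU RGB+D cross-subject protocol.
TRAIN_SUBJECTS = {1, 2, 4, 5, 8, 9, 13, 14, 15, 16,
                  17, 18, 19, 25, 27, 28, 31, 34, 35, 38}

def split_samples(filenames):
    """Route each sample to train or test by the performer ID (Pxxx)
    embedded in its NTU-style file name."""
    train, test = [], []
    for name in filenames:
        pid = int(re.search(r"P(\d{3})", name).group(1))
        (train if pid in TRAIN_SUBJECTS else test).append(name)
    return train, test
```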

'script_faster_rcnn_demo_testall_ntu.m' is the final training function.

More detailed Faster R-CNN usage can be found in faster_rcnn.

Citation

Please cite the following paper if you use this repository in your research.


	@article{xiao2018action,
  		title={Action Recognition for Depth Video using Multi-view Dynamic Images},
  		author={Xiao, Yang and Chen, Jun and Wang, Yancheng and Cao, Zhiguo and Zhou, Joey Tianyi and Bai, Xiang},
  		journal={Information Sciences},
  		year={2018},
  		publisher={Elsevier}
	}

Contact

For any questions, feel free to contact:

Yancheng Wang: yancheng_wang@hust.edu.cn

Yang Xiao: Yang_Xiao@hust.edu.cn
