# CTRGCN

## Abstract

Graph convolutional networks (GCNs) have been widely used and achieved remarkable results in skeleton-based action recognition. In GCNs, graph topology dominates feature aggregation and therefore is the key to extracting representative features. In this work, we propose a novel Channel-wise Topology Refinement Graph Convolution (CTR-GC) to dynamically learn different topologies and effectively aggregate joint features in different channels for skeleton-based action recognition. The proposed CTR-GC models channel-wise topologies through learning a shared topology as a generic prior for all channels and refining it with channel-specific correlations for each channel. Our refinement method introduces few extra parameters and significantly reduces the difficulty of modeling channel-wise topologies. Furthermore, via reformulating graph convolutions into a unified form, we find that CTR-GC relaxes strict constraints of graph convolutions, leading to stronger representation capability. Combining CTR-GC with temporal modeling modules, we develop a powerful graph convolutional network named CTR-GCN which notably outperforms state-of-the-art methods on the NTU RGB+D, NTU RGB+D 120, and NW-UCLA datasets.
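The core idea above — a shared topology refined with channel-specific correlations — can be sketched roughly as follows. This is a minimal NumPy illustration of the mechanism, not the authors' implementation; the names `shared_topology`, `channel_refinement`, and `alpha` are made up for the example.

```python
import numpy as np

# Hypothetical dimensions: C feature channels, N skeleton joints.
C, N = 4, 25

rng = np.random.default_rng(0)
x = rng.standard_normal((C, N))                      # per-channel joint features
shared_topology = rng.standard_normal((N, N))        # generic prior shared by all channels
channel_refinement = rng.standard_normal((C, N, N))  # channel-specific correlations
alpha = 0.1                                          # small refinement strength

# Refine the shared topology per channel, then aggregate joint features
# channel by channel with the refined (channel-wise) topology.
refined = shared_topology[None, :, :] + alpha * channel_refinement  # (C, N, N)
y = np.einsum('cuv,cv->cu', refined, x)              # per-channel aggregation

assert y.shape == (C, N)
```

Because the refinement is a small additive term on top of one shared matrix, only `alpha * channel_refinement` worth of extra parameters is needed per channel, which is what keeps modeling channel-wise topologies cheap.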

## Citation

```BibTeX
@inproceedings{chen2021channel,
  title={Channel-wise topology refinement graph convolution for skeleton-based action recognition},
  author={Chen, Yuxin and Zhang, Ziqi and Yuan, Chunfeng and Li, Bing and Deng, Ying and Hu, Weiming},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={13359--13368},
  year={2021}
}
```

## Model Zoo

We release numerous checkpoints trained on various modalities and annotations of NTU RGB+D and NTU RGB+D 120. Each accuracy entry below links to the corresponding weight file.

| Dataset | Annotation | GPUs | Joint Top1 | Bone Top1 | Joint Motion Top1 | Bone Motion Top1 | Two-Stream Top1 | Four-Stream Top1 |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| NTURGB+D XSub | Official 3D Skeleton | 8 | joint_config: 89.6 | bone_config: 90.0 | joint_motion_config: 88.0 | bone_motion_config: 87.5 | 91.5 | 92.1 |
| NTURGB+D XSub | HRNet 2D Skeleton | 8 | joint_config: 90.6 | bone_config: 92.7 | joint_motion_config: 89.4 | bone_motion_config: 90.3 | 93.3 | 93.6 |
| NTURGB+D XView | Official 3D Skeleton | 8 | joint_config: 95.6 | bone_config: 95.4 | joint_motion_config: 94.4 | bone_motion_config: 93.6 | 96.6 | 97.0 |
| NTURGB+D XView | HRNet 2D Skeleton | 8 | joint_config: 96.9 | bone_config: 97.6 | joint_motion_config: 94.8 | bone_motion_config: 95.6 | 98.4 | 98.4 |
| NTURGB+D 120 XSub | Official 3D Skeleton | 8 | joint_config: 84.0 | bone_config: 85.9 | joint_motion_config: 81.1 | bone_motion_config: 82.2 | 87.5 | 88.1 |
| NTURGB+D 120 XSub | HRNet 2D Skeleton | 8 | joint_config: 82.2 | bone_config: 84.6 | joint_motion_config: 82.3 | bone_motion_config: 82.1 | 85.8 | 86.6 |
| NTURGB+D 120 XSet | Official 3D Skeleton | 8 | joint_config: 85.9 | bone_config: 87.4 | joint_motion_config: 84.1 | bone_motion_config: 83.9 | 89.2 | 89.9 |
| NTURGB+D 120 XSet | HRNet 2D Skeleton | 8 | joint_config: 84.5 | bone_config: 88.6 | joint_motion_config: 85.6 | bone_motion_config: 85.6 | 89.0 | 90.1 |

## Note

  1. We use the linear-scaling learning rate (Initial LR ∝ Batch Size). If you change the training batch size, remember to change the initial LR proportionally.
  2. For Two-Stream results, we adopt the 1 (Joint):1 (Bone) fusion. For Four-Stream results, we adopt the 2 (Joint):2 (Bone):1 (Joint Motion):1 (Bone Motion) fusion.
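The two rules above can be sketched in a few lines. This is a hedged illustration, not the repo's actual fusion or scheduler code; the numbers `base_lr`, `base_batch`, and `num_classes` are placeholders, not the repo's defaults.

```python
import numpy as np

# Note 1 -- linear-scaling rule: Initial LR is proportional to batch size.
base_lr, base_batch = 0.1, 128      # illustrative values, not the repo's defaults
new_batch = 64
new_lr = base_lr * new_batch / base_batch   # halve the batch -> halve the LR

# Note 2 -- score fusion over streams (hypothetical per-clip class scores).
num_classes = 60
rng = np.random.default_rng(0)
scores = {name: rng.random(num_classes)
          for name in ('joint', 'bone', 'joint_motion', 'bone_motion')}

# Two-Stream: 1 (Joint) : 1 (Bone)
two_stream = scores['joint'] + scores['bone']

# Four-Stream: 2 (Joint) : 2 (Bone) : 1 (Joint Motion) : 1 (Bone Motion)
four_stream = (2 * scores['joint'] + 2 * scores['bone']
               + scores['joint_motion'] + scores['bone_motion'])

pred = int(np.argmax(four_stream))  # fused class prediction for this clip
```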

## Training & Testing

You can use the following command to train a model.

```shell
bash tools/dist_train.sh ${CONFIG_FILE} ${NUM_GPUS} [optional arguments]
# For example: train CTRGCN on NTURGB+D XSub (3D skeleton, Joint Modality) with 8 GPUs, with validation, and test the last and the best (with best validation metric) checkpoints.
bash tools/dist_train.sh configs/ctrgcn/ctrgcn_pyskl_ntu60_xsub_3dkp/j.py 8 --validate --test-last --test-best
```

You can use the following command to test a model.

```shell
bash tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${NUM_GPUS} [optional arguments]
# For example: test CTRGCN on NTURGB+D XSub (3D skeleton, Joint Modality) with metrics `top_k_accuracy`, and dump the result to `result.pkl`.
bash tools/dist_test.sh configs/ctrgcn/ctrgcn_pyskl_ntu60_xsub_3dkp/j.py checkpoints/SOME_CHECKPOINT.pth 8 --eval top_k_accuracy --out result.pkl
```
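If you want to post-process a score dump offline, top-k accuracy is straightforward to recompute from per-sample class scores. The sketch below assumes `result.pkl` holds a list of `(num_classes,)` score arrays; the actual pickle layout may differ, so treat this as an illustration rather than the tool's exact behavior.

```python
import pickle
import numpy as np

def top_k_accuracy(scores, labels, k=1):
    """Fraction of samples whose true label is among the k highest scores."""
    scores = np.asarray(scores)
    topk = np.argsort(scores, axis=1)[:, ::-1][:, :k]  # top-k class indices per sample
    return float(np.mean([label in row for label, row in zip(labels, topk)]))

# Hypothetical usage on a dump (layout assumed, may differ):
# with open('result.pkl', 'rb') as f:
#     scores = pickle.load(f)          # e.g. list of (num_classes,) score arrays
# print(top_k_accuracy(scores, labels, k=1))

# Tiny self-contained check with made-up scores: 2 of 3 samples correct.
demo_scores = [[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]]
demo_labels = [1, 0, 0]
acc = top_k_accuracy(demo_scores, demo_labels, k=1)  # -> 2/3
```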