This repository provides a PyTorch implementation of the training and testing code for ADLD, from the paper "Unconstrained Facial Action Unit Detection via Latent Feature Domain"
- This code was tested with PyTorch 0.4.0 and Python 2.7
- Clone this repo:
git clone https://github.com/ZhiwenShao/ADLD
cd ADLD
- Put the BP4D and EmotioNet datasets into the folder "dataset", following the paths shown in the list files of the folder "data/list"
- Conduct similarity transformation for face images:
- We provide the landmarks of EmotioNet, annotated using OpenPose, here. Each line in a landmark annotation file contains the 49 facial landmark locations (x1,y1,x2,y2,...) of one image. Put these annotation files into the folder "dataset"
- An example of a processed image can be found in the folder "data/imgs/EmotioNet/optimization_set/N_0000000001/"
cd dataset
python face_transform.py
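For reference, a 2D similarity transform (scale, rotation, translation) between two landmark sets is commonly estimated with Umeyama's least-squares method. The sketch below is an independent illustration of that technique, not the code in face_transform.py; all function names are our own:

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate a 2D similarity transform (scale * rotation + translation)
    mapping src points to dst points via Umeyama's least-squares method.
    Returns a 2x3 affine matrix."""
    src = np.asarray(src, dtype=np.float64)
    dst = np.asarray(dst, dtype=np.float64)
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    # Cross-covariance between the centered point sets
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    # Guard against reflections (determinant correction)
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    scale = np.trace(np.diag(S) @ D) / var_src
    t = dst_mean - scale * R @ src_mean
    return np.hstack([scale * R, t[:, None]])

def apply_transform(M, pts):
    """Apply a 2x3 affine matrix to an (n, 2) array of points."""
    pts = np.asarray(pts, dtype=np.float64)
    return pts @ M[:, :2].T + M[:, 2]
```

In practice the estimated matrix would be passed to an image-warping routine (e.g. OpenCV's warpAffine) to produce the aligned face crops.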
- Compute the weight of the loss of each AU in the BP4D training set:
- The AU annotation files should be in the folder "data/list"
cd dataset
python write_AU_weight.py
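A common scheme for such weights (widely used in AU-detection work) is to weight each AU inversely to its occurrence rate in the training set, so that rare AUs contribute more to the loss. The exact formula in write_AU_weight.py may differ; this is a minimal sketch under that inverse-frequency assumption:

```python
import numpy as np

def au_weights(labels):
    """Compute per-AU loss weights from a binary label matrix of shape
    (n_samples, n_AUs), weighting each AU inversely to its occurrence
    rate and normalizing so the weights sum to the number of AUs."""
    labels = np.asarray(labels, dtype=np.float64)
    rate = labels.mean(axis=0)        # fraction of samples where each AU is active
    rate = np.clip(rate, 1e-6, None)  # avoid division by zero for absent AUs
    w = 1.0 / rate
    return w * len(w) / w.sum()
```

Weights of this form are typically multiplied into a per-AU binary cross-entropy loss during training.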
- Train a model without using target-domain pseudo AU labels:
python train.py --mode='weak'
- Train a model using target-domain pseudo AU labels:
python train.py --mode='full'
- Test a model trained without using target-domain pseudo AU labels:
python test.py --mode='weak'
- Test a model trained using target-domain pseudo AU labels:
python test.py --mode='full'
If you use this code for your research, please cite our paper:
@article{shao2021unconstrained,
title={Unconstrained Facial Action Unit Detection via Latent Feature Domain},
author={Shao, Zhiwen and Cai, Jianfei and Cham, Tat-Jen and Lu, Xuequan and Ma, Lizhuang},
journal={IEEE Transactions on Affective Computing},
year={2021},
publisher={IEEE}
}
Should you have any questions, please contact us via email: zhiwen_shao@cumt.edu.cn