A PyTorch implementation of DAN. Pre-trained models are available for deployment.
- Download the pre-trained MSCeleb model and move the file to `./models`.
- Download the RAF-DB dataset and extract the `raf-basic` dir to `./datasets`.
- Download the AffectNet dataset and extract the `AffectNet` dir to `./datasets`.
- Run `python ./utils/convert_affectnet.py` to store a split version of the AffectNet dataset.
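Conceptually, the conversion step reorganizes the flat AffectNet annotations into per-split, per-class subfolders. A minimal pure-Python sketch of that path mapping; the directory layout and function name here are assumptions for illustration, not the actual code of `convert_affectnet.py`:

```python
from pathlib import Path

# Hypothetical layout: ./datasets/AffectNet/<split>/<label>/<image name>.
# The real convert_affectnet.py may use a different structure.
def split_destination(image_name: str, label: int, split: str,
                      root: str = "./datasets/AffectNet") -> Path:
    """Map one annotated image to its destination inside the split tree."""
    if split not in ("train", "val"):
        raise ValueError(f"unknown split: {split}")
    return Path(root) / split / str(label) / image_name

print(split_destination("00123.jpg", 3, "train"))
# → datasets/AffectNet/train/3/00123.jpg
```

The actual script would then copy or move each file to the path this function returns.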
We provide the training code for AffectNet and RAF-DB.
For the AffectNet-8 dataset, run:

```shell
CUDA_VISIBLE_DEVICES=0 python affectnet.py --epochs 10 --num_class 8
```

For the AffectNet-7 dataset, run:

```shell
CUDA_VISIBLE_DEVICES=0 python affectnet.py --epochs 10 --num_class 7
```

For the RAF-DB dataset, run:

```shell
CUDA_VISIBLE_DEVICES=0 python rafdb.py
```
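The two training flags shown above map onto a standard `argparse` setup. A sketch of how `affectnet.py` might declare them; the default values and help strings are guesses, only the flag names come from the commands above:

```python
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Train DAN on AffectNet")
    # Flags mirror the commands above; defaults are assumptions.
    parser.add_argument("--epochs", type=int, default=10,
                        help="number of training epochs")
    parser.add_argument("--num_class", type=int, default=8, choices=(7, 8),
                        help="7 for AffectNet-7, 8 for AffectNet-8")
    return parser.parse_args(argv)

args = parse_args(["--epochs", "10", "--num_class", "7"])
print(args.epochs, args.num_class)  # → 10 7
```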
Pre-trained models can be downloaded for evaluation as follows:
| task | epochs | accuracy (%) | link |
|---|---|---|---|
| AffectNet-8 | 5 | 62.09 | download |
| AffectNet-7 | 6 | 65.69 | download |
| RAF-DB | 21 | 89.70 | download |
You can also find these files on Baidu Drive with the key `0000`.
Our Grad-CAM++ experiments are based on the grad-cam package, version 1.3.1, which can be installed with:

```shell
pip install grad-cam==1.3.1
```
Then run the following script to dump the visual results (several variables need to be replaced manually):

```shell
python run_grad_cam.py
```
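Under the hood, Grad-CAM++ weights each feature-map channel using higher-order gradient terms before summing the channels into a heatmap. A from-scratch numerical sketch of that weighting step in pure Python; this illustrates the formula only and is neither the repo's `run_grad_cam.py` nor the grad-cam package's API:

```python
def gradcam_pp_weights(activations, grads, eps=1e-8):
    """Per-channel Grad-CAM++ weights from one feature map and its gradients.

    activations, grads: [C][H][W] nested lists (e.g. captured by hooks).
    For each position: alpha = g^2 / (2 g^2 + sum(A) * g^3),
    and the channel weight is sum(alpha * relu(g)).
    """
    weights = []
    for A, g in zip(activations, grads):
        chan_sum = sum(sum(row) for row in A)  # sum of A over all positions
        w = 0.0
        for row_a, row_g in zip(A, g):
            for gv in row_g:
                alpha = (gv * gv) / (2.0 * gv * gv + chan_sum * gv ** 3 + eps)
                w += alpha * max(gv, 0.0)  # ReLU on the gradient
        weights.append(w)
    return weights

def gradcam_pp_heatmap(activations, grads):
    """ReLU of the weight-summed channels, i.e. the raw class heatmap."""
    ws = gradcam_pp_weights(activations, grads)
    h, w = len(activations[0]), len(activations[0][0])
    return [[max(sum(wk * activations[k][i][j] for k, wk in enumerate(ws)), 0.0)
             for j in range(w)] for i in range(h)]
```

In practice the heatmap is then upsampled to the input size and overlaid on the image, which is what the grad-cam package handles for you.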
There is a simple demo that invokes the DAN model for emotion inference:

```shell
CUDA_VISIBLE_DEVICES=0 python demo.py --image test_image_path
```
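After the forward pass, the demo reduces the model's raw logits to an emotion label. A minimal sketch of that last step in pure Python, assuming the eight AffectNet-8 classes in the order listed below; the actual label order used by the repo's dataset code may differ:

```python
import math

# Assumed label order; check the repo's dataset code for the real mapping.
LABELS = ["neutral", "happy", "sad", "surprise",
          "fear", "disgust", "anger", "contempt"]

def predict_label(logits):
    """Softmax over raw logits, then return (label, confidence)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # shift for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]

print(predict_label([0.1, 3.2, 0.0, 0.5, -1.0, -0.5, 0.2, -2.0]))
```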
