LeVAsa

This repository contains the code for the paper "It’s LeVAsa not LevioSA! Latent Encodings for Valence-Arousal Structure Alignment" (CoDS-CoMAD'21). The code has been tested with PyTorch 1.3.1 and Python 3.6.8.

Abstract

In recent years, great strides have been made in the field of affective computing. Several models have been developed to represent and quantify emotions. Two popular ones are (i) categorical models, which represent emotions as discrete labels, and (ii) dimensional models, which represent emotions in a Valence-Arousal (VA) circumplex domain. However, there is no standard for mapping annotations between the two labelling methods. We build a novel algorithm for mapping categorical and dimensional model labels using annotation transfer across affective facial image datasets. Further, we utilize the transferred annotations to learn rich and interpretable data representations using a variational autoencoder (VAE). We present “LeVAsa”, a VAE model that learns implicit structure by aligning the latent space with the VA space. We evaluate the efficacy of LeVAsa by comparing its performance with that of the Vanilla VAE, using quantitative and qualitative analysis on two benchmark affective image datasets. Our results reveal that LeVAsa achieves high latent-circumplex alignment, which leads to improved downstream categorical emotion prediction. The work also demonstrates the trade-off between the degree of alignment and the quality of reconstructions.

Methods

Annotation Transfer Algorithm

*Figure: the annotation transfer algorithm.*
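
The algorithm maps categorical emotion labels to VA coordinates by transferring annotations from an anchor dataset (AffectNet; see Datasets below). As a minimal sketch of one plausible strategy, assuming a per-label centroid mapping — the transfer sampling strategies actually used live in `Code Notebooks/Annotations_Transfer.ipynb`:

```python
import numpy as np

def va_centroids(anchor_labels, anchor_va):
    """Mean (valence, arousal) per categorical label in the anchor dataset."""
    anchor_va = np.asarray(anchor_va, dtype=float)
    cents = {}
    for lab in set(anchor_labels):
        idx = [i for i, l in enumerate(anchor_labels) if l == lab]
        cents[lab] = anchor_va[idx].mean(axis=0)
    return cents

def transfer_va(target_labels, centroids):
    """Assign each target image the anchor centroid of its emotion label."""
    return [centroids[lab] for lab in target_labels]

# Example: transfer VA onto two images annotated only with emotion labels.
cents = va_centroids(['happy', 'happy', 'sad'],
                     [[0.8, 0.5], [0.6, 0.3], [-0.7, -0.2]])
print(transfer_va(['sad', 'happy'], cents))
```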

Architecture

*Figure: the LeVAsa model architecture.*
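
At its core, LeVAsa is a VAE whose latent space is encouraged to align with the VA circumplex. The sketch below is a minimal, hedged illustration in which the alignment term is an L2 penalty tying the first two latent means to the VA labels; the exact architecture, layer sizes, and loss weighting are defined in the paper and the training notebooks:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignedVAE(nn.Module):
    """Toy VAE whose first two latent dimensions track (valence, arousal)."""

    def __init__(self, input_dim=64 * 64 * 3, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(input_dim, 512), nn.ReLU())
        self.fc_mu = nn.Linear(512, latent_dim)
        self.fc_logvar = nn.Linear(512, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterisation
        return self.dec(z), mu, logvar

def levasa_style_loss(recon, x, mu, logvar, va, align_weight=1.0):
    """ELBO terms plus an (assumed) L2 alignment penalty on the latent means."""
    recon_loss = F.binary_cross_entropy(recon, x, reduction='sum')
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    align = F.mse_loss(mu[:, :2], va, reduction='sum')  # pull z[:2] toward (V, A)
    return recon_loss + kld + align_weight * align
```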

Datasets

  • AFEW emotional database: annotated with discrete VA values in the range [-10, 10] (see the rescaling sketch after this list)
  • AffectNet database: annotated with continuous VA values in the range [-1, 1] and 11 discrete emotion labels
  • IMFDB database: annotated with only 6 discrete emotion labels
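
Since AFEW's VA annotations lie in [-10, 10] while AffectNet's lie in [-1, 1], the two must be brought onto a common scale before annotations can be compared or transferred. A minimal sketch, assuming a plain linear rescaling (the notebooks may normalise differently):

```python
def rescale_afew_va(valence, arousal):
    """Map AFEW (valence, arousal) from [-10, 10] onto AffectNet's [-1, 1]."""
    return valence / 10.0, arousal / 10.0

print(rescale_afew_va(7, -3))  # -> (0.7, -0.3)
```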

Experiments

The results in the paper can be reproduced using `Code Notebooks/Experiments.ipynb`. Saved weights can be obtained from here. IMFDB images should be placed inside a subfolder within `final_images/`.
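
The subfolder requirement is consistent with torchvision's `ImageFolder` convention, which treats each subdirectory of the root as one class. A hedged sketch of loading the images this way — the subfolder name and image size are illustrative assumptions, not necessarily what the notebooks use:

```python
from torchvision import datasets, transforms

# Expects e.g. final_images/imfdb/*.jpg -- each subfolder becomes a class.
imfdb = datasets.ImageFolder(
    root='final_images/',
    transform=transforms.Compose([
        transforms.Resize((64, 64)),  # illustrative size
        transforms.ToTensor(),
    ]),
)
```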

Annotations

For transferring VA annotations, we use AffectNet as the anchor dataset and leverage various transfer sampling strategies, which can be found in the `Code Notebooks/Annotations_Transfer.ipynb` notebook. The transferred annotations are saved as `.json` files, which can be found inside the `Annotations` folder.
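
A minimal sketch of consuming one of these transferred-annotation files; the filename and the key layout (image name mapped to a [valence, arousal] pair) are assumptions about the schema, not guarantees:

```python
import json

with open('Annotations/imfdb_transferred.json') as f:  # hypothetical filename
    va_map = json.load(f)

# Assumed layout: {"image_0001.jpg": [0.42, -0.13], ...}
for name, (valence, arousal) in list(va_map.items())[:5]:
    print(name, valence, arousal)
```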

Training

The `Code Notebooks/Models_Training_1.ipynb` and `Code Notebooks/Models_Training_2.ipynb` notebooks contain the model classes and training scripts, including the code to save checkpoints.
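
For reference, a minimal checkpoint-saving pattern of the kind such training scripts typically use; the path and stored fields here are illustrative, not the exact format the notebooks write:

```python
import torch

def save_checkpoint(model, optimizer, epoch, path='checkpoints/levasa_epoch{}.pt'):
    """Persist everything needed to resume training at a given epoch."""
    torch.save({
        'epoch': epoch,
        'model_state_dict': model.state_dict(),
        'optimizer_state_dict': optimizer.state_dict(),
    }, path.format(epoch))
```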

Contact
