Multimodal Feature Extractors

This repo contains a collection of feature extractors for multimodal emotion recognition.

Setup

Clone this repository:

$ git clone --recurse-submodules https://github.com/gangeshwark/multimodal_feature_extractors.git

  1. Install FFmpeg and OpenCV from source.
  2. Install the Python packages listed in requirements.txt (see the command below).
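
A typical way to install the Python dependencies, assuming a standard pip workflow (run from inside the cloned repository):

$ pip install -r requirements.txt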

Currently, these modalities are covered:

  1. Video

Video

OpenFace + Face VGG:

This feature extractor uses OpenFace to detect and align faces, and Face VGG to extract facial features from every frame.

Module: use from src.video.models import OpenFace_VGG in your data processing code.
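
A minimal usage sketch in Python. The constructor arguments, the extract method, and its return value are assumptions for illustration, not the documented API of this module:

    # Hypothetical usage sketch: the method name and return shape are assumed.
    from src.video.models import OpenFace_VGG

    # Build the extractor (OpenFace face detection/alignment + Face VGG features).
    extractor = OpenFace_VGG()

    # Assumed interface: takes a path to a video file and returns one
    # Face VGG feature vector per aligned face frame.
    features = extractor.extract('data/sample_video.mp4')
    print(features.shape)  # e.g. (num_frames, feature_dim)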


Tasks:

  • Video feature extractor.
  • Add text feature extractor.
  • Add audio feature extractor.
  • Code cleanup.

Credits:

  1. Soujanya Poria for his invaluable input.
  2. The authors of caffe-tensorflow and OpenFace.
