
Multimodal Emotion Expression Capture Amsterdam


mexca is an open-source Python package that captures human emotion expressions from videos in a single pipeline.

Check out our preprint:

Lüken, M., Moodley, K., Viviani, E., Pipal, C., & Schumacher, G. (2024, January 18). MEXCA - A simple and robust pipeline for capturing emotion expressions in faces, vocalization, and speech. PsyArXiv. https://doi.org/10.31234/osf.io/56svb

How To Use Mexca

mexca implements the customizable yet easy-to-use Multimodal Emotion eXpression Capture Amsterdam (MEXCA) pipeline for extracting emotion expression features from videos. It contains building blocks that can be used to extract features for individual modalities (i.e., facial expressions, voice, and dialogue/spoken text). The blocks can also be combined into a single pipeline that extracts the features from all modalities at once. Besides extracting features, mexca can identify the speakers shown in a video by clustering speaker and face representations, which allows users to compare emotion expressions across speakers, time, and contexts.
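As a sketch of how the building blocks fit together: the component class names below are those listed in this README, but the import paths, constructor arguments, and the example file name are assumptions; check the Getting Started section of the documentation for the exact API of your installed version.

```python
# Sketch of a full MEXCA pipeline run (import paths and arguments
# are assumptions -- see the documentation for the exact API).
from mexca.pipeline import Pipeline
from mexca.video import FaceExtractor
from mexca.audio import SpeakerIdentifier, VoiceExtractor
from mexca.text import AudioTranscriber, SentimentExtractor

pipeline = Pipeline(
    face_extractor=FaceExtractor(num_faces=2),             # expected number of faces
    speaker_identifier=SpeakerIdentifier(num_speakers=2),  # expected number of speakers
    voice_extractor=VoiceExtractor(),
    audio_transcriber=AudioTranscriber(),
    sentiment_extractor=SentimentExtractor(),
)

# Apply the pipeline to a (hypothetical) video file; the result holds
# the merged emotion expression features from all modalities.
result = pipeline.apply(filepath="debate_clip.mp4")
```

Each component is optional: passing only a subset of them runs a reduced pipeline over the corresponding modalities.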

Please cite mexca if you use it for scientific or commercial purposes.

Installation

mexca can be installed on Windows, macOS and Linux. We recommend Windows 10, macOS 12.6.x, or Ubuntu. The base package can be installed from PyPI via pip:

pip install mexca

The dependencies for the additional components can be installed via:

pip install mexca[vid,spe,voi,tra,sen]

or:

pip install mexca[all]

The abbreviations indicate:

  • vid: FaceExtractor
  • spe: SpeakerIdentifier
  • voi: VoiceExtractor
  • tra: AudioTranscriber
  • sen: SentimentExtractor

For details on the requirements and installation procedure, see the Quick Installation and Installation Details sections of our documentation.

Getting Started

If you would like to learn how to use mexca, take a look at our demo notebook and the Getting Started section of our documentation.

Examples and Recipes

In the examples/ folder, we currently provide two Jupyter notebooks and a short demo.

The recipes/ folder contains two Python scripts that can easily be reused in a new project.

Components

The pipeline components are described in our documentation.

Documentation

The documentation of mexca can be found on Read the Docs.

Contributing

If you want to contribute to the development of mexca, have a look at the contribution guidelines.

License

The code is licensed under the Apache 2.0 License. This means that mexca can be used, modified, and redistributed free of charge, even for commercial purposes.

Credits

mexca is developed by the Netherlands eScience Center in collaboration with the Hot Politics Lab at the University of Amsterdam.

This package was created with Cookiecutter and the NLeSC/python-template.