PyMAiVAR - An open-source Python suite for audio-image representation in human action recognition


Abstract

We introduce PyMAiVAR, an open-source toolbox for creating audio-image representations that integrate information about human actions. Our feature design is inspired by the spectral centroid, a feature widely used for music genre identification and audio classification. We assess the effectiveness of this representation for multimodal human action recognition and find it on par with other representations and single-modality approaches on the same dataset. We also illustrate further applications of the toolbox for creating image-based representations. PyMAiVAR is a valuable tool for researchers in multimodal action recognition, as it can improve performance by harnessing multiple modalities. It is implemented in Python and is compatible with various operating systems.
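As a rough illustration of the feature behind the representation (plain librosa usage, not PyMAiVAR's own API), the spectral centroid of a clip can be computed per frame and rendered as an image:

```python
import librosa
import matplotlib.pyplot as plt

# Load the sample clip bundled with the repository and compute the
# spectral centroid (the "centre of mass" of the spectrum) per frame.
y, sr = librosa.load("data/sample.wav")
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]
times = librosa.times_like(centroid, sr=sr)

# Render the centroid trajectory to an image file; PyMAiVAR builds its
# audio-image representations from features of this kind.
plt.plot(times, centroid)
plt.xlabel("Time (s)")
plt.ylabel("Spectral centroid (Hz)")
plt.savefig("sc-data.png")
```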

Compilation requirements, operating environments, and dependencies

ffmpeg, librosa, matplotlib, numpy, scikit-learn
Python 3.9
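
The Python packages can be installed from the provided requirements.txt (assuming it pins the dependencies above; ffmpeg must be installed separately via your system's package manager):

```
pip install -r requirements.txt
```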

Modules

core: Core functionality of PyMAiVAR (see the import sketch below)
example: Example usage of PyMAiVAR
myutils: Utility functions for PyMAiVAR
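
Since the modules live under code/ rather than an installed package, a minimal way to import them (module names from the folder structure below; the functions each module exports are documented in the corresponding .md files) is:

```python
import sys

# The modules sit in code/, so add that directory to the import path.
sys.path.append("code")

import core     # core PyMAiVAR functionality
import myutils  # helper utilities
```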

Documentation

Documentation for each module is in its respective .md file:

core --> core.md
example --> example.md
myutils --> myutils.md

Folder structure

.gitignore
LICENSE.md
README.md
code
|-- core.md
|-- core.py
|-- example.md
|-- example.py
|-- myutils.md
|-- myutils.py
data
|-- sample.wav
requirements.txt
results
|-- chrom-data.png
|-- mfcc-data.png
|-- mfccs-data.png
|-- sc-data.png
|-- specrolloff-data.png
|-- specshow1-data.png
|-- specshow2-data.png
|-- wp-data.png

An example can be found in example.py; a condensed sketch of the kind of output it produces is shown below.
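
The following is an illustrative sketch (not a verbatim excerpt of example.py) of generating two of the images listed under results/ with librosa:

```python
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

# Load the sample audio shipped with the repository.
y, sr = librosa.load("data/sample.wav")

# Waveform image (cf. results/wp-data.png).
fig, ax = plt.subplots()
librosa.display.waveshow(y, sr=sr, ax=ax)
fig.savefig("wp-data.png")
plt.close(fig)

# Log-power spectrogram image (cf. results/specshow1-data.png).
S_db = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)
fig, ax = plt.subplots()
librosa.display.specshow(S_db, sr=sr, x_axis="time", y_axis="log", ax=ax)
fig.savefig("specshow1-data.png")
plt.close(fig)
```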

Live Demo

Open in Code Ocean

Sample Outputs

Sample output images for each representation are available in the results/ directory (see the folder structure above).

If you use this code, please cite the following reference:

@INPROCEEDINGS{pymaivar2022shaikh,
	author={Shaikh, Muhammad Bilal and Chai, Douglas and Islam, Syed Mohammed Shamsul and Akhtar, Naveed},
	booktitle={2022 IEEE International Conference on Visual Communications and Image Processing (VCIP)}, 
	title={MAiVAR: Multimodal Audio-Image and Video Action Recognizer}, 
	year={2022},
	pages={1-5},
	doi={10.1109/VCIP56404.2022.10008833}}

Acknowledgements

This research is jointly supported by Edith Cowan University (ECU) and the Higher Education Commission (HEC) of Pakistan under Project #PM/HRDI-UESTPs/UETs-I/Phase-1/Batch-VI/2018. Dr. Akhtar is a recipient of the Office of National Intelligence National Intelligence Postdoctoral Grant #NIPG-2021-001 funded by the Australian Government.