anishLearnsToCode/nature-encoded-fusion

Representation Learning and Nature Encoded Fusion Technique for Heterogeneous Sensor Networks

This project is an implementation of the IEEE Access paper "Representation Learning and Nature Encoded Fusion Technique for Heterogeneous Sensor Networks" (DOI: 10.1109/ACCESS.2019.2907256).

The paper can be looked up by its DOI.

Running the Program

Run the file driver.m; it internally calls the auxiliary functions and the beleif_propogation.m file, then displays the output.

The program extracts audio features, specifically Mel-frequency cepstral coefficients (MFCCs), from a sample audio file attached to the project, and visual features from a frame of a video that is also attached to the project.
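The repository's feature extraction is implemented in MATLAB; as an illustration of what the MFCC step computes, here is a minimal NumPy-only sketch (frame sizes, filter counts, and the lack of pre-emphasis/liftering are simplifying assumptions, not the paper's exact pipeline):

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_coeffs=13):
    """Compute MFCCs for a 1-D signal (simplified illustrative version)."""
    # Frame the signal with a Hann window and take the power spectrum.
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2

    # Build a triangular mel filterbank between 0 Hz and Nyquist.
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        for k in range(l, c):
            fbank[m - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fbank[m - 1, k] = (r - k) / max(r - c, 1)

    # Log mel energies, then an (unnormalized) type-II DCT to decorrelate.
    log_mel = np.log(power @ fbank.T + 1e-10)
    n = log_mel.shape[1]
    dct = np.cos(np.pi / n * (np.arange(n) + 0.5)[None, :]
                 * np.arange(n_coeffs)[:, None])
    return log_mel @ dct.T  # shape: (num_frames, n_coeffs)
```

In practice a library such as librosa (Python) or MATLAB's own `mfcc` function would replace this hand-rolled version; the sketch only shows the windowing, mel filterbank, and DCT stages conceptually.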

Output

The plots are laid out on a 3×3 grid, and plot (x, y) refers to the plot in the x-th row and y-th column, indexed from 1 starting at the top left.

Plot (1, 2) shows the cross correlation between the probability distribution functions of the audio and video features.

Plot (1, 3) shows the sum of that cross correlation and the self correlation, where the self correlation is the correlation of the audio probability distribution function with itself.
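The correlation quantities in these two plots can be sketched in Python (the Gaussian-shaped PDFs and the bin count here are toy stand-ins for the actual audio and video feature histograms):

```python
import numpy as np

# Two toy probability distribution functions standing in for the
# audio and video feature histograms (illustrative, not the repo's data).
bins = np.linspace(-3, 3, 64)
audio_pdf = np.exp(-0.5 * bins ** 2)
video_pdf = np.exp(-0.5 * (bins - 1.0) ** 2)
audio_pdf /= audio_pdf.sum()
video_pdf /= video_pdf.sum()

# Cross correlation of the two PDFs, as in plot (1, 2).
cross = np.correlate(audio_pdf, video_pdf, mode="full")

# Self correlation of the audio PDF with itself, and the summed
# signal shown in plot (1, 3).
self_corr = np.correlate(audio_pdf, audio_pdf, mode="full")
fused = cross + self_corr
```

With `mode="full"`, both correlation sequences have length 2·64 − 1 = 127, and the self correlation peaks at the zero-lag center index, which is why the summed curve is dominated by that central peak.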

Plot (2, 1) shows the belief propagation algorithm applied to the audio features.

Plot (2, 2) shows the belief propagation algorithm applied to the video features.

Plot (2, 3) shows belief propagation on the log-likelihood series [1, -2, -1, -2, -1].

Plot (3, 1) shows belief propagation on the log-likelihood series [1, -1, -2, -5, -6].
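The README does not spell out the graphical model used by beleif_propogation.m, but a common reading of "belief propagation on a log-likelihood series" is sum-product message passing on a chain of binary variables, where each entry is a node's local log-likelihood ratio. The sketch below follows that assumption; the `coupling` parameter (how strongly neighbouring states agree) is purely illustrative:

```python
import numpy as np

def chain_belief_propagation(llr, coupling=0.8):
    """Sum-product BP on a chain of binary variables.

    llr[i] is node i's local log-likelihood ratio; `coupling` is the
    (assumed) probability that neighbouring states agree.
    """
    llr = np.asarray(llr, float)
    n = len(llr)

    def pass_msg(m_in, local):
        # Combine incoming message with local evidence, then push the
        # result through a symmetric 2-state pairwise potential.
        h = m_in + local
        a, d = coupling, 1.0 - coupling
        return np.log(a * np.exp(h) + d) - np.log(d * np.exp(h) + a)

    fwd = np.zeros(n)   # message arriving at node i from the left
    bwd = np.zeros(n)   # message arriving at node i from the right
    for i in range(1, n):
        fwd[i] = pass_msg(fwd[i - 1], llr[i - 1])
    for i in range(n - 2, -1, -1):
        bwd[i] = pass_msg(bwd[i + 1], llr[i + 1])
    return llr + fwd + bwd  # posterior log-odds per node

print(chain_belief_propagation([1, -2, -1, -2, -1]))
```

On a chain, one forward and one backward sweep already give the exact marginals, so no iteration is needed; for the series [1, -2, -1, -2, -1] the mostly negative evidence pulls every node's posterior log-odds below its local value.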
