Gender-Prediction-using-Audio-Data

This was my first project working with audio data. I learned how features are extracted from audio using MFCC (Mel-Frequency Cepstral Coefficients), and then used those features to train a sequential model with 6 hidden layers.
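The MFCC pipeline mentioned above (the notebook's `get_MFCC` function presumably wraps an audio library) boils down to: frame the signal, window it, take the power spectrum, apply a mel filterbank, log-compress, then DCT. A minimal NumPy/SciPy sketch of that idea, with illustrative parameter values (frame size, hop, filter count are assumptions, not the notebook's actual settings):

```python
import numpy as np
from scipy.fftpack import dct

def get_mfcc(signal, sr=16000, n_mfcc=13, n_fft=512, hop=160, n_mels=26):
    """Minimal MFCC sketch: frame -> window -> power spectrum
    -> mel filterbank -> log -> DCT. Illustrative only."""
    # Slice the signal into overlapping frames and apply a Hann window
    frames = np.array([signal[s:s + n_fft] * np.hanning(n_fft)
                       for s in range(0, len(signal) - n_fft + 1, hop)])
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft

    # Build a triangular mel-spaced filterbank
    def hz_to_mel(hz): return 2595 * np.log10(1 + hz / 700)
    def mel_to_hz(mel): return 700 * (10 ** (mel / 2595) - 1)
    mel_points = np.linspace(0, hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)

    # Filterbank energies, log compression, then DCT to decorrelate
    mel_energy = np.maximum(power @ fbank.T, 1e-10)
    return dct(np.log(mel_energy), type=2, axis=1, norm='ortho')[:, :n_mfcc]

mfcc = get_mfcc(np.random.randn(16000))  # one second of noise at 16 kHz
print(mfcc.shape)  # -> (97, 13): 97 frames, 13 coefficients each
```

In practice a library such as `librosa` or `python_speech_features` does all of this in one call; the sketch only shows where the "mel-frequency" and "cepstral" parts of the name come from.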

The dataset can be downloaded from: https://drive.google.com/file/d/1g64EswaS5PtwIg-Y0ZmWwvSK1DgYvUuc/view?usp=sharing

Download the .ipynb notebook. To try the trained model without retraining, run only these cells:

  1. the cell that installs the libraries
  2. the cell that imports the libraries
  3. the get_MFCC function definition
  4. the last 5 cells after the markdown comment "Testing the Model with your own Voice"
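The "sequential model with 6 hidden layers" is presumably a Keras `Sequential` network saved as `model.h5`. As a stand-in (the actual layer widths and activations in the notebook are unknown), here is an untrained NumPy forward pass showing the shape of such a model: MFCC features in, six dense hidden layers, one sigmoid output for the binary gender label:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x): return np.maximum(0, x)
def sigmoid(x): return 1 / (1 + np.exp(-x))

# Assumed layer widths: 13 MFCC inputs -> 6 hidden layers -> 1 output
sizes = [13, 256, 128, 128, 64, 64, 32, 1]
weights = [rng.standard_normal((a, b)) * np.sqrt(2 / a)   # He-style init
           for a, b in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

def predict(x):
    # Six ReLU hidden layers, then a sigmoid output probability
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)
    return sigmoid(x @ weights[-1] + biases[-1])

features = rng.standard_normal(13)   # e.g. a mean-pooled MFCC vector
p = predict(features)                # probability of one class
print(float(p))
```

With the real `model.h5`, the equivalent inference step would be loading the file with Keras and calling `model.predict` on the extracted MFCC features, as the last cells of the notebook do.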

Save the notebook in the same folder that contains the male_clips and female_clips folders. Save the results folder (which contains the model.h5 file) in that same folder as well.
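Following those instructions, the working directory would look like this (folder and file names from the text above; the notebook filename is illustrative):

```
project/
├── gender_prediction.ipynb   # illustrative name — use the downloaded notebook
├── male_clips/
├── female_clips/
└── results/
    └── model.h5
```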
