You would need to add code that records audio and then extracts the acoustic properties from the audio. #3
Comments
I found this lib: a Core Graphics-based audio waveform plot capable of visualizing any float array as a buffer or rolling plot. But when I test the audio waveform float array with your iOS demo, the result is female, not male: "Probability spoken by a male: 0.970215%"
It's not enough to convert the audio to floats; you need to extract things like the mean frequency from this data. The blog post links to source code that does this. Unfortunately it is in the R language, which is a little weird.
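For example, one common way to define the mean frequency of a buffer of float samples is a power-weighted average over the spectrum. A minimal NumPy sketch (the exact definition used by the R code in sound.R may differ):

```python
import numpy as np

def mean_frequency(samples, sample_rate):
    """Power-weighted mean of the spectrum, in Hz.

    samples: 1-D float array of audio samples.
    sample_rate: samples per second.
    """
    # Power spectrum of the real-valued signal.
    power = np.abs(np.fft.rfft(samples)) ** 2
    # Frequency (Hz) corresponding to each FFT bin.
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    # Average frequency, weighted by how much energy each bin carries.
    return np.sum(freqs * power) / np.sum(power)
```

For a pure 440 Hz tone this returns roughly 440 Hz; real speech spreads its energy over many bins, so the value summarizes the whole spectrum.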
Hi~
Yes. The code is in the file sound.R. |
Can we get this R code in Python?
Hi~
Great blog here.
http://machinethink.net/blog/tensorflow-on-ios/?utm_source=Swift_Developments&utm_medium=email&utm_campaign=Swift_Developments_Issue_79
And how do I get an mp3 file's properties like your maleExample data? As you said, "you would first need to convert the audio into these 20 acoustic features."
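To go from an mp3 to features like those in the maleExample array, you would first decode the file to float samples (e.g. with an audio library such as librosa or pydub), then compute spectral statistics from them. A rough NumPy sketch of a few such statistics; the names mirror the dataset's columns, but the formulas here are illustrative, not the exact ones from sound.R:

```python
import numpy as np

def spectral_features(samples, sample_rate):
    """Compute a few spectrum-based statistics from float samples.

    Returns a dict with illustrative versions of some of the
    dataset's acoustic features (meanfreq, sd, Q25, Q75, IQR).
    """
    power = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    # Treat the normalized power spectrum as a probability
    # distribution over frequency.
    p = power / np.sum(power)
    meanfreq = np.sum(freqs * p)
    sd = np.sqrt(np.sum(p * (freqs - meanfreq) ** 2))
    # Quartiles of that spectral distribution.
    cdf = np.cumsum(p)
    q25 = freqs[np.searchsorted(cdf, 0.25)]
    q75 = freqs[np.searchsorted(cdf, 0.75)]
    return {
        "meanfreq": meanfreq,
        "sd": sd,
        "Q25": q25,
        "Q75": q75,
        "IQR": q75 - q25,
    }
```

The real feature extraction has more features than this sketch covers (skewness, kurtosis, spectral entropy, and so on), so to reproduce the model's inputs exactly you would still need to port the definitions in sound.R one by one.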