Releases · tryolabs/TLSphinx
Fix AVAudioSession setup
Merge pull request #60 from tryolabs/development Fix AVAudioSession category setup
Xcode 10.2 Support
- Xcode 10.2 Support
- Update install instructions
First release
Merge pull request #46 from tryolabs/development 1.0.0
Swift 3 and more
This is a beta version. Please create issues, and PRs ;), if you think something needs to be addressed.
There is plenty of low-hanging fruit, so don't hesitate to be active about anything.
Release Notes:
- General code improvements (fixed access rights, prefer `guard` over `if`, etc.) and three important changes:
- New API to add words to the recognition dictionary at runtime. Be aware that new words can't be added while recognition is in progress; add them before starting a recognition process. The API expects an array of `String` tuples of the form `(word: "HELLO", phones: "HH EH L OW")`. The first component is the word in plain English; the second is the pronunciation phones as they appear in the cmudict (more here: http://www.speech.cs.cmu.edu/tools/lextool.html). In the future the second component should be calculated automatically.
- The decode functions now throw exceptions where applicable.
- A new approach to the live decode logic using `AVAudioConverter`. The idea is to read the data in a format more convenient for iOS (Float32, 16000 Hz) and convert it to the Sphinx format (Int16, 16000 Hz). `AVAudioConverter` is only available from iOS 9.0, so the deployment target needed to change. This should address #24 and #33.
- Support for Swift 3. Thanks to @cgamache.
- Upgraded the CMUSphinx binaries and headers to 5prealpha. Thanks again to @cgamache.
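The tuple format for the new add-words API can be sketched as follows. This is a hypothetical usage sketch: the `decoder` instance and the `add(words:)` method name are assumptions, so check the project README for the exact API; only the `(word, phones)` tuple shape comes from the release note above.

```swift
// Hypothetical sketch of the runtime add-words API described above.
// The tuple shape (word, phones) follows the release note; the
// method name `add(words:)` is an assumption.
let newWords: [(word: String, phones: String)] = [
    (word: "HELLO", phones: "HH EH L OW"),
    (word: "WORLD", phones: "W ER L D")  // phones use cmudict conventions
]

// Add words BEFORE starting recognition; adding while a decode
// is in progress is not supported.
decoder.add(words: newWords)
```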
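The format conversion behind the new live-decode logic can be sketched with Apple's `AVAudioConverter` (available from iOS 9.0). This is a minimal illustration of the two formats named in the release note, not TLSphinx's actual internal code:

```swift
import AVFoundation

// iOS-friendly capture format: Float32, 16 000 Hz, mono.
let captureFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                  sampleRate: 16_000,
                                  channels: 1,
                                  interleaved: false)!

// Sphinx-friendly target format: Int16, 16 000 Hz, mono.
let sphinxFormat = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                 sampleRate: 16_000,
                                 channels: 1,
                                 interleaved: false)!

// The converter turns buffers captured in `captureFormat` into
// buffers pocketsphinx can consume.
let converter = AVAudioConverter(from: captureFormat, to: sphinxFormat)!
```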
macOS compatibility
Fixed the live decoding functionality to improve memory efficiency and to work on macOS. Thanks @ss-pq!
Release 0.0.3
Updated `module.modulemap` to match the new syntax.
Release 0.0.2
Updated the code to work on Xcode 7.x and Swift 2. Thanks to @hewigovens for the changes.
Release 0.0.1
Allows decoding speech from a file or directly from the mic.
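Decoding from a file can be sketched as below. The `Config`, `Decoder`, and `decodeSpeechAtPath` names follow the project's README of that era, but treat them as assumptions and double-check against the current API; the model, dictionary, and audio paths are placeholders.

```swift
import TLSphinx

// Placeholder paths to a pocketsphinx acoustic model, dictionary,
// and language model — point these at real files.
let hmm  = "path/to/en-us"
let dict = "path/to/cmudict-en-us.dict"
let lm   = "path/to/en-us.lm"

if let config = Config(args: ("-hmm", hmm), ("-dict", dict), ("-lm", lm)),
   let decoder = Decoder(config: config) {
    // Decode a raw audio file and receive the hypothesis asynchronously.
    decoder.decodeSpeechAtPath("path/to/audio.raw") { hypothesis in
        if let hyp = hypothesis {
            print("Heard: \(hyp.text) (score \(hyp.score))")
        }
    }
}
```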