Acoustic Model Machine

This package can help you put together a dataset for training a custom acoustic adaptation model for Microsoft's Custom Speech Service.

It does so by using a video or audio file with matching subtitles to extract short passages of speech. These passages are saved to disk in the audio format Custom Speech Service requires. It then generates a transcription text file in the exact format the service expects. See the official Custom Speech Service documentation for more details on acoustic adaptation model datasets.

My favourite captioning service to request subtitle files from is Rev; I highly recommend their amazing people for captioning your content!

[gif of the command-line tool in action]

Installation

First, install Node.js for your operating system.
You'll also need ffmpeg installed. Windows in particular can be tricky; this guide is a good one.

Then, in your favourite terminal application, type the following:

npm install -g acoustic-model-machine

Usage

acoustic-model-machine --source=/path/to/audio.wav --subtitle=/path/to/subtitle.srt --output-dir=mydatasetdir

Options

  • --source required - the path to the source file containing the speech you want to extract.
  • --subtitle required - the path to the subtitle file matching the source audio file. Must be in .srt format.
  • --output-dir optional - the path where you'd like the dataset saved. Defaults to the current working directory.
  • --output-prefix optional - a custom prefix for the extracted audio files. Defaults to speech. See the Output section below for an example.
  • --verbose optional - show more verbose logging output while generating a dataset.

Output

The completed dataset output will have the following structure:

mydatasetdir/
β”œβ”€β”€ audio
β”‚Β Β  β”œβ”€β”€ speech000001.wav
β”‚Β Β  β”œβ”€β”€ speech000002.wav
β”‚Β Β  β”œβ”€β”€ speech000003.wav
β”‚Β Β  β”œβ”€β”€ speech000004.wav
β”‚Β Β  β”œβ”€β”€ speech000005.wav
β”‚Β Β  β”œβ”€β”€ speech000006.wav
β”‚Β Β  β”œβ”€β”€ speech000007.wav
β”‚Β Β  β”œβ”€β”€ speech000008.wav
β”‚Β Β  β”œβ”€β”€ speech000009.wav
β”‚Β Β  └── speech000010.wav
└── transcription.txt
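Each line of transcription.txt pairs an audio filename with its transcript, separated by a tab, which matches the layout Custom Speech Service expects. The helper below is a hypothetical sketch (not part of this package) showing that layout, with illustrative sample data:

```python
# Sketch: write a transcription file in the tab-separated layout
# Custom Speech Service expects (filename<TAB>transcript per line).
# write_transcription and the sample segments are illustrative only.

def write_transcription(path, segments):
    """segments: iterable of (wav_filename, transcript) pairs."""
    with open(path, "w", encoding="utf-8") as f:
        for wav_name, text in segments:
            f.write(f"{wav_name}\t{text}\n")

segments = [
    ("speech000001.wav", "hello and welcome to the show"),
    ("speech000002.wav", "today we talk about speech recognition"),
]
write_transcription("transcription.txt", segments)
```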

Importing the acoustic adaptation dataset into Custom Speech Service

  1. Compress all audio files in the dataset into a flat zip file.
  2. Follow the official documentation to import.
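Step 1 can be scripted. Here's a minimal sketch using Python's zipfile module that archives the wav files with no directory prefixes, i.e. a "flat" zip (the paths and demo files are examples only):

```python
import os
import tempfile
import zipfile

def zip_flat(audio_dir, zip_path):
    """Zip every .wav file in audio_dir into zip_path with no folder prefixes."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in sorted(os.listdir(audio_dir)):
            if name.endswith(".wav"):
                # arcname=name drops the directory path, keeping the archive flat
                zf.write(os.path.join(audio_dir, name), arcname=name)

# Demo with throwaway files so the sketch is self-contained:
tmp = tempfile.mkdtemp()
for i in range(1, 4):
    open(os.path.join(tmp, f"speech{i:06d}.wav"), "wb").close()
zip_flat(tmp, os.path.join(tmp, "dataset.zip"))
```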

Roadmap

  • Support for subtitles embedded directly in the source file. For now, these need to be extracted manually with ffmpeg before using this tool.
  • Notice anything else missing? File an issue πŸ˜„
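Until embedded subtitles are supported, you can pull a subtitle track out with ffmpeg yourself. A sketch that builds the command line (the 0:s:0 stream specifier assumes the first subtitle track is the one you want; adjust the track index for others):

```python
import subprocess  # used to invoke ffmpeg; the call is shown but commented out

def ffmpeg_subtitle_cmd(source, out_srt, track=0):
    """Build an ffmpeg argv that extracts subtitle track `track` as .srt."""
    return [
        "ffmpeg", "-i", source,
        "-map", f"0:s:{track}",  # select the Nth subtitle stream
        out_srt,
    ]

cmd = ffmpeg_subtitle_cmd("talk.mkv", "talk.srt")
# To actually run it (requires ffmpeg on your PATH):
# subprocess.run(cmd, check=True)
```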

License

MIT.
