
Probing the phonetic and phonological knowledge of tones in Mandarin TTS models

link to pdf

Update

@gzfffff has provided an updated version of this repo with multiple bugs corrected (thanks!).

Thanks to everyone who has pointed out bugs in this repo. I was surprised that so many of you were interested in this project. Since I did not expect people to run my training code, I am sorry that detailed steps for training the model were not provided and that some common bugs were not fixed. I have now fixed a bug in this repo.

Another common replication problem is that training the model from scratch does not result in natural synthesized speech. I encountered the same issue. So, before training, I initialized the model with the weights from a pre-trained English model (link). With the pre-trained weights as initialization, the Chinese model converged very quickly and was able to produce natural speech.
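
For reference, warm-starting from the English checkpoint can be done roughly as in the sketch below. It assumes NVIDIA's Tacotron 2 codebase is on the path (its `create_hparams` and `load_model` helpers), an English checkpoint file named `tacotron2_statedict.pt`, and a shape-matching filter for incompatible layers; this is not the exact script used in this repo.

```python
# Minimal warm-start sketch. Assumptions: NVIDIA's Tacotron 2 code is importable,
# the English checkpoint is tacotron2_statedict.pt, and shape-mismatched layers
# (e.g. the character embedding table) are simply left at their random init.
import torch
from hparams import create_hparams
from train import load_model

hparams = create_hparams()
model = load_model(hparams)            # the Mandarin Tacotron 2 to be trained
model_dict = model.state_dict()

# Pre-trained English Tacotron 2 weights.
english = torch.load("tacotron2_statedict.pt", map_location="cpu")["state_dict"]

# Copy only the weights whose shapes match the Mandarin model.
compatible = {k: v for k, v in english.items()
              if k in model_dict and v.shape == model_dict[k].shape}
model_dict.update(compatible)
model.load_state_dict(model_dict)
# ...then continue with normal training from this initialization.
```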

The training steps have also been updated (see below).

Data

Audio samples can be found here: online demo

All synthesized stimuli can be accessed here.

Training data can be found here.

Demo

Online Colab demo.

You can directly run the TTS models (Tacotron2 and WaveGlow) on Google Colab (with a powerful GPU).

Running locally.

Requires torch == 1.1.0 (the latest version will not work!)

  1. Download the pre-trained Mandarin models from this folder.
  2. Download the pre-trained Chinese BERT (BERT-wwm-ext, Chinese).
  3. Run `inference_bert.ipynb`, or use the following command line:

python synthesize.py --text ./stimuli/tone3_stimuli --use_bert --bert_folder path_to_bert_folder \
  --tacotron_path path_to_pre-trained_tacotron2 --waveglow_path path_to_pre-trained_waveglow \
  --out_dir path_output_dir

Note: the current implementation is based on NVIDIA's public implementations of Tacotron 2 and WaveGlow.
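
For orientation, the core of the inference pipeline (text → Tacotron 2 mel spectrogram → WaveGlow waveform) looks roughly like the sketch below. It follows NVIDIA's public code (`text_to_sequence`, `load_model`, `waveglow.infer`); the checkpoint paths, input text, and cleaner name are placeholders, and the BERT conditioning enabled by `--use_bert` is omitted.

```python
# Rough inference sketch following NVIDIA's Tacotron 2 / WaveGlow code.
# Checkpoint paths, input text, and cleaner names are placeholders; the
# BERT conditioning used by synthesize.py --use_bert is not shown here.
import numpy as np
import torch
from scipy.io.wavfile import write
from hparams import create_hparams
from train import load_model
from text import text_to_sequence

hparams = create_hparams()

# Load the pre-trained Mandarin Tacotron 2.
tacotron2 = load_model(hparams)
tacotron2.load_state_dict(
    torch.load("path_to_pre-trained_tacotron2")["state_dict"])
tacotron2.cuda().eval()

# Load the pre-trained WaveGlow vocoder.
waveglow = torch.load("path_to_pre-trained_waveglow")["model"]
waveglow.cuda().eval()

# Convert the input text to a symbol-ID sequence.
sequence = np.array(text_to_sequence("ni3 hao3", ["basic_cleaners"]))[None, :]
sequence = torch.from_numpy(sequence).long().cuda()

with torch.no_grad():
    _, mel_postnet, _, _ = tacotron2.inference(sequence)  # mel spectrogram
    audio = waveglow.infer(mel_postnet, sigma=0.666)       # waveform

write("output.wav", hparams.sampling_rate, audio[0].cpu().numpy())
```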

Training steps

Requires torch == 1.1.0 (the latest version will not work!)

  1. Download the dataset.
  2. Download the pre-trained Chinese BERT (BERT-wwm-ext, Chinese).
  3. Run the scripts in the preprocessing folder, in this order:
    1. partition.py
    2. preprocess_audio.py
    3. preprocess_text.py
    4. extract_bert.py (a minimal sketch of the BERT feature extraction is shown after this list)
  4. Run the training script (detailed descriptions of each argument can be found in the source code).
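
As a rough illustration of what extract_bert.py does, the sketch below pulls one contextual embedding per character from the Chinese BERT. It assumes the Hugging Face transformers package and the hfl/chinese-bert-wwm-ext checkpoint; the actual script loads BERT from a local folder, and the repo's pinned torch 1.1.0 environment may rely on an older BERT library and different pooling.

```python
# Illustrative BERT feature extraction (assumptions: Hugging Face
# "transformers" and the hfl/chinese-bert-wwm-ext checkpoint; the real
# extract_bert.py may load from a local folder and pool layers differently).
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("hfl/chinese-bert-wwm-ext")
bert = BertModel.from_pretrained("hfl/chinese-bert-wwm-ext")
bert.eval()

def extract_bert_features(sentence):
    """Return one 768-dim contextual embedding per input character."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = bert(**inputs)
    # Drop the [CLS] and [SEP] positions so features align with characters.
    return outputs.last_hidden_state[0, 1:-1]

features = extract_bert_features("今天天气很好")
print(features.shape)  # (number of characters, 768)
```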

References

This project has benefited immensely from the following works.
Pre-Trained Chinese BERT with Whole Word Masking
Tacotron 2 - PyTorch implementation with faster-than-realtime inference
WaveGlow: a Flow-based Generative Network for Speech Synthesis
A Demo of MTTS Mandarin/Chinese Text to Speech FrontEnd
Open-source mandarin speech synthesis data
只用同一声调的字可以造出哪些句子? (What sentences can be made using only characters of the same tone?)