---
layout: hub_detail
background-class: hub-background
body-class: hub
title: Tacotron 2
summary: The Tacotron 2 model for generating mel spectrograms from text
category: researchers
image: nvidia_logo.png
author: NVIDIA
tags: [audio]
github-link:
github-id: NVIDIA/DeepLearningExamples
featured_image_1: tacotron2_diagram.png
featured_image_2: no-image
accelerator: cuda
order: 10
demo-model-link:
---

Model Description

The Tacotron 2 and WaveGlow models form a text-to-speech system that enables users to synthesize natural-sounding speech from raw transcripts without any additional prosody information. The Tacotron 2 model produces mel spectrograms from input text using an encoder-decoder architecture. WaveGlow (also available via torch.hub) is a flow-based model that consumes the mel spectrograms to generate speech.

This implementation of the Tacotron 2 model differs from the model described in the paper: it uses Dropout instead of Zoneout to regularize the LSTM layers.
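
As a rough sketch of the data flow, assuming this implementation's published configuration (80 mel channels and a 256-sample hop at 22050 Hz; the batch size and lengths below are illustrative, not fixed):

import torch

# Illustrative shapes only; values other than the 80 mel channels and the
# 256-sample hop are made up for this sketch.
batch, text_len, n_frames = 1, 128, 500
text_ids = torch.zeros(batch, text_len, dtype=torch.long)  # Tacotron 2 input: encoded characters
mel = torch.zeros(batch, 80, n_frames)                     # Tacotron 2 output: mel spectrogram
audio = torch.zeros(batch, n_frames * 256)                 # WaveGlow output: waveform samples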

Example

In the example below:

  • Pretrained Tacotron 2 and WaveGlow models are loaded from torch.hub.
  • Given a tensor representation of the input text ("Hello world, I missed you so much"), Tacotron 2 generates a mel spectrogram as shown in the illustration.
  • WaveGlow generates sound from the mel spectrogram.
  • The output sound is saved to an 'audio.wav' file.

To run the example you need some extra Python packages installed. These are needed for preprocessing the text and audio, as well as for display and input/output.

pip install numpy scipy librosa unidecode inflect
apt-get update
apt-get install -y libsndfile1

Load the Tacotron 2 model pre-trained on the LJ Speech dataset and prepare it for inference:

import torch
tacotron2 = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_tacotron2', model_math='fp16')
tacotron2 = tacotron2.to('cuda')
tacotron2.eval()
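
If no CUDA-capable GPU is available, the same entry point can be loaded in full precision and kept on the CPU. This is a sketch, assuming the entry point also accepts model_math='fp32' (mirroring the 'fp16' call above); inference on the CPU will be much slower:

# CPU fallback: load full-precision weights instead (model_math='fp32' is
# assumed to be accepted, mirroring the 'fp16' call above)
tacotron2 = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_tacotron2', model_math='fp32')
tacotron2 = tacotron2.to('cpu')
tacotron2.eval()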

Load the pretrained WaveGlow model:

waveglow = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_waveglow', model_math='fp16')
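# Remove the weight-normalization reparametrization used during training;
# it is not needed for inference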
waveglow = waveglow.remove_weightnorm(waveglow)
waveglow = waveglow.to('cuda')
waveglow.eval()

Now, let's make the model say:

text = "Hello world, I missed you so much."

Format the input using the utility methods:

utils = torch.hub.load('NVIDIA/DeepLearningExamples:torchhub', 'nvidia_tts_utils')
sequences, lengths = utils.prepare_input_sequence([text])
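
The utility pads and encodes the text into tensors of character IDs plus their lengths; you can sanity-check the shapes before inference (the exact sizes depend on the input text):

print(sequences.shape, sequences.dtype)  # padded character IDs: [batch, max_text_len]
print(lengths)                           # length of each item in the batch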

Run the chained models:

with torch.no_grad():
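    # Tacotron 2 returns the mel spectrogram plus auxiliary outputs (lengths, alignments)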
    mel, _, _ = tacotron2.infer(sequences, lengths)
    audio = waveglow.infer(mel)
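# Take the first (only) waveform in the batch and move it to the CPU as a NumPy array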
audio_numpy = audio[0].data.cpu().numpy()
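# LJ Speech audio is sampled at 22050 Hz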
rate = 22050

You can write it to a file and listen to it:

from scipy.io.wavfile import write
write("audio.wav", rate, audio_numpy)

Alternatively, play it right away in a notebook with IPython widgets:

from IPython.display import Audio
Audio(audio_numpy, rate=rate)

Details

For detailed information on model input and output, training recipes, inference, and performance, visit github and/or NGC.

References