Vid2Speech: speech reconstruction from silent video

Ariel Ephrat and Shmuel Peleg

Speechreading is a notoriously difficult task for humans to perform. In this paper we present an end-to-end model based on a convolutional neural network (CNN) for generating an intelligible acoustic speech signal from silent video frames of a speaking person. The proposed CNN generates sound features for each frame based on its neighboring frames. Waveforms are then synthesized from the learned speech features to produce intelligible speech. We show that by leveraging the automatic feature learning capabilities of a CNN, we can obtain state-of-the-art word intelligibility on the GRID dataset, and show promising results for learning out-of-vocabulary (OOV) words.
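Below is a minimal sketch of the idea described in the abstract: a CNN that maps a short window of silent video frames to a vector of acoustic features for the center frame, from which a vocoder would later synthesize a waveform. This is an illustrative assumption using PyTorch; the layer sizes, the 9-frame context window, and the 128-dimensional feature output are placeholders, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class FramesToSpeechFeatures(nn.Module):
    """Toy CNN mapping a stack of neighboring video frames to acoustic features."""

    def __init__(self, context_frames=9, feature_dim=128):
        super().__init__()
        # The context frames are stacked along the channel axis of a 2D CNN.
        self.encoder = nn.Sequential(
            nn.Conv2d(context_frames, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, feature_dim)

    def forward(self, frames):
        # frames: (batch, context_frames, height, width) grayscale face/mouth crops
        x = self.encoder(frames)
        return self.head(x.flatten(1))

# Example: predict speech features for one 9-frame window of 64x64 crops.
model = FramesToSpeechFeatures()
window = torch.randn(1, 9, 64, 64)
speech_features = model(window)  # shape (1, 128); a separate synthesis step turns these into audio
```

The key design point the sketch illustrates is that each output is conditioned on a frame's neighbors, so lip motion context (not a single still image) drives the predicted sound features.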

Source Code

Dataset

Evaluation

  • subjective

Further Information