"LipNet: End-to-End Sentence-level Lipreading" in PyTorch
Python toolkit for Visual Speech Recognition
Speaker-Independent Speech Recognition using Visual Features
Visual speech recognition with face inputs: code and models for F&G 2020 paper "Can We Read Speech Beyond the Lips? Rethinking RoI Selection for Deep Visual Speech Recognition"
Gateway combining speech processing, 3D vision, and language processing; deployed with Django
An attempt to use k2, icefall, and Lhotse for lip reading, adapting them to the lip-reading task; support for additional lip-reading datasets is planned.
Implementation of "Combining Residual Networks with LSTMs for Lipreading" in Keras and TensorFlow 2.0
LipReadingITA: Keras implementation of the method described in the paper 'LipNet: End-to-End Sentence-level Lipreading'. Research project for University of Salerno.
Visual Speech Recognition for Multiple Languages
EMOLIPS: Two-Level Approach for Lip-Reading Emotional Speech
Deep visual speech recognition for Arabic words
Online Knowledge Distillation using LipNet and an Italian dataset. Master's Thesis Project.
A PyTorch implementation of the Deep Audio-Visual Speech Recognition paper.
Visual speech recognition using deep learning methods
Auto-AVSR: Lip-Reading Sentences Project