AR-based Android application using image processing and machine learning techniques that makes still images look like they are talking, with audio generation and lip movements synced over that audio.
Updated Apr 28, 2020
A package for simple, expressive, and customizable text-to-speech with an animated face.
Lip Language Video Data
Adventure Game Studio (AGS) module for lip sync
Create a deepfake video by simply uploading the original video and specifying the text the character will read.
Revolutionize virtual interactions with a Unity-based chatbot combining GPT-generated dialogue, Oculus Lip Sync, and Google Cloud Speech Recognition for lifelike conversations. See the running version on the Upwork page.
Zippy Talking Avatar uses Azure Cognitive Services and the OpenAI API to generate text and speech. It is built with Next.js and Tailwind CSS. This avatar responds to user input by generating both text and speech, offering a dynamic and immersive user experience.
Audio-Visual Lip Synthesis via Intermediate Landmark Representation
AI Talking Head: create video from plain text or an audio file in minutes; supports 100+ languages and 350+ voice models.
YerFace! A stupid facial performance capture engine for cartoon animation.
Keras version of Syncnet, by Joon Son Chung and Andrew Zisserman.
3D avatar lip synchronization from speech (JALI-based face rigging).
This project is a digital human that can talk and listen to you. It uses OpenAI's GPT-3 to generate responses, OpenAI's Whisper to transcribe the audio, Eleven Labs to generate the voice, and Rhubarb Lip Sync to generate the lip sync.
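The pipeline that description outlines (speech-to-text, response generation, text-to-speech, lip-sync cue extraction) can be sketched as a single conversational turn. This is a minimal illustration only: the four component functions below are hypothetical stand-ins, not the project's real API — in the actual app they would call Whisper, GPT-3, Eleven Labs, and Rhubarb Lip Sync respectively.

```python
# Hypothetical sketch of one listen -> think -> speak turn of such a
# digital human. Each stand-in function marks where a real service call
# (Whisper / GPT-3 / Eleven Labs / Rhubarb Lip Sync) would go.

def transcribe(audio: bytes) -> str:
    """Stand-in for Whisper: incoming audio -> user text."""
    return "hello there"

def generate_reply(user_text: str) -> str:
    """Stand-in for a GPT-3 completion: user text -> reply text."""
    return f"You said: {user_text}"

def synthesize_voice(reply_text: str) -> bytes:
    """Stand-in for Eleven Labs TTS: reply text -> spoken audio."""
    return reply_text.encode("utf-8")

def lip_sync_cues(audio: bytes) -> list[str]:
    """Stand-in for Rhubarb Lip Sync: audio -> mouth-shape cue sequence."""
    return ["A", "B", "A"]

def respond(audio_in: bytes) -> tuple[bytes, list[str]]:
    """One full turn: hear the user, reply, speak, and animate the mouth."""
    user_text = transcribe(audio_in)
    reply_text = generate_reply(user_text)
    voice = synthesize_voice(reply_text)
    return voice, lip_sync_cues(voice)
```

The design point is that lip-sync cues are derived from the generated audio rather than the text, which is also how Rhubarb Lip Sync is driven in practice.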