This project is a digital human that talks with you and animates in response to your questions. It uses an NVIDIA API endpoint serving Meta's Llama 3 70B to generate responses, ElevenLabs to synthesize the voice, and Rhubarb Lip Sync to generate the lip-sync data.
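The pipeline is a three-stage chain: LLM text, then speech audio, then mouth-shape cues. Below is a minimal sketch of that chain in TypeScript (Node 18+, so `fetch` is built in). The endpoint URLs, model ID, env-var names, and file names are assumptions based on the public NVIDIA NIM, ElevenLabs, and Rhubarb Lip Sync interfaces, not the project's actual code.

```ts
import { execFile } from "node:child_process";
import { writeFile } from "node:fs/promises";
import { promisify } from "node:util";

const run = promisify(execFile);

// Sketch only: URLs, model name, and env vars are assumptions.
async function answerAndAnimate(question: string) {
  // 1. Generate a text response with Llama 3 70B via the NVIDIA API.
  const chat = await fetch("https://integrate.api.nvidia.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.NVIDIA_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "meta/llama3-70b-instruct",
      messages: [{ role: "user", content: question }],
    }),
  });
  const text = (await chat.json()).choices[0].message.content as string;

  // 2. Synthesize speech with ElevenLabs (the endpoint returns raw audio bytes).
  const tts = await fetch(
    `https://api.elevenlabs.io/v1/text-to-speech/${process.env.ELEVEN_VOICE_ID}`,
    {
      method: "POST",
      headers: {
        "xi-api-key": process.env.ELEVEN_API_KEY!,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ text }),
    },
  );
  await writeFile("reply.mp3", Buffer.from(await tts.arrayBuffer()));

  // 3. Run Rhubarb Lip Sync to extract mouth-shape cues as JSON.
  //    Rhubarb expects WAV/OGG input, so a real pipeline would transcode
  //    the MP3 to "reply.wav" first (e.g. with ffmpeg).
  await run("rhubarb", ["-f", "json", "-o", "reply.json", "reply.wav"]);

  return { text, audio: "reply.mp3", lipSync: "reply.json" };
}
```

The avatar front end would then play the audio while stepping through the timed mouth shapes in `reply.json` to drive the facial animation.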
Zippy Talking Avatar uses Azure Cognitive Services and the OpenAI API to generate text and speech. It is built with Next.js and Tailwind CSS. The avatar responds to user input with both generated text and spoken audio, offering a dynamic and immersive user experience.
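A minimal sketch of how such a Next.js route handler could wire the two services together, assuming the `openai` and `microsoft-cognitiveservices-speech-sdk` npm packages. The route path, env-var names, model, and voice are placeholders, not Zippy's actual implementation.

```ts
// app/api/respond/route.ts — hypothetical route; names are assumptions.
import OpenAI from "openai";
import * as sdk from "microsoft-cognitiveservices-speech-sdk";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function POST(req: Request) {
  const { prompt } = await req.json();

  // Text generation via the OpenAI API.
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: prompt }],
  });
  const text = completion.choices[0].message.content ?? "";

  // Speech synthesis via Azure Cognitive Services (Speech SDK).
  const speechConfig = sdk.SpeechConfig.fromSubscription(
    process.env.AZURE_SPEECH_KEY!,
    process.env.AZURE_SPEECH_REGION!,
  );
  speechConfig.speechSynthesisVoiceName = "en-US-JennyNeural";
  const audioConfig = sdk.AudioConfig.fromAudioFileOutput("reply.wav");
  const synthesizer = new sdk.SpeechSynthesizer(speechConfig, audioConfig);

  // speakTextAsync is callback-based; wrap it so we can await the audio.
  const audio = await new Promise<ArrayBuffer>((resolve, reject) => {
    synthesizer.speakTextAsync(
      text,
      (result) => { synthesizer.close(); resolve(result.audioData); },
      (err) => { synthesizer.close(); reject(err); },
    );
  });

  return new Response(audio, {
    headers: {
      "Content-Type": "audio/wav",
      "X-Avatar-Text": encodeURIComponent(text),
    },
  });
}
```

Returning the audio bytes directly (with the text in a header) keeps the client to a single round trip; a real app might instead stream the audio or return JSON with a URL.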
SIPHI is an assistive technology designed to help the deaf community learn to speak English. It combines lip-sync animation with a graphical representation of the voice to provide an immersive and effective learning experience.
The LipSync-Wav2Lip-Project repository provides lip synchronization for videos using the Wav2Lip deep learning model. This open-source project includes code that lets users synchronize the lip movements in a face video with an arbitrary audio track.
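A minimal sketch of invoking such a pipeline from Node, assuming the repository follows the upstream Wav2Lip CLI (`inference.py` with `--checkpoint_path`, `--face`, `--audio`, and `--outfile`). The checkpoint and file paths are placeholders.

```ts
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Drive the Wav2Lip inference script: read a face video and an audio track,
// write a new video whose lip movements match the audio.
async function lipSync(faceVideo: string, audioTrack: string, outFile: string) {
  await run("python", [
    "inference.py",
    "--checkpoint_path", "checkpoints/wav2lip_gan.pth",
    "--face", faceVideo,
    "--audio", audioTrack,
    "--outfile", outFile,
  ]);
}

lipSync("speaker.mp4", "speech.wav", "results/synced.mp4").catch(console.error);
```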