The LipSync-Wav2Lip-Project repository is a comprehensive solution for achieving lip synchronization in videos using the Wav2Lip deep learning model. This open-source project includes code that enables users to seamlessly synchronize lip movements with audio tracks.

LipSync-Wav2Lip-Project

This repository contains code for lip synchronization using Wav2Lip, a deep learning-based model.

How to Use this Code for Lip Synchronization

Step 1: Clone the Repository

git clone https://github.com/Dishantkharkar/LipSync-Wav2Lip-Project.git
cd LipSync-Wav2Lip-Project

Step 2: Install Requirements

pip install -r requirements.txt
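If you want to keep the project's dependencies isolated, you can first create a virtual environment (a common convention, not something this repository requires):

```shell
# Optional: create and activate an isolated virtual environment
python3 -m venv .venv
. .venv/bin/activate        # on Windows: .venv\Scripts\activate

# Then install the project's dependencies as above:
# pip install -r requirements.txt
```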

Step 3: Download Pretrained Model

Download the pretrained face-detection model s3fd.pth and save it in the face_detection/detection/sfd/ folder.
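For example, you can create the target folder first and then save the weights into it. The download URL below is a placeholder, not a real address — substitute the actual s3fd.pth link:

```shell
# Make sure the target folder exists before saving the weights
mkdir -p face_detection/detection/sfd

# Hypothetical download command -- replace <s3fd_download_url> with the real link:
# wget -O face_detection/detection/sfd/s3fd.pth "<s3fd_download_url>"
```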

Step 4: Obtain Additional Weights

Navigate to the official Wav2Lip repository and follow the instructions in the README to obtain additional weights.

Step 5: Add Video and Audio

Place your video and audio files in the project, following the folder structure shown in the repository.

Step 6: Lip Synchronization

Run the following command to perform lip synchronization:

python inference.py --checkpoint_path <path_to_pretrained_model> --face <path_to_face_video> --audio <path_to_audio_file>

Replace <path_to_pretrained_model>, <path_to_face_video>, and <path_to_audio_file> with the appropriate paths.
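As a sketch, you can check that the inputs exist before launching inference. The file names below are hypothetical placeholders, not files shipped with this repository:

```shell
# Hypothetical input paths -- substitute your own files
CHECKPOINT=checkpoints/wav2lip_gan.pth
FACE=input/face_video.mp4
AUDIO=input/audio.wav

# Fail early with a clear message if any input is missing
for f in "$CHECKPOINT" "$FACE" "$AUDIO"; do
  [ -f "$f" ] || echo "missing: $f"
done

# python inference.py --checkpoint_path "$CHECKPOINT" --face "$FACE" --audio "$AUDIO"
```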

Example using the newscript.txt file:

The result will be stored in the Result folder as result_audio.

Evaluation

For evaluating the model, you can use the provided evaluation script:

python evaluation/evaluate.py --model_path <path_to_model> --data_path <path_to_evaluation_data>

Replace <path_to_model> and <path_to_evaluation_data> with the paths to your trained model and evaluation dataset, respectively.

Additional Information

For more details and updates, refer to the original Wav2Lip README.
