Prakruthijainys/Lipsync

The objective of this project is to create an AI model proficient in lip-syncing, i.e. synchronizing an audio file with a video file, so that the lip movements of the characters in the given video accurately match the corresponding audio.

The link to the Google Colab notebook: https://colab.research.google.com/drive/1-j1n9JCn5a4rt089Pvve0BAiVbCFiruC

The input files used are a video file and an audio file.

Wav2Lip is used for lip-syncing, with a pretrained model downloaded from https://github.com/Rudrabha/Wav2Lip.
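
A minimal sketch of this setup step, assuming a standard clone-and-download workflow (it is not taken verbatim from this repository): the checkpoint file name wav2lip_gan.pth and the checkpoints/ location are assumptions, and the actual download links are given in the Wav2Lip README.

```python
# Clone the official Wav2Lip repository and check for the pretrained checkpoint.
import subprocess
from pathlib import Path

subprocess.run(["git", "clone", "https://github.com/Rudrabha/Wav2Lip.git"], check=True)

# Assumed location: place the downloaded pretrained weights here before running inference.
checkpoint = Path("Wav2Lip/checkpoints/wav2lip_gan.pth")
print("Checkpoint present:", checkpoint.exists())
```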

Prerequisites:

  1. Python 3.6
  2. Install the necessary packages using pip install -r requirements.txt (a matching requirements.txt is sketched after this list). The packages installed are:
    1. librosa==0.8.0
    2. numpy==1.17.1
    3. opencv-python
    4. opencv-contrib-python
    5. torch==1.1.0
    6. torchvision==0.3.0
    7. tqdm==4.45.0
    8. numba==0.48
    (pip itself is upgraded beforehand with pip install --upgrade pip; it is not a requirements entry.)
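
For reference, a requirements.txt matching the package list above could look like the following (a sketch reconstructed from the list, not the file shipped with the repository):

```
librosa==0.8.0
numpy==1.17.1
opencv-python
opencv-contrib-python
torch==1.1.0
torchvision==0.3.0
tqdm==4.45.0
numba==0.48
```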

Lip-syncing videos using the pre-trained model (inference): the result is saved in results/result_voice.mp4.
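
A minimal sketch of the inference step, assuming the Wav2Lip repository has been cloned and the pretrained checkpoint downloaded. The checkpoint, video, and audio paths below are placeholder names, not files from this project; --checkpoint_path, --face, and --audio are the flags documented in the Wav2Lip repository.

```python
# Run Wav2Lip's inference script on an input video/audio pair.
import subprocess

subprocess.run(
    [
        "python", "inference.py",
        "--checkpoint_path", "checkpoints/wav2lip_gan.pth",  # assumed checkpoint location
        "--face", "input_video.mp4",   # video whose lip movements will be re-synced
        "--audio", "input_audio.wav",  # audio track to sync the lips to
    ],
    cwd="Wav2Lip",  # run from inside the cloned repository
    check=True,
)
# By default, Wav2Lip writes the output to results/result_voice.mp4.
```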
