
UVA - Unsupervised Video Analysis

Required model files

You need to download the following files and set their paths in config.ini before running UVA.

Haarcascade model

VGG CNN model

PCA model
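The snippet below sketches what the corresponding entries in config.ini might look like. The section and key names here are illustrative assumptions, not UVA's actual schema; check config.ini and run_uva.py for the real ones.

```ini
; Hypothetical config.ini entries -- adapt the section and key
; names to whatever run_uva.py actually expects.
[models]
haarcascade = /path/to/haarcascade_frontalface_default.xml
vgg = /path/to/VGG_FACE.caffemodel
pca = /path/to/pca.pkl
```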

You can also produce your own PCA object with the script run_pca.py. Before running it, extract features of your photos by running caffe. The same PCA object can be applied to films of similar quality (similar resolution) and similar demographics, provided it is generated from large enough statistics; how wide its application range is needs more investigation. The PCA object offered here was generated from films similar to this.
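A minimal sketch of what producing such a PCA object involves, using sklearn and pickle as listed in the prerequisites. The feature dimensions, file name, and component count are assumptions for illustration; run_pca.py's actual inputs are the caffe-extracted features.

```python
# Sketch: fit a PCA object on CNN features and pickle it for reuse.
# The feature matrix here is a random stand-in for features extracted
# with caffe (one row per face photo); 4096 and 50 are assumed sizes.
import pickle

import numpy as np
from sklearn.decomposition import PCA

features = np.random.rand(200, 4096)  # stand-in for caffe features

pca = PCA(n_components=50)
reduced = pca.fit_transform(features)  # shape (200, 50)

# Persist the fitted PCA object so UVA can load it via config.ini.
with open("pca.pkl", "wb") as f:
    pickle.dump(pca, f)
```

The larger and more varied the photo set used to fit the PCA, the more likely the object is to transfer to other films of similar quality.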

Film example

The film example used in the tutorial can be found here

Prerequisite packages for UVA

numpy, pandas, caffe, pickle, ConfigParser, opencv 3.1, imutils, matplotlib, logging, sklearn

Start UVA

  • Edit config.ini to set the paths of inputs and outputs.
  • Turn on the switches in run_uva.py.
  • Start a Python shell and execute the script by typing execfile('run_uva.py') (Python 2; execfile was removed in Python 3).
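As a sketch of the first step, this is how config.ini paths can be read with the ConfigParser package from the prerequisites (shown with Python 3's configparser; in Python 2 the module is named ConfigParser). The section and key names are assumptions, not UVA's actual schema.

```python
# Minimal sketch: reading model paths from a config.ini-style file.
# Section and key names are illustrative, not UVA's actual ones.
import configparser  # named ConfigParser in Python 2

config = configparser.ConfigParser()
config.read_string("""
[models]
haarcascade = /path/to/haarcascade_frontalface_default.xml
vgg = /path/to/VGG_FACE.caffemodel
pca = /path/to/pca.pkl
""")

haar_path = config.get("models", "haarcascade")
pca_path = config.get("models", "pca")
```

In practice you would call config.read("config.ini") instead of read_string.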

Outputs

Output csv and html files are named according to the scheme "prefix_$num_part_$i.csv" (or .html).

  • photo_$num_part_$i.html displays the clustering result.
  • time_$num_part_$i.html summarizes the speaking sessions of each speaker.
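One reading of that scheme, as a hypothetical helper (UVA builds these names internally; this function only illustrates the pattern, assuming $num and $i are the two numeric fields):

```python
# Illustrative helper for the "prefix_$num_part_$i" naming scheme.
def output_name(prefix, num, i, ext):
    """Build an output file name like 'photo_4_part_0.html'."""
    return "{0}_{1}_part_{2}.{3}".format(prefix, num, i, ext)

print(output_name("photo", 4, 0, "html"))  # photo_4_part_0.html
```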

You need style.css for the html files to display properly.

About

Applying pre-trained CNN and clustering algorithms on recognizing people in videos
