This project lets you search for concepts in a video via a web interface. It first detects shots using the Twin Comparison Algorithm. Then the middle frame of each shot is chosen as the keyframe. Each keyframe is then fed to the VGG16 CNN to extract concepts (for each keyframe, the concept with the highest confidence is kept). Finally, the results are persisted in a MySQL database so that they can be browsed by the user through the web-based UI.
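The shot-detection and keyframe-selection steps above can be sketched as follows. This is a minimal illustration, not the project's actual code: the threshold values `t_b` (hard cut) and `t_s` (gradual transition) and the histogram representation are assumptions chosen for the example.

```python
def frame_diff(h1, h2):
    """Sum of absolute bin-wise differences between two frame histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def twin_comparison(histograms, t_b=0.5, t_s=0.15):
    """Return (start, end) frame indices of detected shots.

    Twin comparison uses two thresholds: a difference above t_b marks a
    hard cut, while a run of differences between t_s and t_b whose
    accumulated sum exceeds t_b marks a gradual transition.
    """
    shots, start, acc = [], 0, 0.0
    for i in range(1, len(histograms)):
        d = frame_diff(histograms[i - 1], histograms[i])
        if d >= t_b:                      # hard cut
            shots.append((start, i - 1))
            start, acc = i, 0.0
        elif d >= t_s:                    # candidate gradual transition
            acc += d
            if acc >= t_b:
                shots.append((start, i - 1))
                start, acc = i, 0.0
        else:
            acc = 0.0                     # candidate abandoned
    shots.append((start, len(histograms) - 1))
    return shots

def keyframes(shots):
    """Middle frame of each shot, as described above."""
    return [(s + e) // 2 for s, e in shots]
```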
The following setup has been tested on a MacBook Pro running macOS Catalina with the V3C1 dataset.
conda env create -f environment.yml
conda activate video-search-python
To update the environment, run `conda env update -f environment.yml --prune`.
- Install Homebrew
brew install mysql
mysql.server start
cp .env.example .env
- Set the `.env` file accordingly.

python keyframe_detection.py --input='path-to-videos'
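A sketch of how the scripts might pick up the database settings from the environment. The variable names (`DB_HOST`, `DB_USER`, `DB_PASSWORD`, `DB_NAME`) and their defaults are illustrative assumptions; the actual keys are the ones listed in `.env.example`.

```python
import os

def db_config():
    """Collect MySQL connection settings from environment variables.

    The variable names and fallback defaults here are assumptions for
    illustration, not the project's actual configuration keys.
    """
    return {
        "host": os.getenv("DB_HOST", "localhost"),
        "user": os.getenv("DB_USER", "root"),
        "password": os.getenv("DB_PASSWORD", ""),
        "database": os.getenv("DB_NAME", "video_search"),
    }
```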
python app.py