
VP-CAP Design and Architecture

VP-CAP (Video Platform with Content-Based Ad Placement)

  • When a user uploads a video, the video processing pipeline extracts the relevant objects and their positions in the video.
  • Businesses can upload banner ads and choose which objects their ads correspond to most closely.
  • When a video is streamed, ads are matched against the objects detected in it and displayed at the appropriate positions (where the object was detected and found to be relevant).
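The matching step described above can be sketched as follows. The record shapes and field names (`label`, `timestamp`, `bbox`) are illustrative assumptions, not the actual schema used by the services:

```python
# Sketch of matching banner ads to objects detected in a video.
# Field names (label, timestamp, bbox) are illustrative, not the real schema.

def match_ads(detections, ads_by_label):
    """For each detected object, pick an ad registered for that object label.

    detections: list of {"label": str, "timestamp": float, "bbox": [x, y, w, h]}
    ads_by_label: dict mapping an object label to an ad id
    Returns a list of placements: (timestamp, bbox, ad_id).
    """
    placements = []
    for det in detections:
        ad_id = ads_by_label.get(det["label"])
        if ad_id is not None:
            placements.append((det["timestamp"], det["bbox"], ad_id))
    return placements


detections = [
    {"label": "car", "timestamp": 12.5, "bbox": [100, 50, 200, 120]},
    {"label": "dog", "timestamp": 30.0, "bbox": [40, 60, 80, 90]},
]
ads = {"car": "ad-42"}
print(match_ads(detections, ads))  # one placement, for the car at t=12.5s
```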

Refer to the repositories for each component for details:

Currently, MongoDB stores three collections: Video, Ad, and Video Inference (the result produced by the processing handler). The video itself is stored in an IPFS cluster that connects to the IPFS network, so clients can request a video from it directly (it acts like a CDN).
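One possible shape for documents in the three collections, sketched as plain Python dicts; every field name here is an assumption for illustration, not the actual schema:

```python
# Illustrative document shapes for the three MongoDB collections.
# All field names below are assumptions for this sketch, not the real schema.

video = {
    "_id": "vid-001",
    "title": "Beach day",
    "ipfs_cid": "<content id>",   # used to fetch the video from the IPFS cluster
    "status": "processing",       # e.g. uploaded -> processing -> done
}

ad = {
    "_id": "ad-42",
    "business": "Acme Cars",
    "banner_ipfs_cid": "<content id>",
    "target_objects": ["car", "truck"],  # objects this ad corresponds to
}

video_inference = {
    "_id": "inf-001",
    "video_id": "vid-001",  # references the video document above
    "detections": [
        {"label": "car", "timestamp": 12.5, "bbox": [100, 50, 200, 120]},
    ],
}
```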

Result

Check Usage.md for details on how to run.

Improvements/Problems

  • For this project, a library that appeared to meet the requirement (detecting objects in video) was used. The ML-based object detection could be improved, or more complex pipelines could be built to extract other relevant information, e.g. from the audio track or by using the context of multiple video frames.

  • The DB may become a bottleneck, since all services send it requests. Possible solutions: increase the cluster size of the DB, or use multiple/partitioned (sharded) databases.

  • In the current design, a video submitted for processing can be stuck indefinitely if the handler fails abruptly without updating its status. A separate service could regularly check whether a video's processing time has crossed a threshold and resubmit it for processing.
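The watchdog idea in the last bullet could look roughly like this; the job record shape and the 30-minute threshold are assumptions for the sketch:

```python
# Sketch of a watchdog that finds videos stuck in "processing" so they
# can be resubmitted. Job shape and threshold are illustrative assumptions.
from datetime import datetime, timedelta


def find_stale_jobs(jobs, now, threshold=timedelta(minutes=30)):
    """Return ids of jobs that have been processing longer than threshold."""
    return [
        job["video_id"]
        for job in jobs
        if job["status"] == "processing" and now - job["started_at"] > threshold
    ]


now = datetime(2024, 1, 1, 12, 0)
jobs = [
    {"video_id": "vid-001", "status": "processing",
     "started_at": now - timedelta(hours=2)},    # stuck: should be resubmitted
    {"video_id": "vid-002", "status": "processing",
     "started_at": now - timedelta(minutes=5)},  # still within the threshold
]
print(find_stale_jobs(jobs, now))  # ['vid-001']
```

A real service would run this periodically against the Video collection and re-enqueue each stale id.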
