AI_Visual_Stream

All features of the system are controlled by hand gestures. A deep-learning model tracks the hand and fingers, and the tracked fingertips are then used to generate click gestures and access the other functionalities of the system.
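The repository does not name the tracking library in this description, but a minimal sketch of the idea, assuming MediaPipe Hands with OpenCV, could look like the following: the model returns hand landmarks per frame, and a "click" is inferred when the index fingertip and thumb tip come close together (a pinch). The threshold and gesture choice here are illustrative, not the project's exact logic.

```python
# Minimal sketch: hand tracking plus a pinch-based "click" gesture.
# Assumes MediaPipe Hands and OpenCV; the model and thresholds used
# by the actual project may differ.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)

CLICK_THRESHOLD = 0.05  # hypothetical normalized distance that counts as a "pinch"

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures frames in BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        index_tip, thumb_tip = lm[8], lm[4]  # fingertip indices in MediaPipe's hand model
        dist = ((index_tip.x - thumb_tip.x) ** 2 + (index_tip.y - thumb_tip.y) ** 2) ** 0.5
        if dist < CLICK_THRESHOLD:
            print("click")  # hook the system's functionality (e.g. image loading) here
    cv2.imshow("AI_Visual_Stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```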

Once the hands are tracked, a series of functionalities becomes available. Some of them are:

  1. Loading images (such as diagrams, use cases, and workflows)
  2. 3-dimensional graphics to visualize curves that are hard to display on 2-D surfaces
  3. Result analysis: most video-conferencing platforms don't provide this. The system analyses how each student performed and displays the results graphically with charts (see the sketch after this list).
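As a rough illustration of the result-analysis idea, per-student scores could be rendered as a bar chart. This sketch assumes matplotlib; the student names and scores are made up for demonstration and are not taken from the project.

```python
# Illustrative result-analysis chart; matplotlib is assumed and the
# names/scores below are hypothetical demonstration data.
import matplotlib.pyplot as plt

scores = {"Student A": 78, "Student B": 92, "Student C": 64, "Student D": 85}

plt.bar(scores.keys(), scores.values(), color="steelblue")
plt.ylabel("Score (%)")
plt.title("Result analysis per student")
plt.ylim(0, 100)
plt.tight_layout()
plt.show()
```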

Screenshots (desktop and mobile views)
