
This is a web application that takes different kinds of input (real-time, image, video) from the user and displays the emotion based on the facial expressions.


EmoViz (Facial Emotion Detector)

🙂😀😮😤😒😔😨

Demo video: pred_my_video.mp4

7 Emotions

Jump to the Installation section using the table of contents if you want to skip the project details.

Table of Contents
  1. Description
  2. About the Models
  3. Installation
  4. Inference
  5. License
  6. Credits

📝Description

Facial Emotion Detection is one of the most useful and toughest Machine Learning tasks because of the intra-class variation in expressions among people. The best use case of FED is in human-machine interaction. EmoViz is a facial emotion detection system built with TensorFlow that takes an image as input and outputs one of seven emotions (Neutral, Happy, Surprise, Angry, Disgust, Sad, Fear).
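As a small illustration, decoding the model's 7-way output into one of these labels might look like the sketch below. The label order here is an assumption for illustration; the real order depends on the training label map.

```python
import numpy as np

# Assumed label order -- check the actual training label map.
EMOTIONS = ["Neutral", "Happy", "Surprise", "Angry", "Disgust", "Sad", "Fear"]

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def decode_prediction(logits):
    """Return (label, confidence) for a 7-way logit vector."""
    probs = softmax(np.asarray(logits, dtype=np.float64))
    idx = int(np.argmax(probs))
    return EMOTIONS[idx], float(probs[idx])
```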

(back to top)

🤖About the Models

The system employs two models: one for face detection and another for facial emotion classification.

Face Detector

  • The face detection model is built with the TensorFlow 2.0 Object Detection API, using SSD MobileNet V2 as the pretrained model.
  • To build a face detection model, first gather the data to train it. There are many resources out there, or you can take some random pictures from the internet; the amount needed varies, and in my case it was 50 images for training and 10 for validation.
  • Then follow this documentation to train a face detector on the gathered data. If you want a video tutorial, check this.
  • I used Colab to train the model.
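TF2 Object Detection API models return detection boxes in normalized [ymin, xmin, ymax, xmax] coordinates. A tiny helper like the sketch below (an illustration, not code from this repo) converts them to pixel coordinates so the face can be cropped:

```python
import numpy as np

def boxes_to_pixels(boxes, image_height, image_width):
    """Convert normalized [ymin, xmin, ymax, xmax] boxes (values in [0, 1])
    to integer pixel coordinates for a given image size."""
    boxes = np.asarray(boxes, dtype=np.float64)
    scale = np.array([image_height, image_width, image_height, image_width])
    return (boxes * scale).astype(int)
```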

Facial Emotion Classifier

  • The facial emotion classification model is built with TensorFlow 2.0: a deep convolutional neural network trained on RAF-DB. It's a generalised model with a training accuracy of 90% and a test accuracy of 83%.
  • I'll create a separate repo on how to create a model like this, so check my repos if you're interested.
  • RAF-DB is a private database; to download it you need to send an email as shown on their website.
  • First I trained the model on my local machine (GPU: GTX 1660 Ti), but when I started increasing the complexity of the model it took a lot of time to train and the machine started crying like a baby.
  • I couldn't experiment quickly, so I shifted to Kaggle to train the model. I chose Kaggle over Colab because it offers more GPU time and resources, which made experimentation faster.
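As a rough illustration of the kind of classifier described above (not the actual architecture from this project), a small Keras CNN ending in a 7-way softmax head could look like this. The input size of 100x100 is an assumption based on RAF-DB's aligned images.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_emotion_cnn(input_shape=(100, 100, 3), num_classes=7):
    """A small example CNN for 7-way emotion classification.
    Hypothetical architecture for illustration only."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
```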

How it works

The input image is first passed to the face detector to locate the face; the detected face crop is then passed to the facial emotion classifier to predict the emotion.
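A minimal sketch of that two-stage pipeline, with `detect_fn` and `classify_fn` standing in for the two trained models (both names are placeholders, not functions from this repo):

```python
import numpy as np

def crop_face(image, box):
    """Crop a face from an HxWxC image given a pixel box [ymin, xmin, ymax, xmax]."""
    ymin, xmin, ymax, xmax = box
    return image[ymin:ymax, xmin:xmax]

def predict_emotion(image, detect_fn, classify_fn):
    """Run the detector, crop the face, then run the classifier.
    detect_fn(image) -> pixel box; classify_fn(face) -> emotion label."""
    box = detect_fn(image)
    face = crop_face(image, box)
    return classify_fn(face)
```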

(back to top)

🖥Installation

🛠Requirements

  • Python 3.7+
  • Other requirements are listed in requirements.txt and can be installed as shown below.

⚙️Setup

  1. First, create a virtual environment using venv. You can use PowerShell; go to the path you want and run the command below.
python -m venv EViz
  2. Activate it.
EViz\Scripts\activate

Then you can run pip list to see which packages are installed; there should be pip and setuptools (in my case).

  3. Upgrade pip to the latest version.
python -m pip install --upgrade pip

Clone the EmoViz repo, change into the EmoViz directory, and run the command below.

  4. Install the required packages.
python -m pip install -r requirements.txt

That's it, you're set to go.

(back to top)

🎯Inference

Run the command below from the EmoViz directory to start the app.

python app.py

(back to top)

⚖License

Distributed under the MIT License. See LICENSE.txt for more information.

(back to top)

Credits

(back to top)

About

https://emoviz.biz (link inactive)
