
Empa

Your Emotional Intelligence Partner

🏅 1st Place SheHacks Winner





DEMO VIDEO (turn captions on!): https://youtu.be/p0rzJLvPM_A

Empa is a full-stack web application that leverages computer vision and machine learning to analyze facial expressions and translate them into recognizable emotions. It's designed to assist individuals with communication disorders in social interactions, help those on the autism spectrum understand emotional cues, and enhance empathy in diverse, cross-cultural communication.

Features

  • Real-time emotion recognition from live facial footage using our custom-trained model.
  • Radar chart showing emotion metrics, measured as the model's confidence for each emotion (see the sketch after this list).
  • Recommended responses based on the detected emotion, using audio transcribed to text (e.g. if you say a phrase expressing anger, the app suggests things you can say to soothe that person).
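
For a sense of the data behind the radar chart, the sketch below maps a trained classifier's softmax output to a confidence per emotion. This is a minimal sketch assuming a Keras-style model over flattened landmark vectors; the label order and function name are illustrative, not taken from the repo.

```python
import numpy as np

# FER2013's seven emotion classes; the repo's actual label order may differ.
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

def emotion_confidences(model, landmarks: np.ndarray) -> dict:
    """Map one frame's flattened landmark vector to a confidence score
    per emotion, i.e. the data plotted on the radar chart."""
    probs = model.predict(landmarks[np.newaxis, :], verbose=0)[0]  # softmax output
    return {label: float(p) for label, p in zip(EMOTIONS, probs)}
```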

Built with

  • Flask backend
  • Python + Jupyter notebook to train the model
  • Vanilla React frontend styled with Tailwind CSS

How we trained the model

  • landmarking.ipynb -> downloaded the FER2013 dataset (images) and used MediaPipe to landmark 463 facial points per image, writing the results to a CSV (fer2013_landmarks_nopathsfixed.csv)
  • x is the emotion label for each image
  • y holds the facial landmark coordinates for each image
  • landmarking_model -> trained the custom model with TensorFlow and scikit-learn on the landmarked data from the CSV (hedged sketches of both steps follow this list)
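
A rough sketch of the landmarking step, assuming MediaPipe's Face Mesh solution and OpenCV for image loading (the function name, CSV layout, and `dataset` listing are illustrative; the notebook is the source of truth):

```python
import csv
import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh

def landmark_row(image_path: str, emotion: int):
    """Return [emotion, x0, y0, z0, x1, ...] for one image, or None
    if Face Mesh finds no face."""
    image = cv2.imread(image_path)
    with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as mesh:
        results = mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return None
    points = results.multi_face_landmarks[0].landmark
    return [emotion] + [v for lm in points for v in (lm.x, lm.y, lm.z)]

dataset = []  # stand-in for the FER2013 listing, e.g. [("img.png", 3), ...]
with open("fer2013_landmarks_nopathsfixed.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for path, label in dataset:
        row = landmark_row(path, label)
        if row is not None:  # skip images where no face was detected
            writer.writerow(row)
```

And a similarly hedged sketch of the training step, assuming the CSV stores the integer emotion label in its first column (the layer sizes are illustrative, not the repo's actual architecture):

```python
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split

df = pd.read_csv("fer2013_landmarks_nopathsfixed.csv", header=None)
labels = df.iloc[:, 0].values   # emotion label per image
coords = df.iloc[:, 1:].values  # flattened landmark coordinates per image
c_train, c_val, l_train, l_val = train_test_split(
    coords, labels, test_size=0.2, random_state=42)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(coords.shape[1],)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(7, activation="softmax"),  # 7 FER2013 classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(c_train, l_train, validation_data=(c_val, l_val), epochs=20)
```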

Architecture Overview

[Architecture diagram: Empa.drawio]

Getting Started

Prerequisites

Before you begin, ensure you have met the following requirements:

  1. Install the required dependencies in the root folder and in both the frontend and backend folders:

npm install

  2. Create a .env file in this folder with the following variable (a usage sketch follows these steps):

OPENAI_API_KEY={YOUR_API_KEY}
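
The key presumably powers the response-suggestion feature. A minimal sketch of what that call might look like with the openai Python SDK; the model name, prompt, and function are assumptions, not the repo's actual code:

```python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def suggest_responses(transcript: str, emotion: str) -> str:
    """Ask the model for empathetic replies, given the transcribed speech
    and the emotion detected by the classifier."""
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "Suggest short, empathetic ways to respond to the speaker."},
            {"role": "user",
             "content": f"The speaker seems {emotion} and said: {transcript}"},
        ],
    )
    return completion.choices[0].message.content
```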

Starting the server

(127.0.0.1:5000 by default; a minimal sketch of the entry point follows the steps below)

  1. cd server
  2. python3 -m venv venv
  3. source venv/bin/activate (macOS)
  4. venv\Scripts\activate (Windows PowerShell)
  5. pip install -r requirements.txt
  6. python3 app.py
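
For orientation, a minimal sketch of a Flask entry point serving on that default address (the route name and payload shape are hypothetical, not the repo's actual app.py):

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/predict", methods=["POST"])  # hypothetical route name
def predict():
    # The real app would run the landmark + emotion pipeline on the
    # posted frame; this stub returns a fixed payload for illustration.
    return jsonify({"emotions": {"happy": 0.8, "neutral": 0.2}})

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)  # the default address noted above
```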

Starting the app

(localhost:3000 by default)

  1. cd client
  2. npm install
  3. npm start

Sneak Peek

[app screenshot]

Next Steps

  • Radar chart updating live on real-time footage
  • Detecting emotions in vocal tone and body language for improved conversation suggestions
  • Deployment
  • Demo Video
