
Emotion-recognition

A CNN for human facial expression recognition, by Abdulaziz Alresidi

Abstract

Explore and compare artificial intelligence and machine learning techniques to understand how accurately human facial expressions can be classified from images.

Dataset

The primary dataset used for this project was taken from the National Institute of Standards and Technology (NIST) Face Projects (https://www.nist.gov/programs-projects/face-projects). This database consists of hundreds of faces showing a range of expressions. From this set, the group selected a subset and manually categorized the images into one of 5 emotions: anger, neutral/casual, joy/happy, sorrow, and surprise. The final dataset consisted of ~260 images.


Tools | Methods

Tools: Python (packages described throughout the Analysis section), PyCharm, Jupyter Notebook

Models:

CNN from scratch
CNN + Keras ImageDataGenerator
VGG16 (transfer learning)

2. Development Process and Data

The idea of this project is to construct a CNN model, using transfer learning, that can predict the probability that a human facial expression belongs to one of five classes: happy, casual, sorrow, anger, and surprise.

2.1 Data:

Each class contains roughly 50 images in its folder, giving 203 images for training and 55 for testing.

1- Sample image of Anger/Sad: s026_002_00000007

2- Sample image of Casual: s010_005_00000006

3- Sample image of Joy: s022_003_00000026

4- Sample image of Sorrow: s011_002_00000016

5- Sample image of Surprise: s014_001_00000022

2.2 Preprocessing:

The following preprocessing tasks are applied to each image:

- Visual inspection to detect low-quality or non-representative images
- Image resizing: transform images to 50x50x3 (see the sketch below)
- Image cropping: automatic or manual
- Other steps, to be defined later, in order to improve model quality
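As a rough illustration of the resizing step, the sketch below uses OpenCV (an assumption; the actual tooling and folder layout are not specified in this README) to load a folder of images and resize them to 50x50x3:

```python
# Minimal resizing sketch; paths and the function name are hypothetical.
import os
import cv2
import numpy as np

def load_and_resize(folder, size=(50, 50)):
    """Read all images in `folder` and resize them to 50x50 with 3 channels."""
    images = []
    for name in os.listdir(folder):
        img = cv2.imread(os.path.join(folder, name))  # BGR, 3 channels
        if img is None:            # skip unreadable / non-image files
            continue
        images.append(cv2.resize(img, size))
    return np.array(images)        # shape: (n_images, 50, 50, 3)
```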

2.3 CNN Model:

The idea is to develop a simple CNN model from scratch and evaluate its performance to set a baseline. The steps to improve the model are:

- Data augmentation: rotations, noising, scaling, to avoid overfitting
- Transfer learning: use a pre-trained network (VGG-16 or other) and add some layers at the end to fine-tune the model (see the sketch after this list)
- Full training of VGG-16 + the additional layers
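A minimal sketch of the transfer-learning variant: freeze the VGG-16 convolutional base and add a small classification head for the 5 classes. The head size, optimizer, and other settings below are assumptions, not the project's exact configuration.

```python
# Transfer-learning sketch: frozen VGG-16 base + new classification head.
from keras.applications import VGG16
from keras.models import Model
from keras.layers import Flatten, Dense, Dropout

base = VGG16(weights="imagenet", include_top=False, input_shape=(50, 50, 3))
for layer in base.layers:
    layer.trainable = False        # freeze the pre-trained convolutional base

x = Flatten()(base.output)
x = Dense(256, activation="relu")(x)
x = Dropout(0.5)(x)
out = Dense(5, activation="softmax")(x)   # 5 emotion classes

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

For the "full training" step, the same head can be used with the base layers set back to `trainable = True` before recompiling.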

2.4 Model Evaluation:

To evaluate the different models we will use ROC curves and the AUC score. To choose the best model we will evaluate precision and accuracy and set a threshold level that represents a good tradeoff between TPR and FPR.
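A sketch of how the per-class ROC curves and AUC scores could be computed with scikit-learn; the variable names (`X_test`, `y_test`) are assumptions.

```python
# Per-class ROC/AUC sketch; y_test is assumed to hold integer labels 0-4.
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import label_binarize

y_prob = model.predict(X_test)                      # shape: (n_samples, 5)
y_true = label_binarize(y_test, classes=range(5))   # one-hot ground truth

for c in range(5):
    fpr, tpr, _ = roc_curve(y_true[:, c], y_prob[:, c])
    print("class %d: AUC = %.3f" % (c, auc(fpr, tpr)))
```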

Result

First Model: CNN from scratch, no data augmentation

A simple convolutional neural network with 3x3 convolution layers; a minimal sketch is shown below. The results obtained so far are shown in the ROC curve below.
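For illustration, a small Keras sketch of such a CNN with 3x3 convolutions; the exact number of filters and layers used in the project may differ.

```python
# Baseline CNN sketch trained from scratch; layer sizes are assumptions.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(50, 50, 3)),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation="relu"),
    Dense(5, activation="softmax"),     # 5 emotion classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```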

Classification report: VGG16 from scratch, CV folder.

Model_name = vgg16.hdf5, 50 epochs. AUC: 100%.

VGG16 confusion matrix: class 0 = anger_sad, class 1 = casual, class 2 = happy, class 3 = sorrow, class 4 = surprise.


Plotting the VGG16 model performance for accuracy and loss (loss_visualize and model accuracy plots).
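These curves can be drawn from the Keras training History object; a small sketch follows (the `acc`/`val_acc` key names assume Keras 2.x, where `history = model.fit(...)`).

```python
# Plot loss and accuracy curves from a Keras History object.
import matplotlib.pyplot as plt

def plot_history(history):
    for metric in ("loss", "acc"):
        plt.figure()
        plt.plot(history.history[metric], label="train")
        plt.plot(history.history["val_" + metric], label="validation")
        plt.title("model " + metric)
        plt.xlabel("epoch")
        plt.legend()
        plt.show()
```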

Model 2: CNN with Keras ImageDataGenerator

This model uses a modified version of the Keras ImageDataGenerator. It generates batches of image tensors with real-time data augmentation. This generator is implemented for foreground segmentation or semantic segmentation. Please refer to https://keras.io/preprocessing/image/#image-preprocessing for more details.
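A minimal sketch of real-time augmentation with `ImageDataGenerator`; the augmentation parameters and the directory path below are assumptions, not the project's exact setup.

```python
# Real-time augmentation sketch with Keras' ImageDataGenerator.
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,
    zoom_range=0.15,
    horizontal_flip=True,
)

train_gen = train_datagen.flow_from_directory(
    "data/train",                 # hypothetical path: one sub-folder per class
    target_size=(50, 50),
    batch_size=32,
    class_mode="categorical",
)
```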


Model accuracy: 90%. Plot of the training loss and accuracy:


Show a summary of the model and check the number of trainable parameters:
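For example, assuming a compiled Keras model named `model`:

```python
# Layer-by-layer summary; the footer lists total / trainable / non-trainable
# parameter counts. The explicit count below uses the Keras backend.
import numpy as np
from keras import backend as K

model.summary()

n_trainable = int(np.sum([K.count_params(w) for w in model.trainable_weights]))
print("trainable parameters:", n_trainable)
```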


Classification report
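A sketch of how the classification report could be produced with scikit-learn; the variable names and the integer label encoding are assumptions.

```python
# Classification report and confusion matrix sketch.
from sklearn.metrics import classification_report, confusion_matrix
import numpy as np

y_pred = np.argmax(model.predict(X_test), axis=1)   # predicted class indices
labels = ["anger_sad", "casual", "happy", "sorrow", "surprise"]

# y_test is assumed to hold integer class labels 0-4.
print(classification_report(y_test, y_pred, target_names=labels))
print(confusion_matrix(y_test, y_pred))
```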
