https://github.com/ambco-iscte/pyfer

Automated facial expression emotion recognition in Python.

Science Score: 10.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: ieee.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.6%) to scientific vocabulary

Keywords

computer-vision opencv-python tensorflow ultralytics yolov8
Last synced: 6 months ago

Repository

Automated facial expression emotion recognition in Python.

Basic Info
  • Host: GitHub
  • Owner: ambco-iscte
  • License: mit
  • Language: PureBasic
  • Default Branch: master
  • Homepage:
  • Size: 846 MB
Statistics
  • Stars: 1
  • Watchers: 1
  • Forks: 1
  • Open Issues: 0
  • Releases: 0
Topics
computer-vision opencv-python tensorflow ultralytics yolov8
Created over 2 years ago · Last pushed about 2 years ago
Metadata Files
Readme License

README.md



# PyFER

**Automated Facial Expression Emotion Recognition Using Chained Neural Networks in Python**

[![PDF - Check out our project report!](https://img.shields.io/badge/PDF-Check_out_our_project_report!-3172c8?logo=Adobe)](report.pdf)

Context

This library was developed as the final project for an elective Deep Learning for Computer Vision course as part of the Master's (MSc) in Computer Engineering programme at Iscte-IUL.

  • Grade: 20/20


How to Use

PyFER relies on two separate neural network models to automatically detect and classify facial expressions in any given image:
1. A FaceDetector model;
2. An EmotionClassifier model.

Creating a FaceDetector object requires an Ultralytics model (these are built on PyTorch).

An EmotionClassifier object, in turn, requires:
1. A TensorFlow model (ideally this would be PyTorch, but TensorFlow is easier to work with);
2. The path to a YAML configuration file containing an emotions mapping between integers and emotion names.

Specifically, a PyFER model could be instantiated as follows.

```python
# Load detector and classifier models
detector = FaceDetector(ultralytics_model)
classifier = EmotionClassifier(
    classifier=tensorflow_model,
    config_file_path='path/to/classifier/config/file.yaml'
)

# Instantiate PyFER model
pyfer = PyFER(detector, classifier)
```

And the following is an example of an EmotionClassifier configuration file.

```yaml
emotions:
  0: 'Neutral'
  1: 'Happiness'
  2: 'Surprise'
  3: 'Sadness'
  4: 'Anger'
  5: 'Disgust'
  6: 'Fear'
  7: 'Contempt'
```
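As a rough illustration of what this file contains (EmotionClassifier reads it for you, so this snippet is not part of the library's API), the mapping can be loaded with PyYAML into a plain dictionary:

```python
import yaml

# Illustrative only: load the integer-to-emotion mapping from the config file.
with open('path/to/classifier/config/file.yaml') as f:
    config = yaml.safe_load(f)

emotions = config['emotions']
print(emotions[2])  # 'Surprise', using the example mapping above
```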

Using the Pre-Trained Models

If you download this repository, you'll find a ready-to-use YOLOv8-based face detection model stored in the trained-models folder. It can be used by simply passing it to FaceDetector as an argument.

```python
detector = FaceDetector(YOLO('trained-models/detector.pt'))
classifier = ...

pyfer = PyFER(detector, classifier)
```

In the same folder you will additionally find six pre-trained facial expression emotion classification models, along with configuration files for the AffectNet and FER models.

```python
from keras.models import load_model

detector = ...
classifier = EmotionClassifier(load_model("trained-models/..."), "trained-models/config.yaml")

pyfer = PyFER(detector, classifier)
```

Try using each and see which one works best for you!
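For example, you could loop over the bundled classifiers and run each one on the same test image. This is only a sketch, not part of the library: the model paths are placeholders for whatever files you find in trained-models, and it assumes a detector and an RGB test_image are already loaded as in the examples below.

```python
from keras.models import load_model

from model.classifier.classifier import EmotionClassifier
from model.pyfer import PyFER

# Placeholder paths: substitute the actual classifier files in trained-models/.
candidates = [
    ('trained-models/...', 'trained-models/config.yaml'),
    ('trained-models/...', 'trained-models/config.yaml'),
]

for model_path, config_path in candidates:
    classifier = EmotionClassifier(load_model(model_path), config_path)
    pyfer = PyFER(detector, classifier)   # reuse the detector loaded earlier
    detections = pyfer.apply(test_image)  # same RGB test image for each run
    print(model_path, detections)
```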

Creating your own Models

Object Detection:
- Please refer to the Ultralytics documentation. :)
- Make sure to train your model to detect only faces!
- Check out the detector.py file to see how we did this; a rough training sketch is also shown below.
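As a minimal sketch, assuming a face-only detection dataset described by a standard Ultralytics dataset YAML (the dataset path, base checkpoint, and hyperparameters below are placeholders, not the settings used for the bundled detector.pt), fine-tuning could look like this:

```python
from ultralytics import YOLO

from model.detector.detector import FaceDetector

# Fine-tune a pre-trained YOLOv8 checkpoint on a dataset whose only class is 'face'.
model = YOLO('yolov8n.pt')
model.train(data='path/to/faces-dataset.yaml', epochs=50, imgsz=640)

# Ultralytics saves the best weights under runs/detect/train*/weights/best.pt;
# that file can then be wrapped in a FaceDetector.
detector = FaceDetector(YOLO('runs/detect/train/weights/best.pt'))
```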

Facial Expression Classification:
- Construct any TensorFlow/Keras model that receives an image as input and, using Softmax or a similar activation at the output layer, outputs the probability of that image belonging to each facial expression emotion class (a minimal sketch is shown below);
- Assign an integer value to each of the emotions you consider, and one-hot encode the target labels using that mapping;
- Check out the training.py file to see how we did this.
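The following is only a minimal Keras sketch of such a classifier, assuming eight emotion classes (matching the example configuration file above) and an arbitrary 96x96 RGB input size; PyFER's own training setup lives in training.py.

```python
from tensorflow import keras

NUM_EMOTIONS = 8  # matches the 0-7 mapping in the example configuration file

# Minimal illustrative classifier: image in, per-emotion probabilities out.
model = keras.Sequential([
    keras.Input(shape=(96, 96, 3)),  # input size is an assumption
    keras.layers.Conv2D(32, 3, activation='relu'),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation='relu'),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(NUM_EMOTIONS, activation='softmax'),
])

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# One-hot encode integer emotion labels before training, e.g.:
# y_onehot = keras.utils.to_categorical(y_integer_labels, NUM_EMOTIONS)
# model.fit(x_images, y_onehot, epochs=..., validation_split=0.1)
```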


Example

Applying PyFER to a single image

The following is an example of applying PyFER to a single image.

```python
import cv2 as cv
from ultralytics import YOLO
from keras.models import load_model

from model.detector.bounds import annotated
from model.detector.detector import FaceDetector
from model.pyfer import PyFER
from model.classifier.classifier import EmotionClassifier

# Load detector and classifier models
detector = FaceDetector(YOLO('trained-models/detector.pt'))
classifier = EmotionClassifier(
    classifier=load_model('path/to/model'),
    config_file_path='path/to/yaml/config'
)

# Instantiate PyFER model
pyfer = PyFER(detector, classifier)

# Load image and convert to RGB
image = cv.cvtColor(cv.imread('path/to/image.png'), cv.COLOR_BGR2RGB)

# Detect and classify faces
detections = pyfer.apply(image)
image_processed = annotated(image, detections)
cv.imshow('PyFER Image', cv.cvtColor(image_processed, cv.COLOR_RGB2BGR))

cv.waitKey(0)
cv.destroyAllWindows()
```

Applying PyFER to webcam feed

If the models making up PyFER run on the GPU and are fast enough, PyFER can be applied to the frames of a webcam feed to automatically detect and classify the emotions of the people in that feed in close to real time!

The following is an example of this.

```python
import cv2 as cv
import torch
from ultralytics import YOLO
from keras.models import load_model

from model.detector.bounds import annotated
from model.detector.detector import FaceDetector
from model.pyfer import PyFER
from model.classifier.classifier import EmotionClassifier

# Set PyTorch to use the GPU (big speedup for YOLO if CUDA is installed)
torch.cuda.set_device(0)

# Load detector and classifier models
detector = FaceDetector(YOLO('trained-models/detector.pt'))
classifier = EmotionClassifier(
    classifier=load_model('path/to/model'),
    config_file_path='path/to/yaml/config'
)

# Instantiate PyFER model
pyfer = PyFER(detector, classifier)

# Start video capture
video = cv.VideoCapture(0)  # Might need to adjust this index

while True:
    if not video.isOpened():
        break

    # Read frame from webcam video capture
    ret, frame = video.read()

    frame = cv.cvtColor(frame, cv.COLOR_BGR2RGB)

    # Apply PyFER to this frame
    detections = pyfer.apply(frame)
    frame_processed = annotated(frame, detections)

    # Display the annotated frame
    cv.imshow('PyFER Webcam Capture', cv.cvtColor(frame_processed, cv.COLOR_RGB2BGR))

    # Quit when the user presses the Q key
    if cv.waitKey(1) & 0xFF == ord('q'):
        break

video.release()
cv.destroyAllWindows()
```


Acknowledgements

We kindly thank Dr. Mohammad H. Mahoor, Professor of Electrical and Computer Engineering at the University of Denver, and M. Mehdi Hosseini, Ph.D. Student of Electrical and Computer Engineering at the University of Denver, for providing us with the AffectNet dataset to aid in the development of our facial expression classification model.

We kindly thank Dr. Jeffrey Cohn and Megan Ritter from the University of Pittsburgh for providing us with the Cohn-Kanade dataset and its extended version to aid in the development of our facial expression classification model. While we ended up not utilizing this dataset to train our model, we appreciate being provided with it!


Credit

Credit for all the code present in this repository goes to Afonso Caniço and Samuel Correia, authors and sole contributors to the project and this repository, unless otherwise explicitly stated.

Owner

  • Name: Afonso Caniço
  • Login: ambco-iscte
  • Kind: user
  • Location: Lisbon, Portugal
  • Company: Iscte - University Institute of Lisbon

Master's Student & Invited Teaching Assistant @ Iscte-IUL
