dual-emonet

This work demonstrates that synergistic multi-component optimization can address the inherent trade-offs in facial emotion recognition (FER) among granularity, efficiency, and data imbalance, offering a robust path toward high-performance, easily deployable affective computing systems for resource-constrained environments.

https://github.com/romanceyu/dual-emonet

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (2.0%) to scientific vocabulary
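The indicators above feed into the 44.0% science score. The exact weighting is not documented here, but a toy equal-weight version can be sketched as follows (indicator names and weights are assumptions for illustration, so it does not reproduce the 44.0% figure):

```python
# Hypothetical equal-weight scoring over boolean science indicators.
# The real scorer's weights and indicator set are unknown; this is a sketch.
INDICATORS = {
    "citation_cff": True,           # CITATION.cff file found
    "codemeta_json": True,          # codemeta.json file found
    "zenodo_json": True,            # .zenodo.json file found
    "doi_references": False,
    "publication_links": False,
    "academic_emails": False,
    "institutional_owner": False,
    "joss_metadata": False,
    "vocabulary_similarity": False,  # only 2.0% similarity to scientific vocabulary
}

def science_score(indicators: dict) -> float:
    """Return the fraction of positive indicators as a percentage."""
    return 100.0 * sum(indicators.values()) / len(indicators)

print(f"{science_score(INDICATORS):.1f}%")  # 33.3% under this toy weighting
```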
Last synced: 6 months ago

Repository


Basic Info
  • Host: GitHub
  • Owner: romanceyu
  • License: MIT
  • Language: Python
  • Default Branch: main
  • Size: 396 KB
Statistics
  • Stars: 0
  • Watchers: 0
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created 9 months ago · Last pushed 9 months ago
Metadata Files
  • Readme
  • License
  • Citation

README.md

Dual-EmoNet


Owner

  • Login: romanceyu
  • Kind: user

Citation (CITATION.cff)

# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!

cff-version: 1.2.0
title: >-
  ResEmoteNet: Bridging Accuracy and Loss Reduction in
  Facial Emotion Recognition
message: >-
  ResEmoteNet: Bridging Accuracy and Loss Reduction in
  Facial Emotion Recognition
type: software
authors:
  - given-names: Arnab Kumar Roy
    email: arnabroy770@gmail.com
    affiliation: Sikkim Manipal Institute of Technology
    orcid: 'https://orcid.org/0009-0001-9988-4779'
  - given-names: Hemant Kumar Kathania
    email: hemant.ece@nitsikkim.ac.in
    affiliation: National Institute of Technology Sikkim
  - given-names: Adhitiya Sharma
    email: b180078@nitsikkim.ac.in
    affiliation: National Institute of Technology Sikkim
  - given-names: Abhishek Dey
    email: abhishek@kaliberlabs.com
    affiliation: >-
      Bay Area Advanced Analytics India (P) Ltd, A
      Kaliber.AI
  - given-names: Md. Sarfaraj Alam Ansari
    email: sarfaraj@nitsikkim.ac.in
    affiliation: National Institute of Technology Sikkim
repository-code: 'https://github.com/ArnabKumarRoy02/ResEmoteNet/'
abstract: >-
  The human face is a silent communicator, expressing
  emotions and thoughts through its facial expressions. With
  the advancements in computer vision in recent years,
  facial emotion recognition technology has made significant
  strides, enabling machines to decode the intricacies of
  facial cues. In this work, we propose ResEmoteNet, a novel
  deep learning architecture for facial emotion recognition
  designed with the combination of Convolutional,
  Squeeze-Excitation (SE) and Residual Networks. The
  inclusion of SE block selectively focuses on the important
  features of the human face, enhances the feature
  representation and suppresses the less relevant ones. This
  helps in reducing the loss and enhancing the overall model
  performance. We also integrate the SE block with three
  residual blocks that help in learning more complex
  representation of the data through deeper layers. We
  evaluated ResEmoteNet on three open-source databases:
  FER2013, RAF-DB, and AffectNet, achieving accuracies of
  79.79%, 94.76%, and 72.39%, respectively. The proposed
  network outperforms state-of-the-art models across all
  three databases. The source code for ResEmoteNet is
  available at
  https://github.com/ArnabKumarRoy02/ResEmoteNet
keywords:
  - Facial Emotion Recognition
  - Convolutional Neural Network
  - Squeeze and Excitation Network
  - Residual Network
license: MIT
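
The cited abstract describes combining Squeeze-and-Excitation (SE) blocks with residual blocks so the network emphasizes informative facial features and suppresses less relevant ones. A minimal PyTorch sketch of an SE block is shown below; this is an illustration of the general technique, not the repository's actual implementation (layer sizes and the reduction ratio are assumptions):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: globally pool each channel (squeeze),
    then gate channels with a small two-layer MLP (excite)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: HxW -> 1x1 per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel gate in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight channels, shape unchanged

x = torch.randn(2, 64, 48, 48)  # e.g. feature maps from 48x48 face crops
out = SEBlock(64)(x)
print(out.shape)  # torch.Size([2, 64, 48, 48])
```

In ResEmoteNet-style designs, such a block typically sits inside each residual block, reweighting the channels of the residual branch before the skip connection is added.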

GitHub Events

Total
  • Push event: 2
Last Year
  • Push event: 2

Dependencies

requirements.txt pypi
  • Pillow ==10.3.0
  • dlib ==19.24.2
  • matplotlib ==3.8.3
  • numpy ==1.26.4
  • opencv_python ==4.9.0.80
  • pandas ==2.2.2
  • retina_face ==0.0.14
  • seaborn ==0.13.2
  • torch ==2.1.2
  • torchvision ==0.16.2
  • tqdm ==4.66.1
  • urllib3 ==2.2.1