tinyemo

"TinyEmo: Scaling down Emotional Reasoning via Metric Projection" ACMCV 2024

https://github.com/ggcr/tinyemo

Science Score: 41.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (13.0%) to scientific vocabulary

Keywords

affective-computing visual-emotion-recognition visual-sentiment-analysis
Last synced: 6 months ago

Repository

"TinyEmo: Scaling down Emotional Reasoning via Metric Projection" ACMCV 2024

Basic Info
  • Host: GitHub
  • Owner: ggcr
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 4.21 MB
Statistics
  • Stars: 1
  • Watchers: 1
  • Forks: 0
  • Open Issues: 1
  • Releases: 0
Topics
affective-computing visual-emotion-recognition visual-sentiment-analysis
Created over 1 year ago · Last pushed about 1 year ago
Metadata Files
  • Readme
  • Citation

README.md

TinyEmo

[Paper]

[Metric Projector Card] [TinyEmo MM-LLM Card]

[Dataset card]

TinyEmo is a family of small multi-modal language models for emotional reasoning and classification. Our approach features: (1) a synthetic emotional instruct dataset for both pre-training and fine-tuning stages, (2) a Metric Projector that delegates classification from the language model allowing for more efficient training and inference, (3) a multi-modal large language model (MM-LLM) for emotional reasoning, and (4) a semi-automated framework for bias detection. TinyEmo is able to perform emotion classification and emotional reasoning, all while using substantially fewer parameters than comparable models. This efficiency allows us to freely incorporate more diverse emotional datasets, enabling strong performance on classification tasks, with our smallest model (700M parameters) outperforming larger state-of-the-art models based on general-purpose MM-LLMs with over 7B parameters. Additionally, the Metric Projector allows for interpretability and indirect bias detection in large models without additional training, offering an approach to understand and improve AI systems.
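
The projector idea can be sketched in a few lines. Below is a minimal, hypothetical illustration (the `EmotionProjector` class, its dimensions, and the MLP shape are assumptions for exposition, not the repository's API): CLIP image features are mapped into the language model's embedding space, and an image is classified by cosine similarity to the embeddings of the emotion labels, so the full LLM is not needed at classification time.

```python
# Illustrative sketch of the Metric Projector idea (not the repo's actual API):
# project CLIP image features into the LLM embedding space, then classify by
# cosine similarity against the embeddings of emotion-label prompts.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmotionProjector(nn.Module):
    """Hypothetical projector MLP: CLIP feature space -> LLM embedding space."""
    def __init__(self, clip_dim: int = 768, llm_dim: int = 2048):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(clip_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, clip_features: torch.Tensor) -> torch.Tensor:
        return self.mlp(clip_features)

def classify(clip_features, label_embeddings, projector):
    """Zero-shot classification: nearest emotion label by cosine similarity."""
    img = F.normalize(projector(clip_features), dim=-1)   # (B, llm_dim)
    lab = F.normalize(label_embeddings, dim=-1)           # (C, llm_dim)
    return (img @ lab.T).argmax(dim=-1)                   # (B,) label indices

projector = EmotionProjector()
feats = torch.randn(4, 768)    # stand-in for precomputed CLIP ViT-L/14 features
labels = torch.randn(6, 2048)  # stand-in for LLM embeddings of 6 emotion words
print(classify(feats, labels, projector))
```

Because classification reduces to a nearest-neighbor lookup in embedding space, inference skips the autoregressive LLM entirely, which is where the efficiency gains come from.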

Installation and Requirements

Metric Projector (Classification)

  1. Clone this repository and navigate to the root of the project:

```bash
git clone https://github.com/ggcr/TinyEmo.git
cd TinyEmo
```

  2. Create an environment and install dependencies:

```bash
conda create -n projector_mps python=3.10 -y
conda activate projector_mps
pip install --upgrade pip  # enable PEP 660 support
pip install -e projector_mps/.
```

MM-LLM (Reasoning)

Refer to the TinyLLaVA installation section.

Quickstart

Metric Projector inference

We provide precomputed CLIP features for the Emotion6 dataset, and you can evaluate them using two methods:

Our Projectors from Hugging Face

To evaluate the projectors from Hugging Face, use the scripts/eval.sh script:

```bash
conda activate projector_mps
bash projector_mps/scripts/eval.sh
```

The Zero-shot Accuracy in the table below is the average accuracy across multiple datasets, including Emotion6, FI, ArtPhoto, Abstract, and UnbiasedEmo.

| Model Architecture               | Parameters | Zero-shot Accuracy | HuggingFace Link         |
|----------------------------------|------------|--------------------|--------------------------|
| CLIP ViT-L/14 + OpenELM-270M-I   | 0.70B      | 57.87%             | HF Projector 0.70B Link  |
| CLIP ViT-L/14 + OpenELM-450M-I   | 0.88B      | 55.24%             | HF Projector 0.88B Link  |
| CLIP ViT-L/14 + TinyLLaMA 1.1    | 1.53B      | 56.13%             | HF Projector 1.53B Link  |
| CLIP ViT-L/14 + Microsoft Phi 2  | 3.21B      | 56.28%             | HF Projector 3.21B Link  |

A more extensive evaluation of the results can be found in Table VIII of the paper.

Custom Projectors with Local Weights

To use custom local weights or models, run the following:

```bash
conda activate projector_mps
bash projector_mps/scripts/eval_custom.sh
```

This allows you to specify different vision encoders, language models, and loss functions, as well as use your own projector weights.
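
The loss functions in question are metric-learning objectives over the shared embedding space. As a hedged illustration only (an assumption for exposition, not necessarily one of the losses shipped with eval_custom.sh), a margin-based cosine loss could look like this:

```python
# Illustrative metric-learning objective (an assumption, not necessarily what
# the repo implements): pull each projected image feature toward its
# ground-truth emotion label embedding, push it a margin away from the
# hardest wrong label.
import torch
import torch.nn.functional as F

def cosine_margin_loss(projected, label_embeddings, targets, margin: float = 0.2):
    """projected: (B, D) projector outputs; label_embeddings: (C, D);
    targets: (B,) int64 ground-truth label indices."""
    sims = F.normalize(projected, dim=-1) @ F.normalize(label_embeddings, dim=-1).T  # (B, C)
    pos = sims.gather(1, targets.unsqueeze(1))            # similarity to true label
    neg = sims.clone()
    neg.scatter_(1, targets.unsqueeze(1), float("-inf"))  # mask out the true label
    hardest_neg = neg.max(dim=1, keepdim=True).values     # most confusing wrong label
    return F.relu(margin - pos + hardest_neg).mean()

# Usage with random stand-ins for projector outputs and label embeddings:
projected = torch.randn(4, 2048, requires_grad=True)
loss = cosine_margin_loss(projected, torch.randn(6, 2048), torch.tensor([0, 3, 5, 1]))
loss.backward()
print(loss.item())
```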

Acknowledgement

The Metric Projector builds on the foundations of the CLIP-E paper!

Our codebase for the MM-LLM is forked from the TinyLLaVA project.

Citation

@mastersthesis{gutierrez2024tinyemo,
  title        = {TinyEmo: Scaling down Emotional Reasoning via Metric Projection},
  author       = {Cristian Gutierrez},
  year         = 2024,
  month        = {September},
  address      = {Barcelona, Spain},
  note         = {Available at \url{https://arxiv.org/abs/2410.07062}},
  school       = {Universitat Autonoma de Barcelona (UAB)},
  type         = {Master's thesis in Computer Vision}
}

Citation (CITATION.bib)

@mastersthesis{gutierrez2024tinyemo,
  title        = {TinyEmo: Scaling down Emotional Reasoning via Metric Projection},
  author       = {Cristian Gutierrez},
  year         = 2024,
  month        = {September},
  address      = {Barcelona, Spain},
  note         = {Available at \url{https://ddd.uab.cat/record/301610?ln=en}},
  school       = {Universitat Autonoma de Barcelona (UAB)},
  type         = {Master's thesis in Computer Vision}
}

GitHub Events

Total
  • Issues event: 1
  • Watch event: 4
  • Public event: 1
  • Push event: 1
  • Fork event: 2
Last Year
  • Issues event: 1
  • Watch event: 4
  • Public event: 1
  • Push event: 1
  • Fork event: 2

Committers

Last synced: 7 months ago

All Time
  • Total Commits: 22
  • Total Committers: 1
  • Avg Commits per committer: 22.0
  • Development Distribution Score (DDS): 0.0
Past Year
  • Commits: 22
  • Committers: 1
  • Avg Commits per committer: 22.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
ggcr g****n@i****m 22

Issues and Pull Requests

Last synced: 6 months ago