mmaffben
Science Score: 54.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: Found CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: Links to arxiv.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (6.8%) to scientific vocabulary
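The report does not show how these indicator checks combine into the 54.0% figure. Purely as an illustration, a scorer of this shape could be a weighted sum of boolean indicators plus the vocabulary-similarity term; the weights below are hypothetical, so the output will not reproduce the reported score exactly.

```python
# Hypothetical indicator-based scorer; the real weighting is not shown
# in this report, so every weight here is an illustrative guess.
INDICATOR_WEIGHTS = {
    "citation_cff": 0.15,
    "codemeta_json": 0.15,
    "zenodo_json": 0.10,
    "doi_references": 0.10,
    "publication_links": 0.15,
    "academic_emails": 0.05,
    "institutional_owner": 0.10,
    "joss_metadata": 0.05,
}
VOCAB_WEIGHT = 0.15  # weight on the vocabulary-similarity term

def science_score(found: dict, vocab_similarity: float) -> float:
    """Weighted sum of boolean indicators plus a scaled similarity term."""
    boolean_part = sum(w for name, w in INDICATOR_WEIGHTS.items() if found.get(name))
    return boolean_part + VOCAB_WEIGHT * vocab_similarity

# Indicators observed for this repository (✓ = True, ○ = False).
observed = {
    "citation_cff": True,
    "codemeta_json": True,
    "zenodo_json": True,
    "doi_references": False,
    "publication_links": True,  # arxiv.org
    "academic_emails": False,
    "institutional_owner": False,
    "joss_metadata": False,
}
print(f"{science_score(observed, vocab_similarity=0.068):.1%}")
```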
Repository
Basic Info
- Host: GitHub
- Owner: lzw108
- License: apache-2.0
- Language: Python
- Default Branch: main
- Size: 6.36 MB
Statistics
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
MMAFFBen: A Multilingual and Multimodal Affective Analysis Benchmark for Evaluating LLMs and VLMs
This is an extensive open-source benchmark for multilingual multimodal affective analysis.
Paper link: MMAFFBen (https://arxiv.org/abs/2505.24423)
Datasets
Model
Usage
Fine-tune your model on MMAFFIn
Download the training datasets (MMAFFIn) into the data folder, then launch supervised fine-tuning:
```bash
bash run_sft_stream.sh
```
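The contents of run_sft_stream.sh are not reproduced in this report. Assuming it wraps LLaMA-Factory's standard training entry point (the README notes the code is based on LLaMA-Factory), a launch looks roughly like the sketch below; the config keys are standard LLaMA-Factory options, but the model ID, the mmaffin dataset name, and all hyperparameters are placeholders.

```python
# Sketch of a LLaMA-Factory SFT launch; run_sft_stream.sh is assumed to
# wrap something equivalent. Model, dataset name, and hyperparameters are
# placeholders, not values taken from this repository.
import subprocess
import yaml

config = {
    "model_name_or_path": "Qwen/Qwen2-VL-7B-Instruct",  # placeholder model
    "stage": "sft",
    "do_train": True,
    "finetuning_type": "lora",
    "dataset": "mmaffin",  # hypothetical entry in data/dataset_info.json
    "template": "qwen2_vl",
    "output_dir": "saves/mmaffben-sft",
    "per_device_train_batch_size": 1,
    "gradient_accumulation_steps": 8,
    "learning_rate": 1.0e-4,
    "num_train_epochs": 3.0,
    "bf16": True,
}
with open("mmaffben_sft.yaml", "w") as f:
    yaml.safe_dump(config, f)

# LLaMA-Factory's CLI reads a YAML file of training arguments.
subprocess.run(["llamafactory-cli", "train", "mmaffben_sft.yaml"], check=True)
```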
Evaluate your model on MMAFFBen
Download the MMAFFBen data into the data folder, then run inference:
```bash
bash run_inference.sh
```
This code is based on LLaMA-Factory. The current version supports the Qwen-VL series; to use your own model, adjust the code following the LLaMA-Factory guidelines.
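For orientation, a single vision-language affect query against a Qwen2-VL checkpoint via plain transformers looks roughly as follows. This is a standalone sketch: the model ID, image path, and prompt are illustrative, and run_inference.sh presumably handles prompting and batching over the benchmark itself.

```python
# Minimal single-example query against a Qwen2-VL model via transformers.
# Model ID, image, and prompt are illustrative only.
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "Qwen/Qwen2-VL-7B-Instruct"  # placeholder checkpoint
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("data/example.jpg")  # hypothetical image path
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "What sentiment does this image express? "
                                 "Answer positive, negative, or neutral."},
    ],
}]
prompt = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=32)
trimmed = output_ids[:, inputs["input_ids"].shape[1]:]  # drop the prompt tokens
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```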
After the predicted results are written to the predicts folder, follow the steps in evaluation.ipynb to obtain the scores for each sub-dataset.
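evaluation.ipynb is not reproduced in this report. As a rough sketch of what per-sub-dataset scoring could look like, assuming one JSON Lines file per sub-dataset with hypothetical "label" and "predict" fields:

```python
# Rough scoring sketch; evaluation.ipynb defines the actual metrics.
# The predicts/*.jsonl layout and field names are assumptions.
import json
from pathlib import Path

def macro_f1(labels, preds):
    """Macro-averaged F1 over the union of observed classes."""
    classes = sorted(set(labels) | set(preds))
    f1s = []
    for c in classes:
        tp = sum(l == c and p == c for l, p in zip(labels, preds))
        fp = sum(l != c and p == c for l, p in zip(labels, preds))
        fn = sum(l == c and p != c for l, p in zip(labels, preds))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

for path in sorted(Path("predicts").glob("*.jsonl")):  # one file per sub-dataset
    labels, preds = [], []
    with path.open() as f:
        for line in f:
            record = json.loads(line)
            labels.append(record["label"].strip().lower())
            preds.append(record["predict"].strip().lower())
    acc = sum(l == p for l, p in zip(labels, preds)) / max(len(labels), 1)
    print(f"{path.stem}: accuracy={acc:.3f} macro_f1={macro_f1(labels, preds):.3f}")
```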
Citation
If you use MMAFFBen in your work, please cite our paper:
```bibtex
@article{liu2025mmaffben,
  title={MMAFFBen: A Multilingual and Multimodal Affective Analysis Benchmark for Evaluating LLMs and VLMs},
  author={Liu, Zhiwei and Qian, Lingfei and Xie, Qianqian and Huang, Jimin and Yang, Kailai and Ananiadou, Sophia},
  journal={arXiv preprint arXiv:2505.24423},
  year={2025}
}
```
Owner
- Name: Liu Zhiwei
- Login: lzw108
- Kind: user
- Location: Manchester, UK
- Repositories: 1
- Profile: https://github.com/lzw108
Citation (CITATION.cff)
Note: this file cites LLaMA-Factory, on which the repository is based, rather than MMAFFBen itself.

```yaml
cff-version: 1.2.0
date-released: 2024-03
message: "If you use this software, please cite it as below."
authors:
  - family-names: "Zheng"
    given-names: "Yaowei"
  - family-names: "Zhang"
    given-names: "Richong"
  - family-names: "Zhang"
    given-names: "Junhao"
  - family-names: "Ye"
    given-names: "Yanhan"
  - family-names: "Luo"
    given-names: "Zheyan"
  - family-names: "Feng"
    given-names: "Zhangchi"
  - family-names: "Ma"
    given-names: "Yongqiang"
title: "LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models"
url: "https://arxiv.org/abs/2403.13372"
preferred-citation:
  type: conference-paper
  conference:
    name: "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)"
  authors:
    - family-names: "Zheng"
      given-names: "Yaowei"
    - family-names: "Zhang"
      given-names: "Richong"
    - family-names: "Zhang"
      given-names: "Junhao"
    - family-names: "Ye"
      given-names: "Yanhan"
    - family-names: "Luo"
      given-names: "Zheyan"
    - family-names: "Feng"
      given-names: "Zhangchi"
    - family-names: "Ma"
      given-names: "Yongqiang"
  title: "LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models"
  url: "https://arxiv.org/abs/2403.13372"
  year: 2024
  publisher: "Association for Computational Linguistics"
  address: "Bangkok, Thailand"
```
GitHub Events
Total
- Push event: 1
- Public event: 1
Last Year
- Push event: 1
- Public event: 1
Dependencies
- ${BASE_IMAGE} latest build
- ascendai/cann 8.0.rc1-910b-ubuntu22.04-py3.8 build
- hardandheavy/transformers-rocm 2.2.0 build
- accelerate >=0.34.0,<=1.2.1
- av *
- datasets >=2.16.0,<=3.2.0
- einops *
- fastapi *
- fire *
- gradio >=4.38.0,<=5.12.0
- librosa *
- matplotlib >=3.7.0
- numpy <2.0.0
- packaging *
- pandas >=2.0.0
- peft >=0.11.1,<=0.12.0
- protobuf *
- pydantic *
- pyyaml *
- scipy *
- sentencepiece *
- sse-starlette *
- tiktoken *
- tokenizers >=0.19.0,<=0.21.0
- transformers >=4.41.2,<=4.49.0
- trl >=0.8.6,<=0.9.6
- tyro <0.9.0
- uvicorn *
- accelerate <=1.2.1,>=0.34.0
- adam-mini *
- apollo-torch *
- aqlm >=1.1.0
- auto-gptq >=0.5.0
- autoawq *
- av *
- badam >=1.2.1
- bitsandbytes >=0.39.0
- datasets <=3.2.0,>=2.16.0
- decorator *
- deepspeed <=0.16.2,>=0.10.0
- eetq *
- einops *
- fastapi *
- fire *
- galore-torch *
- gradio <=5.12.0,>=4.38.0
- hqq *
- jieba *
- jsonschema_specifications *
- librosa *
- liger-kernel *
- matplotlib >=3.7.0
- modelscope *
- msgpack *
- nltk *
- numpy <2.0.0
- openmind *
- optimum >=1.17.0
- packaging *
- pandas >=2.0.0
- peft <=0.12.0,>=0.11.1
- pre-commit *
- protobuf *
- pydantic *
- pytest *
- pyyaml *
- referencing *
- rouge-chinese *
- ruff *
- scipy *
- sentencepiece *
- soundfile *
- sse-starlette *
- swanlab *
- tiktoken *
- tokenizers <=0.21.0,>=0.19.0
- torch >=1.13.1
- torch ==2.1.0
- torch-npu ==2.1.0.post3
- torchaudio *
- torchvision *
- transformers *
- transformers_stream_generator *
- trl <=0.9.6,>=0.8.6
- tyro <0.9.0
- uvicorn *
- vector_quantize_pytorch *
- vllm <=0.7.2,>=0.4.3
- vocos *