https://github.com/cosmaadrian/nli-stress-test
Official repository for the EMNLP 2024 paper "How Hard is this Test Set? NLI Characterization by Exploiting Training Dynamics"
Science Score: 49.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file (found codemeta.json file)
- ✓ .zenodo.json file (found .zenodo.json file)
- ✓ DOI references (found 1 DOI reference(s) in README)
- ✓ Academic publication links (links to: scholar.google)
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity, 9.7%, to scientific vocabulary)
Keywords
Basic Info
Statistics
- Stars: 3
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 0
Topics
Metadata Files
README.md
How Hard is this Test Set? NLI Characterization by Exploiting Training Dynamics
Accepted at "The 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP 2024)"
📘 Abstract
Natural Language Inference (NLI) evaluation is crucial for assessing language understanding models; however, popular datasets suffer from systematic spurious correlations that artificially inflate actual model performance. To address this, we propose a method for the automated creation of a challenging test set without relying on the manual construction of artificial and unrealistic examples. We categorize the test set of popular NLI datasets into three difficulty levels by leveraging methods that exploit training dynamics. This categorization significantly reduces spurious correlation measures, with examples labeled as having the highest difficulty showing markedly decreased performance and encompassing more realistic and diverse linguistic phenomena. When our characterization method is applied to the training set, models trained with only a fraction of the data achieve comparable performance to those trained on the full dataset, surpassing other dataset characterization techniques. Our research addresses limitations in NLI dataset construction, providing a more authentic evaluation of model performance with implications for diverse NLU applications.
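To make the core idea above concrete: the method scores each example by how a model behaves on it across training, then splits examples into difficulty levels. The sketch below is a rough, hypothetical illustration of that kind of training-dynamics characterization, not this repository's code or the paper's exact criteria; the function name `characterize`, the confidence-only ranking, and the equal-sized three-way split are assumptions made here for illustration.

```python
# Hypothetical sketch of training-dynamics-based difficulty characterization.
# Not this repository's implementation; "confidence" here is the mean
# probability assigned to the gold label across epochs, and examples are
# split into equal-sized difficulty bins by that score.
import numpy as np

def characterize(gold_probs: np.ndarray, n_levels: int = 3) -> np.ndarray:
    """gold_probs: [num_epochs, num_examples] probability assigned to the
    gold label after each training epoch. Returns an integer difficulty
    level per example (0 = easiest, n_levels - 1 = hardest)."""
    confidence = gold_probs.mean(axis=0)      # mean gold-label probability
    order = np.argsort(-confidence)           # most confident first
    levels = np.empty_like(order)
    # Lower confidence throughout training -> higher difficulty level.
    for level, chunk in enumerate(np.array_split(order, n_levels)):
        levels[chunk] = level
    return levels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake dynamics for 10 examples over 5 epochs, just to show the shapes.
    probs = rng.uniform(0.2, 1.0, size=(5, 10))
    print(characterize(probs))  # e.g. [1 0 2 ...], one level per example
```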
⚒️ Usage
TBD
📖 Citation
If you found our work useful, please cite our paper:
@inproceedings{cosma2024hard,
title = "How Hard is this Test Set? {NLI} Characterization by Exploiting Training Dynamics",
author = "Cosma, Adrian and
Ruseti, Stefan and
Dascalu, Mihai and
Caragea, Cornelia",
editor = "Al-Onaizan, Yaser and
Bansal, Mohit and
Chen, Yun-Nung",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
month = nov,
year = "2024",
address = "Miami, Florida, USA",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.emnlp-main.175/",
doi = "10.18653/v1/2024.emnlp-main.175",
pages = "2990--3001"
}
📝 License
This work is licensed under the Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.
Owner
- Name: Adrian Cosma
- Login: cosmaadrian
- Kind: user
- Location: Bucharest, Romania
- Company: University Politehnica of Bucharest
- Repositories: 21
- Profile: https://github.com/cosmaadrian
- Bio: Mercenary Researcher
GitHub Events
Total
- Push event: 1
Last Year
- Push event: 1