milu

MILU (Multi-task Indic Language Understanding Benchmark) is a comprehensive evaluation dataset designed to assess the performance of LLMs across 11 Indic languages.

https://github.com/ai4bharat/milu

Science Score: 41.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.2%) to scientific vocabulary

Keywords

ai4bharat evaluation indic-languages llm-evaluation
Last synced: 6 months ago

Repository

MILU (Multi-task Indic Language Understanding Benchmark) is a comprehensive evaluation dataset designed to assess the performance of LLMs across 11 Indic languages.

Basic Info
Statistics
  • Stars: 11
  • Watchers: 8
  • Forks: 4
  • Open Issues: 0
  • Releases: 0
Topics
ai4bharat evaluation indic-languages llm-evaluation
Created over 1 year ago · Last pushed about 1 year ago
Metadata Files
Readme · Contributing · License · Citation · Codeowners

README.md

MILU: A Multi-task Indic Language Understanding Benchmark

Badges: ArXiv · Hugging Face · CC BY 4.0

Overview

MILU (Multi-task Indic Language Understanding Benchmark) is a comprehensive evaluation dataset designed to assess the performance of Large Language Models (LLMs) across 11 Indic languages. It spans 8 domains and 41 subjects, reflecting both general and culturally specific knowledge from India.

This repository contains code for evaluating language models on the MILU benchmark using the lm-eval-harness framework.

Usage

Prerequisites
  • Python 3.7+
  • lm-eval-harness library
  • HuggingFace Transformers
  • vLLM (optional, for faster inference)
  1. Clone this repository:

    git clone --depth 1 https://github.com/AI4Bharat/MILU.git
    cd MILU
    pip install -e .

  2. Request access to the HuggingFace 🤗 dataset here.

  3. Set up your environment variables:

    export HF_HOME=/path/to/HF_CACHE/if/needed
    export HF_TOKEN=YOUR_HUGGINGFACE_TOKEN

The following languages are supported for MILU:
  • Bengali
  • English
  • Gujarati
  • Hindi
  • Kannada
  • Malayalam
  • Marathi
  • Odia
  • Punjabi
  • Tamil
  • Telugu

HuggingFace Evaluation

For HuggingFace models, you may use the following sample command:

    lm_eval --model hf \
        --model_args 'pretrained=google/gemma-2-27b-it,temperature=0.0,top_p=1.0,parallelize=True' \
        --tasks milu \
        --batch_size auto:40 \
        --log_samples \
        --output_path $EVAL_OUTPUT_PATH \
        --max_batch_size 64 \
        --num_fewshot 5 \
        --apply_chat_template

vLLM Evaluation

For vLLM-compatible models, you may use the following sample command:

    lm_eval --model vllm \
        --model_args 'pretrained=meta-llama/Llama-3.2-3B,tensor_parallel_size=$N_GPUS' \
        --gen_kwargs 'temperature=0.0,top_p=1.0' \
        --tasks milu \
        --batch_size auto \
        --log_samples \
        --output_path $EVAL_OUTPUT_PATH

Single Language Evaluation

To evaluate your model on a specific language, modify the --tasks parameter:

    --tasks milu_English

Replace English with any of the supported languages (e.g., Odia, Hindi, etc.).
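To run all languages one after another, the per-language tasks can be scripted in a simple loop. The minimal sketch below reuses the vLLM sample command from above; the model name, $N_GPUS, and $EVAL_OUTPUT_PATH are placeholders to adjust for your setup.

    # Evaluate one model on every MILU language task in turn (placeholder model/paths).
    for LANG in Bengali English Gujarati Hindi Kannada Malayalam Marathi Odia Punjabi Tamil Telugu; do
        lm_eval --model vllm \
            --model_args "pretrained=meta-llama/Llama-3.2-3B,tensor_parallel_size=$N_GPUS" \
            --gen_kwargs 'temperature=0.0,top_p=1.0' \
            --tasks "milu_${LANG}" \
            --batch_size auto \
            --output_path "$EVAL_OUTPUT_PATH/${LANG}"
    done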

Evaluation Tips & Observations

  1. Make sure to use --apply_chat_template for instruction-fine-tuned models so that the prompt is formatted correctly.
  2. vLLM generally works better with Llama models, while Gemma models work better with the HuggingFace backend.
  3. If vLLM encounters out-of-memory errors, try reducing max_gpu_utilization, or switch to the HuggingFace backend.
  4. For HuggingFace, use --batch_size=auto:<n_batch_resize_tries> to re-select the batch size multiple times.
  5. When using vLLM, pass generation kwargs in the --gen_kwargs flag. For HuggingFace, include them in model_args (see the sketch below).
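To make the last tip concrete, here is a minimal side-by-side sketch of where the same generation parameters go for each backend, distilled from the sample commands above; the model names are only placeholders.

    # vLLM backend: generation kwargs go in the --gen_kwargs flag
    lm_eval --model vllm \
        --model_args 'pretrained=meta-llama/Llama-3.2-3B' \
        --gen_kwargs 'temperature=0.0,top_p=1.0' \
        --tasks milu

    # HuggingFace backend: the same kwargs go inside --model_args
    lm_eval --model hf \
        --model_args 'pretrained=google/gemma-2-27b-it,temperature=0.0,top_p=1.0' \
        --tasks milu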

Key Features

  • 11 Indian Languages: Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Odia, Punjabi, Tamil, Telugu, and English
  • Domains: 8 diverse domains including Arts & Humanities, Social Sciences, STEM, and more
  • Subjects: 41 subjects covering a wide range of topics
  • Questions: ~80,000 multiple-choice questions
  • Cultural Relevance: Incorporates India-specific knowledge from regional and state-level examinations

Dataset Statistics

| Language  | Total Questions | Translated Questions | Avg Words Per Question |
|-----------|-----------------|----------------------|------------------------|
| Bengali   | 6638            | 1601                 | 15.12                  |
| Gujarati  | 4827            | 2755                 | 16.12                  |
| Hindi     | 14837           | 115                  | 20.61                  |
| Kannada   | 6234            | 1522                 | 12.42                  |
| Malayalam | 4321            | 3354                 | 12.39                  |
| Marathi   | 6924            | 1235                 | 18.76                  |
| Odia      | 4525            | 3100                 | 14.96                  |
| Punjabi   | 4099            | 3411                 | 19.26                  |
| Tamil     | 6372            | 1524                 | 13.14                  |
| Telugu    | 7304            | 1298                 | 15.71                  |
| English   | 13536           | -                    | 22.07                  |
| Total     | 79617           | 19915                | 16.41 (avg)            |

Dataset Structure

Test Set

The test set consists of the MILU (Multi-task Indic Language Understanding) benchmark, which contains approximately 80,000 multiple-choice questions (79,617 in total; see the table above) across 11 Indic languages.

Validation Set

The dataset includes a separate validation set of 8,933 samples that can be used for few-shot examples during evaluation. This validation set was created by sampling questions from each of the 41 subjects.
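If you want to inspect the splits locally (for example, the validation pool used for few-shot examples), one option is to download the gated dataset with the Hugging Face CLI, as in the sketch below. The dataset ID is an assumption based on this repository's organization, and the command only works after your access request has been approved and HF_TOKEN is set.

    # Assumed dataset ID (ai4bharat/MILU); requires approved access and a valid HF_TOKEN.
    huggingface-cli download ai4bharat/MILU --repo-type dataset --local-dir ./MILU_data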

Subjects spanning MILU

| Domain | Subjects |
|--------|----------|
| Arts & Humanities | Architecture and Design, Arts and Culture, Education, History, Language Studies, Literature and Linguistics, Media and Communication, Music and Performing Arts, Religion and Spirituality |
| Business Studies | Business and Management, Economics, Finance and Investment |
| Engineering & Tech | Energy and Power, Engineering, Information Technology, Materials Science, Technology and Innovation, Transportation and Logistics |
| Environmental Sciences | Agriculture, Earth Sciences, Environmental Science, Geography |
| Health & Medicine | Food Science, Health and Medicine |
| Law & Governance | Defense and Security, Ethics and Human Rights, Law and Ethics, Politics and Governance |
| Math and Sciences | Astronomy and Astrophysics, Biology, Chemistry, Computer Science, Logical Reasoning, Physics |
| Social Sciences | Anthropology, International Relations, Psychology, Public Administration, Social Welfare and Development, Sociology, Sports and Recreation |

Citation

If you use MILU in your work, please cite us:

    @article{verma2024milu,
      title   = {MILU: A Multi-task Indic Language Understanding Benchmark},
      author  = {Sshubam Verma and Mohammed Safi Ur Rahman Khan and Vishwajeet Kumar and Rudra Murthy and Jaydeep Sen},
      year    = {2024},
      journal = {arXiv preprint arXiv:2411.02538}
    }

License

This dataset is released under the CC BY 4.0 license.

Contact

For any questions or feedback, please contact:
  • Sshubam Verma (sshubamverma@ai4bharat.org)
  • Mohammed Safi Ur Rahman Khan (safikhan@ai4bharat.org)
  • Rudra Murthy (rmurthyv@in.ibm.com)
  • Vishwajeet Kumar (vishk024@in.ibm.com)

Owner

  • Name: AI4Bhārat
  • Login: AI4Bharat
  • Kind: organization
  • Email: opensource@ai4bharat.org
  • Location: India

Artificial-Intelligence-For-Bhārat : Building open-source AI solutions for India!

Citation (CITATION.bib)

@misc{eval-harness,
  author       = {Gao, Leo and Tow, Jonathan and Abbasi, Baber and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and Le Noac'h, Alain and Li, Haonan and McDonell, Kyle and Muennighoff, Niklas and Ociepa, Chris and Phang, Jason and Reynolds, Laria and Schoelkopf, Hailey and Skowron, Aviya and Sutawika, Lintang and Tang, Eric and Thite, Anish and Wang, Ben and Wang, Kevin and Zou, Andy},
  title        = {A framework for few-shot language model evaluation},
  month        = 12,
  year         = 2023,
  publisher    = {Zenodo},
  version      = {v0.4.0},
  doi          = {10.5281/zenodo.10256836},
  url          = {https://zenodo.org/records/10256836}
}

GitHub Events

Total
  • Issues event: 3
  • Watch event: 17
  • Issue comment event: 5
  • Push event: 1
  • Fork event: 4
Last Year
  • Issues event: 3
  • Watch event: 17
  • Issue comment event: 5
  • Push event: 1
  • Fork event: 4

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 1
  • Total pull requests: 0
  • Average time to close issues: 15 days
  • Average time to close pull requests: N/A
  • Total issue authors: 1
  • Total pull request authors: 0
  • Average comments per issue: 1.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 1
  • Pull requests: 0
  • Average time to close issues: 15 days
  • Average time to close pull requests: N/A
  • Issue authors: 1
  • Pull request authors: 0
  • Average comments per issue: 1.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • nishkalavallabhi (1)
  • abhinand5 (1)
Pull Request Authors
Top Labels
Issue Labels
question (1)
Pull Request Labels

Dependencies

.github/workflows/new_tasks.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
  • tj-actions/changed-files v44.5.2 composite
.github/workflows/publish.yml actions
  • actions/checkout v4 composite
  • actions/download-artifact v3 composite
  • actions/setup-python v4 composite
  • actions/upload-artifact v3 composite
  • pypa/gh-action-pypi-publish release/v1 composite
.github/workflows/unit_tests.yml actions
  • actions/checkout v4 composite
  • actions/setup-python v5 composite
  • actions/upload-artifact v3 composite
  • pre-commit/action v3.0.1 composite
pyproject.toml pypi
  • accelerate >=0.26.0
  • datasets >=2.16.0
  • dill *
  • evaluate >=0.4.0
  • evaluate *
  • jsonlines *
  • more_itertools *
  • numexpr *
  • peft >=0.2.0
  • pybind11 >=2.6.2
  • pytablewriter *
  • rouge-score >=0.0.4
  • sacrebleu >=1.5.0
  • scikit-learn >=0.24.1
  • sqlitedict *
  • torch >=1.8
  • tqdm-multiprocess *
  • transformers >=4.1
  • word2number *
  • zstandard *
requirements.txt pypi
setup.py pypi