https://github.com/ai4bharat/indicinstruct

Code repository for "Introducing Airavata: Hindi Instruction-tuned LLM"

Science Score: 10.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (13.3%) to scientific vocabulary

Keywords

indic-languages, instruction-tuning, multilingual
Last synced: 6 months ago

Repository

Code repository for "Introducing Airavata: Hindi Instruction-tuned LLM"

Basic Info
  • Fork of allenai/open-instruct
  • Topics: indic-languages, instruction-tuning, multilingual
  • Created: about 2 years ago
  • Last pushed: over 1 year ago
Statistics
  • Stars: 50
  • Watchers: 0
  • Forks: 6
  • Open Issues: 0
  • Releases: 0

https://github.com/AI4Bharat/IndicInstruct/blob/main/

# Airavata

[Paper](https://arxiv.org/abs/2401.15006) | [Blogpost](https://ai4bharat.github.io/airavata) | [HF Model](https://huggingface.co/ai4bharat/airavata) | [HF Dataset](https://huggingface.co/datasets/ai4bharat/indic-instruct-data-v0.1) | [HF Benchmarks](https://huggingface.co/collections/ai4bharat/airavata-evaluation-suite-65b13b7b68165de71ba0b333)

We release Airavata v0.1, a Hindi chat model instruction-finetuned on SarvamAI's OpenHathi. Please refer to our [official blog post](https://ai4bharat.github.io/airavata/) for details on the model, dataset creation, and the evaluation process.

Airavata is a Hindi instruction-tuned model trained on the IndicInstruct dataset.

Cover image generated by DALL-E 3.
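
For quick experimentation, the released checkpoint can be loaded with the standard Hugging Face `transformers` generation API. The sketch below is illustrative and not part of this repo; chat-style use may additionally require the prompt format described in the blog post.

```python
# Minimal sketch: load the released Airavata checkpoint and generate text.
# Assumes `pip install torch transformers`; the bare prompt is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai4bharat/airavata"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = "भारत की राजधानी क्या है?"  # "What is the capital of India?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```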

This repo was forked from [allenai/open-instruct](https://github.com/allenai/open-instruct), an open-source initiative for instruction-tuning widely used pretrained language models. More information about the codebase can be found there.

## Setup

To run training, evaluation, or inference for our finetuned models, install the required packages (after installing PyTorch):

```bash
pip install -r requirements.txt
```

If you just want the dependencies for the weight-diff script, use:

```bash
pip install -r weight-diff-requirements.txt
```

## Training

In general, any model from the Hugging Face Hub can be loaded for training. You may train a model from scratch or finetune an existing model. The following commands show how we trained SarvamAI's OpenHathi base model.

#### Training a model from scratch

```bash
# install additional package
pip install lm-dataformat

# preprocess and tokenize the datasets before training from scratch
python3 scripts/tokenize_dataset.py \
    --tokenizer_path <path to tokenizer> \
    --data_path <path to data> \
    --save_path <path to save tokenized data> \
    --max_seq_length <max sequence length> \
    --max_examples <max examples> \
    --max_tokens <max tokens>

bash scripts/train_with_accelerate.sh
```

#### Finetuning an existing model

```bash
bash scripts/finetune_lora_with_accelerate.sh
```

Please check the scripts for more information on the arguments.

## Dataset Preparation

We cover various instruction datasets to train our chat model. The collection consists of:

* Anudesh
* wikiHow
* Flan v2 (67k sample subset)
* Dolly
* Anthropic-HHH (5k sample subset)
* OpenAssistant v1
* LMSYS-Chat (50k sample subset)

We have put the above datasets together; the collection can be accessed on [Hugging Face](https://huggingface.co/datasets/ai4bharat/indic-instruct-data-v0.1).

```bash
python3 reformat_indic_instruct_data.py --output_dir <output directory>
```

## Evaluation

We evaluate on standard Indic and English benchmarks to assess the capabilities of our model:

* Indic NLU and commonsense reasoning tasks
  * IndicSentiment
  * IndicCOPA
  * IndicXNLI
  * IndicXParaphrase
* Indic NLG
  * IndicQA
  * IndicHeadlineGeneration
  * IndicWikiBio
* English NLU and commonsense reasoning tasks
  * MMLU
  * BoolQ
  * ARC (both Easy and Challenge subsets)
  * HellaSwag
* English-Hindi translation
  * Flores
  * IN22-Gen

In addition, we evaluate on the Indic benchmarks using the translate-test approach, i.e., translating the Hindi benchmarks to English and evaluating an English model on them. For this, we use the LLaMA-2 7B chat model. Note that the OpenHathi base model was itself finetuned from LLaMA-2 7B, so it is appropriate to compare our model against its English counterpart. Similarly, we translate the English benchmarks to Hindi using [IndicTrans2](https://github.com/AI4Bharat/IndicTrans2) and evaluate our model on them. Note that both OpenHathi and Airavata were trained bilingually on English and Hindi, so they support generation in both languages.

You will have to request access to the LLaMA variants and log in to the Hugging Face Hub (or pass a token). This process is detailed in the [Hugging Face documentation](https://huggingface.co/docs/transformers/model_doc/llama).

Most of the aforementioned datasets can be loaded directly from the Hugging Face Hub. However, please extract the MMLU (English and Hindi-translated) variants provided in the `data/eval` directory for evaluation.
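
If you prefer to authenticate programmatically rather than through `huggingface-cli login`, the sketch below shows one way to do it with the `huggingface_hub` client; the token string is a placeholder for your own access token.

```python
# Minimal sketch: programmatic Hugging Face Hub authentication so that the
# gated LLaMA-2 checkpoints can be downloaded. "hf_..." is a placeholder.
from huggingface_hub import login
from transformers import AutoModelForCausalLM, AutoTokenizer

login(token="hf_...")  # equivalent to running `huggingface-cli login`

# Once access has been granted on the model page, the gated checkpoint
# loads like any other model:
model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```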
### Example

The evaluation scripts for the benchmarks listed above can be found at `scripts/indic_eval/name_of_the_task.sh` for Indic benchmarks and `scripts/translate_test_eval/name_of_the_task.sh` for translate-test. The following commands show how to evaluate on IndicSentiment:

```bash
# Evaluation on IndicSentiment (Hindi) in a 5-shot setting
python3 -m eval.indicsentiment.run_eval \
    --ntrain 5 \
    --save_dir "results/indicsentiment/airavata-5shot" \
    --model_name_or_path ai4bharat/airavata \
    --tokenizer_name_or_path ai4bharat/airavata \
    --eval_batch_size 4

# Evaluation on IndicSentiment (translate-test) in a 5-shot setting
python3 -m eval.indicsentiment.run_translate_test_eval \
    --ntrain 5 \
    --save_dir "results/translate_test/indicsentiment/llama2-chat-5shot" \
    --model_name_or_path meta-llama/Llama-2-7b-chat-hf \
    --tokenizer_name_or_path meta-llama/Llama-2-7b-chat-hf \
    --eval_batch_size 4
```

## Released Checkpoint(s)

Our chat model is available on [Hugging Face](https://huggingface.co/ai4bharat/airavata). The model is licensed under the [original Llama license](https://github.com/facebookresearch/llama/blob/main/LICENSE). As for the parameter count, OpenHathi (and subsequently Airavata) was trained from LLaMA-2 7B, so the model comprises **7 billion parameters** in total.

## Questions

In case of any queries or issues, we recommend opening an issue on the GitHub repo directly.

## Citation

If you use this repository or our models, please cite our work:

```bibtex
@article{gala2024airavata,
  title   = {Airavata: Introducing Hindi Instruction-tuned LLM},
  author  = {Jay Gala and Thanmay Jayakumar and Jaavid Aktar Husain and Aswanth Kumar M and Mohammed Safi Ur Rahman Khan and Diptesh Kanojia and Ratish Puduppully and Mitesh M. Khapra and Raj Dabre and Rudra Murthy and Anoop Kunchukuttan},
  year    = {2024},
  journal = {arXiv preprint arXiv:2401.15006}
}
```

Owner

  • Name: AI4Bhārat
  • Login: AI4Bharat
  • Kind: organization
  • Email: opensource@ai4bharat.org
  • Location: India

Artificial-Intelligence-For-Bhārat: Building open-source AI solutions for India!

GitHub Events

Total
  • Watch event: 7
  • Push event: 3
  • Fork event: 2
Last Year
  • Watch event: 7
  • Push event: 3
  • Fork event: 2

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 0
  • Total pull requests: 8
  • Average time to close issues: N/A
  • Average time to close pull requests: 1 minute
  • Total issue authors: 0
  • Total pull request authors: 2
  • Average comments per issue: 0
  • Average comments per pull request: 0.25
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Pull Request Authors
  • manishiitg (6)
  • trajore (2)