https://github.com/betswish/cross-lingual-consistency

Easy-to-use framework for evaluating the cross-lingual consistency of factual knowledge (supports LLaMA, BLOOM, mT5, RoBERTa, etc.). Paper: https://aclanthology.org/2023.emnlp-main.658/

Science Score: 49.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 1 DOI reference(s) in README
  • Academic publication links
  • Committers with academic emails
    1 of 2 committers (50.0%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (9.7%) to scientific vocabulary

Keywords

bloom knowledge llama mt5 roberta transformers
Last synced: 5 months ago

Repository

Easy-to-use framework for evaluating the cross-lingual consistency of factual knowledge (supports LLaMA, BLOOM, mT5, RoBERTa, etc.). Paper: https://aclanthology.org/2023.emnlp-main.658/

Basic Info
  • Host: GitHub
  • Owner: Betswish
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 16 MB
Statistics
  • Stars: 25
  • Watchers: 1
  • Forks: 1
  • Open Issues: 0
  • Releases: 0
Topics
bloom knowledge llama mt5 roberta transformers
Created over 2 years ago · Last pushed 7 months ago
Metadata Files
Readme License

README.md

Cross-Lingual Consistency (CLC) of Factual Knowledge in Multilingual Language Models

Easy-to-use framework for evaluating the cross-lingual consistency of factual knowledge (supports LLaMA, BLOOM, mT5, RoBERTa, etc.).

Our paper was selected for the Outstanding Paper Award in the Multilinguality and Linguistic Diversity track of EMNLP 2023 and the Best Data Award at the GenBench Workshop.

Authors: Jirui Qi, Raquel Fernández, Arianna Bisazza

[Figure: pipeline overview]

Abstract: Multilingual large-scale Pretrained Language Models (PLMs) have been shown to store considerable amounts of factual knowledge, but large variations are observed across languages. With the ultimate goal of ensuring that users with different language backgrounds obtain consistent feedback from the same model, we study the cross-lingual consistency (CLC) of factual knowledge in various multilingual PLMs. To this end, we propose a Ranking-based Consistency (RankC) metric to evaluate knowledge consistency across languages independently from accuracy. Using this metric, we conduct an in-depth analysis of the determining factors for CLC, both at model level and at language-pair level. Among other results, we find that increasing model size leads to higher factual probing accuracy in most languages, but does not improve cross-lingual consistency. Finally, we conduct a case study on CLC when new factual associations are inserted in the PLMs via model editing. Results on a small sample of facts inserted in English reveal a clear pattern whereby the new piece of knowledge transfers only to languages with which English has a high RankC score.
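
To make the RankC idea concrete, below is a minimal Python sketch of a ranking-overlap consistency score in the spirit of the paper's metric. This is our reading of the idea, not the repository's implementation: the softmax-style depth weighting is an assumption (the Quick Start's weight variable selects among softmax, norm1, and norm2), and the real candidate sets come from BMLAMA.

```python
import math

# Minimal RankC-style sketch (an assumption, not the repository's code):
# compare the candidate rankings a model produces for the same fact in
# two languages, rewarding agreement near the top of the ranking.
def rankc_sketch(rankings_l1, rankings_l2):
    """rankings_l1[i] / rankings_l2[i]: candidate answers for query i,
    ranked by model probability in language 1 / language 2."""
    scores = []
    for r1, r2 in zip(rankings_l1, rankings_l2):
        n = len(r1)
        w = [math.exp(n - j) for j in range(1, n + 1)]  # top ranks weigh more
        z = sum(w)
        score = 0.0
        for j in range(1, n + 1):
            overlap = len(set(r1[:j]) & set(r2[:j])) / j  # top-j agreement
            score += (w[j - 1] / z) * overlap
        scores.append(score)
    return sum(scores) / len(scores)  # mean over queries

# Identical rankings score 1.0; disagreement at low ranks costs little.
print(rankc_sketch([["Paris", "Rome", "Berlin"]],
                   [["Paris", "Berlin", "Rome"]]))  # ~0.88
```

Note that such a score is deliberately independent of accuracy: two languages that rank candidates the same way score high even when the top candidate is wrong in both.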

If you find the paper helpful and use its content, please cite:

```bibtex
@inproceedings{qi-etal-2023-cross,
    title = "Cross-Lingual Consistency of Factual Knowledge in Multilingual Language Models",
    author = "Qi, Jirui and Fern{\'a}ndez, Raquel and Bisazza, Arianna",
    editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.658",
    doi = "10.18653/v1/2023.emnlp-main.658",
    pages = "10650--10666",
}
```

Environment:

Python: 3.11

Packages: pip install -r requirements.txt

Quick Start

For a quick start, you only need to run the following two lines to get the CLC of two languages in a PLM:

```bash
cd 1_easyrun
bash easyrun.sh
```

You may also want to modify the variables in easyrun.sh:

- mname: The model to evaluate. LLaMA, BLOOM, BLOOMZ, mT5, and RoBERTa are currently supported; use the full Hugging Face model name, e.g. bigscience/bloom-3b.
- lang1 & lang2: Language abbreviations in ISO 639-1 format; see the tables below for details.
- mini: yes to use BMLAMA-17, no to use BMLAMA-53.
- weight: Weighting metric for RankC; select among softmax, norm1, and norm2.

Supported languages of BMLAMA-17

| Language  | ISO 639-1 | Language  | ISO 639-1 | Language   | ISO 639-1 |
| --------- | --------- | --------- | --------- | ---------- | --------- |
| English   | en        | French    | fr        | Dutch      | nl        |
| Spanish   | es        | Russian   | ru        | Japanese   | ja        |
| Chinese   | zh        | Korean    | ko        | Vietnamese | vi        |
| Greek     | el        | Hungarian | hu        | Hebrew     | he        |
| Turkish   | tr        | Catalan   | ca        | Arabic     | ar        |
| Ukrainian | uk        | Persian   | fa        |            |           |

Supported languages of BMLAMA-53

| Language   | ISO 639-1 | Language    | ISO 639-1 | Language   | ISO 639-1 |
| ---------- | --------- | ----------- | --------- | ---------- | --------- |
| Catalan    | ca        | Azerbaijani | az        | English    | en        |
| Arabic     | ar        | Ukrainian   | uk        | Persian    | fa        |
| Turkish    | tr        | Italian     | it        | Greek      | el        |
| Russian    | ru        | Croatian    | hr        | Hindi      | hi        |
| Swedish    | sv        | Albanian    | sq        | French     | fr        |
| Irish      | ga        | Basque      | eu        | German     | de        |
| Dutch      | nl        | Estonian    | et        | Hebrew     | he        |
| Spanish    | es        | Bengali     | bn        | Malay      | ms        |
| Serbian    | sr        | Armenian    | hy        | Urdu       | ur        |
| Hungarian  | hu        | Latin       | la        | Slovenian  | sl        |
| Czech      | cs        | Afrikaans   | af        | Galician   | gl        |
| Finnish    | fi        | Romanian    | ro        | Korean     | ko        |
| Welsh      | cy        | Thai        | th        | Belarusian | be        |
| Indonesian | id        | Portuguese  | pt        | Vietnamese | vi        |
| Georgian   | ka        | Japanese    | ja        | Danish     | da        |
| Bulgarian  | bg        | Chinese     | zh        | Polish     | pl        |
| Latvian    | lv        | Slovak      | sk        | Lithuanian | lt        |
| Tamil      | ta        | Cebuano     | ceb       |            |           |
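
For readers curious what probing a single fact in one language looks like, the sketch below ranks candidate answers with a masked multilingual LM through the Hugging Face transformers API. It illustrates the general technique rather than this repository's code: the prompt, candidate set, and choice of xlm-roberta-base are our own illustrative assumptions, and it scores only each candidate's first subword token (real multi-token candidates would need, e.g., summed log-probabilities).

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Illustrative setup (assumption): any multilingual masked LM works here.
tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")
model.eval()

prompt = f"The capital of France is {tok.mask_token}."
candidates = ["Paris", "Rome", "Berlin"]

inputs = tok(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

# Score each candidate by the logit of its first subword at the mask slot.
scores = {c: logits[tok(c, add_special_tokens=False).input_ids[0]].item()
          for c in candidates}
ranking = sorted(candidates, key=scores.get, reverse=True)
print(ranking)  # per-language candidate ranking, the input to RankC
```

Repeating this with translated prompts and candidates for a second language yields the pair of rankings that a RankC-style comparison consumes.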

Owner

  • Login: Betswish
  • Kind: user

GitHub Events

Total
  • Issues event: 1
  • Watch event: 3
  • Issue comment event: 1
  • Push event: 7
Last Year
  • Issues event: 1
  • Watch event: 3
  • Issue comment event: 1
  • Push event: 7

Committers

Last synced: 5 months ago

All Time
  • Total Commits: 70
  • Total Committers: 2
  • Avg Commits per committer: 35.0
  • Development Distribution Score (DDS): 0.029
Past Year
  • Commits: 8
  • Committers: 2
  • Avg Commits per committer: 4.0
  • Development Distribution Score (DDS): 0.125
Top Committers
  • Jirui Qi (1****h@u****m): 68 commits
  • Betswish (j****i@r****l): 2 commits
Committer Domains (Top 20 + Academic)
  • rug.nl: 1

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 2
  • Total pull requests: 0
  • Average time to close issues: 4 days
  • Average time to close pull requests: N/A
  • Total issue authors: 2
  • Total pull request authors: 0
  • Average comments per issue: 2.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 1
  • Pull requests: 0
  • Average time to close issues: 8 days
  • Average time to close pull requests: N/A
  • Issue authors: 1
  • Pull request authors: 0
  • Average comments per issue: 1.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • Eureka-Maggie (1)
  • tarun360 (1)
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels

Dependencies

requirements.txt pypi
  • Cython 0.29.34 *
  • Jinja2 3.1.2 *
  • MarkupSafe 2.1.2 *
  • Pillow 9.5.0 *
  • PyYAML 6.0 *
  • Pygments 2.15.1 *
  • accelerate 0.19.0 *
  • aiohttp 3.8.4 *
  • aiosignal 1.3.1 *
  • antlr4-python3-runtime 4.9.3 *
  • async-timeout 4.0.2 *
  • attrs 23.1.0 *
  • bitsandbytes-cuda113 0.30.1 *
  • bitsandbytes 0.39.0 *
  • cachetools 5.3.1 *
  • captum 0.6.0 *
  • certifi 2023.5.7 *
  • chardet 3.0.4 *
  • charset-normalizer 3.1.0 *
  • click 8.1.3 *
  • cmake 3.26.3 *
  • colorama 0.4.6 *
  • commonmark 0.9.1 *
  • contourpy 1.0.7 *
  • cycler 0.11.0 *
  • datasets 2.12.0 *
  • dill 0.3.6 *
  • filelock 3.12.0 *
  • fonttools 4.39.4 *
  • frozenlist 1.3.3 *
  • fsspec 2023.5.0 *
  • google-api-core 2.11.0 *
  • google-auth 2.19.1 *
  • google-cloud-core 2.3.2 *
  • google-cloud-translate 3.11.1 *
  • googleapis-common-protos 1.59.0 *
  • googletrans 4.0.0rc1 *
  • grpcio-status 1.54.2 *
  • grpcio 1.54.2 *
  • h11 0.9.0 *
  • h2 3.2.0 *
  • higher 0.2.1 *
  • hpack 3.0.0 *
  • hstspreload 2023.1.1 *
  • httpcore 0.9.1 *
  • httpx 0.13.3 *
  • huggingface-hub 0.14.1 *
  • hydra-core 1.3.2 *
  • hyperframe 5.2.0 *
  • idna 2.10 *
  • inquirerpy 0.3.4 *
  • inseq 0.5.0.dev0 *
  • joblib 1.2.0 *
  • jsonschema 4.17.3 *
  • kiwisolver 1.4.4 *
  • lang2vec 1.1.6 *
  • lit 16.0.5 *
  • matplotlib 3.5.3 *
  • mpmath 1.3.0 *
  • multidict 6.0.4 *
  • multiprocess 0.70.14 *
  • mypy-extensions 1.0.0 *
  • networkx 3.1 *
  • numpy 1.24.3 *
  • nvidia-cublas-cu11 11.10.3.66 *
  • nvidia-cuda-cupti-cu11 11.7.101 *
  • nvidia-cuda-nvrtc-cu11 11.7.99 *
  • nvidia-cuda-runtime-cu11 11.7.99 *
  • nvidia-cudnn-cu11 8.5.0.96 *
  • nvidia-cufft-cu11 10.9.0.58 *
  • nvidia-curand-cu11 10.2.10.91 *
  • nvidia-cusolver-cu11 11.4.0.1 *
  • nvidia-cusparse-cu11 11.7.4.91 *
  • nvidia-nccl-cu11 2.14.3 *
  • nvidia-nvtx-cu11 11.7.91 *
  • omegaconf 2.3.0 *
  • overrides 7.3.1 *
  • packaging 23.1 *
  • pandas 2.0.1 *
  • pastel 0.2.1 *
  • pfzy 0.3.4 *
  • pip 23.0.1 *
  • poethepoet 0.13.1 *
  • prompt-toolkit 3.0.38 *
  • proto-plus 1.22.2 *
  • protobuf 3.20.3 *
  • psutil 5.9.5 *
  • pyarrow 12.0.0 *
  • pyasn1-modules 0.3.0 *
  • pyasn1 0.5.0 *
  • pyparsing 3.0.9 *
  • pyproject-toml 0.0.10 *
  • pyre-extensions 0.0.23 *
  • pyrsistent 0.19.3 *
  • python-dateutil 2.8.2 *
  • pytz 2023.3 *
  • regex 2023.5.5 *
  • requests 2.30.0 *
  • responses 0.18.0 *
  • rfc3986 1.5.0 *
  • rich 10.16.2 *
  • rsa 4.9 *
  • sacremoses 0.0.53 *
  • safetensors 0.3.1 *
  • scikit-learn 1.2.2 *
  • scipy 1.10.1 *
  • sentencepiece 0.1.99 *
  • setuptools 66.0.0 *
  • six 1.16.0 *
  • sniffio 1.3.0 *
  • sympy 1.12 *
  • threadpoolctl 3.1.0 *
  • tokenizers 0.13.3 *
  • toml 0.10.2 *
  • tomli 2.0.1 *
  • torch 2.0.1 *
  • torchtyping 0.1.4 *
  • tqdm 4.65.0 *
  • transformers 4.30.0.dev0 *
  • triton 2.0.0 *
  • typeguard 2.13.3 *
  • typing-inspect 0.8.0 *
  • typing_extensions 4.5.0 *
  • tzdata 2023.3 *
  • unicodecsv 0.14.1 *
  • unimorph-inflect 0.0.1 *
  • urllib3 1.26.16 *
  • wcwidth 0.2.6 *
  • wheel 0.38.4 *
  • xformers 0.0.16 *
  • xxhash 3.2.0 *
  • yarl 1.9.2 *