https://github.com/centre-for-humanities-computing/danish-ner-bias

Investigating bias in Danish language models in Named Entity Recognition (NER). Code from the paper titled "Detecting intersectionality in NER models: A data-driven approach."


Science Score: 13.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (10.5%) to scientific vocabulary

Keywords

language-models named-entity-recognition nlp
Last synced: 5 months ago

Repository

Investigating bias in Danish language models in Named Entity Recognition (NER). Code from the paper titled "Detecting intersectionality in NER models: A data-driven approach."

Basic Info
  • Host: GitHub
  • Owner: centre-for-humanities-computing
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Size: 3.85 MB
Statistics
  • Stars: 2
  • Watchers: 0
  • Forks: 1
  • Open Issues: 1
  • Releases: 0
Topics
language-models named-entity-recognition nlp
Created about 3 years ago · Last pushed almost 3 years ago
Metadata Files
Readme License

README.md

Detecting intersectionality in NER models: A data-driven approach

This repository contains the code used to produce the results in the paper "Detecting intersectionality in NER models: A data-driven approach" by Lassen et al. (2023).

The project investigates the effect of intersectional biases in Danish language models (see the list below) used for Named Entity Recognition (NER). This is achieved by applying a data augmentation technique: augmenting all names in the DaNE test set using gender-divided name lists for both majority and minority names.
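The core idea of the augmentation step can be sketched as follows. This is an illustrative, stdlib-only sketch, not the repository's actual augmenty-based implementation; the name lists, function names, and entity spans here are hypothetical:

```python
import random

# Hypothetical gender-divided name lists for majority and minority groups
NAME_LISTS = {
    ("majority", "female"): ["Anne", "Mette"],
    ("majority", "male"): ["Peter", "Lars"],
    ("minority", "female"): ["Fatima", "Aisha"],
    ("minority", "male"): ["Mohammad", "Ali"],
}

def augment_names(tokens, per_spans, group, gender, seed=0):
    """Replace every PER entity span in `tokens` with a name sampled
    from the list for the given (group, gender) combination.

    Spans are processed right-to-left so earlier indices stay valid
    when a multi-token name is collapsed into a single token.
    """
    rng = random.Random(seed)
    tokens = list(tokens)
    for start, end in sorted(per_spans, reverse=True):
        new_name = rng.choice(NAME_LISTS[(group, gender)])
        tokens[start:end] = [new_name]
    return tokens

# Example: one sentence from a DaNE-style test set, with one PER span
sentence = ["Jens", "bor", "i", "Aarhus", "."]
augmented = augment_names(sentence, per_spans=[(0, 1)],
                          group="minority", gender="female")
```

Running each model on every (group, gender) variant of the test set then makes performance gaps between name groups directly comparable.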

For instructions on how to reproduce the results, please refer to the Pipeline section.

Project structure

The repository has the following directory structure:

|          | Description |
|---------|:-----------|
| `name_lists` | Name lists used for data augmentation |
| `requirements` | Requirements file for all models, and a separate file for polyglot |
| `results` | Results from all model evaluations, saved as CSV files |
| `src` | Scripts for evaluating all models (`evaluate_XX.py`). Also has helper modules for preprocessing name lists (`process_names`), importing models (`apply_fns`), and augmenting names and evaluating models (`evaluate_fns`) |
| `utils-R` | Utils for running `results.md` |
| `results.md` | Rmarkdown for producing the results table from the paper (Table 2) |
| `polyglot.sh` | Installs the necessary tools and packages and runs the evaluation of polyglot |
| `setup.sh` | Installs the necessary packages for running the evaluation of all models except polyglot |
| `run-models.sh` | Runs the evaluation of all models except polyglot |

Danish language models

The following models are evaluated:

* ScandiNER
* DaCy models
  * DaCy large (`da_dacy_large_trf-0.1.0`)
  * DaCy medium (`da_dacy_medium_trf-0.1.0`)
  * DaCy small (`da_dacy_small_trf-0.1.0`)
* DaNLP BERT
* Flair
* NERDA
* SpaCy models
  * SpaCy large (`da_core_news_lg-3.4.0`)
  * SpaCy medium (`da_core_news_md-3.4.0`)
  * SpaCy small (`da_core_news_sm-3.4.0`)
* Polyglot

Pipeline

The pipeline has been built on Ubuntu (UCloud).

For all models except polyglot, first run setup.sh:

```
bash setup.sh
```

This will create a virtual environment (`env`) and install the necessary packages.

To then evaluate all models, run:

```
bash run-models.sh
```
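Each evaluation script writes its scores to the `results` directory as CSV files; comparing name groups then amounts to reading those files and contrasting F1 across augmentation conditions. A rough sketch of that comparison step follows; the CSV columns and all numbers are made up for illustration and do not reflect the repository's actual schema or results:

```python
import csv
import io

# Hypothetical CSV layout: one row per (model, augmentation condition)
RESULTS_CSV = """model,group,gender,f1
scandiner,majority,male,0.89
scandiner,majority,female,0.88
scandiner,minority,male,0.81
scandiner,minority,female,0.78
"""

def f1_gap(rows, model):
    """Gap between the best- and worst-scoring name group for a model."""
    scores = [float(row["f1"]) for row in rows if row["model"] == model]
    return max(scores) - min(scores)

rows = list(csv.DictReader(io.StringIO(RESULTS_CSV)))
gap = f1_gap(rows, "scandiner")  # 0.89 - 0.78
```

A larger gap indicates that the model's NER performance depends more strongly on which demographic group a name belongs to.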

Polyglot

To set up and evaluate polyglot, run:

```
sudo bash polyglot.sh
```

NB! It is necessary to run this script with `sudo`, as the setup requires certain devtools that will not be installed otherwise. Run at your own risk!

The polyglot.sh script installs the devtools and packages, and runs the evaluation of the model in a separately created environment called `polyenv`.

Acknowledgements

The name augmentation was performed using augmenty and model evaluation was performed using the DaCy framework.

Owner

  • Name: Center for Humanities Computing Aarhus
  • Login: centre-for-humanities-computing
  • Kind: organization
  • Email: chcaa@cas.au.dk
  • Location: Aarhus, Denmark


Committers

Last synced: 7 months ago

All Time
  • Total Commits: 85
  • Total Committers: 3
  • Avg Commits per committer: 28.333
  • Development Distribution Score (DDS): 0.388
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
MinaAlmasi m****0@g****m 52
MinaAlmasi m****0@g****m 32
Kenneth Enevoldsen k****n@g****m 1

Issues and Pull Requests

Last synced: 7 months ago

All Time
  • Total issues: 0
  • Total pull requests: 4
  • Average time to close issues: N/A
  • Average time to close pull requests: less than a minute
  • Total issue authors: 0
  • Total pull request authors: 2
  • Average comments per issue: 0
  • Average comments per pull request: 0.0
  • Merged pull requests: 3
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Pull Request Authors
  • MinaAlmasi (3)
  • KennethEnevoldsen (1)

Dependencies

requirements/requirements-polyglot.txt pypi
  • Jinja2 ==3.1.2
  • MarkupSafe ==2.1.2
  • Morfessor ==2.0.6
  • PyYAML ==6.0
  • augmenty ==1.3.1
  • blis ==0.7.9
  • catalogue ==2.0.8
  • certifi ==2022.12.7
  • charset-normalizer ==3.0.1
  • click ==8.1.3
  • confection ==0.0.4
  • cymem ==2.0.7
  • dacy ==2.3.1
  • easybuild-easyblocks ==4.6.2
  • easybuild-easyconfigs ==4.6.2
  • easybuild-framework ==4.6.2
  • filelock ==3.9.0
  • gensim ==3.8.1
  • huggingface-hub ==0.12.0
  • idna ==3.4
  • langcodes ==3.3.0
  • murmurhash ==1.0.9
  • numpy ==1.24.1
  • nvidia-cublas-cu11 ==11.10.3.66
  • nvidia-cuda-nvrtc-cu11 ==11.7.99
  • nvidia-cuda-runtime-cu11 ==11.7.99
  • nvidia-cudnn-cu11 ==8.5.0.96
  • packaging ==23.0
  • pandas ==1.5.3
  • pathy ==0.10.1
  • preshed ==3.0.8
  • pydantic ==1.10.4
  • python-dateutil ==2.8.2
  • pytz ==2022.7.1
  • regex ==2022.10.31
  • requests ==2.28.2
  • scipy ==1.10.0
  • sentencepiece ==0.1.97
  • six ==1.16.0
  • smart-open ==6.3.0
  • spacy ==3.4.4
  • spacy-alignments ==0.9.0
  • spacy-legacy ==3.0.12
  • spacy-loggers ==1.0.4
  • spacy-transformers ==1.1.9
  • spacy-wrap ==1.2.1
  • srsly ==2.4.5
  • thinc ==8.1.7
  • tokenizers ==0.13.2
  • torch ==1.13.1
  • tqdm ==4.61.2
  • transformers ==4.25.1
  • typer ==0.7.0
  • typing_extensions ==4.4.0
  • urllib3 ==1.26.14
  • wasabi ==0.10.1
requirements/requirements.txt pypi
  • NERDA ==1.0.0
  • augmenty ==1.3.1
  • dacy ==2.3.2
  • danlp ==0.1.2
  • flair ==0.5.1
  • gensim ==3.8.1
  • nltk ==3.8.1
  • pandas ==1.5.3
  • protobuf ==3.20.3
  • spacy ==3.4.4
  • torch ==1.13.1