paper-nestedner-icdar23-code
All the material (code, dataset, results) of our Benchmark of Nested NER approaches accepted at ICDAR 2023
Science Score: 67.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ✓ DOI references: found 10 DOI reference(s) in README
- ✓ Academic publication links: links to arxiv.org, springer.com, zenodo.org
- ○ Committers with academic emails: not detected
- ○ Institutional organization owner: not detected
- ○ JOSS paper metadata: not detected
- ○ Scientific vocabulary similarity: low similarity (13.0%) to scientific vocabulary
Basic Info
Statistics
- Stars: 1
- Watchers: 4
- Forks: 0
- Open Issues: 0
- Releases: 3
Metadata Files
README.md
Code and Data for the paper "A Benchmark of Nested NER Approaches in Historical Structured Documents" presented at ICDAR 2023
Abstract
Named Entity Recognition (NER) is a key step in the creation of structured data from digitised historical documents. Traditional NER approaches deal with flat named entities, whereas entities often are nested. For example, a postal address might contain a street name and a number. This work compares three nested NER approaches, including two state-of-the-art approaches using Transformer-based architectures. We introduce a new Transformer-based approach based on joint labelling and semantic weighting of errors, evaluated on a collection of 19th-century Paris trade directories. We evaluate approaches regarding the impact of supervised fine-tuning, unsupervised pre-training with noisy texts, and variation of IOB tagging formats. Our results show that while nested NER approaches enable extracting structured data directly, they do not benefit from the extra knowledge provided during training and reach a performance similar to the base approach on flat entities. Even though all 3 approaches perform well in terms of F1 scores, joint labelling is most suitable for hierarchically structured data. Finally, our experiments reveal the superiority of the IO tagging format on such data.
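The abstract compares IOB tagging formats with the simpler IO format. As a rough, hypothetical illustration (not the paper's code), converting IOB2 tags to IO amounts to dropping the B-/I- distinction:

```python
def iob2_to_io(tags):
    """Convert IOB2 tags to the IO format by merging B- into I-.

    In IO, every token of an entity is tagged I-TYPE, so the boundary
    between two adjacent entities of the same type is no longer marked.
    """
    return ["I-" + t[2:] if t.startswith(("B-", "I-")) else t for t in tags]

# Example: an address span followed by a non-entity token
iob2 = ["B-ADDR", "I-ADDR", "I-ADDR", "O"]
print(iob2_to_io(iob2))  # ['I-ADDR', 'I-ADDR', 'I-ADDR', 'O']
```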

Source documents
- Paper pre-print (PDF):
- Final paper (Springer edition):
- Full dataset (images and transcribed texts):
Code
Installation
Download the latest stable release of the code HERE
pip install --requirement requirements.txt
Models
- The source models (CamemBERT NER, and CamemBERT NER pre-trained on French trade directories) are shared on the Hugging Face Hub
- The paper's models and ready-to-load datasets are shared on the Hugging Face Hub
Project Structure
Structure of this repository:
├── dataset                        <- Data used for training and validation (except dataset_full.json)
│   ├── 10-ner_ref                 <- Full ground-truth dataset
│   ├── 31-ner_align_pero          <- Full Pero-OCR dataset
│   ├── 41-ner_ref_from_pero       <- Subset of ground-truth entries that have a valid Pero-OCR equivalent
│   ├── qualitative_analysis       <- Test entries for qualitative analysis
│   └── dataset_full.json          <- Published data
├── img                            <- Images
├── src                            <- Jupyter notebooks and Python scripts
│   ├── m0_flat_ner                <- Flat NER approach notebook and scripts
│   ├── m1_independant_ner_layers  <- M1 approach notebook and scripts
│   ├── m2_joint-labelling_for_ner <- M2 approach notebook and scripts
│   ├── m3_hierarchical_ner        <- M3 approach notebook and scripts
│   ├── t1_dataset_tools           <- Scripts to format the dataset
│   ├── t2_metrics                 <- Benchmark results tables
│   └── requirements.txt
└── README.md
Please note that for each approach, the qualitative analysis and demo notebooks can be run without preparing the source data or training the models.
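As a rough sketch of the joint-labelling idea behind `m2_joint-labelling_for_ner` (the exact tag format here is an assumption for illustration, not necessarily the one used in the notebooks), a single joint tag per token can encode both nesting levels and be split back into one flat tag sequence per level:

```python
def split_joint_tags(joint_tags, sep="+"):
    """Split joint tags like 'I-ADDR+I-STREET' into one sequence per level.

    Tokens that belong only to a level-1 entity carry a single tag;
    the level-2 sequence is padded with 'O' for those positions.
    """
    level1, level2 = [], []
    for tag in joint_tags:
        parts = tag.split(sep)
        level1.append(parts[0])
        level2.append(parts[1] if len(parts) > 1 else "O")
    return level1, level2

# One directory entry: a person name, then an address nesting a street name
joint = ["I-PER", "I-PER", "I-ADDR", "I-ADDR+I-STREET", "I-ADDR+I-STREET"]
l1, l2 = split_joint_tags(joint)
print(l1)  # ['I-PER', 'I-PER', 'I-ADDR', 'I-ADDR', 'I-ADDR']
print(l2)  # ['O', 'O', 'O', 'I-STREET', 'I-STREET']
```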
Reference
If you use this software, please cite it as below.
@inproceedings{nner_benchmark_2023,
  title     = {A Benchmark of Nested Named Entity Recognition Approaches in Historical Structured Documents},
  author    = {Tual, Solenn and Abadie, Nathalie and Carlinet, Edwin and Chazalon, Joseph and Duménieu, Bertrand},
  booktitle = {Proceedings of the 17th International Conference on Document Analysis and Recognition (ICDAR'23)},
  year      = {2023},
  month     = aug,
  address   = {San José, California, USA},
  url       = {https://hal.science/hal-03994759},
  doi       = {10.1007/978-3-031-41682-8_8}
}
Acknowledgment
This work is supported by the French National Research Agency (ANR), as part of the SODUCO project (grant ANR-18-CE38-0013).
Owner
- Name: SoDUCo
- Login: soduco
- Kind: organization
- Website: https://soduco.github.io
- Repositories: 47
- Profile: https://github.com/soduco
Citation (CITATION.cff)
# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!
cff-version: 1.2.0
title: >-
  [Code] A Benchmark of Nested NER Approaches in Historical
  Structured Documents
message: 'If you use this software, please cite it as below.'
type: software
authors:
  - given-names: Solenn
    family-names: Tual
    email: solenn.tual@ign.fr
    affiliation: 'LASTIG, Univ. Gustave Eiffel, IGN-ENSG'
    orcid: 'https://orcid.org/0000-0001-8549-7949'
  - family-names: Abadie
    given-names: Nathalie
    email: nathalie-f.abadie@ign.fr
    affiliation: 'LASTIG, Univ. Gustave Eiffel, IGN-ENSG'
    orcid: 'https://orcid.org/0000-0001-8741-2398'
  - given-names: Joseph
    family-names: Chazalon
    email: joseph.chazalon@lre.epita.fr
    affiliation: 'LRE, EPITA'
    orcid: 'https://orcid.org/0000-0002-3757-074X'
  - family-names: Duménieu
    given-names: Bertrand
    orcid: 'https://orcid.org/0000-0002-2517-2058'
    email: bertrand.dumenieu@ehess.fr
    affiliation: 'CRH, EHESS'
  - given-names: Edwin
    family-names: Carlinet
    orcid: 'https://orcid.org/0000-0001-5737-5266'
    email: edwin.carlinet@lre.epita.fr
    affiliation: 'LRE, EPITA'
identifiers:
  - type: doi
    value: 10.5281/zenodo.7997437
repository-code: 'https://github.com/soduco/paper-nestedner-icdar23-code'
abstract: >-
  Code and materials of the paper "A Benchmark of Nested NER
  Approaches in Historical Structured Documents" to be
  published at ICDAR 2023.
license: MIT
preferred-citation:
  type: "conference-paper"
  authors:
    - family-names: Tual
      given-names: Solenn
      orcid: "https://orcid.org/0000-0001-8549-7949"
    - family-names: Abadie
      given-names: Nathalie
      orcid: "https://orcid.org/0000-0001-8741-2398"
    - family-names: Chazalon
      given-names: Joseph
      orcid: "https://orcid.org/0000-0002-3757-074X"
    - family-names: Duménieu
      given-names: Bertrand
      orcid: "https://orcid.org/0000-0002-2517-2058"
    - family-names: Carlinet
      given-names: Edwin
      orcid: "https://orcid.org/0000-0001-5737-5266"
  title: "A Benchmark of Nested NER Approaches in Historical Structured Documents"
  year: 2023
  collection-title: "Proceedings of the 17th International Conference on Document Analysis and Recognition"
  conference:
    name: "17th International Conference on Document Analysis and Recognition"
    place: "San José, California, USA"
    date-start: 2023-08-21
    date-end: 2023-08-23
Committers
Last synced: about 2 years ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| solenn-tl | s****l@g****m | 41 |
| Joseph Chazalon | j****n@l****r | 1 |
Issues and Pull Requests
Last synced: almost 2 years ago
All Time
- Total issues: 0
- Total pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Total issue authors: 0
- Total pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Dependencies
- Babel ==2.10.1
- Cython ==0.29.30
- GitPython ==3.1.29
- Jinja2 ==3.0.3
- Keras-Preprocessing ==1.1.2
- Markdown ==3.3.7
- MarkupSafe ==2.0.1
- Pillow ==9.1.1
- PyYAML ==6.0
- Pygments ==2.12.0
- QtPy ==2.1.0
- Send2Trash ==1.8.0
- Unidecode ==1.3.6
- Werkzeug ==2.1.2
- absl-py ==1.0.0
- aiohttp ==3.8.1
- aiosignal ==1.2.0
- anyio ==3.6.1
- argon2-cffi ==21.3.0
- argon2-cffi-bindings ==21.2.0
- asttokens ==2.0.5
- astunparse ==1.6.3
- async-timeout ==4.0.2
- attrs ==21.4.0
- backcall ==0.2.0
- beautifulsoup4 ==4.11.1
- bleach ==5.0.0
- blis ==0.7.9
- boto3 ==1.23.5
- botocore ==1.26.5
- cachetools ==5.1.0
- catalogue ==2.0.7
- certifi ==2021.10.8
- cffi ==1.15.0
- charset-normalizer ==2.0.10
- click ==8.0.3
- cmake ==3.22.5
- commonmark ==0.9.1
- confection ==0.0.4
- config ==0.5.1
- cuda-python ==11.7.1
- cycler ==0.11.0
- cymem ==2.0.6
- dataclasses ==0.6
- datasets ==2.8.0
- debugpy ==1.6.0
- decorator ==5.1.1
- defusedxml ==0.7.1
- dill ==0.3.4
- docker-pycreds ==0.4.0
- docopt ==0.6.2
- einops ==0.6.0
- entrypoints ==0.4
- evaluate ==0.4.0
- executing ==0.8.3
- fastjsonschema ==2.15.3
- filelock ==3.4.2
- fire ==0.4.0
- flatbuffers ==23.1.4
- fonttools ==4.33.3
- frozenlist ==1.2.0
- fsspec ==2022.1.0
- future ==0.18.2
- gast ==0.4.0
- gitdb ==4.0.10
- google-auth ==2.6.6
- google-auth-oauthlib ==0.4.6
- google-pasta ==0.2.0
- grpcio ==1.46.3
- h5py ==3.6.0
- huggingface-hub ==0.11.1
- idna ==3.3
- intel-openmp ==2022.1.0
- ipykernel ==6.13.0
- ipython ==8.3.0
- ipython-genutils ==0.2.0
- ipywidgets ==7.7.0
- jedi ==0.18.1
- jmespath ==1.0.0
- joblib ==1.1.0
- json5 ==0.9.8
- jsonschema ==4.5.1
- jupyter ==1.0.0
- jupyter-client ==7.3.1
- jupyter-console ==6.4.3
- jupyter-core ==4.10.0
- jupyter-server ==1.17.0
- jupyterlab ==3.5.0
- jupyterlab-pygments ==0.2.2
- jupyterlab-server ==2.14.0
- jupyterlab-widgets ==1.1.0
- keras ==2.11.0
- kiwisolver ==1.4.2
- langcodes ==3.3.0
- libclang ==14.0.1
- lightning-utilities ==0.3.0
- lxml ==4.9.0
- matplotlib ==3.5.2
- matplotlib-inline ==0.1.3
- mistune ==0.8.4
- mkl ==2022.1.0
- mkl-include ==2022.1.0
- multidict ==5.2.0
- multiprocess ==0.70.12.2
- murmurhash ==1.0.6
- nbclassic ==0.3.7
- nbclient ==0.6.3
- nbconvert ==6.5.0
- nbformat ==5.4.0
- nest-asyncio ==1.5.5
- ninja ==1.10.2.3
- nltk ==3.6.7
- notebook ==6.4.11
- notebook-shim ==0.1.0
- numpy ==1.22.0
- oauthlib ==3.2.0
- opt-einsum ==3.3.0
- packaging ==21.3
- pandas ==1.3.5
- pandocfilters ==1.5.0
- parse ==1.19.0
- parso ==0.8.3
- pathtools ==0.1.2
- pathy ==0.6.1
- pexpect ==4.8.0
- pickleshare ==0.7.5
- pipreqs ==0.4.11
- preshed ==3.0.6
- prometheus-client ==0.14.1
- promise ==2.3
- prompt-toolkit ==3.0.29
- protobuf ==3.19.3
- psutil ==5.9.0
- ptyprocess ==0.7.0
- pure-eval ==0.2.2
- pyarrow ==6.0.1
- pyasn1 ==0.4.8
- pyasn1-modules ==0.2.8
- pybind11 ==2.10.3
- pycparser ==2.21
- pydantic ==1.8.2
- pyparsing ==3.0.6
- pyrsistent ==0.18.1
- python-dateutil ==2.8.2
- pytorch-lightning ==1.8.3.post1
- pytz ==2021.3
- pyzmq ==23.0.0
- qtconsole ==5.3.0
- regex ==2020.11.13
- requests ==2.27.1
- requests-oauthlib ==1.3.1
- responses ==0.18.0
- rich ==12.6.0
- rich-logger ==0.3.0
- rsa ==4.8
- s3transfer ==0.5.2
- sacremoses ==0.0.47
- scikit-learn ==1.0.2
- scipy ==1.7.3
- seaborn ==0.11.2
- sentencepiece ==0.1.96
- sentry-sdk ==1.11.1
- seqeval ==1.2.2
- setproctitle ==1.3.2
- shortuuid ==1.0.11
- six ==1.16.0
- smart-open ==5.2.1
- smmap ==5.0.0
- sniffio ==1.2.0
- soupsieve ==2.3.2.post1
- spacy ==3.3.0
- spacy-experimental ==0.6.1
- spacy-legacy ==3.0.11
- spacy-loggers ==1.0.2
- srsly ==2.4.3
- stack-data ==0.2.0
- tbb ==2021.6.0
- tensorflow ==2.11.0
- tensorflow-estimator ==2.11.0
- tensorflow-gpu ==2.9.0
- tensorflow-io-gcs-filesystem ==0.26.0
- termcolor ==1.1.0
- terminado ==0.15.0
- thinc ==8.0.17
- threadpoolctl ==3.1.0
- tinycss2 ==1.1.1
- tokenizers ==0.12.1
- tomli ==2.0.1
- torch ==1.12.0
- torchmetrics ==0.11.0
- torchvision ==0.13.0
- tornado ==6.1
- tqdm ==4.64.0
- traitlets ==5.2.1.post0
- transformers ==4.25.1
- typer ==0.4.1
- typing_extensions ==4.2.0
- urllib3 ==1.26.13
- wasabi ==0.9.1
- wcwidth ==0.2.5
- webencodings ==0.5.1
- websocket-client ==1.3.2
- widgetsnbextension ==3.6.0
- wrapt ==1.14.1
- xxhash ==3.0.0
- yarg ==0.1.9
- yarl ==1.7.2