nederlab-pipeline

Linguistic enrichment pipeline for historical Dutch, as used in the Nederlab project

https://github.com/proycon/nederlab-pipeline

Science Score: 26.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.3%) to scientific vocabulary

Keywords

dutch historical-dutch historical-linguistics natural-language-processing nederlab nextflow nlp workflow
Last synced: 7 months ago

Repository

Linguistic enrichment pipeline for historical Dutch, as used in the Nederlab project

Basic Info
  • Host: GitHub
  • Owner: proycon
  • License: other
  • Language: Groovy
  • Default Branch: master
  • Size: 276 KB
Statistics
  • Stars: 7
  • Watchers: 3
  • Forks: 1
  • Open Issues: 0
  • Releases: 13
Topics
dutch historical-dutch historical-linguistics natural-language-processing nederlab nextflow nlp workflow
Created almost 7 years ago · Last pushed over 3 years ago
Metadata Files
Readme License Codemeta

README.md


Project Status: Inactive – The project has reached a stable, usable state but is no longer being actively developed; support/maintenance will be provided as time allows.

Nederlab Pipeline

Introduction

This repository contains the NLP pipeline for the linguistic enrichment of historical Dutch, as developed within the scope of the Nederlab project. This repository covers only the pipeline logic, powered by Nextflow, not the individual components. It depends on the following tools:

  • ucto for tokenisation.
  • Frog for PoS-tagging, lemmatisation and Named Entity Recognition for Dutch, Middle Dutch, and Early New Dutch (vroegnieuwnederlands)
  • FoLiA-utils for:
    • FoLiA-wordtranslate - Implements Erik Tjong Kim Sang's word-by-word modernisation method. This is a reimplementation of his initial prototype, with some improvements of my own.
  • Colibri Utils for:
    • colibri-lang - Language Identification (including models for Middle Dutch and Early New Dutch)
  • FoLiA Tools for:
    • foliavalidator - Validation
    • foliaupgrade - Upgrades to FoLiA v2
    • tei2folia - Conversion from a subset of TEI to FoLiA.
    • foliamerge - Merges annotations between two FoLiA documents.
  • wikiente for Named Entity Recognition and Linking using DBPedia Spotlight

Format

All tools in this pipeline take and produce documents in the FoLiA XML format (version 2). Provenance information of all the tools is recorded in the documents themselves. Please take note of the FoLiA Guidelines if you work with this pipeline or any documents produced by it.
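As a rough illustration of what this embedded provenance looks like, the sketch below parses a hypothetical, heavily simplified metadata fragment with Python's standard library. The element names follow FoLiA v2 conventions (each tool that touched the document is recorded as a processor), but the processor IDs and version numbers here are invented for the example; real documents nest processors and carry considerably more detail.

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily simplified sketch of FoLiA v2 provenance metadata.
# The processor entries (IDs, versions) are invented for illustration.
FOLIA_NS = "http://ilk.uvt.nl/folia"
metadata = f"""
<metadata xmlns="{FOLIA_NS}">
  <provenance>
    <processor xml:id="proc.ucto" name="ucto" version="0.0"/>
    <processor xml:id="proc.frog" name="frog" version="0.0"/>
  </provenance>
</metadata>
"""

def processors(xml_text):
    """Return the (name, version) pairs of all recorded processors."""
    root = ET.fromstring(xml_text)
    return [(p.get("name"), p.get("version"))
            for p in root.iter(f"{{{FOLIA_NS}}}processor")]

print(processors(metadata))
```

Inspecting this chain tells you which tools, in which versions, produced the annotations in a given document.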

The following linguistic enrichments can be performed. Note that different FoLiA (tag)sets can be produced, even at the same time, depending on the chosen methodology and the time period the document covers:

  • Modernisation of 17th-century Dutch
    • Produces text annotation with the class contemporary, e.g. <t class="contemporary">
  • Part-of-Speech tagging
    • Produces part-of-speech annotation in one or more of the following sets:
      • http://ilk.uvt.nl/folia/sets/frog-mbpos-nl - Part-of-Speech tags as produced by Frog by default for contemporary Dutch.
      • http://rdf.ivdnt.org/pos/cgn-bab - A CGN-like tagset, converted from the tagset used for the Brieven als Buit corpus (Early New Dutch)
      • http://rdf.ivdnt.org/pos/cgn-mnl - A CGN-like tagset, converted from the tagset used for Corpus Gysseling and Corpus Reenen Mulder (Middle Dutch)
  • Language Identification
    • Produces language annotation in the following set:
      • http://raw.github.com/proycon/folia/master/setdefinitions/iso639_3.foliaset - ISO-639-3 language codes
  • Lemmatisation
    • Produces lemma annotation in the following sets:
      • http://ilk.uvt.nl/folia/sets/frog-mblem-nl - Lemmas as produced by Frog by default for contemporary Dutch.
      • http://rdf.ivdnt.org/lemma/corpus-brieven-als-buit - Lemmas from Brieven als Buit (Early New Dutch/vroegnieuwnederlands)
      • http://rdf.ivdnt.org/lemma/corpus-gysseling - Lemmas from Corpus Gysseling and Corpus Reenen Mulder (Middle Dutch/middelnederlands)
      • https://raw.githubusercontent.com/proycon/folia/master/setdefinitions/intlemmaidwithcompounds.foliaset.ttl - Lemma IDs from the INT Historical Lexicon, with compound lemmas.
      • https://raw.githubusercontent.com/proycon/folia/master/setdefinitions/intlemmatextwithcompounds.foliaset.ttl - Lemma (words) from the INT Historical Lexicon, with compound lemmas.
  • Named Entity Recognition
    • Produces entity annotation in the following sets:
      • http://ilk.uvt.nl/folia/sets/frog-ner-nl - Broad named entity classes as produced by Frog (per, loc, org, etc.)
      • https://raw.githubusercontent.com/proycon/folia/master/setdefinitions/spotlight/dbpedia.foliaset.ttl - Links directly to individual DBPedia resources (class is a full URI), produced by WikiEnte
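To make the stacking of these layers concrete, the following sketch inspects a single word in a hypothetical, heavily simplified FoLiA-like fragment: an original token, a modernised text layer with class contemporary, a PoS tag, and a lemma. The example word and the PoS class are invented; real documents additionally declare their sets and processors in the metadata.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified FoLiA-like fragment showing how several
# annotation layers can coexist on one word. The token "huys"/"huis"
# and the PoS class are invented for illustration.
FOLIA_NS = "http://ilk.uvt.nl/folia"
fragment = f"""
<w xmlns="{FOLIA_NS}" xml:id="example.p.1.s.1.w.1">
  <t>huys</t>
  <t class="contemporary">huis</t>
  <pos class="N(soort,ev)" set="http://ilk.uvt.nl/folia/sets/frog-mbpos-nl"/>
  <lemma class="huis" set="http://ilk.uvt.nl/folia/sets/frog-mblem-nl"/>
</w>
"""

def word_annotations(xml_text):
    """Collect the annotation layers attached to a single <w> element."""
    w = ET.fromstring(xml_text)
    ns = {"f": FOLIA_NS}
    return {
        "text": w.find("f:t", ns).text,                             # original text layer
        "contemporary": w.find("f:t[@class='contemporary']", ns).text,  # modernised layer
        "pos": w.find("f:pos", ns).get("class"),
        "lemma": w.find("f:lemma", ns).get("class"),
    }

print(word_annotations(fragment))
```

Note how the two text layers are distinguished purely by their class attribute, which is how the modernisation output coexists with the original text in one document.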

In addition to the linguistic annotations, the tei2folia converter produces a wide variety of structural annotations as well as markup annotations, since its objective is to retain all information from the original TEI source.

Changes from older versions

As there are documents produced with previous versions of this pipeline, it is important to be aware of the biggest changes:

  • 1) Older versions of this pipeline incorporated foliaentity instead of wikiente, which performed entity linking separately from entity recognition and encoded it in the FoLiA documents as alignments (called relation annotation since FoLiA v2). Be aware of this if you are interested in the linking information and are processing documents (always FoLiA v1.4 or v1.5) produced by predecessors of this pipeline.

  • 2) Older versions of this pipeline used Erik Tjong Kim Sang's TEI-to-FoLiA converter for converting DBNL documents. This converter was deemed too fragile and hard to maintain, and was replaced by the new tei2folia in FoLiA tools. Documents from older versions can be recognised because they predate FoLiA v2; they also lack much metadata, as the previous converter did not handle it.

  • 3) Older versions lack provenance information

  • 4) Older DBNL versions were split, in the sense that non-independent titles (onzelfstandige titels) were separate documents. The current TEI-to-FoLiA converter no longer does this; instead, each such title is more clearly marked using FoLiA's submetadata mechanism.

This pipeline itself used to be part of PICCL, but was split off for maintainability and clarity.

Installation

The pipeline and all components on which it depends are shipped as part of LaMachine, which comes in various flavours (virtual machine, Docker container, local installation, etc.).

Usage

Inside LaMachine, you can invoke the workflow as follows:

$ nederlab.nf

or:

$ nextflow run $(which nederlab.nf)

For instructions, run nederlab.nf --help.

You can also let Nextflow manage Docker and LaMachine for you, but we won't go into that here.

Fix and split pipeline

There was a problem with the DBNL collection as delivered in 2019 (described in internal issue TT-709). It was also decided that it was better to split off the non-independent titles after all. A Nextflow script has been written to handle this.

Put the collection you want to process in some input directory, create an output directory, and run something like:

$ dbnl_fix_and_split.nf --inputdir input/ --outputdir output/ --datadir /path/to/nederlab-linguistic-enrichment

The data directory should point to where you checked out the nederlab-linguistic-enrichment repository (a private repository by INT).

Note: pass --extension folia.xml.gz if the input files are compressed. The script also compresses all output files by default.

Resources

Resources for Erik Tjong Kim Sang's modernisation method are included in this repository:

  • preservation2010.txt - Preservation lexicon
  • rules.machine - Rewrite rules
  • lexicon.1637-2010.250.lexserv.vandale.tsv - Translation lexicon automatically extracted from the Statenbijbel, for use in the modernisation procedure (disabled due to too many errors; using the INT Historical Lexicon is preferred)

Not included is the INT Historical Lexicon, as it is copyrighted material.

Owner

  • Name: Maarten van Gompel
  • Login: proycon
  • Kind: user
  • Location: Eindhoven, the Netherlands
  • Company: KNAW Humanities Cluster & CLST, Radboud University

Research software engineer - NLP - AI - 🐧 Linux & open-source enthusiast - 🐍 Python/ 🌊C/C++ / 🦀 Rust / 🐚 Shell - 🔐 InfoSec - https://git.sr.ht/~proycon

CodeMeta (codemeta.json)

{
  "@context": [
    "https://doi.org/10.5063/schema/codemeta-2.0",
    "http://schema.org",
    {
      "entryPoints": {
        "@reverse": "schema:actionApplication"
      },
      "interfaceType": {
        "@id": "codemeta:interfaceType"
      }
    }
  ],
  "@type": "SoftwareSourceCode",
  "identifier": "nederlab-pipeline",
  "name": "Nederlab Pipeline",
  "version": "0.8.0",
  "description": "A set of workflows for linguistic enrichment of historical dutch",
  "license": "https://spdx.org/licenses/GPL-3.0",
  "url": "https://github.com/proycon/nederlab-pipeline",
  "producer": {
    "@id": "https://www.ru.nl/clst",
    "@type": "Organization",
    "name": "Centre for Language and Speech Technology",
    "url": "https://www.ru.nl/clst",
    "parentOrganization": {
      "@id": "https://www.ru.nl/cls",
      "@type": "Organization",
      "name": "Centre for Language Studies",
      "url": "https://www.ru.nl/cls",
      "parentOrganization": {
        "@id": "https://www.ru.nl",
        "name": "Radboud University",
        "@type": "Organization",
        "url": "https://www.ru.nl",
        "location": {
          "@type": "Place",
          "name": "Nijmegen"
        }
      }
    }
  },
  "author": [
    {
      "@id": "https://orcid.org/0000-0002-1046-0006",
      "@type": "Person",
      "givenName": "Maarten",
      "familyName": "van Gompel",
      "email": "proycon@anaproy.nl",
      "affiliation": {
        "@id": "https://www.ru.nl/clst"
      }
    }
  ],
  "sourceOrganization": {
    "@id": "https://www.ru.nl/clst"
  },
  "programmingLanguage": {
    "@type": "ComputerLanguage",
    "identifier": "nextflow",
    "name": "Nextflow"
  },
  "operatingSystem": "POSIX",
  "codeRepository": "https://github.com/proycon/nederlab-pipeline",
  "softwareRequirements": [
    {
      "@type": "SoftwareApplication",
      "identifier": "nextflow",
      "name": "Nextflow"
    },
    {
      "@type": "SoftwareApplication",
      "identifier": "frog",
      "name": "Frog"
    },
    {
      "@type": "SoftwareApplication",
      "identifier": "ucto",
      "name": "Ucto"
    },
    {
      "@type": "SoftwareApplication",
      "identifier": "foliautils",
      "name": "FoLiA utilities"
    }
  ],
  "funder": [
    {
      "@type": "Organization",
      "name": "Nederlab",
      "url": "http://www.nederlab.nl"
    }
  ],
  "readme": "https://github.com/proycon/nederlab-pipeline/blob/master/README.md",
  "issueTracker": "https://github.com/proycon/nederlab-pipeline/issues",
  "contIntegration": "https://travis-ci.org/proycon/nederlab-pipeline",
  "releaseNotes": "https://github.com/proycon/nederlab-pipeline/releases",
  "developmentStatus": "inactive",
  "keywords": [
    "nlp",
    "natural language processing"
  ],
  "dateCreated": "2017",
  "entryPoints": [
    {
      "@type": "EntryPoint",
      "urlTemplate": "file:///nederlab.nf",
      "name": "Nederlab",
      "description": "Nederlab linguistic enrichment of historical texts pipeline",
      "interfaceType": "CLI"
    }
  ]
}


Issues and Pull Requests

Last synced: 12 months ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0