https://github.com/centrefordigitalhumanities/ianalyzer-readers
Pre-processing functionality used in I-analyzer
Science Score: 26.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (17.7%) to scientific vocabulary
Keywords
Repository
Pre-processing functionality used in I-analyzer
Basic Info
Statistics
- Stars: 0
- Watchers: 4
- Forks: 0
- Open Issues: 5
- Releases: 8
Topics
Metadata Files
README.md
I-analyzer Readers
ianalyzer-readers is a Python module to extract data from XML, HTML, CSV, JSON, XLSX or RDF (Linked Data) files.
This module was originally created for I-analyzer, a web application that extracts data from a variety of datasets, indexes them, and presents a search interface. For that, we wanted a way to extract data from source files without having to write a new script "from scratch" for each dataset, and an API that works the same regardless of the source file type.
In basic usage, you use the utilities in this package to create a "reader" class: you specify what your data looks like, then call the reader's documents() method to get an iterator of documents - where each document is a flat dictionary of key/value pairs.
Prerequisites
Requires Python 3.9 or later.
Contents
The ianalyzer_readers directory contains the source code for the package; the tests directory contains the unit tests.
When to use this package
This package is not a replacement for more general-purpose libraries like csv or Beautiful Soup - it is a high-level interface on top of those libraries.
Our primary use for this package is to pre-process data for I-analyzer, but you may find other uses for it.
Using this package makes sense if you want to extract data in the shape that it is designed for (i.e., a list of flat dictionaries).
What we find especially useful is that all subclasses of Reader have the same interface - regardless of whether they are processing CSV, JSON, XML, HTML, RDF or XLSX data. That common interface is crucial in an application that needs to process corpora from different source types, like I-analyzer.
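As a sketch of what that common interface buys you, the hypothetical helpers below are written against the Reader base class and its documents() method only, so the same code works whether the concrete reader handles CSV, XML, HTML, JSON, XLSX or RDF. The import path ianalyzer_readers.readers.core is an assumption based on the package layout; check the documentation for the exact location.

```python
# Hypothetical downstream helpers: they rely only on the shared Reader
# interface, never on the file format behind a given reader subclass.
# The import path is assumed from the package layout and may differ.
from typing import Dict, Iterable, List

from ianalyzer_readers.readers.core import Reader


def collect_documents(reader: Reader) -> List[Dict]:
    """Materialise a reader's documents as a list of flat dictionaries."""
    return list(reader.documents())


def index_corpora(readers: Iterable[Reader]) -> None:
    """Feed several corpora to a search index through the same interface."""
    for reader in readers:
        for document in reader.documents():
            ...  # send the flat dict to the index of your choice
```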
Usage
Typical usage of this package would be to make a custom Python class for a dataset from which you want to extract a list of documents. We call this a Reader. This package provides the base classes to structure readers, and provides extraction utilities for several file types.
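The sketch below illustrates what such a reader might look like for a small CSV dataset. It is only an illustration: the module paths, the CSVReader, Field and CSV names, and the exact signatures of sources() and the extractors are assumptions based on the package documentation, so consult the documentation linked below for the authoritative API.

```python
# A minimal sketch, assuming the CSVReader / Field / CSV extractor API
# described in the package documentation; names and signatures may differ
# between versions. The package itself is published on PyPI
# (pip install ianalyzer-readers).
import os

from ianalyzer_readers.readers.core import Field
from ianalyzer_readers.readers.csv import CSVReader
from ianalyzer_readers.extract import CSV


class PlaysReader(CSVReader):
    """Hypothetical reader for CSV files with 'title' and 'author' columns."""

    data_directory = 'data'  # directory holding the source CSV files

    def sources(self, **kwargs):
        # Yield every CSV file the reader should process.
        for filename in os.listdir(self.data_directory):
            yield os.path.join(self.data_directory, filename)

    fields = [
        Field(name='title', extractor=CSV('title')),
        Field(name='author', extractor=CSV('author')),
    ]


reader = PlaysReader()
for document in reader.documents():
    # Each document is a flat dictionary, e.g. {'title': ..., 'author': ...}
    print(document)
```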
For detailed usage documentation and examples, visit ianalyzer-readers.readthedocs.io
If this site is unavailable, you can also generate the documentation site locally; see the contributing guide for instructions.
Licence
This code is shared under an MIT licence. See LICENSE for more information.
Owner
- Name: Centre for Digital Humanities
- Login: CentreForDigitalHumanities
- Kind: organization
- Email: cdh@uu.nl
- Location: Netherlands
- Website: https://cdh.uu.nl/
- Repositories: 39
- Profile: https://github.com/CentreForDigitalHumanities
Interdisciplinary centre for research and education in computational and data-driven methods in the humanities.
GitHub Events
Total
- Create event: 8
- Release event: 2
- Issues event: 5
- Delete event: 5
- Issue comment event: 3
- Member event: 1
- Push event: 19
- Pull request event: 9
- Pull request review comment event: 14
- Pull request review event: 20
Last Year
- Create event: 8
- Release event: 2
- Issues event: 5
- Delete event: 5
- Issue comment event: 3
- Member event: 1
- Push event: 19
- Pull request event: 9
- Pull request review comment event: 14
- Pull request review event: 20
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 5
- Total pull requests: 11
- Average time to close issues: 11 months
- Average time to close pull requests: 27 days
- Total issue authors: 3
- Total pull request authors: 3
- Average comments per issue: 0.6
- Average comments per pull request: 0.36
- Merged pull requests: 9
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 2
- Pull requests: 11
- Average time to close issues: about 1 month
- Average time to close pull requests: 27 days
- Issue authors: 1
- Pull request authors: 3
- Average comments per issue: 0.5
- Average comments per pull request: 0.36
- Merged pull requests: 9
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- lukavdplas (2)
- JeltevanBoheemen (1)
Pull Request Authors
- lukavdplas (6)
- BeritJanssen (4)
- bbonf (1)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 1
- Total downloads (PyPI): 146 last month
- Total dependent packages: 0
- Total dependent repositories: 0
- Total versions: 8
- Total maintainers: 1
pypi.org: ianalyzer-readers
Utilities for extracting XML, HTML, CSV, XLSX, and RDF data with a common interface
- Documentation: https://ianalyzer-readers.readthedocs.io/
- License: MIT
- Latest release: 0.3.1 (published 8 months ago)
Rankings
Maintainers (1)
Dependencies
- actions/checkout v3 composite
- actions/setup-python v3 composite
- beautifulsoup4 *
- lxml *
- openpyxl *
- beautifulsoup4 ==4.12.3
- click ==8.1.7
- colorama ==0.4.6
- et-xmlfile ==1.1.0
- exceptiongroup ==1.2.0
- ghp-import ==2.1.0
- griffe ==0.42.0
- iniconfig ==2.0.0
- jinja2 ==3.1.3
- lxml ==5.1.0
- markdown ==3.5.2
- markupsafe ==2.1.5
- mergedeep ==1.3.4
- mkdocs ==1.5.3
- mkdocs-autorefs ==1.0.1
- mkdocstrings ==0.24.1
- mkdocstrings-python ==1.9.0
- openpyxl ==3.1.2
- packaging ==24.0
- pathspec ==0.12.1
- platformdirs ==4.2.0
- pluggy ==1.4.0
- pymdown-extensions ==10.7.1
- pytest ==8.1.1
- python-dateutil ==2.9.0.post0
- pyyaml ==6.0.1
- pyyaml-env-tag ==0.1
- six ==1.16.0
- soupsieve ==2.5
- tomli ==2.0.1
- watchdog ==4.0.0