ER-Evaluation

ER-Evaluation: End-to-End Evaluation of Entity Resolution Systems - Published in JOSS (2023)

https://github.com/OlivierBinette/er-evaluation

Science Score: 67.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 3 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, joss.theoj.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.9%) to scientific vocabulary

Keywords

author-name-disambiguation data-science deduplication disambiguation duplicate-detection entity-resolution evaluation fuzzy-matching inventor-name-disambiguation matching ml-evaluation ml-testing record-linkage statistics

Scientific Fields

Artificial Intelligence and Machine Learning (Computer Science) - 40% confidence
Last synced: 4 months ago

Repository

An End-to-End Evaluation Framework for Entity Resolution Systems

Basic Info
Statistics
  • Stars: 31
  • Watchers: 2
  • Forks: 10
  • Open Issues: 4
  • Releases: 5
Topics
author-name-disambiguation data-science deduplication disambiguation duplicate-detection entity-resolution evaluation fuzzy-matching inventor-name-disambiguation matching ml-evaluation ml-testing record-linkage statistics
Created about 3 years ago · Last pushed about 2 years ago
Metadata Files
Readme · Changelog · Contributing · License · Code of conduct · Citation · Authors

README.rst

.. image:: https://github.com/Valires/er-evaluation/actions/workflows/python-package.yaml/badge.svg
        :target: https://github.com/Valires/er-evaluation/actions/workflows/python-package.yaml
        :alt: Github Action workflow status and link.

.. image:: https://badge.fury.io/py/er-evaluation.svg
        :target: https://badge.fury.io/py/er-evaluation
        :alt: PyPI release badge and link.

.. image:: https://readthedocs.org/projects/er-evaluation/badge/?version=latest
        :target: https://er-evaluation.readthedocs.io/en/latest/?version=latest
        :alt: Documentation status badge and link.

.. image:: https://joss.theoj.org/papers/10.21105/joss.05619/status.svg
       :target: https://doi.org/10.21105/joss.05619
       :alt: Journal of Open Source Software publication badge and link.

🔍 ER-Evaluation: An End-to-End Evaluation Framework for Entity Resolution Systems
===================================================================================

`ER-Evaluation <https://github.com/Valires/er-evaluation>`_ is a Python package for the evaluation of entity resolution (ER) systems.

It takes an **entity-centric** approach to evaluation. Given a sample of resolved entities, it provides:

* **summary statistics**, such as average cluster size, matching rate, homonymy rate, and name variation rate.
* **comparison statistics** between entity resolutions, such as the proportion of links in one resolution that also appear in the other, and vice versa.
* **performance estimates** with uncertainty quantification, such as precision, recall, and F1 score estimates, as well as B-cubed and cluster metric estimates.
* **error analysis**, such as cluster-level error metrics and analysis tools to find the root causes of errors.
* convenience **visualization tools**.

For more information on how to resolve a sample of entities for evaluation and model training, please refer to our `data labeling guide `_.
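
A note on data format: the package works with *membership vectors*, pandas Series indexed by record ID with cluster (entity) identifiers as values. A minimal sketch, with made-up record and entity IDs:

.. code:: python

    import pandas as pd

    # A membership vector: record IDs as the index, entity IDs as values.
    # Records "r1" and "r2" are resolved to the same entity "e1".
    prediction = pd.Series(
        ["e1", "e1", "e2", "e3"],
        index=["r1", "r2", "r3", "r4"],
    )

    # An entity-centric summary statistic: average cluster size.
    print(prediction.value_counts().mean())  # 1.33...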

Installation
---------------

Install the released version from PyPI using:

.. code:: bash

    pip install er-evaluation

Or install the development version using:

.. code:: bash

    pip install git+https://github.com/Valires/er-evaluation.git


Documentation
----------------

Please refer to the documentation website `er-evaluation.readthedocs.io <https://er-evaluation.readthedocs.io>`_.

Usage Examples
-----------------

Please refer to the `User Guide `_ or our `Visualization Examples `_ for a complete usage guide.

In summary, here's how you might use the package.

1. Import your predicted disambiguations and a reference benchmark dataset. The benchmark dataset should contain a sample of disambiguated entities.

.. code:: python

        import er_evaluation as ee

        predictions, reference = ee.load_pv_disambiguations()
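
The loader above returns PatentsView inventor disambiguations bundled with the package. To evaluate your own system, you can supply the same shapes directly. A hedged sketch, assuming ``predictions`` maps version labels to membership vectors and ``reference`` is the membership vector of your benchmark sample (all names below are illustrative):

.. code:: python

    import pandas as pd

    predictions = {
        "v1.0": pd.Series(["e1", "e1", "e2"], index=["r1", "r2", "r3"]),
        "v2.0": pd.Series(["e1", "e2", "e2"], index=["r1", "r2", "r3"]),
    }
    reference = pd.Series(["c1", "c1", "c1"], index=["r1", "r2", "r3"])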

2. Plot `summary statistics `_ and compare disambiguations.

.. code:: python

        ee.plot_summaries(predictions)

.. image:: media/plot_summaries.png
   :width: 400

.. code:: python

        ee.plot_comparison(predictions)

.. image:: media/plot_comparison.png
   :width: 400
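
For intuition about what the comparison reports, here is a hedged, pure-pandas sketch of one such statistic, the proportion of links in one resolution that also appear in another (the ``links`` helper is illustrative, not part of the package):

.. code:: python

    from itertools import combinations

    import pandas as pd

    def links(membership: pd.Series) -> set:
        """All unordered pairs of records placed in the same cluster."""
        return {
            pair
            for _, group in membership.groupby(membership)
            for pair in combinations(sorted(group.index), 2)
        }

    a = pd.Series(["e1", "e1", "e2"], index=["r1", "r2", "r3"])
    b = pd.Series(["c1", "c1", "c1"], index=["r1", "r2", "r3"])
    print(len(links(a) & links(b)) / len(links(a)))  # 1.0: every link in a is in b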

3. Define sampling weights and `estimate performance metrics `_.

.. code:: python

        ee.plot_estimates(predictions, {"sample": reference, "weights": "cluster_size"})

.. image:: media/plot_estimates.png
   :width: 400
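
Here, ``"weights": "cluster_size"`` requests sampling weights proportional to cluster size. As a hedged sketch, such weights can also be built explicitly with pandas (assuming a pandas Series of per-cluster weights is accepted, which this README does not confirm):

.. code:: python

    # Per-cluster sampling weights proportional to cluster size; roughly
    # what the "cluster_size" keyword above is expected to select.
    sizes = reference.groupby(reference).size()
    ee.plot_estimates(predictions, {"sample": reference, "weights": sizes / sizes.sum()})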

4. Perform `error analysis `_ using cluster-level explanatory features and cluster error metrics.

.. code:: python

        ee.make_dt_regressor_plot(
                y,                     # cluster-level error metric to explain (pandas Series)
                weights,               # sampling weights for the benchmark clusters
                features_df,           # DataFrame of cluster-level explanatory features
                numerical_features,    # names of numerical feature columns
                categorical_features,  # names of categorical feature columns
                max_depth=3,           # depth of the fitted decision tree
                type="sunburst"        # visualization style
        )

.. image:: media/plot_decisiontree.png
   :width: 400

Development Philosophy
-------------------------

**ER-Evaluation** is designed to be a unified source of evaluation tools for entity resolution systems, adhering to the Unix philosophy of simplicity, modularity, and composability. The package consists of Python functions that take standard data structures such as pandas Series and DataFrames as input, making them easy to integrate into existing workflows: import the functions you need and call them on your data, with no custom data structures or complex architecture to learn.
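
As a hedged illustration of that composability (assuming, as in the usage examples above, that ``predictions`` is a plain dict of pandas Series):

.. code:: python

    import er_evaluation as ee

    predictions, reference = ee.load_pv_disambiguations()

    # Standard pandas operations compose directly with the package's
    # inputs: restrict one disambiguation to clusters of more than 5 records.
    pred = list(predictions.values())[0]
    large = pred[pred.map(pred.value_counts()) > 5]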

Citation
-----------

Please acknowledge the publications below if you use ER-Evaluation:

- Binette, Olivier. (2022). ER-Evaluation: An End-to-End Evaluation Framework for Entity Resolution Systems. Available online at `github.com/Valires/ER-Evaluation <https://github.com/Valires/ER-Evaluation>`_
- Binette, Olivier, Sokhna A. York, Emma Hickerson, Youngsoo Baek, Sarvo Madhavan, and Christina Jones. (2022). Estimating the Performance of Entity Resolution Algorithms: Lessons Learned Through PatentsView.org. arXiv e-prints: `arxiv:2210.01230 <https://arxiv.org/abs/2210.01230>`_
- Upcoming: "An End-to-End Framework for the Evaluation of Entity Resolution Systems With Application to Inventor Name Disambiguation"

Public License
--------------

* `GNU Affero General Public License v3 <https://www.gnu.org/licenses/agpl-3.0>`_

Owner

  • Name: Olivier Binette
  • Login: OlivierBinette
  • Kind: user
  • Location: Durham, NC
  • Company: Duke University

Research Scientist @ Upstart // Duke Statistical Science PhD

Citation (CITATION.cff)

cff-version: "1.2.0"
authors:
- family-names: Binette
  given-names: Olivier
  orcid: "https://orcid.org/0000-0001-6009-5206"
- family-names: Reiter
  given-names: Jerome P.
  orcid: "https://orcid.org/0000-0002-8374-3832"
contact:
- family-names: Binette
  given-names: Olivier
  orcid: "https://orcid.org/0000-0001-6009-5206"
doi: 10.5281/zenodo.10086102
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Binette
    given-names: Olivier
    orcid: "https://orcid.org/0000-0001-6009-5206"
  - family-names: Reiter
    given-names: Jerome P.
    orcid: "https://orcid.org/0000-0002-8374-3832"
  date-published: 2023-11-11
  doi: 10.21105/joss.05619
  issn: 2475-9066
  issue: 91
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 5619
  title: "ER-Evaluation: End-to-End Evaluation of Entity Resolution
    Systems"
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.05619"
  volume: 8
title: "ER-Evaluation: End-to-End Evaluation of Entity Resolution
  Systems"

GitHub Events

Total
  • Issues event: 2
  • Watch event: 1
  • Pull request event: 1
Last Year
  • Issues event: 2
  • Watch event: 1
  • Pull request event: 1

Committers

Last synced: 5 months ago

All Time
  • Total Commits: 173
  • Total Committers: 2
  • Avg Commits per committer: 86.5
  • Development Distribution Score (DDS): 0.017
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
  • OlivierBinette (o****e@g****m): 170 commits
  • allcontributors[bot] (4****]): 3 commits

Issues and Pull Requests

Last synced: 4 months ago

All Time
  • Total issues: 6
  • Total pull requests: 5
  • Average time to close issues: 4 months
  • Average time to close pull requests: 4 days
  • Total issue authors: 4
  • Total pull request authors: 2
  • Average comments per issue: 1.67
  • Average comments per pull request: 0.0
  • Merged pull requests: 3
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 3
  • Pull requests: 1
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 1
  • Pull request authors: 1
  • Average comments per issue: 0.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • daidoji (3)
  • OlivierBinette (1)
  • ThomasHepworth (1)
  • osorensen (1)
Pull Request Authors
  • OlivierBinette (3)
  • daidoji (2)

Dependencies

.github/workflows/python-package.yaml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
environment.yml conda
  • pip >=22
  • python 3.10.*