evaluate
🤗 Evaluate: A library for easily evaluating machine learning models and datasets.
Science Score: 23.0%
This score indicates how likely this project is to be science-related based on various indicators:
- CITATION.cff file
- codemeta.json file: found
- .zenodo.json file
- DOI references
- Academic publication links
- Committers with academic emails: 3 of 129 committers (2.3%) from academic institutions
- Institutional organization owner
- JOSS paper metadata
- Scientific vocabulary similarity: low similarity (16.5%) to scientific vocabulary
Keywords
Keywords from Contributors
Repository
🤗 Evaluate: A library for easily evaluating machine learning models and datasets.
Basic Info
- Host: GitHub
- Owner: huggingface
- License: apache-2.0
- Language: Python
- Default Branch: main
- Homepage: https://huggingface.co/docs/evaluate
- Size: 2.04 MB
Statistics
- Stars: 2,308
- Watchers: 43
- Forks: 290
- Open Issues: 246
- Releases: 11
Topics
Metadata Files
README.md
Tip: For more recent evaluation approaches, such as evaluating LLMs, we recommend our newer and more actively maintained library LightEval.
🤗 Evaluate is a library that makes evaluating and comparing models and reporting their performance easier and more standardized.
It currently contains:
- Implementations of dozens of popular metrics: the existing metrics cover a variety of tasks spanning from NLP to Computer Vision, and include dataset-specific metrics. With a simple command like `accuracy = load("accuracy")`, any of these metrics is ready to use for evaluating an ML model in any framework (NumPy/Pandas/PyTorch/TensorFlow/JAX); see the sketch below.
- Comparisons and measurements: comparisons are used to measure the difference between models, and measurements are tools to evaluate datasets.
- An easy way of adding new evaluation modules to the 🤗 Hub: you can create new evaluation modules and push them to a dedicated Space on the 🤗 Hub with `evaluate-cli create [metric name]`, which lets you easily compare different metrics and their outputs for the same sets of references and predictions.
Find a metric, comparison, or measurement on the Hub
Add a new evaluation module
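As a rough illustration of the loading workflow described above, here is a minimal sketch using the `accuracy` metric; any other module published on the Hub follows the same pattern.

```python
# Minimal sketch: load a metric from the Hub and compute a score.
# Assumes the "accuracy" metric is available (it ships among the
# library's canonical metrics).
import evaluate

accuracy = evaluate.load("accuracy")
results = accuracy.compute(
    references=[0, 1, 1, 0],   # ground-truth labels
    predictions=[0, 1, 0, 0],  # model outputs
)
print(results)  # e.g. {'accuracy': 0.75}
```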
🤗 Evaluate also has lots of useful features like:
- Type checking: the input types are checked to make sure that you are using the right input formats for each metric
- Metric cards: each metric comes with a card that describes its values, limitations, and ranges, as well as examples of its usage and usefulness (see the sketch after this list).
- Community metrics: Metrics live on the Hugging Face Hub and you can easily add your own metrics for your project or to collaborate with others.
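For the metric-card point above, a hedged sketch of inspecting a loaded module's metadata: the attribute names (`description`, `citation`, `features`) follow the documented `EvaluationModule` interface of recent releases and should be read as assumptions rather than a guaranteed API.

```python
# Sketch: inspecting a module's card-style metadata.
# Attribute names assume the documented EvaluationModule interface
# of recent evaluate releases.
import evaluate

accuracy = evaluate.load("accuracy")
print(accuracy.description)  # human-readable summary from the metric card
print(accuracy.citation)     # citation entry for the metric, if any
print(accuracy.features)     # expected types of `predictions` and `references`
```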
Installation
With pip
🤗 Evaluate can be installed from PyPI and has to be installed in a virtual environment (venv or conda, for instance):
```bash
pip install evaluate
```
Usage
🤗 Evaluate's main methods are:
- `evaluate.list_evaluation_modules()` to list the available metrics, comparisons, and measurements
- `evaluate.load(module_name, **kwargs)` to instantiate an evaluation module
- `results = module.compute(**kwargs)` to compute the result of an evaluation module
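Putting the three entry points together, a minimal end-to-end sketch (the `f1` module is one of the library's canonical metrics; the printed values are illustrative):

```python
# Sketch of the three main entry points listed above.
import evaluate

# 1. Discover available modules (metrics, comparisons, measurements).
print(evaluate.list_evaluation_modules()[:5])

# 2. Instantiate an evaluation module by name.
f1 = evaluate.load("f1")

# 3. Compute the result for a set of predictions and references.
print(f1.compute(references=[0, 1, 1, 0], predictions=[1, 1, 1, 0]))
# e.g. {'f1': 0.8}
```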
Adding a new evaluation module
First install the necessary dependencies to create a new metric with the following command:
```bash
pip install evaluate[template]
```
Then you can get started with the following command, which will create a new folder for your metric and display the necessary steps:
```bash
evaluate-cli create "Awesome Metric"
```
See this step-by-step guide in the documentation for detailed instructions.
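For orientation only: a new module created this way is essentially a subclass of `evaluate.Metric` that fills in `_info()` and `_compute()`. The sketch below assumes that interface; the class name, fields, and metric logic are illustrative, not a copy of the generated template.

```python
# Hedged sketch of what a new evaluation module roughly looks like.
# The layout mirrors the evaluate.Metric interface; the details (and the
# result key "awesome_metric") are illustrative assumptions.
import datasets
import evaluate


class AwesomeMetric(evaluate.Metric):
    def _info(self):
        # Describes the module: docs for the metric card and expected inputs.
        return evaluate.MetricInfo(
            description="Fraction of predictions that exactly match the reference.",
            citation="",
            inputs_description="Lists of integer predictions and references.",
            features=datasets.Features(
                {
                    "predictions": datasets.Value("int32"),
                    "references": datasets.Value("int32"),
                }
            ),
        )

    def _compute(self, predictions, references):
        # Called by .compute() with the accumulated inputs.
        score = sum(p == r for p, r in zip(predictions, references)) / len(references)
        return {"awesome_metric": score}
```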
Credits
Thanks to @marella for letting us use the `evaluate` namespace on PyPI, previously used by his library.
Owner
- Name: Hugging Face
- Login: huggingface
- Kind: organization
- Location: NYC + Paris
- Website: https://huggingface.co/
- Twitter: huggingface
- Repositories: 344
- Profile: https://github.com/huggingface
The AI community building the future.
GitHub Events
Total
- Create event: 5
- Release event: 1
- Issues event: 31
- Watch event: 281
- Delete event: 6
- Member event: 1
- Issue comment event: 49
- Push event: 13
- Pull request review comment event: 5
- Pull request event: 20
- Pull request review event: 13
- Fork event: 37
Last Year
- Create event: 5
- Release event: 1
- Issues event: 31
- Watch event: 281
- Delete event: 6
- Member event: 1
- Issue comment event: 49
- Push event: 13
- Pull request review comment event: 5
- Pull request event: 20
- Pull request review event: 13
- Fork event: 37
Committers
Last synced: 9 months ago
Top Committers
| Name | Email (masked) | Commits |
|---|---|---|
| Quentin Lhoest | 4****q | 201 |
| Albert Villanova del Moral | 8****a | 116 |
| Sasha Luccioni | l****s@m****c | 103 |
| Leandro von Werra | l****a | 99 |
| Mario Šaško | m****7@g****m | 50 |
| Thomas Wolf | t****f | 29 |
| sashavor | a****a@g****m | 24 |
| leandro | l****a@s****o | 24 |
| sashavor | s****i@h****o | 23 |
| helen | 3****n | 16 |
| Bram Vanroy | B****y@U****e | 13 |
| lewtun | l****l@g****m | 13 |
| Patrick von Platen | p****n@g****m | 12 |
| mathemakitten | h****n@h****o | 9 |
| fxmarty | 9****y | 9 |
| Steven Liu | 5****u | 6 |
| Sylvain Lesage | s****o@r****t | 6 |
| emibaylor | 2****r | 6 |
| meg | 9****e | 6 |
| Simon Brandeis | 3****s | 6 |
| Mishig | d****g@g****m | 4 |
| douwekiela | d****a | 4 |
| Yacine Jernite | y****e | 4 |
| Sylvain Gugger | 3****r | 4 |
| Philipp Schmid | 3****d | 4 |
| Julien Plu | p****n@g****m | 4 |
| Steven | s****u@g****m | 4 |
| Nima Boscarino | n****o@g****m | 3 |
| Ricardo Rei | r****i@u****m | 3 |
| Lysandre Debut | l****e@h****o | 2 |
| and 99 more... | | |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 200
- Total pull requests: 178
- Average time to close issues: 2 months
- Average time to close pull requests: about 1 month
- Total issue authors: 169
- Total pull request authors: 91
- Average comments per issue: 1.92
- Average comments per pull request: 0.95
- Merged pull requests: 58
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 38
- Pull requests: 50
- Average time to close issues: 2 days
- Average time to close pull requests: 2 days
- Issue authors: 33
- Pull request authors: 16
- Average comments per issue: 0.21
- Average comments per pull request: 0.26
- Merged pull requests: 19
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- albertvillanova (7)
- lvwerra (7)
- daskol (4)
- FlorinAndrei (3)
- shivanraptor (3)
- lewtun (2)
- jpodivin (2)
- trajepl (2)
- BramVanroy (2)
- lowlypalace (2)
- AndreaSottana (2)
- boyleconnor (2)
- mathemakitten (2)
- NightMachinery (2)
- NielsRogge (2)
Pull Request Authors
- lhoestq (21)
- albertvillanova (11)
- MedAhmedKrichen (5)
- shenxiangzhuang (4)
- qubvel (4)
- krishnap25 (4)
- jpodivin (4)
- skyil7 (4)
- nikvaessen (3)
- lvwerra (3)
- hazrulakmal (3)
- Wauplin (3)
- tybrs (2)
- tupini07 (2)
- milistu (2)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 4
- Total downloads: 3,631,542 last month (PyPI)
- Total docker downloads: 24,936,750
- Total dependent packages: 222 (may contain duplicates)
- Total dependent repositories: 2,480 (may contain duplicates)
- Total versions: 35
- Total maintainers: 3
pypi.org: evaluate
HuggingFace community-driven open-source library of evaluation
- Homepage: https://github.com/huggingface/evaluate
- Documentation: https://evaluate.readthedocs.io/
- License: Apache 2.0
- Latest release: 0.4.5 (published 8 months ago)
Rankings
proxy.golang.org: github.com/huggingface/evaluate
- Documentation: https://pkg.go.dev/github.com/huggingface/evaluate#section-documentation
- License: apache-2.0
- Latest release: v0.4.5 (published 8 months ago)
Rankings
conda-forge.org: evaluate
- Homepage: https://github.com/huggingface/evaluate
- License: Apache-2.0
- Latest release: 0.2.2 (published over 3 years ago)
Rankings
anaconda.org: evaluate
Evaluate is a library that makes evaluating and comparing models and reporting their performance easier and more standardized. It currently contains: implementations of dozens of popular metrics covering a variety of tasks spanning from NLP to Computer Vision, including dataset-specific metrics. With a simple command like `accuracy = load("accuracy")`, get any of these metrics ready to use for evaluating an ML model in any framework (NumPy/Pandas/PyTorch/TensorFlow/JAX). It also provides comparisons and measurements: comparisons are used to measure the difference between models, and measurements are tools to evaluate datasets. Finally, it offers an easy way of adding new evaluation modules to the 🤗 Hub: you can create new evaluation modules and push them to a dedicated Space on the 🤗 Hub with `evaluate-cli create [metric name]`, which lets you easily compare different metrics and their outputs for the same sets of references and predictions.
- Homepage: https://github.com/huggingface/evaluate
- License: Apache-2.0
- Latest release: 0.4.3 (published 8 months ago)
Rankings
Dependencies
- actions/checkout v3 composite
- actions/setup-python v4 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- huggingface_hub *
- gin-config * test
- unbabel-comet >=1.0.0 test
- scipy *
- scipy *
- datasets *
- scipy *
- torch *
- transformers *
- unidecode ==1.3.4
- scipy *
- torch *
- transformers *
- torch *
- transformers *
- torch *
- transformers *
- scikit-learn *
- nltk *
- scikit-learn *
- bert_score *
- scikit-learn *
- jiwer *
- cer >=1.2.0
- charcut >=1.1.1
- sacrebleu *
- torch *
- unbabel-comet *
- scikit-learn *
- torch *
- transformers *
- scikit-learn *
- scipy *
- nltk *
- scikit-learn *
- scipy *
- scikit-learn *
- scikit-learn *
- scikit-learn *
- scikit-learn *
- faiss-cpu *
- mauve-text *
- scikit-learn *
- nltk *
- scikit-learn *
- nltk *
- scipy *
- torch *
- transformers *
- scikit-learn *
- scikit-learn *
- scikit-learn *
- gin-config *
- scipy *
- tensorflow *
- scikit-learn *
- absl-py *
- nltk *
- rouge_score >=0.1.2
- sacrebleu *
- sacrebleu *
- sacremoses *
- seqeval *
- scikit-learn *
- scipy *
- scikit-learn *
- sacrebleu *
- trectools *
- jiwer *
- sacrebleu *
- sacremoses *
- scikit-learn *