tasksource
Datasets collection and preprocessings framework for NLP extreme multitask learning
Science Score: 64.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references: not found
- ✓ Academic publication links: arxiv.org
- ✓ Committers with academic emails: 1 of 2 committers (50.0%) from academic institutions
- ○ Institutional organization owner: not found
- ○ JOSS paper metadata: not found
- ○ Scientific vocabulary similarity: low similarity (12.2%) to scientific vocabulary
Keywords
Repository
Datasets collection and preprocessings framework for NLP extreme multitask learning
Basic Info
Statistics
- Stars: 184
- Watchers: 3
- Forks: 10
- Open Issues: 4
- Releases: 48
Topics
Metadata Files
README.md
tasksource
600+ curated datasets and preprocessings for instant and interchangeable use
Hugging Face Datasets is an excellent library, but it lacks standardization, and datasets often require preprocessing work before they can be used interchangeably.
tasksource streamlines interchangeable datasets usage to scale evaluation or multi-task learning.
Each dataset is standardized to a MultipleChoice, Classification, or TokenClassification template with canonical fields. We focus on discriminative tasks (= with negative examples or classes) for our annotations but also provide a SequenceToSequence template. All implemented preprocessings are in tasks.py or tasks.md. A preprocessing is a function that accepts a dataset and returns the standardized dataset. Preprocessing code is concise and human-readable.
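As an illustration of the idea (this is not the tasksource API itself, and the canonical field names below are hypothetical), a preprocessing is essentially a function that renames dataset-specific columns onto a shared template:

```python
# Illustrative sketch only -- not the real tasksource API. It shows the core
# idea: a preprocessing maps dataset-specific fields onto canonical template
# fields so that preprocessed datasets become column-compatible.
def to_multiple_choice(example, prompt_key, choice_keys, label_key):
    """Rename one raw example's fields to hypothetical canonical names."""
    return {
        "inputs": example[prompt_key],
        "choices": [example[k] for k in choice_keys],
        "labels": example[label_key],
    }

raw = {
    "sentence": "The trophy didn't fit in the suitcase because _ was too big.",
    "option1": "the trophy",
    "option2": "the suitcase",
    "answer": 0,
}
standardized = to_multiple_choice(raw, "sentence", ["option1", "option2"], "answer")
print(standardized["choices"])  # ['the trophy', 'the suitcase']
```

Once every dataset exposes the same columns, downstream evaluation and multi-task code no longer needs per-dataset logic.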
Installation and usage:

```shell
pip install tasksource
```

```python
from tasksource import list_tasks, load_task

df = list_tasks(multilingual=False)  # takes some time

# All yielded datasets can be used interchangeably:
for id in df[df.task_type == "MultipleChoice"].id:
    dataset = load_task(id)
```
Browse the 500+ curated tasks in tasks.md (200+ MultipleChoice tasks, 200+ Classification tasks), and feel free to request a new task. Datasets are downloaded to $HF_DATASETS_CACHE (like any Hugging Face dataset), so ensure you have more than 100GB of space available.
You can now also use:

```python
from datasets import load_dataset

load_dataset("tasksource/data", "glue/rte", max_rows=30_000)
```
Pretrained models:
A text encoder pretrained on tasksource reached state-of-the-art results: 🤗/deberta-v3-base-tasksource-nli
Tasksource pretraining is notably helpful for RLHF reward modeling and for classification in general, including zero-shot. Large and multilingual versions are also available.
tasksource-instruct
The repo also contains some recasting code to convert tasksource datasets to instructions, providing one of the richest instruction-tuning datasets: 🤗/tasksource-instruct-v0
tasksource-label-nli
We also recast all classification tasks as natural language inference to improve entailment-based zero-shot classification: 🤗/zero-shot-label-nli
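The mechanism this targets can be sketched offline: every candidate label becomes an NLI hypothesis, and the label whose hypothesis is most entailed wins. The hypothesis template and the scoring function below are toy stand-ins for a real NLI model, not the actual model code.

```python
# Toy sketch of entailment-based zero-shot classification. `entail_score`
# is a dummy word-overlap scorer standing in for a real NLI model.
def zero_shot_classify(text, labels, entail_score):
    # Turn each candidate label into an NLI hypothesis, score it against
    # the input text as premise, and return the best-scoring label.
    hypotheses = {lab: f"This example is about {lab}." for lab in labels}
    scores = {lab: entail_score(text, h) for lab, h in hypotheses.items()}
    return max(scores, key=scores.get)

def entail_score(premise, hypothesis):
    # Dummy scorer: counts hypothesis words present in the premise.
    premise_words = set(premise.lower().split())
    return sum(w in premise_words for w in hypothesis.lower().replace(".", "").split())

label = zero_shot_classify(
    "This sports article covers the championship game.",
    ["sports", "politics"],
    entail_score,
)
print(label)  # sports
```

With a real entailment model in place of the dummy scorer, this is the standard recipe behind entailment-based zero-shot pipelines.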
Write and use custom preprocessings
```python
from tasksource import MultipleChoice

# 'question_propmt' is the field name as spelled in the CODAH dataset itself
codah = MultipleChoice(
    'question_propmt',
    choices_list='candidate_answers',
    labels='correct_answer_idx',
    dataset_name='codah',
    config_name='codah',
)

winogrande = MultipleChoice(
    'sentence',
    ['option1', 'option2'],
    'answer',
    dataset_name='winogrande',
    config_name='winogrande_xl',
    splits=['train', 'validation', None],  # test labels are not usable
)

# Aligned datasets (same columns) can be used interchangeably
tasks = [winogrande.load(), codah.load()]
```
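Because aligned datasets expose the same columns, a multi-task training mixture is just their concatenation. A minimal sketch, with plain dicts standing in for Hugging Face `Dataset` objects and hypothetical field names:

```python
# Sketch: rows from two "aligned" datasets share the same columns, so a
# multi-task mixture is simply their concatenation. Plain dicts stand in
# for Hugging Face Dataset objects; field names are hypothetical.
winogrande_rows = [
    {"inputs": "The trophy didn't fit because _ was too big.",
     "choices": ["the trophy", "the suitcase"], "labels": 0},
]
codah_rows = [
    {"inputs": "A man opens a jar. He",
     "choices": ["twists the lid", "reads a book", "flies away"], "labels": 0},
]

mixture = winogrande_rows + codah_rows
# Every row has the same schema, so one training loop handles all tasks.
assert all(set(row) == {"inputs", "choices", "labels"} for row in mixture)
print(len(mixture))  # 2
```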
### Citation and contact
For more details, refer to this article:
```bib
@inproceedings{sileo-2024-tasksource,
    title = "tasksource: A Large Collection of {NLP} tasks with a Structured Dataset Preprocessing Framework",
    author = "Sileo, Damien",
    booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
    month = may,
    year = "2024",
    address = "Torino, Italia",
    publisher = "ELRA and ICCL",
    url = "https://aclanthology.org/2024.lrec-main.1361",
    pages = "15655--15684",
}
```
For help integrating tasksource into your experiments, please contact damien.sileo@inria.fr.
Owner
- Login: sileod
- Kind: user
- Repositories: 25
- Profile: https://github.com/sileod
Damien Sileo
Citation (CITATION.cff)
```yaml
cff-version: 1.1.0
message: "If you use this work, please cite it as below."
authors:
  - family-names: "Sileo"
    given-names: "Damien"
title: "tasksource: A Dataset Harmonization Framework for Streamlined NLP Multi-Task Learning and Evaluation"
version: "1.0.0"
date-released: 2023-01-01
url: "https://arxiv.org/abs/2301.05948"
```
GitHub Events
Total
- Release event: 1
- Watch event: 39
- Issue comment event: 1
- Push event: 3
- Pull request event: 2
- Fork event: 2
- Create event: 2
Last Year
- Release event: 1
- Watch event: 39
- Issue comment event: 1
- Push event: 3
- Pull request event: 2
- Fork event: 2
- Create event: 2
Committers
Last synced: almost 3 years ago
All Time
- Total Commits: 132
- Total Committers: 2
- Avg Commits per committer: 66.0
- Development Distribution Score (DDS): 0.038
Top Committers
| Name | Email | Commits |
|---|---|---|
| sileod | d****o@g****m | 127 |
| Damien Sileo | d****o@m****r | 5 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 8 months ago
All Time
- Total issues: 9
- Total pull requests: 0
- Average time to close issues: about 1 month
- Average time to close pull requests: N/A
- Total issue authors: 9
- Total pull request authors: 0
- Average comments per issue: 1.44
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- cuongnguyengit (1)
- pesvut (1)
- avidale (1)
- assaftibm (1)
- Deehan1866 (1)
- A1exRey (1)
- nickypro (1)
- nivibilla (1)
- imoneoi (1)
Packages
- Total packages: 1
- Total downloads: 244 last month (PyPI)
- Total dependent packages: 1
- Total dependent repositories: 0
- Total versions: 45
- Total maintainers: 1
pypi.org: tasksource
Preprocessings to prepare datasets for a task
- Homepage: https://github.com/sileod/tasksource/
- Documentation: https://tasksource.readthedocs.io/
- License: BSD License
- Latest release: 0.0.47 (published about 1 year ago)
Maintainers (1)
Dependencies
- actions/checkout v3 composite
- pypa/gh-action-pypi-publish release/v1 composite
- actions/checkout v3 composite