promptsource
Toolkit for creating, sharing and using natural language prompts.
Science Score: 64.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org
- ✓ Committers with academic emails: 9 of 65 committers (13.8%) from academic institutions
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (14.8%) to scientific vocabulary
Keywords
Keywords from Contributors
Repository
Toolkit for creating, sharing and using natural language prompts.
Basic Info
Statistics
- Stars: 2,923
- Watchers: 36
- Forks: 368
- Open Issues: 43
- Releases: 5
Topics
Metadata Files
README.md
PromptSource
PromptSource is a toolkit for creating, sharing and using natural language prompts.
Recent work has shown that large language models exhibit the ability to perform reasonable zero-shot generalization to new tasks. For instance, GPT-3 demonstrated that large language models have strong zero- and few-shot abilities. FLAN and T0 then demonstrated that pre-trained language models fine-tuned in a massively multitask fashion yield even stronger zero-shot performance. A common denominator in these works is the use of prompts which has gained interest among NLP researchers and engineers. This emphasizes the need for new tools to create, share and use natural language prompts.
Prompts are functions that map an example from a dataset to a natural language input and target output. PromptSource contains a growing collection of prompts (which we call P3: Public Pool of Prompts). As of January 20, 2022, there are ~2,000 English prompts for 170+ English datasets in P3.
PromptSource provides the tools to create and share natural language prompts (see How to create prompts), and then use the thousands of existing and newly created prompts through a simple API (see How to use prompts). Prompts are saved in standalone structured files and are written in a simple templating language called Jinja. An example of a prompt available in PromptSource for SNLI is:

```jinja2
{{premise}}
Question: Does this imply that "{{hypothesis}}"? Yes, no, or maybe? ||| {{answer_choices[label]}}
```
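The `|||` separator splits a template into its input and target halves. As a rough, self-contained illustration of how such a template maps an example to an (input, target) pair, here is a hypothetical stand-in using plain string substitution, not PromptSource's actual Jinja renderer:

```python
import re

# The SNLI template from above; "|||" separates input from target.
TEMPLATE = ('{{premise}}\nQuestion: Does this imply that "{{hypothesis}}"? '
            'Yes, no, or maybe? ||| {{answer_choices[label]}}')

def render(template, example, answer_choices):
    # Replace the answer_choices lookup first, then the plain {{field}} placeholders.
    filled = template.replace("{{answer_choices[label]}}", answer_choices[example["label"]])
    filled = re.sub(r"\{\{(\w+)\}\}", lambda m: str(example[m.group(1)]), filled)
    source, target = (part.strip() for part in filled.split("|||"))
    return source, target

# SNLI labels: 0 = entailment, 1 = neutral, 2 = contradiction
example = {"premise": "A man is playing guitar.", "hypothesis": "Someone makes music.", "label": 0}
src, tgt = render(TEMPLATE, example, ["Yes", "Maybe", "No"])
print(tgt)  # Yes
```

The real `Template.apply` additionally supports Jinja filters and control flow, which this sketch deliberately omits.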
You can browse through existing prompts on the hosted version of PromptSource.
Setup
If you do not intend to create new prompts, you can simply run:
```bash
pip install promptsource
```
Otherwise, you need to install the repo locally:
1. Download the repo
2. Navigate to the root directory of the repo
3. Run `pip install -e .` to install the promptsource module
Note: for stability reasons, you will currently need a Python 3.7 environment to run the last step. However, if you only intend to use the prompts, and not create new prompts through the interface, you can remove this constraint in the setup.py and install the package locally.
How to use prompts
You can apply prompts to examples from datasets of the Hugging Face Datasets library.

```python
# Load an example from the dataset ag_news
from datasets import load_dataset
dataset = load_dataset("ag_news", split="train")
example = dataset[1]

# Load prompts for this dataset
from promptsource.templates import DatasetTemplates
ag_news_prompts = DatasetTemplates('ag_news')

# Print all the prompts available for this dataset. The keys of the dict are the UUIDs that
# uniquely identify each prompt, and the values are instances of Template, which wraps prompts.
print(ag_news_prompts.templates)
# {'24e44a81-a18a-42dd-a71c-5b31b2d2cb39': <promptsource.templates.Template object at ...>,
#  '8fdc1056-1029-41a1-9c67-354fc2b8ceaf': <...>, '918267e0-af68-4117-892d-2dbe66a58ce9': <...>,
#  '9345df33-4f23-4944-a33c-eef94e626862': <...>, '98534347-fff7-4c39-a795-4e69a44791f7': <...>,
#  'b401b0ee-6ffe-4a91-8e15-77ee073cd858': <...>, 'cb355f33-7e8c-4455-a72b-48d315bd4f60': <...>}

# Select a prompt by its name
prompt = ag_news_prompts["classify_question_first"]

# Apply the prompt to the example
result = prompt.apply(example)
print("INPUT: ", result[0])
# INPUT:  What label best describes this news article?
# Carlyle Looks Toward Commercial Aerospace (Reuters) Reuters - Private investment firm Carlyle Group,\which
# has a reputation for making well-timed and occasionally\controversial plays in the defense industry,
# has quietly placed\its bets on another part of the market.
print("TARGET: ", result[1])
# TARGET:  Business
```
If you are looking for the prompts available for a particular subset of a dataset, use the following syntax:

```python
dataset_name, subset_name = "super_glue", "rte"

dataset = load_dataset(dataset_name, subset_name, split="train")
example = dataset[0]

prompts = DatasetTemplates(f"{dataset_name}/{subset_name}")
```
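Throughout the API, prompts are addressed by a "dataset" or "dataset/subset" key. A tiny helper capturing that convention (the helper name is hypothetical, for illustration only):

```python
from typing import Optional

def templates_key(dataset_name: str, subset_name: Optional[str] = None) -> str:
    # DatasetTemplates expects "dataset" for datasets without subsets,
    # and "dataset/subset" otherwise (e.g. "super_glue/rte").
    return f"{dataset_name}/{subset_name}" if subset_name else dataset_name

templates_key("super_glue", "rte")  # -> "super_glue/rte"
templates_key("ag_news")            # -> "ag_news"
```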
You can also collect all the available prompts for their associated datasets:
```python
from promptsource.templates import TemplateCollection

# Get all the prompts available in PromptSource
collection = TemplateCollection()

# Print a dict where the key is the pair (dataset name, subset name)
# and the value is an instance of DatasetTemplates
print(collection.datasets_templates)
# {('poem_sentiment', None): <promptsource.templates.DatasetTemplates object at ...>,
#  ('common_gen', None): <...>, ('anli', None): <...>, ('cc_news', None): <...>,
#  ('craigslist_bargains', None): <...>, ...}
```
You can learn more about PromptSource's API to store, manipulate and use prompts in the documentation.
How to create prompts
PromptSource provides a Web-based GUI that enables developers to write prompts in a templating language and immediately view their outputs on different examples.
There are 3 modes in the app:
- Sourcing: create and write new prompts
- Prompted dataset viewer: check the prompts you wrote (or the existing ones) on the entire dataset
- Helicopter view: aggregate high-level metrics on the current state of P3
To launch the app locally, please first make sure you have followed the steps in Setup, and from the root directory of the repo, run:
```bash
streamlit run promptsource/app.py
```
You can also browse through existing prompts on the hosted version of PromptSource. Note that the hosted version disables the Sourcing mode (`streamlit run promptsource/app.py -- --read-only`).
Writing prompts
Before creating new prompts, you should read the contribution guidelines, which give a step-by-step description of how to contribute to the collection of prompts.
Datasets that require manual downloads
Some datasets are not handled automatically by `datasets` and require users to download the dataset manually (for instance, story_cloze).
To handle those datasets as well, we require users to download the dataset and put it in `~/.cache/promptsource`. This is the root directory containing all manually downloaded datasets.
You can override this default path with the `PROMPTSOURCE_MANUAL_DATASET_DIR` environment variable, which should point to the root directory.
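A minimal sketch of how this lookup could be resolved (the default path and environment variable come from this README; the helper name is hypothetical):

```python
import os
from pathlib import Path

def manual_dataset_root() -> Path:
    # PROMPTSOURCE_MANUAL_DATASET_DIR, when set, overrides the
    # default root of ~/.cache/promptsource
    override = os.environ.get("PROMPTSOURCE_MANUAL_DATASET_DIR")
    return Path(override) if override else Path.home() / ".cache" / "promptsource"
```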
Development structure
PromptSource and P3 were originally developed as part of the BigScience project for open research 🌸, a year-long initiative targeting the study of large models and datasets. The goal of the project is to research language models in a public environment outside large technology companies. The project has 600 researchers from 50 countries and more than 250 institutions.
In particular, PromptSource and P3 were the first steps for the paper Multitask Prompted Training Enables Zero-Shot Task Generalization.
You will find the official repository to reproduce the results of the paper here: https://github.com/bigscience-workshop/t-zero. We also released T0* (pronounced "T Zero"), a series of models trained on P3 and presented in the paper. Checkpoints are available here.
Known Issues
Warning or Error about Darwin on OS X: Try downgrading PyArrow to 3.0.0.
ConnectionRefusedError: [Errno 61] Connection refused: Happens occasionally. Try restarting the app.
Citation
If you find P3 or PromptSource useful, please cite the following reference:
```bibtex
@misc{bach2022promptsource,
      title={PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts},
      author={Stephen H. Bach and Victor Sanh and Zheng-Xin Yong and Albert Webson and Colin Raffel and Nihal V. Nayak and Abheesht Sharma and Taewoon Kim and M Saiful Bari and Thibault Fevry and Zaid Alyafeai and Manan Dey and Andrea Santilli and Zhiqing Sun and Srulik Ben-David and Canwen Xu and Gunjan Chhablani and Han Wang and Jason Alan Fries and Maged S. Al-shaibani and Shanya Sharma and Urmish Thakker and Khalid Almubarak and Xiangru Tang and Mike Tian-Jian Jiang and Alexander M. Rush},
      year={2022},
      eprint={2202.01279},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
Owner
- Name: BigScience Workshop
- Login: bigscience-workshop
- Kind: organization
- Email: bigscience-contact@googlegroups.com
- Website: https://bigscience.huggingface.co
- Twitter: BigScienceW
- Repositories: 28
- Profile: https://github.com/bigscience-workshop
Research workshop on large language models - The Summer of Language Models 21
Citation (CITATION.cff)
cff-version: "0.2.2"
date-released: 2022-02
message: "If you use this software, please cite it using these metadata."
title: "PromptSource"
url: "https://github.com/bigscience-workshop/promptsource"
authors:
- family-names: Bach
given-names: "Stephen H."
- family-names: Sanh
given-names: Victor
- family-names: Yong
given-names: Zheng-Xin
- family-names: Webson
given-names: Albert
- family-names: Raffel
given-names: Colin
- family-names: Nayak
given-names: "Nihal V."
- family-names: Sharma
given-names: Abheesht
- family-names: Kim
given-names: Taewoon
- family-names: Bari
given-names: "M Saiful"
- family-names: Fevry
given-names: Thibault
- family-names: Alyafeai
given-names: Zaid
- family-names: Dey
given-names: Manan
- family-names: Santilli
given-names: Andrea
- family-names: Sun
given-names: Zhiqing
- family-names: Ben-David
given-names: Srulik
- family-names: Xu
given-names: Canwen
- family-names: Chhablani
given-names: Gunjan
- family-names: Wang
given-names: Han
- family-names: Fries
given-names: "Jason Alan"
- family-names: Al-shaibani
given-names: "Maged S."
- family-names: Sharma
given-names: Shanya
- family-names: Thakker
given-names: Urmish
- family-names: Almubarak
given-names: Khalid
- family-names: Tang
given-names: Xiangru
- family-names: Tian-Jian
given-names: Mike
- family-names: Rush
given-names: "Alexander M."
preferred-citation:
type: article
authors:
- family-names: Bach
given-names: "Stephen H."
- family-names: Sanh
given-names: Victor
- family-names: Yong
given-names: Zheng-Xin
- family-names: Webson
given-names: Albert
- family-names: Raffel
given-names: Colin
- family-names: Nayak
given-names: "Nihal V."
- family-names: Sharma
given-names: Abheesht
- family-names: Kim
given-names: Taewoon
- family-names: Bari
given-names: "M Saiful"
- family-names: Fevry
given-names: Thibault
- family-names: Alyafeai
given-names: Zaid
- family-names: Dey
given-names: Manan
- family-names: Santilli
given-names: Andrea
- family-names: Sun
given-names: Zhiqing
- family-names: Ben-David
given-names: Srulik
- family-names: Xu
given-names: Canwen
- family-names: Chhablani
given-names: Gunjan
- family-names: Wang
given-names: Han
- family-names: Fries
given-names: "Jason Alan"
- family-names: Al-shaibani
given-names: "Maged S."
- family-names: Sharma
given-names: Shanya
- family-names: Thakker
given-names: Urmish
- family-names: Almubarak
given-names: Khalid
- family-names: Tang
given-names: Xiangru
- family-names: Tian-Jian
given-names: Mike
- family-names: Rush
given-names: "Alexander M."
title: "PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts"
year: 2022
publisher: "arXiv"
url: "https://arxiv.org/abs/2202.01279"
address: "Online"
GitHub Events
Total
- Watch event: 244
- Fork event: 22
Last Year
- Watch event: 244
- Fork event: 22
Committers
Last synced: 9 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Victor SANH | v****h@g****m | 127 |
| Stephen Bach | s****h@g****m | 91 |
| Colin Raffel | c****l@g****m | 41 |
| M Saiful Bari | 3****f | 26 |
| Zaid Alyafeai | a****2@g****m | 26 |
| Arun Raja | 4****b | 26 |
| Manan Dey | m****1@g****m | 25 |
| Albert Webson | 2****n | 24 |
| Sasha Rush | s****h@g****m | 20 |
| Taewoon Kim | t****8@g****m | 19 |
| Kevin Canwen Xu | c****u@1****m | 19 |
| Shanya Sharma | s****7@g****m | 17 |
| Arnaud Stiegler | a****r@g****m | 17 |
| Yong Zheng-Xin | z****g@b****u | 15 |
| Nihal Nayak | n****k@g****m | 15 |
| Urmish | u****4@g****m | 14 |
| Eliza Szczechla | 3****s | 11 |
| Thibault FEVRY | T****y@g****m | 10 |
| Han Wang | W****0@o****m | 10 |
| Gunjan Chhablani | c****n@g****m | 9 |
| Andrea Santilli | a****i@l****t | 8 |
| 姜 天戩 Mike Tian-Jian Jiang | t****g@g****m | 7 |
| Abheesht | s****e@g****m | 6 |
| Debajyoti Datta | d****a | 6 |
| srulikbd | 3****d | 6 |
| Lintang Sutawika | l****g@k****i | 6 |
| Yong Zheng Xin | z****g@m****u | 6 |
| Antoine Chaffin | 3****w | 5 |
| Matteo Manica | d****g@g****m | 5 |
| Sheng (Arnold) Shen | s****s@b****u | 5 |
| and 35 more... | ||
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 28
- Total pull requests: 85
- Average time to close issues: 6 months
- Average time to close pull requests: 2 months
- Total issue authors: 22
- Total pull request authors: 38
- Average comments per issue: 1.96
- Average comments per pull request: 1.91
- Merged pull requests: 44
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- stephenbach (3)
- tianjianjiang (3)
- VictorSanh (2)
- Muennighoff (2)
- norabelrose (1)
- guoziting112 (1)
- rbawden (1)
- lauritowal (1)
- dayeonki (1)
- shermansiu (1)
- AmirPoursaberi (1)
- fxb392 (1)
- jzf2101 (1)
- harrylyf (1)
- TianlinZhang668 (1)
Pull Request Authors
- Muennighoff (20)
- stephenbach (9)
- VictorSanh (9)
- rbawden (3)
- afaji (3)
- jzf2101 (2)
- KhalidAlt (2)
- shanyas10 (2)
- shermansiu (2)
- haileyschoelkopf (2)
- sbmaruf (2)
- gentaiscool (2)
- jordiclive (2)
- JanKalo (1)
- Shashi456 (1)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 2
- Total downloads: pypi: 255 last-month
- Total dependent packages: 2 (may contain duplicates)
- Total dependent repositories: 20 (may contain duplicates)
- Total versions: 10
- Total maintainers: 1
pypi.org: promptsource
An Integrated Development Environment and Repository for Natural Language Prompts.
- Homepage: https://github.com/bigscience-workshop/promptsource.git
- Documentation: https://promptsource.readthedocs.io/
- License: Apache Software License 2.0
- Latest release: 0.2.3 (published almost 4 years ago)
Rankings
Maintainers (1)
proxy.golang.org: github.com/bigscience-workshop/promptsource
- Documentation: https://pkg.go.dev/github.com/bigscience-workshop/promptsource#section-documentation
- License: apache-2.0
- Latest release: v0.2.3 (published almost 4 years ago)
Rankings
Dependencies
- actions/checkout v2 composite
- actions/setup-python v2 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- jitterbit/get-changed-files v1 composite