https://github.com/csinva/iprompt

Finding semantically meaningful and accurate prompts.


Science Score: 36.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
    Found 1 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, nature.com
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.1%) to scientific vocabulary

Keywords

ai autoprompt deep-learning explainability explainable-ai interpretability iprompt language-model large-language-models ml natural-language-processing neural-network prompting text text-classification xai
Last synced: 5 months ago

Repository

Finding semantically meaningful and accurate prompts.

Basic Info
Statistics
  • Stars: 46
  • Watchers: 3
  • Forks: 8
  • Open Issues: 0
  • Releases: 0
Topics
ai autoprompt deep-learning explainability explainable-ai interpretability iprompt language-model large-language-models ml natural-language-processing neural-network prompting text text-classification xai
Created over 3 years ago · Last pushed over 2 years ago
Metadata Files
Readme

readme.md

Interpretable autoprompting

Natural language explanations of a dataset via language-model autoprompting.

📚 sklearn-friendly API · 📖 demo notebook

Official code for using / reproducing iPrompt from the paper "Explaining Patterns in Data with Language Models via Interpretable Autoprompting" (Singh, Morris, Aneja, Rush, & Gao, 2022). iPrompt generates a human-interpretable prompt that explains patterns in data while still inducing strong generalization performance.

https://user-images.githubusercontent.com/4960970/197355573-e5a1af4c-0784-4344-a314-79793f284b97.mov

Quickstart

Installation: `pip install imodelsx` (or, for more control, clone and install from source)

Usage example (see imodelsX for more details):

```python
from imodelsx import explain_dataset_iprompt, get_add_two_numbers_dataset

# get a simple dataset of adding two numbers
input_strings, output_strings = get_add_two_numbers_dataset(num_examples=100)
for i in range(5):
    print(repr(input_strings[i]), repr(output_strings[i]))

# explain the relationship between the inputs and outputs
# with a natural-language prompt string
prompts, metadata = explain_dataset_iprompt(
    input_strings=input_strings,
    output_strings=output_strings,
    checkpoint='EleutherAI/gpt-j-6B',  # which language model to use
    num_learned_tokens=3,              # how long of a prompt to learn
    n_shots=3,                         # shots per example
    n_epochs=15,                       # how many epochs to search
    verbose=0,                         # how much to print
    llm_float16=True,                  # whether to load the model in float16
)

# prompts is a list of found natural-language prompt strings
```

Docs

Abstract: Large language models (LLMs) have displayed an impressive ability to harness natural language to perform complex tasks. In this work, we explore whether we can leverage this learned ability to find and explain patterns in data. Specifically, given a pre-trained LLM and data examples, we introduce interpretable autoprompting (iPrompt), an algorithm that generates a natural-language string explaining the data. iPrompt iteratively alternates between generating explanations with an LLM and reranking them based on their performance when used as a prompt. Experiments on a wide range of datasets, from synthetic mathematics to natural-language understanding, show that iPrompt can yield meaningful insights by accurately finding groundtruth dataset descriptions. Moreover, the prompts produced by iPrompt are simultaneously human-interpretable and highly effective for generalization: on real-world sentiment classification datasets, iPrompt produces prompts that match or even improve upon human-written prompts for GPT-3. Finally, experiments with an fMRI dataset show the potential for iPrompt to aid in scientific discovery.
  • the main API requires simply importing `imodelsx`
  • the experiments and experiments/scripts folders contain hyperparameters for running the sweeps reported in the paper
    • note: args that start with `use_` are boolean
  • the notebooks folder contains notebooks for analyzing the outputs + making figures
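The propose-and-rerank loop described in the abstract can be sketched in a few lines. This is a toy illustration, not the package's actual implementation: `propose_explanations` and `score_prompt` are hypothetical stand-ins for the LLM-based candidate generation and prompt-evaluation steps, replaced here with deterministic functions so the control flow runs on its own.

```python
def propose_explanations(data, n_candidates=4):
    # Stand-in for "generate candidate explanations with an LLM".
    return [f"Add the two numbers (variant {i})" for i in range(n_candidates)]

def score_prompt(prompt, data):
    # Stand-in for "accuracy of the LLM on `data` when prefixed with `prompt`".
    # Here a trivial deterministic score keeps the example runnable.
    return -len(prompt)

def iprompt_sketch(data, n_iters=3, keep_top=2):
    candidates = []
    for _ in range(n_iters):
        # 1) propose new natural-language explanations of the data
        candidates += propose_explanations(data)
        # 2) rerank all candidates by performance when used as a prompt,
        #    keeping only the best few for the next round
        candidates = sorted(
            candidates, key=lambda p: score_prompt(p, data), reverse=True
        )[:keep_top]
    return candidates

best = iprompt_sketch([("2 5", "7"), ("1 3", "4")])
print(best[0])
```

In the real algorithm both stand-ins call the language model, so the surviving candidates are strings that are simultaneously fluent explanations and effective prompts.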

Related work

  • fMRI data experiment: uses scientific data/code from https://github.com/HuthLab/speech_model_tutorial, linked to the paper "Natural speech reveals the semantic maps that tile human cerebral cortex", Huth, A. G. et al. (2016), Nature.
  • AutoPrompt: find an (uninterpretable) prompt using input-gradients (paper; github)
  • Aug-imodels: Explain a dataset by fitting an interpretable linear model/decision tree leveraging a pre-trained language model (paper; github)

Testing

  • to check that the pipeline works, install `pytest`, then run `pytest` from the repo's root directory

If this package is useful for you, please cite the following!

```r
@article{singh2022iprompt,
  title = {Explaining Patterns in Data with Language Models via Interpretable Autoprompting},
  author = {Singh, Chandan and Morris, John X. and Aneja, Jyoti and Rush, Alexander M. and Gao, Jianfeng},
  year = {2022},
  url = {https://arxiv.org/abs/2210.01848},
  publisher = {arXiv},
  doi = {10.48550/ARXIV.2210.01848}
}
```

Owner

  • Name: Chandan Singh
  • Login: csinva
  • Kind: user
  • Company: Microsoft Research

Senior researcher @Microsoft interpreting ML models in science and medicine. PhD from UC Berkeley.

Issues and Pull Requests

Last synced: 10 months ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels

Dependencies

requirements.txt pypi
  • datasets *
  • dict_hash *
  • imodelsx *
  • numpy *
  • pandas *
  • scikit-learn *
  • scipy *
  • torch *
  • tqdm *
  • transformers *
setup.py pypi