self-refine
LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively.
Science Score: 54.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file (found CITATION.cff file)
- ✓ codemeta.json file (found codemeta.json file)
- ✓ .zenodo.json file (found .zenodo.json file)
- ○ DOI references
- ✓ Academic publication links (links to: arxiv.org)
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity, 14.4%, to scientific vocabulary)
Keywords
Repository
LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively.
Basic Info
- Host: GitHub
- Owner: madaan
- License: apache-2.0
- Language: Python
- Default Branch: main
- Homepage: https://selfrefine.info
- Size: 53.7 MB
Statistics
- Stars: 679
- Watchers: 14
- Forks: 58
- Open Issues: 9
- Releases: 0
Topics
Metadata Files
README.md
Self-Refine: Iterative Refinement with Self-Feedback
With Self-Refine, LLMs can generate feedback on their work, use it to improve the output, and repeat this process.

Website | Paper
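The core loop is small. Below is a minimal Python sketch of the generate-critique-refine cycle; generate, feedback, and refine are placeholders for prompt-templated LLM calls, not functions from this repository:

```python
def self_refine(task, generate, feedback, refine, max_iters=4, stop_score=25):
    """Generic Self-Refine loop: draft, critique, refine, repeat.

    `feedback` returns a (critique, score) pair; iteration stops when the
    score reaches `stop_score` or the iteration budget is exhausted.
    """
    output = generate(task)                       # initial draft
    for _ in range(max_iters):
        critique, score = feedback(task, output)  # model critiques its own output
        if score >= stop_score:                   # stopping criterion met
            break
        output = refine(task, output, critique)   # improve using the feedback
    return output
```

The acronym example in the Getting Started section follows exactly this shape: each GEN line is a draft, each SCORES block is the feedback, and the total score acts as the stopping criterion.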
Table of Contents
- Updates
- Setup
- Getting Started with Acronym Generation
- Dialogue Response Generation
- Code Readability Improvement
- Commongen
- GSM-8k
- Yelp
- PIE
- General setup
- Citation
Updates
- Nov 2023: Added visual self-refine examples and colabs. Use GPT-4V to write TikZ code for diagrams, and improve it iteratively.
Stokes' Theorem Example
Unicorn Example
Setup
- We use prompt-lib for querying LLMs. You can install prompt-lib by running the following commands:
```sh
git clone https://github.com/reasoning-machines/prompt-lib
pip install prompt-lib/
```
Depending on your default settings, you may want to use the following to set a PYTHONPATH:
```sh
export PYTHONPATH=".:../:.:src:../:../../:.:prompt-lib"
```
Getting Started with Acronym Generation
```sh
python -u src/acronym/run.py "Using language models of code for few-shot commonsense"
```
- Since we sample with temperature 0.7, you'll get different results each time you run the above command.
```text
0 INIT> Using language models of code for few-shot commonsense

0 GEN> CLoCK
0 SCORES>
* Ease of pronunciation: CLoCK is pronounced "clah-k". This is an easy acronym to pronounce. 4/5
* Ease of spelling: CLoCK is easy to spell. 5/5
* Relation to title: CLoCK stands for "Code Language Models of Commonsense Knowledge" which is related to the title. 5/5
* Positive connotation: CLoCK is a positive acronym. It implies accuracy and efficiency. 5/5
* Well-known: CLoCK is not a well-known acronym. 2/5
* Total score: 21/25

1 GEN> CMNSC
1 SCORES>
* Ease of pronunciation: CMNSC is pronounced "see-min-es-see". This is a difficult acronym to pronounce. 1/5
* Ease of spelling: CMNSC is difficult to spell. 1/5
* Relation to title: CMNSC stands for "Commonsense" which is related to the title. 4/5
* Positive connotation: CMNSC is a neutral acronym. 3/5
* Well-known: CMNSC is not a well-known acronym. 1/5
* Total score: 10/25

2 GEN> COMMIT
2 SCORES>
* Ease of pronunciation: COMMIT is pronounced "com-mit". This is an easy acronym to pronounce. 4/5
* Ease of spelling: COMMIT is easy to spell. 5/5
* Relation to title: COMMIT stands for "Commonsense Model of Code for Intuitive Training" which is related to the title. 5/5
* Positive connotation: COMMIT is a positive acronym. It implies commitment and dedication. 5/5
* Well-known: COMMIT is not a well-known acronym. 2/5
* Total score: 21/25
```
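The run-to-run variation comes from temperature sampling. As an illustration only (this is not prompt-lib's internals), temperature rescales the model's logits before sampling, so lower temperatures concentrate probability on the top choice while higher ones spread it out:

```python
import math
import random

def sample_with_temperature(logits, temperature=0.7, rng=random):
    """Sample an index from softmax(logits / temperature)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                            # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    r = rng.random() * sum(exps)               # pick a point in the cumulative mass
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e
        if r <= acc:
            return i
    return len(exps) - 1
```

At temperature 0.7 the sampler still spreads mass across several plausible tokens, which is why repeated runs yield different acronyms.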
Dialogue Response Generation
```sh
PYTHONPATH="." python -u src/responsegen/run.py --output <OUTPUT FILE> --size <DATA SIZE>
```
- Use size 0 for running on all test instances
Code Readability Improvement
Note: Please unzip 'data/tasks/codeclean/code_readability/codenet-python-train.jsonl.zip' before running the following commands!
Running:
```sh
PYTHONPATH="." python -u src/readability/readability.py --output <OUTPUT FILE>
```
Evaluation:
```sh
PYTHONPATH="." python -u src/readability/{count_comment|count_function|count_meaningful_var}.py --file <INPUT FILE>
```
Commongen
- We use a hard version of commongen. The data is located in data/prompt/commongen. You can run the task with the following command:
```sh
python -u src/commongen/run.py cmd stair bubble team dryer puppy aliens cat
```
GSM-8k
- To run the GSM-8k task:
```sh
python -u src/gsm/run.py
```
The outputs will be saved in data/tasks/gsm/gsm_outputs.jsonl. To evaluate the outputs:
```sh
python src/gsm/gsm_selfref_eval.py --path data/tasks/gsm/gsm_outputs.jsonl
```
- The evaluation script will also generate a report (data/tasks/gsm/gsm_outputs.jsonl.reports.txt) showing examples of wrong generations, the feedback on them, and the refined generations produced in response.
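Conceptually, the evaluation reduces to comparing each refined answer with the gold answer across the jsonl outputs. A hedged sketch follows; the field names refined_answer and gold_answer are hypothetical, not the actual schema of gsm_outputs.jsonl (see src/gsm/gsm_selfref_eval.py for the real logic):

```python
import json

def gsm_accuracy(path):
    """Fraction of records whose (hypothetical) refined_answer matches gold_answer."""
    correct = total = 0
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            total += 1
            if record["refined_answer"] == record["gold_answer"]:
                correct += 1
    return correct / total if total else 0.0
```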
Yelp
- To run the Yelp task:
```sh
python -u src/sentiment_transfer_sr/run.py data/tasks/yelp/yelp-extreme.jsonl 4 none
```
- The outputs will be saved in data/tasks/yelp/
PIE
- To run the PIE task:
```sh
python -u src/pie/run.py --slow_programs_file data/tasks/pie/codenet-python-test-1k.jsonl --max_attempts 4 --outfile data/tasks/pie/output --feedback_type rich
```
- For evaluation details, please see docs/pie_eval.md.
General setup
- Each task has three different types of prompts:
  - Init: used to initialize the task; this is how the initial output is generated.
  - Feedback: used to get feedback from the model on the intermediate results.
  - Iterate: used to get the next iteration from the model, based on the feedback.
- Every task has a run.py that initializes the prompts and runs the task. As an example, the prompts for commongen are as follows:
- Init prompt:
```sh
python src/commongen/task_init.py
```
- Feedback prompt:
```sh
python src/commongen/feedback.py
```
- Iterate prompt:
```sh
python src/commongen/task_iterate.py
```
You can also see these prompts on our website.
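A task's run.py can be imagined as wiring the three prompt types into one loop. This is an illustrative sketch, not the repository's actual code; the class name, the {x}/{y}/{fb} template slots, and the string-matching stop check are all assumptions:

```python
class SelfRefineTask:
    """Illustrative driver combining init, feedback, and iterate prompts."""

    def __init__(self, init_prompt, feedback_prompt, iterate_prompt, query_llm):
        self.init_prompt = init_prompt          # template with an {x} slot
        self.feedback_prompt = feedback_prompt  # template with {x} and {y} slots
        self.iterate_prompt = iterate_prompt    # template with {x}, {y}, {fb} slots
        self.query_llm = query_llm              # e.g. a prompt-lib query function

    def run(self, instance, max_attempts=4):
        # Init prompt produces the first draft.
        attempt = self.query_llm(self.init_prompt.format(x=instance))
        history = [attempt]
        for _ in range(max_attempts):
            # Feedback prompt critiques the current draft.
            fb = self.query_llm(self.feedback_prompt.format(x=instance, y=attempt))
            if "it is correct" in fb.lower():   # task-specific stop phrase (assumed)
                break
            # Iterate prompt produces the next draft from draft + feedback.
            attempt = self.query_llm(
                self.iterate_prompt.format(x=instance, y=attempt, fb=fb))
            history.append(attempt)
        return attempt, history
```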
Citation
```bibtex
@misc{madaan2023selfrefine,
  title={Self-Refine: Iterative Refinement with Self-Feedback},
  author={Aman Madaan and Niket Tandon and Prakhar Gupta and Skyler Hallinan and Luyu Gao and Sarah Wiegreffe and Uri Alon and Nouha Dziri and Shrimai Prabhumoye and Yiming Yang and Sean Welleck and Bodhisattwa Prasad Majumder and Shashank Gupta and Amir Yazdanbakhsh and Peter Clark},
  year={2023},
  eprint={2303.17651},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
```mermaid
flowchart LR
    Generator -->|Initializes| Unrefined
    Critic_1 --> Critique_fb
    ... --> Critique_fb
    Critic_k --> Critique_fb
    Critique_fb --> Unrefined{Output to Refine}
    Unrefined --> Refiner
    Refiner --> |R: y_t, x, fb| Refined_Output{Refined Output}
    Refined_Output --> |Stopping Criteria Not Met| Unrefined
```
Owner
- Name: Aman Madaan
- Login: madaan
- Kind: user
- Location: Pittsburgh, PA
- Website: https://madaan.github.io
- Twitter: aman_madaan
- Repositories: 11
- Profile: https://github.com/madaan
PhD student at CMU
Citation (CITATION.bib)
```bibtex
@misc{madaan2023selfrefine,
  title={Self-Refine: Iterative Refinement with Self-Feedback},
  author={Aman Madaan and Niket Tandon and Prakhar Gupta and Skyler Hallinan and Luyu Gao and Sarah Wiegreffe and Uri Alon and Nouha Dziri and Shrimai Prabhumoye and Yiming Yang and Sean Welleck and Bodhisattwa Prasad Majumder and Shashank Gupta and Amir Yazdanbakhsh and Peter Clark},
  year={2023},
  eprint={2303.17651},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
GitHub Events
Total
- Issues event: 1
- Watch event: 135
- Fork event: 17
Last Year
- Issues event: 1
- Watch event: 135
- Fork event: 17
Dependencies
- actions/checkout v2 composite
- technote-space/toc-generator v4 composite
- github-pages >= 0 development
- jekyll-feed ~> 0.6 development
- eventmachine ~> 1.2
- kramdown-parser-gfm >= 0
- minima ~> 2.0
- tzinfo ~> 1.2
- tzinfo-data >= 0
- activesupport 5.2.8.1
- addressable 2.8.1
- bundler 2.2.32
- coffee-script 2.4.1
- coffee-script-source 1.11.1
- colorator 1.1.0
- commonmarker 0.17.13
- concurrent-ruby 1.1.10
- dnsruby 1.61.9
- em-websocket 0.5.3
- ethon 0.15.0
- eventmachine 1.2.7
- execjs 2.8.1
- faraday 1.0.1
- ffi 1.15.5
- forwardable-extended 2.6.0
- gemoji 3.0.1
- github-pages 207
- github-pages-health-check 1.16.1
- html-pipeline 2.14.3
- http_parser.rb 0.8.0
- i18n 0.9.5
- jekyll 3.9.0
- jekyll-avatar 0.7.0
- jekyll-coffeescript 1.1.1
- jekyll-commonmark 1.3.1
- jekyll-commonmark-ghpages 0.1.6
- jekyll-default-layout 0.1.4
- jekyll-feed 0.13.0
- jekyll-gist 1.5.0
- jekyll-github-metadata 2.13.0
- jekyll-mentions 1.5.1
- jekyll-optional-front-matter 0.3.2
- jekyll-paginate 1.1.0
- jekyll-readme-index 0.3.0
- jekyll-redirect-from 0.15.0
- jekyll-relative-links 0.6.1
- jekyll-remote-theme 0.4.1
- jekyll-sass-converter 1.5.2
- jekyll-seo-tag 2.6.1
- jekyll-sitemap 1.4.0
- jekyll-swiss 1.0.0
- jekyll-theme-architect 0.1.1
- jekyll-theme-cayman 0.1.1
- jekyll-theme-dinky 0.1.1
- jekyll-theme-hacker 0.1.1
- jekyll-theme-leap-day 0.1.1
- jekyll-theme-merlot 0.1.1
- jekyll-theme-midnight 0.1.1
- jekyll-theme-minimal 0.1.1
- jekyll-theme-modernist 0.1.1
- jekyll-theme-primer 0.5.4
- jekyll-theme-slate 0.1.1
- jekyll-theme-tactile 0.1.1
- jekyll-theme-time-machine 0.1.1
- jekyll-titles-from-headings 0.5.3
- jekyll-watch 2.2.1
- jemoji 0.11.1
- kramdown 2.3.0
- kramdown-parser-gfm 1.1.0
- liquid 4.0.3
- listen 3.5.0
- mercenary 0.3.6
- mini_portile2 2.4.0
- minima 2.5.1
- minitest 5.15.0
- multipart-post 2.2.3
- nokogiri 1.10.10
- octokit 4.25.1
- pathutil 0.16.2
- public_suffix 3.1.1
- rb-fsevent 0.11.2
- rb-inotify 0.10.1
- rexml 3.2.5
- rouge 3.19.0
- ruby-enum 0.9.0
- rubyzip 1.3.0
- safe_yaml 1.0.5
- sass 3.7.4
- sass-listen 4.0.0
- sawyer 0.9.2
- simpleidn 0.2.1
- terminal-table 1.8.0
- thread_safe 0.3.6
- typhoeus 1.4.0
- tzinfo 1.2.10
- tzinfo-data 1.2022.5
- unf 0.1.4
- unf_ext 0.0.8.2
- unicode-display_width 1.8.0
- wdm 0.1.1