Generating Summaries with Controllable Readability Levels (EMNLP 2023)
https://github.com/amazon-science/controllable-readability-summarization
Science Score: 49.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ✓ DOI references: found 1 DOI reference(s) in README
- ✓ Academic publication links: links to arxiv.org
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (14.2%) to scientific vocabulary
Keywords
Keywords from Contributors
Repository
Generating Summaries with Controllable Readability Levels (EMNLP 2023)
Basic Info
Statistics
- Stars: 14
- Watchers: 5
- Forks: 5
- Open Issues: 2
- Releases: 0
Topics
Metadata Files
README.md
Generating Summaries with Controllable Readability Levels (EMNLP 2023)
This repository contains the code for the paper "Generating Summaries with Controllable Readability Levels".
We developed three text generation techniques for controlling readability: (a) fine-grained instructions that control the readability of the summary directly; (b) a reinforcement learning method in which, given an input document and a target readability level, the policy generates a summary that is scored by our Gaussian-based reward; and (c) a lookahead approach that uses the readability score of a future (partially completed) summary to guide generation.
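For intuition about (b), the sketch below shows one way a Gaussian-shaped reward could penalize the gap between the requested and the observed readability, using the Flesch-Kincaid grade level from py-readability-metrics (listed in the requirements). The function name, choice of metric, and sigma value are illustrative assumptions, not the repository's actual reward code.
```
import math

from readability import Readability  # py-readability-metrics; needs NLTK 'punkt' and texts of >= 100 words

def gaussian_readability_reward(summary: str, target_grade: float, sigma: float = 1.0) -> float:
    """Reward peaks at 1.0 when the summary's observed grade level matches the requested one."""
    observed = Readability(summary).flesch_kincaid().score  # observed readability of the summary
    return math.exp(-((observed - target_grade) ** 2) / (2 * sigma ** 2))
```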
Environment
The easiest way to proceed is to create a conda environment:
conda create -n readability_summ python=3.7
conda activate readability_summ
Then install PyTorch:
conda install pytorch torchvision torchaudio cpuonly -c pytorch
Install the required packages:
pip install -r requirements.txt
Install trlx (for the RL method):
git clone https://github.com/CarperAI/trlx.git
cd trlx
pip install torch --extra-index-url https://download.pytorch.org/whl/cu118
pip install -e .
Preprocess data
To compute readability scores for CNN/DM, run:
cd src/preprocess
python preprocess_cnndm.py
Generate the prompts:
python generate_prompts_category.py
python generate_prompts_score.py
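These scripts attach a readability target to each training example, either as a coarse category or as a numeric score. The snippet below is a purely illustrative sketch of what such prompts could look like; the actual wording, category names, and metric used by generate_prompts_category.py and generate_prompts_score.py may differ.
```
# Hypothetical prompt templates for illustration only; the real prompts are produced by
# generate_prompts_category.py and generate_prompts_score.py and may be worded differently.
def build_score_prompt(document: str, target_score: float) -> str:
    # Fine-grained control: request a summary at a specific readability score.
    return f"Summarize the following article with a Flesch Reading Ease score of {target_score:.0f}: {document}"

def build_category_prompt(document: str, category: str) -> str:
    # Coarse control: request a summary for a named readability level (e.g., "high school").
    return f"Write a summary of the following article for a {category} reader: {document}"
```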
Training
To train the prompt-based methods, run:
cd src/train
./train_cnndm.sh
For the RL method, run:
cd src/train/rl
./train_rl_cnndm.sh
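The RL script builds on trlx. As a rough, hedged sketch (not the repository's actual training code), a Gaussian readability reward like the one above could be registered through trlx.train(); the target grade level, prompt text, and reliance on trlx's default PPO configuration are placeholder assumptions.
```
import math
from typing import List

import trlx
from readability import Readability  # py-readability-metrics

TARGET_GRADE, SIGMA = 8.0, 1.0  # assumed example target readability level

def reward_fn(samples: List[str], **kwargs) -> List[float]:
    # Score each generated summary by how close its grade level is to the target.
    rewards = []
    for summary in samples:
        try:
            observed = Readability(summary).flesch_kincaid().score
        except Exception:  # the library requires at least 100 words
            observed = TARGET_GRADE
        rewards.append(math.exp(-((observed - TARGET_GRADE) ** 2) / (2 * SIGMA ** 2)))
    return rewards

trainer = trlx.train(
    reward_fn=reward_fn,
    prompts=["Summarize the following article at an 8th-grade reading level: <article text>"],  # placeholder
)
```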
Inference
For inference, run:
cd inference/
./inference_score.sh <checkpoint_folder>
./inference_category.sh <checkpoint_folder>
For lookahead inference, run:
./inference_lookahead.sh <checkpoint_folder>
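For intuition about the lookahead method, the following is a heavily simplified, hypothetical sketch: at each decoding step, every candidate next token is scored by greedily completing a short "future" continuation and measuring how far its readability is from the target. The checkpoint name, scoring formula, and hyperparameters are placeholders; the implementation behind inference_lookahead.sh will differ.
```
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from readability import Readability  # py-readability-metrics

MODEL = "facebook/bart-large-cnn"  # placeholder checkpoint, not the paper's fine-tuned model
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL).eval()

@torch.no_grad()
def lookahead_generate(document: str, target_grade: float, max_steps: int = 60,
                       top_k: int = 5, alpha: float = 1.0) -> str:
    enc = tok(document, return_tensors="pt", truncation=True)
    dec_ids = torch.tensor([[model.config.decoder_start_token_id]])
    for _ in range(max_steps):
        logits = model(**enc, decoder_input_ids=dec_ids).logits[:, -1, :]
        log_probs, cand_ids = torch.log_softmax(logits, dim=-1).topk(top_k)
        best_ids, best_score = None, float("-inf")
        for lp, tid in zip(log_probs[0], cand_ids[0]):
            cand = torch.cat([dec_ids, tid.view(1, 1)], dim=-1)
            # Lookahead: greedily extend this candidate prefix into a short "future" summary.
            future = model.generate(**enc, decoder_input_ids=cand,
                                    max_new_tokens=30, num_beams=1, do_sample=False)
            text = tok.decode(future[0], skip_special_tokens=True)
            try:
                gap = abs(Readability(text).flesch_kincaid().score - target_grade)
            except Exception:  # py-readability-metrics needs >= 100 words
                gap = 0.0
            score = lp.item() - alpha * gap  # assumed trade-off between fluency and readability gap
            if score > best_score:
                best_ids, best_score = cand, score
        dec_ids = best_ids
        if dec_ids[0, -1].item() == tok.eos_token_id:
            break
    return tok.decode(dec_ids[0], skip_special_tokens=True)
```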
Security
See CONTRIBUTING for more information.
License Summary
The documentation is made available under the CC-BY-NC-4.0 License. See the LICENSE file.
Citation
```
@inproceedings{ribeiro-etal-2023-generating,
    title = "Generating Summaries with Controllable Readability Levels",
    author = "Ribeiro, Leonardo F. R. and Bansal, Mohit and Dreyer, Markus",
    editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.714",
    doi = "10.18653/v1/2023.emnlp-main.714",
    pages = "11669--11687",
    abstract = "Readability refers to how easily a reader can understand a written text. Several factors affect the readability level, such as the complexity of the text, its subject matter, and the reader{'}s background knowledge. Generating summaries based on different readability levels is critical for enabling knowledge consumption by diverse audiences. However, current text generation approaches lack refined control, resulting in texts that are not customized to readers{'} proficiency levels. In this work, we bridge this gap and study techniques to generate summaries at specified readability levels. Unlike previous methods that focus on a specific readability level (e.g., lay summarization), we generate summaries with fine-grained control over their readability. We develop three text generation techniques for controlling readability: (1) instruction-based readability control, (2) reinforcement learning to minimize the gap between requested and observed readability and (3) a decoding approach that uses lookahead to estimate the readability of upcoming decoding steps. We show that our generation methods significantly improve readability control on news summarization (CNN/DM dataset), as measured by various readability metrics and human judgement, establishing strong baselines for controllable readability in summarization.",
}
```
Owner
- Name: Amazon Science
- Login: amazon-science
- Kind: organization
- Website: https://amazon.science
- Twitter: AmazonScience
- Repositories: 80
- Profile: https://github.com/amazon-science
GitHub Events
Total
- Issues event: 1
- Watch event: 1
- Delete event: 4
- Issue comment event: 1
- Push event: 3
- Pull request event: 5
- Fork event: 3
- Create event: 5
Last Year
- Issues event: 1
- Watch event: 1
- Delete event: 4
- Issue comment event: 1
- Push event: 3
- Pull request event: 5
- Fork event: 3
- Create event: 5
Committers
Last synced: 7 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| dependabot[bot] | 4****] | 3 |
| Leonardo Ribeiro | l****o@g****m | 1 |
| Amazon GitHub Automation | 5****o | 1 |
| Ribeiro | l****e@b****m | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 7 months ago
All Time
- Total issues: 0
- Total pull requests: 2
- Average time to close issues: N/A
- Average time to close pull requests: 2 months
- Total issue authors: 0
- Total pull request authors: 1
- Average comments per issue: 0
- Average comments per pull request: 0.5
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 2
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- aayush-5 (1)
Pull Request Authors
- dependabot[bot] (7)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- accelerate ==0.19.0
- datasets ==2.12.0
- deepspeed ==0.9.2
- evaluate ==0.4.0
- py-readability-metrics ==1.4.4
- rouge-score ==0.1.2
- transformers ==4.29.2
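The pinned rouge-score and py-readability-metrics packages suggest generated summaries are checked both for content overlap and for readability. The snippet below is a small hedged example of how these two libraries can be used together; it is illustrative and not taken from the repository's evaluation scripts.
```
# Illustrative evaluation sketch using two of the pinned dependencies; not the repo's own code.
import nltk
from readability import Readability    # py-readability-metrics
from rouge_score import rouge_scorer   # rouge-score

nltk.download("punkt")  # required by py-readability-metrics for tokenization

reference = "The quick brown fox jumps over the lazy dog. " * 20   # placeholder gold summary (>= 100 words)
generated = "A fox jumped over a sleeping dog in the yard today. " * 20  # placeholder model output

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
print(scorer.score(reference, generated))             # content overlap (ROUGE)
print(Readability(generated).flesch_kincaid().score)  # readability (Flesch-Kincaid grade level)
```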