emnlp23-paraphrase-types
The official implementation of the EMNLP 2023 paper "Paraphrase Types for Generation and Detection"
Science Score: 67.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file found
- ✓ codemeta.json file found
- ✓ .zenodo.json file found
- ✓ DOI references: 1 DOI reference found in README
- ✓ Academic publication links: links to arxiv.org
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (9.5%) to scientific vocabulary
Keywords
Repository
Basic Info
- Host: GitHub
- Owner: jpwahle
- License: apache-2.0
- Language: Python
- Default Branch: main
- Homepage: https://aclanthology.org/2023.emnlp-main.746/
- Size: 311 KB
Statistics
- Stars: 13
- Watchers: 0
- Forks: 2
- Open Issues: 0
- Releases: 0
Topics
Metadata Files
README.md
Paraphrase Types for Generation and Detection

The Repository
This repository implements the EMNLP'23 paper "Paraphrase Types for Generation and Detection".
Demo
A demonstration for Paraphrase Type Generation with an interactive chat window can be found on HuggingFace Spaces.
Data
The preprocessed ETPC dataset with paraphrase types can be found on HuggingFace Datasets.
Data card and loading scripts are under etpc/.
Fine-Tuning
Fine-tune generation models
You can use the src/finetune_generation.py script to train the generation models. Here is an example of how to use it:
```bash
python3 src/finetune_generation.py --model_name <model_name> --task_name <task_name> --device <device>
```
Replace <model_name>, <task_name>, and <device> with your specific values.
- `<model_name>`: the name of the pre-trained model on the Hugging Face Hub.
- `<task_name>`: Paraphrase Type Generation or regular Paraphrase Generation.
- `<device>`: CUDA, CPU, or MPS (for Apple Silicon).
For more details on the parameters, refer to the script src/finetune_generation.py.
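For illustration, the documented flags can be mirrored with a small `argparse` sketch. This is a guess at the interface based solely on the flags listed above; the real src/finetune_generation.py may define additional options:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical mirror of the documented flags; the actual script may
    # accept further arguments (learning rate, epochs, etc.).
    parser = argparse.ArgumentParser(
        description="Fine-tune a paraphrase generation model"
    )
    parser.add_argument("--model_name", required=True,
                        help="Name of a pre-trained model on the Hugging Face Hub")
    parser.add_argument("--task_name", required=True,
                        help='"Paraphrase Type Generation" or regular "Paraphrase Generation"')
    parser.add_argument("--device", default="cuda", choices=["cuda", "cpu", "mps"],
                        help="Compute device (mps for Apple Silicon)")
    return parser

args = build_parser().parse_args(
    ["--model_name", "facebook/bart-base",  # example model name; illustrative only
     "--task_name", "Paraphrase Type Generation"]
)
print(args.device)  # prints "cuda" (the default)
```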
Fine-tune detection models
You can use the src/finetune_detection.py script to train the detection models. Here is an example of how to use it:
```bash
python3 src/finetune_detection.py --model_name <model_name> --task_name <task_name> --device <device>
```
Replace <model_name>, <task_name>, and <device> with your specific values.
- `<model_name>`: the name of the pre-trained model on the Hugging Face Hub.
- `<task_name>`: Paraphrase Type Detection or regular Paraphrase Detection.
- `<device>`: CUDA, CPU, or MPS (for Apple Silicon).
For more details on the parameters, refer to the script src/finetune_detection.py.
Slurm
If you are using a Slurm cluster to manage resources, see slurm_cls.sh and slurm_gen.sh.
Prompt-based learning with LLMs
To generate prompts for both type generation and detection, execute src/generate_prompts_etpc.py.
This creates four files: detection_train.jsonl, detection_test.jsonl, generation_train.jsonl, and generation_test.jsonl, which are used for training and testing detection and generation, respectively. You can generate prompts for QQP analogously using src/generate_prompts_qqp.py.
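Each of these .jsonl files stores one JSON object per line. Below is a minimal stdlib sketch of reading and writing that format; the field names and file name are illustrative, not necessarily the actual schema produced by generate_prompts_etpc.py:

```python
import json
from pathlib import Path

def read_jsonl(path):
    # Parse one JSON object per line, skipping blank lines.
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Illustrative records -- not the script's real field names or contents.
records = [
    {"prompt": "Given the following sentence, generate a paraphrase ...",
     "completion": "The paraphrased sentence."},
]
Path("example_train.jsonl").write_text(
    "\n".join(json.dumps(r) for r in records) + "\n", encoding="utf-8"
)
loaded = read_jsonl("example_train.jsonl")
print(len(loaded))  # prints 1
```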
LLaMA
Update 18-10-2024: We have now also fine-tuned LLaMA 3.1 models (with PEFT / LoRA adapters), which can be found below.
| Model | Params | Dataset | Task | Link |
|---------|--------|---------|------|------|
| LLaMA 3.1 | 8B | ETPC | PTG | llama-3.1-8b-etpc |
| LLaMA 3.1 | 70B | ETPC | PTG | llama-3.1-70b-etpc |
Update 16-12-2023: We have now also fine-tuned LLaMA 2 models (with PEFT / LoRA adapters), which can be found below.
| Model | Params | Dataset | Task | Link |
|---------|--------|---------|------|------|
| LLaMA 2 | 7B | ETPC | PTG | llama-2-7b-etpc |
| LLaMA 2 | 13B | ETPC | PTG | llama-2-13b-etpc |
| LLaMA 2 | 70B | ETPC | PTG | llama-2-70b-etpc |
| LLaMA 2 | 7B | QQP | PD | llama-2-7b-qqp |
| LLaMA 2 | 13B | QQP | PD | llama-2-13b-qqp |
| LLaMA 2 | 70B | QQP | PD | llama-2-70b-qqp |
PTG = Paraphrase Type Generation, PD = Paraphrase Detection
The prompt template for LLaMA-style models is the following:
```txt
"Instruction: {instruction}Given the following sentence, generate a paraphrase with the following types. Sentence: {sentence}. Paraphrase Types: {paraphrase_type}\n\nAnswer:"
```
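Filling the template in Python could look like the sketch below. The example sentence and paraphrase-type label are made up for illustration; only the template string itself comes from above:

```python
# The LLaMA-style prompt template quoted above.
TEMPLATE = (
    "Instruction: {instruction}"
    "Given the following sentence, generate a paraphrase with the following types. "
    "Sentence: {sentence}. Paraphrase Types: {paraphrase_type}\n\nAnswer:"
)

def build_prompt(instruction, sentence, paraphrase_type):
    return TEMPLATE.format(instruction=instruction,
                           sentence=sentence,
                           paraphrase_type=paraphrase_type)

# Example values are illustrative, not drawn from ETPC.
prompt = build_prompt("", "The cat sat on the mat.", "Same Polarity Substitution")
print(prompt.endswith("Answer:"))  # prints True
```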
To run LLaMA, execute src/llama_generation.py or src/llama_detection.py.
```bash
python3 -m torch.distributed.run --nproc_per_node 8 src/llama_generation.py --ckpt_dir <ckpt_dir> --tokenizer_path <tokenizer_path> --data_file <data_file>
```
Replace <ckpt_dir>, <tokenizer_path>, and <data_file> with your specific values.
- `<ckpt_dir>`: the directory where the model checkpoints are stored after downloading from the LLaMA repo.
- `<tokenizer_path>`: the path to the tokenizer used by the model.
- `<data_file>`: the file containing prompts and completions.
For running LLaMA with slurm, use slurm_llama_gen.sh and slurm_llama_cls.sh.
To fine-tune LLaMA, follow the instructions here.
You can load the fine-tuned model with <ckpt_dir> to compare to the prompted model.
Under src/llama_transfer.py, you can test the prompted and fine-tuned model on other paraphrase tasks (e.g., PAWS).
ChatGPT
Update 01-10-2024: We have released the fine-tuned GPT-style models for gpt-3.5-turbo and gpt-4o-mini. Their identifiers are listed below.
| Model | Identifier |
|---------|------|
| gpt-4o-mini | ft:gpt-4o-mini-2024-07-18:personal::ADQ0IcdZ |
| gpt-3.5-turbo | ft:gpt-3.5-turbo-0613:personal::7xbU0xQ2 |
The prompt template for the GPT-style models is the following:
```json
{
  "role": "user",
  "content": "Given the following sentence, generate a paraphrase with the following types. Sentence: {sentence}. Paraphrase Types: {paraphrase_type}"
}
```
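A small helper that fills this user message could look like the following sketch. The example sentence and type label are invented; the template is the one shown above, and whether the scripts also send a system message is not covered here:

```python
def build_chat_message(sentence, paraphrase_type):
    # Fill the user-message template shown above.
    return {
        "role": "user",
        "content": (
            "Given the following sentence, generate a paraphrase with the "
            f"following types. Sentence: {sentence}. "
            f"Paraphrase Types: {paraphrase_type}"
        ),
    }

# Illustrative inputs, not taken from the dataset.
msg = build_chat_message("The cat sat on the mat.", "Same Polarity Substitution")
print(msg["role"])  # prints "user"
```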
To fine-tune GPT-based models, execute src/finetune_chatgpt.py. Specify either the detection_train.jsonl or generation_train.jsonl file that was generated using the generate_prompts_* scripts.
To evaluate the fine-tuned model on paraphrase type generation and detection, run src/eval_type_detection_chatgpt.py and src/eval_generation_chatgpt.py, providing the <model_id> of the fine-tuned model and the <data_file>, which can be generation_test.jsonl or detection_test.jsonl.
To evaluate on QQP, run src/eval_detection_chatgpt.py and src/eval_generation_chatgpt.py with the other generated prompt files.
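As a rough illustration of how detection outputs might be scored, here is a plain label-accuracy sketch; the actual eval scripts may compute different or additional metrics:

```python
def accuracy(predictions, references):
    # Fraction of exact label matches between two equal-length lists.
    if len(predictions) != len(references):
        raise ValueError("predictions and references must be the same length")
    if not references:
        return 0.0
    return sum(p == r for p, r in zip(predictions, references)) / len(references)

# Hypothetical labels: two of three predictions match the reference.
score = accuracy(["paraphrase", "original", "paraphrase"],
                 ["paraphrase", "paraphrase", "paraphrase"])
print(round(score, 3))  # prints 0.667
```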
Contributing
There are many ways in which you can participate in this project, for example:
- Review source code changes
Citation
```bib
@inproceedings{wahle-etal-2023-paraphrase,
    title = "Paraphrase Types for Generation and Detection",
    author = "Wahle, Jan Philip and
      Gipp, Bela and
      Ruas, Terry",
    editor = "Bouamor, Houda and
      Pino, Juan and
      Bali, Kalika",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.746",
    doi = "10.18653/v1/2023.emnlp-main.746",
    pages = "12148--12164",
    abstract = "Current approaches in paraphrase generation and detection heavily rely on a single general similarity score, ignoring the intricate linguistic properties of language. This paper introduces two new tasks to address this shortcoming by considering paraphrase types - specific linguistic perturbations at particular text positions. We name these tasks Paraphrase Type Generation and Paraphrase Type Detection. Our results suggest that while current techniques perform well in a binary classification scenario, i.e., paraphrased or not, the inclusion of fine-grained paraphrase types poses a significant challenge. While most approaches are good at generating and detecting general semantic similar content, they fail to understand the intrinsic linguistic variables they manipulate. Models trained in generating and identifying paraphrase types also show improvements in tasks without them. In addition, scaling these models further improves their ability to understand paraphrase types. We believe paraphrase types can unlock a new paradigm for developing paraphrase models and solving tasks in the future.",
}
```
If you use the ETPC dataset, please also cite:
```bib
@inproceedings{kovatchev-etal-2018-etpc,
    title = "{ETPC} - A Paraphrase Identification Corpus Annotated with Extended Paraphrase Typology and Negation",
    author = "Kovatchev, Venelin and
      Mart{\'\i}, M. Ant{\`o}nia and
      Salam{\'o}, Maria",
    booktitle = "Proceedings of the Eleventh International Conference on Language Resources and Evaluation ({LREC} 2018)",
    month = may,
    year = "2018",
    address = "Miyazaki, Japan",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L18-1221",
}
```
License
Licensed under the Apache 2.0 license. Parts of the code under src/llama are licensed under the LLaMA Community License Agreement.
Owner
- Name: Jan Philip Wahle
- Login: jpwahle
- Kind: user
- Location: Göttingen
- Company: @gipplab
- Website: https://jpwahle.com
- Twitter: jpwahle
- Repositories: 20
- Profile: https://github.com/jpwahle
👨🏼💻 Computer Science Researcher | 📍Göttingen, Germany
Citation (CITATION.bib)
@inproceedings{wahle-etal-2023-paraphrase,
title = {Paraphrase Types for Generation and Detection},
author = {Wahle, Jan Philip and Gipp, Bela and Ruas, Terry},
year = 2023,
month = dec,
booktitle = {Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing},
publisher = {Association for Computational Linguistics},
address = {Singapore, Singapore}
}
GitHub Events
Total
- Watch event: 1
- Push event: 3
Last Year
- Watch event: 1
- Push event: 3
Committers
Last synced: 6 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Jan Philip Wahle | h****o@j****m | 10 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 1
- Total pull requests: 0
- Average time to close issues: 12 days
- Average time to close pull requests: N/A
- Total issue authors: 1
- Total pull request authors: 0
- Average comments per issue: 2.0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- saakethch (1)
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- bert-score *
- datasets *
- evaluate *
- fairscale *
- fire *
- paraphrase-metrics *
- seaborn *
- spacy *
- transformers *