Updated 6 months ago
dpo-rlhf-paraphrase-types
Enhancing paraphrase-type generation using Direct Preference Optimization (DPO) and Reinforcement Learning from Human Feedback (RLHF), with large-scale HPC support. This project aligns model outputs to human-ranked data for robust, safety-focused NLP.
Updated 5 months ago
https://github.com/cluebbers/adverserial-paraphrasing
Evaluate how LLaMA 3.1 8B handles paraphrased adversarial prompts targeting refusal behavior.