https://github.com/ai4co/reevo
[NeurIPS 2024] ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution
Science Score: 36.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file (found)
- ✓ .zenodo.json file (found)
- ○ DOI references
- ✓ Academic publication links (arxiv.org, scholar.google)
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity, 14.0%, to scientific vocabulary)
Keywords
Repository
Basic Info
- Host: GitHub
- Owner: ai4co
- License: mit
- Language: Python
- Default Branch: main
- Homepage: https://ai4co.github.io/reevo/
- Size: 26.7 MB
Statistics
- Stars: 178
- Watchers: 4
- Forks: 38
- Open Issues: 0
- Releases: 0
Topics
Metadata Files
README.md
[NeurIPS 2024] ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution
🥳 Welcome! This is a codebase that accompanies the paper ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution.
Give ReEvo 5 minutes, and get a state-of-the-art algorithm in return!
Table of Contents
- 1. News 📰
- 2. Introduction 🚀
- 3. Exciting Highlights 🌟
- 4. Usage 🔑
- 4.1. Installation
- 4.2. To run ReEvo
- 4.3. Available problems
- 4.4. Simple steps to apply ReEvo to your problem
- 4.5. Use Alternative LLMs
- 5. Citation 🤩
- 6. Acknowledgments 🫡
1. News 📰
- Sep. 2024: ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution has been accepted at NeurIPS 2024 🥳
- May 2024: We release a new paper version
- Apr. 2024: Novel use cases for Neural Combinatorial Optimization (NCO) and Electronic Design Automation (EDA)
- Feb. 2024: We are excited to release ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution 🚀
2. Introduction 🚀

We introduce Language Hyper-Heuristics (LHHs), an emerging variant of Hyper-Heuristics (HHs) that leverages LLMs for heuristic generation, featuring minimal manual intervention and open-ended heuristic spaces.
To empower LHHs, we present Reflective Evolution (ReEvo), a generic search framework that emulates the reflective design approach of human experts while surpassing human capabilities through scalable LLM inference, Internet-scale domain knowledge, and powerful evolutionary search.
3. Exciting Highlights 🌟
We can improve the following types of algorithms:
- Neural Combinatorial Optimization (NCO)
- Genetic Algorithm (GA)
- Ant Colony Optimization (ACO)
- Guided Local Search (GLS)
- Constructive Heuristics

on the following problems:
- Traveling Salesman Problem (TSP)
- Capacitated Vehicle Routing Problem (CVRP)
- Orienteering Problem (OP)
- Multiple Knapsack Problems (MKP)
- Bin Packing Problem (BPP)
- Decap Placement Problem (DPP)

in both black-box and white-box settings.
4. Usage 🔑
- Set your LLM API key (OpenAI API, ZhiPu API, Llama API) as an environment variable, or like this:

```bash
python main.py llm_client=openai llm_client.api_key="<Your API key>"  # see more options in ./cfg/llm_client
```

- Running logs and intermediate results are saved in `./outputs/main/` by default.
- Datasets are generated on the fly.
- Some test notebooks are provided in `./problems/*/test.ipynb`.
4.1. Installation
> [!TIP]
> We recommend using uv for lightning-fast installation and dependency management (see details below); otherwise, read on!

We recommend using [uv](https://github.com/astral-sh/uv) for faster installation and dependency management. To install it, run:

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Then, clone the repository and cd into it:

```bash
git clone git@github.com:ai4co/reevo.git
cd reevo
```

Create a new virtual environment and activate it:

```bash
uv venv --python 3.12
source .venv/bin/activate
```

Then synchronize the dependencies:

```bash
uv sync --all-extras
```

This installs all the optional dependencies; you can drop `--all-extras` to install only the default dependencies, or add e.g. `--extra aco --extra gls` to install only the ACO- and GLS-specific ones (see [`pyproject.toml`](pyproject.toml) for more details).

Alternatively, after cloning the repo, you can install the dependencies locally on Python >= 3.11 as follows:

```bash
pip install -e ".[gls,aco,nco]"
```

where `gls`, `aco`, and `nco` are the optional dependencies for GLS, ACO, and NCO problems (remove them if not needed).
4.2. To run ReEvo
```bash
# e.g., for tsp_aco:
#   problem        problem name
#   init_pop_size  initial population size
#   pop_size       population size
#   max_fe         maximum number of heuristic evaluations
#   timeout        allowed evaluation time for one generation
python main.py problem=tsp_aco init_pop_size=4 pop_size=4 max_fe=20 timeout=20
```

Check out `./cfg/` for more options.
4.3. Available problems
- Traveling Salesman Problem (TSP): `tsp_aco`, `tsp_aco_black_box`, `tsp_constructive`, `tsp_gls`, `tsp_pomo`, `tsp_lehd`
- Capacitated Vehicle Routing Problem (CVRP): `cvrp_aco`, `cvrp_aco_black_box`, `cvrp_pomo`, `cvrp_lehd`
- Bin Packing Problem (BPP): `bpp_offline_aco`, `bpp_offline_aco_black_box`, `bpp_online`
- Multiple Knapsack Problems (MKP): `mkp_aco`, `mkp_aco_black_box`
- Orienteering Problem (OP): `op_aco`, `op_aco_black_box`
- Decap Placement Problem (DPP): `dpp_ga`
4.4. Simple steps to apply ReEvo to your problem
- Define your problem in `./cfg/problem/`.
- Generate problem instances and implement the evaluation pipeline in `./problems/`.
- Add `function_description`, `function_signature`, and `seed_function` in `./prompts/`.
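To make the seed-function step concrete, here is a hedged sketch of what a seed heuristic might look like for a hypothetical TSP-ACO-style setup. The function name, signature, and epsilon value are illustrative only; the real signature must match the `function_signature` you place in `./prompts/`.

```python
import numpy as np

def heuristics(distance_matrix: np.ndarray) -> np.ndarray:
    """Seed heuristic (illustrative): edge attractiveness is the
    inverse of edge length. ReEvo evolves variants of this body,
    writing LLM-generated replacements to gpt.py."""
    # A small epsilon avoids division by zero on the (zero) diagonal.
    return 1.0 / (distance_matrix + 1e-10)
```

The seed function gives the evolutionary search a valid, if naive, starting point; ReEvo's reflections then guide the LLM toward stronger variants.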
By default:
- The LLM-generated heuristic is written to `./problems/YOUR_PROBLEM/gpt.py` and imported by `./problems/YOUR_PROBLEM/eval.py` (e.g., for TSP_ACO), which is called by `reevo._run_code` during ReEvo.
- In training mode, `./problems/YOUR_PROBLEM/eval.py` (e.g., for TSP_ACO) should print the meta-objective value as the last line of stdout, which is parsed by `reevo.evaluate_population` for heuristic evaluation.
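As a rough, self-contained sketch of such an evaluation script: the heuristic below is inlined as a stand-in (in the real pipeline it would be imported from the LLM-generated `gpt.py`), and the greedy solver, instance generator, and all parameter names are illustrative assumptions, not the repository's actual evaluation code.

```python
import numpy as np

# Stand-in for the LLM-generated heuristic; in the real pipeline this
# would be imported from ./problems/YOUR_PROBLEM/gpt.py.
def heuristics(distance_matrix: np.ndarray) -> np.ndarray:
    return 1.0 / (distance_matrix + 1e-10)

def evaluate(n_instances: int = 5, n_nodes: int = 20) -> float:
    """Average greedy-tour cost over random instances (illustrative)."""
    rng = np.random.default_rng(0)
    costs = []
    for _ in range(n_instances):
        dist = rng.random((n_nodes, n_nodes))
        np.fill_diagonal(dist, 0.0)
        heu = heuristics(dist)
        # Greedy construction guided by the heuristic values.
        tour, visited, cur = [0], {0}, 0
        while len(tour) < n_nodes:
            nxt = max((j for j in range(n_nodes) if j not in visited),
                      key=lambda j: heu[cur, j])
            tour.append(nxt)
            visited.add(nxt)
            cur = nxt
        costs.append(sum(dist[tour[i], tour[i + 1]]
                         for i in range(n_nodes - 1)))
    return float(np.mean(costs))

if __name__ == "__main__":
    # The meta-objective must be the LAST line of stdout: ReEvo parses
    # stdout's final line as the fitness value in training mode.
    print(evaluate())
```

The essential contract is only the last line of stdout; everything before it (logs, warnings) is ignored by the parser.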
4.5. Use Alternative LLMs
Use the CLI parameter `llm_client` to designate an LLM API provider, and `llm_client.model` to determine the model to use. For example:

```bash
export LLAMA_API_KEY=xxxxxxxxxxxxxxxxxxxx
python main.py llm_client=llama_api llm_client.model=gemma2-9b
```

Supported LLM API providers and models include (note that only chat models are supported):
- OpenAI: gpt-3.5-turbo (default), gpt-4o, gpt-4o-mini, gpt-4-turbo, etc.
- Zhipu AI: GLM-3-Turbo, GLM-4-Air, GLM-4-0520, etc. (full list)
- DeepSeek: deepseek-chat
- Moonshot AI: moonshot-v1-8k/32k/128k
- Llama API: llama3.1-8b/70b/405b, gemma2-9b/27b, Qwen2-72B, etc. (full list)
- And more providers supported via LiteLLM.
5. Citation 🤩
If you encounter any difficulty using our code, please do not hesitate to submit an issue or directly contact us!
We are also on Slack if you have any questions or would like to discuss ReEvo with us. We are open to collaborations and would love to hear from you 🚀
If you find our work helpful (or if you are so kind as to offer us some encouragement), please consider giving us a star, and citing our paper.
```bibtex
@inproceedings{ye2024reevo,
    title={ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution},
    author={Ye, Haoran and Wang, Jiarui and Cao, Zhiguang and Berto, Federico and Hua, Chuanbo and Kim, Haeyeon and Park, Jinkyoo and Song, Guojie},
    booktitle={Advances in Neural Information Processing Systems},
    year={2024},
    note={\url{https://github.com/ai4co/reevo}}
}
```
6. Acknowledgments 🫡
We are very grateful to Yuan Jiang, Yining Ma, Yifan Yang, and the AI4CO community for valuable discussions and feedback.
Also, our work is built upon the following projects, among others:
- DeepACO: Neural-enhanced Ant Systems for Combinatorial Optimization
- Eureka: Human-Level Reward Design via Coding Large Language Models
- Algorithm Evolution Using Large Language Model
- Mathematical discoveries from program search with large language models
- An Example of Evolutionary Computation + Large Language Model Beating Human: Design of Efficient Guided Local Search
- Evolution of Heuristics: Towards Efficient Automatic Algorithm Design Using Large Language Model
- DevFormer: A Symmetric Transformer for Context-Aware Device Placement
Owner
- Name: ai4co
- Login: ai4co
- Kind: organization
- Repositories: 1
- Profile: https://github.com/ai4co
GitHub Events
Total
- Issues event: 13
- Watch event: 80
- Issue comment event: 34
- Push event: 23
- Pull request review comment event: 2
- Pull request review event: 8
- Pull request event: 10
- Fork event: 19
- Create event: 3
Last Year
- Issues event: 13
- Watch event: 80
- Issue comment event: 34
- Push event: 23
- Pull request review comment event: 2
- Pull request review event: 8
- Pull request event: 10
- Fork event: 19
- Create event: 3
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 3
- Total pull requests: 4
- Average time to close issues: 17 days
- Average time to close pull requests: about 19 hours
- Total issue authors: 3
- Total pull request authors: 2
- Average comments per issue: 1.0
- Average comments per pull request: 0.0
- Merged pull requests: 3
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 3
- Pull requests: 4
- Average time to close issues: 17 days
- Average time to close pull requests: about 19 hours
- Issue authors: 3
- Pull request authors: 2
- Average comments per issue: 1.0
- Average comments per pull request: 0.0
- Merged pull requests: 3
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- ruibo5 (1)
- gtthbkha (1)
- Jason36912 (1)
- SnoopD201 (1)
- sarahamore (1)
Pull Request Authors
- fedebotu (3)
- HeXCatalyst (2)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- PyYAML ==6.0.1
- annotated-types ==0.6.0
- antlr4-python3-runtime ==4.9.3
- anyio ==4.2.0
- certifi ==2023.11.17
- distro ==1.9.0
- h11 ==0.14.0
- httpcore ==1.0.2
- httpx ==0.26.0
- hydra-core ==1.3.2
- idna ==3.6
- numpy ==1.26.3
- omegaconf ==2.3.0
- openai ==1.8.0
- packaging ==23.2
- pydantic ==2.5.3
- pydantic_core ==2.14.6
- scipy ==1.11.4
- sniffio ==1.3.0
- tqdm ==4.66.1
- typing_extensions ==4.9.0