https://github.com/huggingface/search-and-learn

Recipes to scale inference-time compute of open models

Science Score: 23.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.4%) to scientific vocabulary
Last synced: 4 months ago

Repository

Recipes to scale inference-time compute of open models

Basic Info
  • Host: GitHub
  • Owner: huggingface
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 884 KB
Statistics
  • Stars: 1,109
  • Watchers: 9
  • Forks: 123
  • Open Issues: 16
  • Releases: 0
Created about 1 year ago · Last pushed 9 months ago
Metadata Files
Readme License

README.md

🤗 Models & Datasets | 📃 Blog Post

Search and Learn

Recipes to enhance LLM capabilities by scaling inference-time compute. Name inspired by Rich Sutton's Bitter Lesson:

One thing that should be learned from the bitter lesson is the great power of general purpose methods, of methods that continue to scale with increased computation even as the available computation becomes very great. The two methods that seem to scale arbitrarily in this way are search and learning.

What is this?

Over the last few years, the scaling of train-time compute has dominated the progress of LLMs. Although this paradigm has proven to be remarkably effective, the resources needed to pretrain ever larger models are becoming prohibitively expensive, with billion-dollar clusters already on the horizon. This trend has sparked significant interest in a complementary approach: test-time compute scaling. Rather than relying on ever-larger pretraining budgets, test-time methods use dynamic inference strategies that allow models to “think longer” on harder problems. A prominent example is OpenAI’s o1 model, which shows consistent improvement on difficult math and coding problems as one increases the amount of test-time compute.

Although we don't know how o1 was trained, Search and Learn aims to fill that gap by providing the community with a series of recipes that enable open models to solve complex problems if you give them enough “time to think”.

News 🗞️

  • December 16, 2024: Initial release with code to replicate the test-time compute scaling results of our blog post.

How to navigate this project 🧭

This project is simple by design and mostly consists of:

  • scripts to scale test-time compute for open models.
  • recipes to apply different search algorithms at test-time. Three algorithms are currently supported: Best-of-N, beam search, and Diverse Verifier Tree Search (DVTS). Each recipe takes the form of a YAML file which contains all the parameters associated with a single inference run.
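As a rough illustration, one of these YAML recipes might look like the sketch below. The field names and values here are hypothetical, chosen only to show the shape of a single-run config; consult the actual files in the `recipes` directory for the project's real schema:

```yaml
# Hypothetical recipe for a single Best-of-N inference run.
# Field names are illustrative, not the project's actual schema.
approach: best_of_n
model_path: meta-llama/Llama-3.2-1B-Instruct   # policy model (example)
prm_path: example-org/process-reward-model      # PRM checkpoint (placeholder)
n: 32               # number of candidate completions per problem
temperature: 0.8    # sampling temperature for the policy model
max_tokens: 2048    # generation budget per candidate
```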

To get started, we recommend the following:

  1. Follow the installation instructions to set up your environment etc.
  2. Replicate our test-time compute results by following the recipe instructions.

Contents

The initial release of Search and Learn will focus on the following techniques:

  • Search against verifiers: guide LLMs to search for solutions to "verifiable problems" (math, code) by using a stepwise or process reward model to score each step. Includes techniques like Best-of-N sampling and tree search.
  • Training process reward models: train reward models to provide a sequence of scores, one for each step of the reasoning process. This ability to provide fine-grained feedback makes PRMs a natural fit for search methods with LLMs.
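To make the search-against-verifiers idea concrete, here is a minimal, self-contained sketch of Best-of-N selection with stepwise scoring. The "reward model" below is a toy stand-in (it just scores longer steps higher), and aggregating step scores by taking their product is one common convention; none of these names come from this repository's API:

```python
import math

def toy_prm_score(steps):
    """Toy stand-in for a process reward model: assigns each reasoning
    step a score in (0, 1]. Here longer steps score higher, purely for
    demonstration."""
    return [min(1.0, len(s) / 20) for s in steps]

def aggregate(step_scores):
    """Aggregate stepwise scores into one trajectory score (product rule)."""
    return math.prod(step_scores)

def best_of_n(candidates):
    """Best-of-N: sample N candidate solutions (given here as lists of
    reasoning steps) and keep the one the scorer rates highest."""
    return max(candidates, key=lambda steps: aggregate(toy_prm_score(steps)))

# Two candidate "solutions", each a list of reasoning steps.
candidates = [
    ["Let x = 2", "Then 2x = 4"],
    ["Compute the derivative of f", "Set it to zero", "Solve for x"],
]
best = best_of_n(candidates)
```

In a real run the candidates would be sampled from an LLM and scored by a trained PRM; the selection logic, however, stays this simple.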

Installation instructions

To run the code in this project, first, create a Python virtual environment using e.g. Conda:

```shell
conda create -n sal python=3.11 && conda activate sal
```

Then install the package and its dependencies in editable mode from the root of the repository:

```shell
pip install -e '.[dev]'
```

Next, log into your Hugging Face account as follows:

```shell
huggingface-cli login
```

Finally, install Git LFS so that you can push models to the Hugging Face Hub:

```shell
sudo apt-get install git-lfs
```

You can now check out the scripts and recipes directories for instructions on how to scale test-time compute for open models!

Project structure

```
├── LICENSE
├── Makefile       <- Makefile with commands like `make style`
├── README.md      <- The top-level README for developers using this project
├── recipes        <- Recipe configs, accelerate configs, slurm scripts
├── scripts        <- Scripts to scale test-time compute for models
├── pyproject.toml <- Installation config (mostly used for configuring code quality & tests)
├── setup.py       <- Makes project pip installable (pip install -e .) so `sal` can be imported
├── src            <- Source code for use in this project
└── tests          <- Unit tests
```

Replicating our test-time compute results

The recipes README includes the launch commands and config files needed to replicate our results.

Citation

If you find the content of this repo useful in your work, please cite it as follows:

```
@misc{beeching2024scalingtesttimecompute,
  title={Scaling test-time compute with open models},
  author={Edward Beeching and Lewis Tunstall and Sasha Rush},
  url={https://huggingface.co/spaces/HuggingFaceH4/blogpost-scaling-test-time-compute},
}
```

Please also cite the original work by DeepMind upon which this repo is based:

```
@misc{snell2024scalingllmtesttimecompute,
  title={Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters},
  author={Charlie Snell and Jaehoon Lee and Kelvin Xu and Aviral Kumar},
  year={2024},
  eprint={2408.03314},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2408.03314},
}
```

Owner

  • Name: Hugging Face
  • Login: huggingface
  • Kind: organization
  • Location: NYC + Paris

The AI community building the future.

GitHub Events

Total
  • Create event: 7
  • Issues event: 34
  • Watch event: 1,016
  • Delete event: 4
  • Issue comment event: 59
  • Public event: 1
  • Push event: 29
  • Pull request review event: 10
  • Pull request review comment event: 4
  • Pull request event: 34
  • Fork event: 118
Last Year
  • Create event: 7
  • Issues event: 34
  • Watch event: 1,016
  • Delete event: 4
  • Issue comment event: 59
  • Public event: 1
  • Push event: 29
  • Pull request review event: 10
  • Pull request review comment event: 4
  • Pull request event: 34
  • Fork event: 118

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 18
  • Total pull requests: 14
  • Average time to close issues: 18 days
  • Average time to close pull requests: 9 days
  • Total issue authors: 17
  • Total pull request authors: 8
  • Average comments per issue: 0.83
  • Average comments per pull request: 0.14
  • Merged pull requests: 10
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 18
  • Pull requests: 14
  • Average time to close issues: 18 days
  • Average time to close pull requests: 9 days
  • Issue authors: 17
  • Pull request authors: 8
  • Average comments per issue: 0.83
  • Average comments per pull request: 0.14
  • Merged pull requests: 10
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • ShayekhBinIslam (2)
  • davromano (2)
  • VigneshHexo (1)
  • jianingh520 (1)
  • HossamAmer12 (1)
  • potatoQi (1)
  • pss0204 (1)
  • blattimer-asapp (1)
  • Aaronhuang-778 (1)
  • qgallouedec (1)
  • slavakurilyak (1)
  • weizier (1)
  • dbsrlskfdk (1)
  • jiogenes (1)
  • Fuyujia799 (1)
Pull Request Authors
  • qgallouedec (4)
  • lewtun (4)
  • ShayekhBinIslam (2)
  • Ritvik19 (2)
  • plaguss (2)
  • EvilFreelancer (1)
  • edbeeching (1)
  • sergiopaniego (1)
  • dwiddows (1)
  • chadbrewbaker (1)
  • bingps (1)
  • YoshikiTakashima (1)
Top Labels
Issue Labels
Pull Request Labels

Dependencies

.github/workflows/quality.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
setup.py pypi
  • accelerate *
  • fastapi *
  • latex2sympy2 ==1.9.1
  • pebble *
  • transformers *
  • word2number *