https://github.com/amazon-science/mada_optimizer_search

Code for the ICML 2024 paper: "MADA: Meta-Adaptive Optimizers through hyper-gradient Descent"

Science Score: 36.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (13.1%) to scientific vocabulary

Keywords

adam-optimizer deep-neural-networks gpt-2 large-language-models machine-learning machine-learning-algorithms meta-optimizer optimization optimization-algorithms
Last synced: 5 months ago

Repository

Code for the ICML 2024 paper: "MADA: Meta-Adaptive Optimizers through hyper-gradient Descent"

Basic Info
Statistics
  • Stars: 4
  • Watchers: 1
  • Forks: 1
  • Open Issues: 0
  • Releases: 0
Topics
adam-optimizer deep-neural-networks gpt-2 large-language-models machine-learning machine-learning-algorithms meta-optimizer optimization optimization-algorithms
Created almost 2 years ago · Last pushed over 1 year ago
Metadata Files
Readme · Contributing · License · Code of conduct

README.md

MADA: Meta-Adaptive Optimizers through hyper-gradient Descent

Authors: Kaan Ozkara, Can Karakus, Parameswaran Raman, Mingyi Hong, Shoham Sabach, Branislav Kveton, Volkan Cevher

This repository includes the code to simulate the experiments in our paper "MADA: Meta-Adaptive Optimizers through hyper-gradient Descent". The GPT training code is based on nanoGPT by Andrej Karpathy (https://github.com/karpathy/nanoGPT). The meta-optimizer implementation is inspired by gradient-descent-the-ultimate-optimizer (https://github.com/kach/gradient-descent-the-ultimate-optimizer/tree/main).

./config includes configuration files that control the parameters in the code.

./results includes some of the results mentioned in the project's Quip document.

./gdtuo.py is the implementation of the meta-optimizer via hypergradient descent (a toy sketch of the idea appears after this file list).

./model.py includes a generic GPT-2-style implementation from nanoGPT.

The ./plot... .py files are used to plot the results stored in ./results.

train.py, train_ddp.py, toy.py, and toy2.py are the scripts used to run experiments.

train_ddp.py is the latest run script and has from-scratch support for DDP and gradient accumulation.
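Roughly, hypergradient descent treats the optimizer's own hyperparameters as differentiable quantities: the parameter update is kept inside the autograd graph, and the loss after the update is differentiated with respect to the hyperparameters to adjust them. The toy sketch below illustrates this for a single scalar learning rate on a one-parameter regression problem; it is a simplified illustration only, not the implementation in ./gdtuo.py, which handles Adam-style optimizer state and the MADA parameterization.

# Minimal hypergradient-descent sketch (illustrative only).
import torch

# Toy data and a one-parameter linear model: y = w * x, target w = 3.
x = torch.randn(64)
y = 3.0 * x

w = torch.tensor(0.0)                       # model parameter
lr = torch.tensor(0.1, requires_grad=True)  # hyperparameter updated by hypergradients
hyper_lr = 1e-2                             # step size for the hyperparameter itself

for step in range(100):
    w = w.detach().requires_grad_(True)

    # Inner loss and its gradient w.r.t. the model parameter.
    loss = ((w * x - y) ** 2).mean()
    (g,) = torch.autograd.grad(loss, w, create_graph=True)

    # Keep the update inside the autograd graph, then evaluate the post-update loss.
    w_next = w - lr * g
    next_loss = ((w_next * x - y) ** 2).mean()

    # Hypergradient step: differentiate the post-update loss w.r.t. the learning rate.
    (hyper_g,) = torch.autograd.grad(next_loss, lr)
    with torch.no_grad():
        lr -= hyper_lr * hyper_g

    w = w_next.detach()

print(f"learned w = {w.item():.3f}, adapted lr = {lr.item():.3f}")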

Example run:

python train_ddp.py config/train_gpt2_small.py --dtype='float32' --beta1=0.9 --beta2=0.95 --beta3=0.0 --rho=0.6 --c=1.0 --gamma=1.0

The arguments here set the initial values of the optimizer parameters. Additional variables for the nanoGPT run (e.g. to control logging, gradient accumulation, and so on) can also be passed if needed. At the moment, changing the hypergradient hyperparameters (such as the hypergradient learning rate) and the DDP size requires editing the code. The output directory for saving log files is set to an FSx path and also needs to be changed inside the code. There are two types of logging: the first logs the optimizer parameters, training loss, and validation loss every log_iter iterations; the second logs once at the end of the run.
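For reference, nanoGPT-style training scripts typically consume such arguments through a small "configurator" that sets module-level defaults, executes the given config file, and then applies --key=value overrides to matching globals. The sketch below shows that pattern in a simplified form; the variable names are hypothetical placeholders and the exact set accepted by train_ddp.py may differ.

# Configurator-style override sketch (mirrors the nanoGPT pattern; not
# necessarily the exact code in this repository).
import sys
from ast import literal_eval

# Module-level defaults; these names are illustrative stand-ins.
beta1 = 0.9
beta2 = 0.95
dtype = 'bfloat16'

for arg in sys.argv[1:]:
    if not arg.startswith('--'):
        # Treat the argument as a config file whose assignments override the defaults.
        exec(open(arg).read())
    else:
        # Expect --key=value; override the matching module-level variable.
        key, val = arg[2:].split('=', 1)
        assert key in globals(), f"unknown config key: {key}"
        try:
            parsed = literal_eval(val)   # numbers, booleans, quoted strings, ...
        except (ValueError, SyntaxError):
            parsed = val                 # otherwise keep the raw string
        globals()[key] = parsed

print(f"beta1={beta1}, beta2={beta2}, dtype={dtype}")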

Citation

Please consider citing our paper if you use our code:

@misc{ozkara2024mada,
  title={MADA: Meta-Adaptive Optimizers through hyper-gradient Descent},
  author={Kaan Ozkara and Can Karakus and Parameswaran Raman and Mingyi Hong and Shoham Sabach and Branislav Kveton and Volkan Cevher},
  year={2024},
  eprint={2401.08893},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}

Security

See CONTRIBUTING for more information.

License

This project is licensed under the Apache-2.0 License.

Owner

  • Name: Amazon Science
  • Login: amazon-science
  • Kind: organization

GitHub Events

Total
  • Watch event: 1
Last Year
  • Watch event: 1

Issues and Pull Requests

Last synced: over 1 year ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0