Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (6.7%) to scientific vocabulary
Last synced: 6 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: kesperinc
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Size: 60.5 MB
Statistics
  • Stars: 0
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created almost 3 years ago · Last pushed almost 3 years ago
Metadata Files
Readme License Citation Codeowners

README-MUP.md

How to use Mup (https://github.com/microsoft/mup)

Add mup neox args to your config

```
# mup

"use-mup": true,
"save-base-shapes": false, # this only needs to be enabled once in order to generate the base-shapes-file on each rank
"base-shapes-file": "base-shapes", # load base shapes from this file
"coord-check": false, # generate coord check plots to verify mup's implementation in neox

# mup hp search

"mup-init-scale": 1.0,
"mup-attn-temp": 1.0,
"mup-output-temp": 1.0,
"mup-embedding-mult": 1.0,
"mup-rp-embedding-mult": 1.0,
```

Generate base shapes

  1. Set use-mup to true
  2. Set save-base-shapes to true
  3. Run once. gpt-neox will instantiate a base model and a delta model, then save one base-shapes file per rank (named according to the `base-shapes-file` setting) and exit immediately.
  4. Set save-base-shapes to false
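Conceptually, the base-shapes file records, for each parameter, which dimensions scale with model width. A stdlib-only sketch of that idea (the parameter names and shapes below are hypothetical; the real file is produced by the `mup` library by comparing two instantiated models):

```python
# Sketch: infer which dimensions are "width" dimensions by comparing a base
# model's parameter shapes against a delta model whose widths differ.
# (Hypothetical shapes; not the actual mup implementation.)

def make_base_shapes(base_shapes, delta_shapes):
    """Mark dims that differ between base and delta as width dims,
    recording the base size of each dim."""
    result = {}
    for name, base in base_shapes.items():
        delta = delta_shapes[name]
        result[name] = [
            {"size": b, "width_dim": b != d}
            for b, d in zip(base, delta)
        ]
    return result

base = {"embed.weight": (50304, 256), "mlp.weight": (1024, 256)}
delta = {"embed.weight": (50304, 512), "mlp.weight": (2048, 512)}

shapes = make_base_shapes(base, delta)
print(shapes["embed.weight"])  # vocab dim is fixed; hidden dim is a width dim
```

Because the vocabulary size does not change between the two models, only the hidden dimension is flagged as a width dimension; muP then scales init and LR only along such dims.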

Generate coord check plots (optional)

  1. Keep use-mup true
  2. Set coord-check to true
  3. Run once. gpt-neox will output jpg images similar to https://github.com/microsoft/mutransformers/blob/main/README.md#coord-check, then exit immediately.
  4. Set coord-check to false
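A coord check plots the typical coordinate size of activations as width grows: under a correct muP setup the curves stay roughly flat, while a parametrization bug makes them blow up or shrink with width. A stdlib-only toy of what the plot detects (an assumed single-layer setup, not the neox code): with fan_in-scaled weight variance the output coordinate RMS stays O(1) across widths, while an unscaled init grows like sqrt(width).

```python
import math
import random

random.seed(0)

def coord_rms(width, std):
    """RMS coordinate size of y = W x for random W (given std) and x ~ O(1)."""
    x = [random.gauss(0, 1) for _ in range(width)]
    y = [sum(random.gauss(0, std) * xi for xi in x) for _ in range(64)]
    return math.sqrt(sum(v * v for v in y) / len(y))

for width in (64, 256, 1024):
    good = coord_rms(width, 1 / math.sqrt(width))  # fan_in-scaled init: stays O(1)
    bad = coord_rms(width, 1.0)                    # unscaled init: grows ~ sqrt(width)
    print(width, round(good, 2), round(bad, 1))
```

The actual coord check tracks these statistics per layer over a few training steps, which is why a flat plot is good evidence the muP wiring is correct.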

Tune mup hyperparameters and LR

The values under `mup hp search` were added and correspond to appendix F.4 of https://arxiv.org/pdf/2203.03466.pdf. Tune them, together with the LR, via random search using the scaled-up config (tested with 6-7B.yml) but with hidden-size set to the value from the scaled-down config (125M.yml).
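The random search itself can be sketched in a few lines (hypothetical log-uniform ranges and a stand-in objective; in practice each sample means training the width-reduced config and reading off validation loss):

```python
import math
import random

random.seed(0)

def sample_config():
    # Log-uniform samples for the LR and the muP HPs from the config above.
    # Ranges are hypothetical, not taken from the paper.
    return {
        "lr": 10 ** random.uniform(-4, -2),
        "mup-init-scale": 10 ** random.uniform(-1, 1),
        "mup-attn-temp": 10 ** random.uniform(-1, 1),
        "mup-output-temp": 10 ** random.uniform(-1, 1),
        "mup-embedding-mult": 10 ** random.uniform(-1, 1),
    }

def validation_loss(cfg):
    # Stand-in objective; in reality: train the scaled-down-width model and eval.
    return sum((math.log10(v) - (-3 if k == "lr" else 0)) ** 2
               for k, v in cfg.items())

best = min((sample_config() for _ in range(50)), key=validation_loss)
print(best)
```

Random search is a reasonable fit here because the HPs are few, roughly independent, and each evaluation (a training run) is expensive.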

Transfer

With the best LR and the best mup HPs set, revert the value of hidden-size in the scaled-up config and run again.
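As a hypothetical illustration (the hidden-size values below are examples, not taken from the actual yml files), the tuning run and the transfer run differ only in hidden-size:

```
# tuning run: scaled-up config (e.g. 6-7B.yml), proxy width from 125M.yml
"hidden-size": 768,

# transfer run: revert to the full width, keep the tuned LR and mup HPs
"hidden-size": 4096,
```

This is the point of muP: HPs tuned at the small width transfer to the large width without retuning.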

Owner

  • Name: Sun Kim
  • Login: kesperinc
  • Kind: user

Citation (CITATION.cff)

# YAML 1.2
---
authors:
  - affiliation: EleutherAI
    family-names: Andonian
    given-names: Alex
  - affiliation: EleutherAI
    family-names: Biderman
    given-names: Stella
  - affiliation: EleutherAI
    family-names: Black
    given-names: Sid
  - affiliation: EleutherAI
    family-names: Gali
    given-names: Preetham
  - affiliation: EleutherAI
    family-names: Gao
    given-names: Leo
  - affiliation: EleutherAI
    family-names: Hallahan
    given-names: Eric
  - affiliation: EleutherAI
    family-names: Levy-Kramer
    given-names: Josh
  - affiliation: EleutherAI
    family-names: Leahy
    given-names: Connor
  - affiliation: EleutherAI
    family-names: Nestler
    given-names: Lucas
  - affiliation: EleutherAI
    family-names: Parker
    given-names: Kip
  - affiliation: EleutherAI
    family-names: Pieler
    given-names: Michael
  - affiliation: EleutherAI
    family-names: Purohit
    given-names: Shivanshu
  - affiliation: EleutherAI
    family-names: Songz
    given-names: Tri
  - affiliation: EleutherAI
    family-names: Phil
    given-names: Wang
  - affiliation: EleutherAI
    family-names: Weinbach
    given-names: Samuel
cff-version: "1.1.0"
keywords:
  - "Transformers"
  - "Massive language model"
  - "Autoregressive language model"
license: "Apache-2.0"
message: "If you use this software, please cite it using these metadata."
repository-code: "https://www.github.com/eleutherai/gpt-neox"
title: "GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch"
version: "0.0.1"
doi: "10.5281/zenodo.5879544"
date-released: 2021-08-23
...


Dependencies

.github/workflows/cpu_ci.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
.github/workflows/docker_build.yml actions
  • actions/checkout v2 composite
  • crazy-max/ghaction-docker-meta v1 composite
  • docker/build-push-action v2 composite
  • docker/login-action v1 composite
  • docker/setup-buildx-action v1 composite
  • docker/setup-qemu-action v1 composite
.github/workflows/pull_request.yml actions
  • actions/checkout v2 composite
  • actions/checkout v3 composite
  • actions/setup-python v2 composite
  • pre-commit/action v2.0.3 composite
Dockerfile docker
  • nvidia/cuda 11.1.1-devel-ubuntu20.04 build
requirements/requirements-dev.txt pypi
  • autopep8 >=1.5.6 development
  • clang-format >=13.0.1 development
  • pre-commit >=2.17.0 development
  • pytest >=6.2.3 development
  • pytest-cov >=2.11.1 development
  • pytest-forked >=1.3.0 development
  • pytest-xdist * development
requirements/requirements-flashattention.txt pypi
  • flash-attn ==0.2.2
requirements/requirements-onebitadam.txt pypi
  • cupy-cuda111 >=8.6.0
requirements/requirements-sparseattention.txt pypi
  • triton ==0.4.2
requirements/requirements-tensorboard.txt pypi
  • tensorboard ==2.5.0
requirements/requirements-wandb.txt pypi
  • wandb >=0.10.28
requirements/requirements.txt pypi
  • best_download *
  • deepspeed *
  • ftfy >=6.0.1
  • huggingface_hub >=0.11.0
  • lm_eval >=0.3.0
  • mpi4py >=3.0.3
  • numpy >=1.22.0
  • pybind11 >=2.6.2
  • regex *
  • sentencepiece *
  • six *
  • tiktoken >=0.1.2
  • tokenizers >=0.12.1
  • transformers >=4.24.0