https://github.com/bytedance/hammer
An efficient toolkit for training deep models.
Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (14.2%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: bytedance
- License: other
- Language: Python
- Default Branch: main
- Size: 390 KB
Statistics
- Stars: 137
- Watchers: 5
- Forks: 15
- Open Issues: 2
- Releases: 0
Metadata Files
README.md
An Efficient Library for Training Deep Models
This repository provides an efficient PyTorch-based library for training deep models.
Installation
Make sure Python >= 3.7, CUDA >= 11.1, and cuDNN >= 7.6.5.
Install package requirements via conda:

```shell
conda create -n <ENV_NAME> python=3.7  # create virtual environment with Python 3.7
conda activate <ENV_NAME>
pip install -r requirements/minimal.txt -f https://download.pytorch.org/whl/cu111/torch_stable.html
```

To use the video visualizer (optional), please also install ffmpeg.
- Ubuntu: `sudo apt-get install ffmpeg`.
- MacOS: `brew install ffmpeg`.
- To reduce memory footprint (optional), you can switch to either jemalloc (recommended) or tcmalloc rather than your default memory allocator.
- jemalloc (recommended):
- Ubuntu: `sudo apt-get install libjemalloc`
- tcmalloc:
- Ubuntu: `sudo apt-get install google-perftools`
(Optional) To speed up data loading on NVIDIA GPUs, you can install DALI, together with dill to pickle Python objects. You can also install CuPy for some customized operations if needed:

```shell
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist --upgrade nvidia-dali-<CUDA_VERSION>
pip install dill
pip install cupy  # optional, installation can be slow
```

For example, on CUDA 11.1, DALI can be installed via:

```shell
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist --upgrade nvidia-dali-cuda110  # CUDA 11.1 compatible
pip install dill
pip install cupy  # optional, installation can be slow
```
Quick Demo
Train StyleGAN2 on FFHQ at 256x256 Resolution
In your Terminal, run:
```shell
./scripts/training_demos/stylegan2_ffhq256.sh <NUM_GPUS> <PATH_TO_DATA> [OPTIONS]
```
where

- `<NUM_GPUS>` refers to the number of GPUs. Setting `<NUM_GPUS>` to 1 helps launch a training job on single-GPU platforms.
- `<PATH_TO_DATA>` refers to the path of the FFHQ dataset (in resolution 256x256) in `zip` format. If running on local machines, a soft link to the data will be created under the `data` folder of the working directory to save disk space.
- `[OPTIONS]` refers to any additional options to pass. Detailed instructions on available options can be shown via `./scripts/training_demos/stylegan2_ffhq256.sh <NUM_GPUS> <PATH_TO_DATA> --help`.
This demo script uses `stylegan2_ffhq256` as the default value of `job_name`, which is used to identify experiments. Concretely, a directory named `job_name` will be created under the root working directory (which is set to `work_dirs/` by default). To prevent overwriting previous experiments, an exception will be raised to interrupt training if the `job_name` directory already exists. To change the job name, please use the `--job_name=<NEW_JOB_NAME>` option.
More Demos
Please find more training demos under ./scripts/training_demos/.
Inspect Training Results
Besides using TensorBoard to track the training process, the raw results (e.g., training losses and running time) are saved in JSON Lines format. They can be easily inspected with the following script:
```python
import json

filename = '<PATH_TO_WORK_DIR>/log.jsonl'

data_entries = []
with open(filename, 'r') as f:
    for line in f:
        data_entry = json.loads(line)
        data_entries.append(data_entry)

# An example of data entry:
# {"Loss/D Fake": 0.4833524551040682, "Loss/D Real": 0.4966000154727226, "Loss/G": 1.1439273656869773, "Learning Rate/Discriminator": 0.002352941082790494, "Learning Rate/Generator": 0.0020000000949949026, "data time": 0.0036810599267482758, "iter time": 0.24490128830075264, "run time": 66108.140625}
```
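As a quick sketch, the parsed entries can then be summarized, e.g. to average the iteration time or read off the latest loss. The field names follow the example entry above; the two sample log lines below are made up for illustration.

```python
import json
import statistics

# Two illustrative log lines in the JSON Lines format written during training
# (field names follow the example entry above; values are made up).
log_text = (
    '{"Loss/G": 1.1439, "iter time": 0.2449}\n'
    '{"Loss/G": 1.1021, "iter time": 0.2380}\n'
)

entries = [json.loads(line) for line in log_text.splitlines()]

mean_iter_time = statistics.mean(e['iter time'] for e in entries)
final_g_loss = entries[-1]['Loss/G']
print(f'mean iter time: {mean_iter_time:.4f}s, final Loss/G: {final_g_loss:.4f}')
```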
Convert Pre-trained Models
See Model Conversion for details.
Prepare Datasets
See Dataset Preparation for details.
Develop
See Contributing Guide for details.
License
The project is under MIT License.
Acknowledgement
This repository originates from GenForce, with all modules carefully optimized to make it more flexible and robust for distributed training. On top of GenForce, where only StyleGAN training is provided, this repository also supports training StyleGAN2 and StyleGAN3, both of which are fully reproduced. New methods are welcome to be merged into this repository! Please refer to the Develop section.
Contributors
The main contributors are listed as follows.
| Member | Contribution |
| :-- | :-- |
| Yujun Shen | Refactor and optimize the entire codebase and reproduce state-of-the-art approaches. |
| Zhiyi Zhang | Contribute to a number of sub-modules and functions, especially dataset related. |
| Dingdong Yang | Contribute to DALI data loading acceleration. |
| Yinghao Xu | Originally contribute to runner and loss functions in GenForce. |
| Ceyuan Yang | Originally contribute to data loader in GenForce. |
| Jiapeng Zhu | Originally contribute to evaluation metrics in GenForce. |
BibTeX
We open-source this library to the community to facilitate research. If you find our work useful and use the codebase for your projects, please cite it as follows.
```bibtex
@misc{hammer2022,
  title        = {Hammer: An Efficient Toolkit for Training Deep Models},
  author       = {Shen, Yujun and Zhang, Zhiyi and Yang, Dingdong and Xu, Yinghao and Yang, Ceyuan and Zhu, Jiapeng},
  howpublished = {\url{https://github.com/bytedance/Hammer}},
  year         = {2022}
}
```
Owner
- Name: Bytedance Inc.
- Login: bytedance
- Kind: organization
- Location: Singapore
- Website: https://opensource.bytedance.com
- Twitter: ByteDanceOSS
- Repositories: 255
- Profile: https://github.com/bytedance
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: "Shen"
    given-names: "Yujun"
  - family-names: "Zhang"
    given-names: "Zhiyi"
  - family-names: "Yang"
    given-names: "Dingdong"
  - family-names: "Xu"
    given-names: "Yinghao"
  - family-names: "Yang"
    given-names: "Ceyuan"
  - family-names: "Zhu"
    given-names: "Jiapeng"
title: "Hammer: An Efficient Toolkit for Training Deep Models"
version: 1.0.0
date-released: 2022-02-08
url: "https://github.com/bytedance/Hammer"
```
GitHub Events
Total
- Watch event: 4
Last Year
- Watch event: 4
Issues and Pull Requests
Last synced: over 1 year ago
All Time
- Total issues: 1
- Total pull requests: 1
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Total issue authors: 1
- Total pull request authors: 1
- Average comments per issue: 1.0
- Average comments per pull request: 0.0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- Ruye-aa (1)
Pull Request Authors
- TrellixVulnTeam (1)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- bs4 *
- easydict *
- ninja ==1.10.2
- opencv-python-headless ==4.5.5.62
- pillow ==9.0.0
- requests *
- rich *
- scikit-video ==1.1.11
- tensorflow-gpu ==1.15
- torch ==1.8.1
- tqdm *
- bpytop * development
- gpustat * development
- pylint * development
- bs4 *
- click *
- cloup *
- easydict *
- lmdb *
- matplotlib *
- ninja ==1.10.2
- numpy ==1.21.5
- opencv-python-headless ==4.5.5.62
- pillow ==9.0.0
- psutil *
- requests *
- rich *
- scikit-learn ==1.0.2
- scikit-video ==1.1.11
- scipy ==1.7.3
- tensorboard ==2.7.0
- torch-tb-profiler ==0.3.1
- tqdm *