vision-transformers-cifar10
Let's train vision transformers (ViT) for cifar 10 / cifar 100!
Science Score: 54.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org, scholar.google
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (8.5%) to scientific vocabulary
Repository
Let's train vision transformers (ViT) for cifar 10 / cifar 100!
Basic Info
Statistics
- Stars: 670
- Watchers: 5
- Forks: 130
- Open Issues: 5
- Releases: 4
Metadata Files
README.md
vision-transformers-cifar10
This is your go-to playground for training Vision Transformers (ViT) and related models on CIFAR-10/CIFAR-100, a common benchmark dataset in computer vision.
The whole codebase is implemented in PyTorch, which makes it easy to tweak and experiment. Over the months, we've made several notable updates, including adding models such as ConvMixer, CaiT, ViT-small, Swin Transformers, and MLP-Mixer. We've also adapted the default training settings for ViT to fit the CIFAR-10/CIFAR-100 dataset better.
Using the repository is straightforward: just run the `train_cifar10.py` script with arguments selecting the model and training parameters you'd like to use.
Thanks! The repo has been used in 20+ papers.
Please use the citation format below if you use this in your research.
```bibtex
@misc{yoshioka2024visiontransformers,
  author       = {Kentaro Yoshioka},
  title        = {vision-transformers-cifar10: Training Vision Transformers (ViT) and related models on CIFAR-10},
  year         = {2024},
  publisher    = {GitHub},
  howpublished = {\url{https://github.com/kentaroy47/vision-transformers-cifar10}}
}
```
Updates
- Added ConvMixer implementation. Really simple! (2021/10)
- Added wandb train log to reproduce results. (2022/3)
- Added CaiT and ViT-small. (2022/3)
- Added Swin Transformers. (2022/3)
- Added MLP-Mixer. (2022/6)
- Changed default training settings for ViT.
- Fixed some bugs and training settings. (2024/2)
- Added ONNX and TorchScript model exports. (2024/12)
- Added MobileViT. (2025/1)
- Added CIFAR-100 support. (2025/4)
- Added Dynamic Tanh ViT (DyT). (2025/6)
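The Dynamic Tanh (DyT) variant replaces each LayerNorm in the ViT block with an elementwise tanh. A minimal sketch of such a layer, assuming the formulation from the DyT paper (the parameter names and init value here are illustrative, not necessarily this repo's exact code):

```python
import torch
import torch.nn as nn

class DyT(nn.Module):
    """Dynamic Tanh: a drop-in LayerNorm replacement, y = gamma * tanh(alpha * x) + beta."""
    def __init__(self, dim: int, init_alpha: float = 0.5):
        super().__init__()
        self.alpha = nn.Parameter(torch.full((1,), init_alpha))  # learnable scalar scale
        self.gamma = nn.Parameter(torch.ones(dim))               # per-channel gain
        self.beta = nn.Parameter(torch.zeros(dim))               # per-channel shift

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # No mean/variance statistics: a bounded pointwise squash stands in for normalization.
        return self.gamma * torch.tanh(self.alpha * x) + self.beta

# tokens shaped (batch, seq_len, dim), as inside a ViT block
x = torch.randn(2, 65, 192)
y = DyT(192)(x)
```

Because tanh is bounded, the output stays in a fixed range at init, which is the rough intuition for why it can stand in for normalization.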
Usage example
```bash
python train_cifar10.py                                  # vit-patchsize-4
python train_cifar10.py --dataset cifar100               # cifar-100
python train_cifar10.py --size 48                        # vit-patchsize-4-imsize-48
python train_cifar10.py --patch 2                        # vit-patchsize-2
python train_cifar10.py --net vit_small --n_epochs 400   # vit-small
python train_cifar10.py --net vit_timm                   # train with pretrained vit
python train_cifar10.py --net dyt                        # train with LayerNorm-less ViT (DyT)
python train_cifar10.py --net convmixer --n_epochs 400   # train with convmixer
python train_cifar10.py --net mlpmixer --n_epochs 500 --lr 1e-3  # train with mlp-mixer
python train_cifar10.py --net cait --n_epochs 200        # train with cait
python train_cifar10.py --net swin --n_epochs 400        # train with Swin Transformers
python train_cifar10.py --net res18                      # resnet18+randaug
```
Results

| CIFAR10 | Accuracy | Train Log |
|:---:|:---:|:---:|
| ViT patch=2 | 80% | |
| ViT patch=4 Epoch@200 | 80% | Log |
| ViT patch=4 Epoch@500 | 88% | Log |
| ViT patch=4 Epoch@1000 | 89% | Log |
| ViT patch=8 | 30% | |
| ViT small | 80% | |
| DyT | 74% | Log |
| MLP mixer | 88% | |
| CaiT | 80% | |
| Swin-t | 90% | |
| ViT small (timm transfer) | 97.5% | |
| ViT base (timm transfer) | 98.5% | |
| ConvMixerTiny (no pretrain) | 96.3% | Log |
| resnet18 | 93% | |
| resnet18+randaug | 95% | Log |

| CIFAR100 | Accuracy | Train Log |
|:---:|:---:|:---:|
| ViT patch=4 Epoch@200 | 52% | Log |
| resnet18+randaug | 71% | Log |
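The patch size governs how many tokens the transformer sees: a 32x32 CIFAR image with patch size p yields (32/p)^2 tokens, so patch=8 gives only 16 tokens, which is consistent with its much lower accuracy above. A sketch of the standard Conv2d-based patch embedding (the embedding dim of 192 is illustrative, not necessarily this repo's value):

```python
import torch
import torch.nn as nn

def patch_embed(img: torch.Tensor, patch: int, dim: int) -> torch.Tensor:
    """Split an image into non-overlapping patches and project each to `dim` channels."""
    proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # one conv step per patch
    tokens = proj(img)                        # (B, dim, H/patch, W/patch)
    return tokens.flatten(2).transpose(1, 2)  # (B, num_patches, dim)

img = torch.randn(1, 3, 32, 32)  # one CIFAR-sized image
for p in (2, 4, 8):
    print(p, patch_embed(img, p, 192).shape)  # 256, 64, and 16 tokens respectively
```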
Used in

- Vision Transformer Pruning (arXiv, GitHub)
- Understanding why ViT trains badly on small datasets: an intuitive perspective (arXiv)
- Training deep neural networks with adaptive momentum inspired by the quadratic optimization (arXiv)
- Moderate coreset: A universal method of data selection for real-world data-efficient deep learning
Model Export
This repository supports exporting trained models to ONNX and TorchScript formats for deployment. You can export your trained models using the `export_models.py` script.
Basic Usage
```bash
python export_models.py --checkpoint path/to/checkpoint --model_type vit --output_dir exported_models
```
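Such an export typically boils down to a `torch.jit.trace` (TorchScript) or `torch.onnx.export` call on the loaded checkpoint. A minimal TorchScript sketch, with a toy model standing in for a real trained ViT/ConvMixer checkpoint (the model and file name here are illustrative, not the repo's actual code):

```python
import torch
import torch.nn as nn

# Toy stand-in for a trained checkpoint; the real script would load a ViT/ConvMixer.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
example = torch.randn(1, 3, 32, 32)  # CIFAR-shaped example input for tracing

# Trace to TorchScript, save, reload, and run the exported module.
traced = torch.jit.trace(model, example)
traced.save("model_ts.pt")
reloaded = torch.jit.load("model_ts.pt")
out = reloaded(example)
```

The reloaded module runs without the original Python class definitions, which is the point of the export for deployment.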
Owner
- Name: Kentaro Yoshioka
- Login: kentaroy47
- Kind: user
- Location: Tokyo, Japan
- Company: Keio University
- Website: https://sites.google.com/keio.jp/keio-csg/
- Repositories: 9
- Profile: https://github.com/kentaroy47
Ph.D. researcher, two years in, interested in efficient and fast systems.
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: Yoshioka
    given-names: Kentaro
    orcid: "https://orcid.org/0000-0001-5640-2250"
title: "vision-transformers-cifar10"
version: 1.0.0
doi: 10.5281/zenodo.14279880
date-released: 2024-12-04
url: "https://github.com/kentaroy47/vision-transformers-cifar10"
```
GitHub Events
Total
- Issues event: 3
- Watch event: 135
- Delete event: 1
- Issue comment event: 2
- Push event: 19
- Pull request event: 8
- Fork event: 20
- Create event: 3
Last Year
- Issues event: 3
- Watch event: 135
- Delete event: 1
- Issue comment event: 2
- Push event: 19
- Pull request event: 8
- Fork event: 20
- Create event: 3
Committers
Last synced: 11 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Kentaro Yoshioka | m****7@g****m | 76 |
Issues and Pull Requests
Last synced: 11 months ago
All Time
- Total issues: 24
- Total pull requests: 15
- Average time to close issues: 5 months
- Average time to close pull requests: 1 minute
- Total issue authors: 21
- Total pull request authors: 4
- Average comments per issue: 2.25
- Average comments per pull request: 0.07
- Merged pull requests: 10
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 2
- Pull requests: 4
- Average time to close issues: 3 months
- Average time to close pull requests: less than a minute
- Issue authors: 2
- Pull request authors: 1
- Average comments per issue: 1.0
- Average comments per pull request: 0.0
- Merged pull requests: 4
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- kentaroy47 (3)
- ShijianXu (2)
- yuelinxin (1)
- srdfjy (1)
- yuanzhi-zhu (1)
- xinchenduobian (1)
- laserljy (1)
- Egg-Hu (1)
- wonkicho (1)
- Jerryme-xxm (1)
- longmalongma (1)
- zhoufengfan (1)
- ppalantir (1)
- Icathian-Rain (1)
- bismillahkani (1)
Pull Request Authors
- kentaroy47 (11)
- jimar884 (3)
- caizhi-mt (1)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- einops *
- odach *
- vit-pytorch *
- wandb *