v1t
Code for "V1T: Large-scale mouse V1 response prediction using a Vision Transformer"
Science Score: 41.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ○ .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org, nature.com
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (13.5%) to scientific vocabulary
Keywords
Repository
Code for "V1T: Large-scale mouse V1 response prediction using a Vision Transformer"
Basic Info
- Host: GitHub
- Owner: bryanlimy
- License: mit
- Language: Jupyter Notebook
- Default Branch: main
- Homepage: https://openreview.net/forum?id=qHZs2p4ZD4
- Size: 21.4 MB
Statistics
- Stars: 20
- Watchers: 4
- Forks: 6
- Open Issues: 0
- Releases: 0
Topics
Metadata Files
README.md
V1T: Large-scale mouse V1 response prediction using a Vision Transformer
Code for TMLR2023 paper "V1T: Large-scale mouse V1 response prediction using a Vision Transformer".

Authors: Bryan M. Li, Isabel M. Cornacchia, Nathalie L. Rochefort, Arno Onken
```bibtex
@article{li2023vt,
  title   = {V1T: large-scale mouse V1 response prediction using a Vision Transformer},
  author  = {Bryan M. Li and Isabel Maria Cornacchia and Nathalie Rochefort and Arno Onken},
  journal = {Transactions on Machine Learning Research},
  issn    = {2835-8856},
  year    = {2023},
  url     = {https://openreview.net/forum?id=qHZs2p4ZD4},
  note    = {}
}
```
Acknowledgement
We sincerely thank Willeke et al. for organizing the Sensorium challenge and, along with Franke et al., for making their high-quality large-scale mouse V1 recordings publicly available. This codebase is inspired by sinzlab/sensorium, sinzlab/neuralpredictors and sinzlab/nnfabrik.
File structure
The repository has the following structure. Check `.gitignore` for the ignored files.
```
sensorium2022/
  data/
    sensorium/
      static21067-10-18-GrayImageNet-94c6ff995dac583098847cfecd43e7b6.zip
      ...
    franke2022/
      static25311-4-6-ColorImageNet-104e446ed0128d89c639eef0abe4655b.zip
      ...
    README.md
  misc/
  src/
    v1t/
      ...
  .gitignore
  README.md
  setup.sh
  demo.ipynb
  submission.py
  sweep.py
  train.py
  ...
```
- `demo.ipynb` demonstrates how to load the best V1T model and run inference on the Sensorium+ test set, as well as how to extract the attention rollout maps.
- `sweep.py` performs hyperparameter tuning using Weights & Biases.
- `train.py` contains the model training procedure.
- `data` stores the datasets; please check `data/README.md` for more information.
- `misc` contains scripts and notebooks to generate various plots and figures used in the paper.
- `src/v1t` contains the code for the main Python package.
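The attention rollout maps mentioned for `demo.ipynb` follow the standard rollout recursion (Abnar & Zuidema, 2020): average attention over heads, add the identity for the residual connection, re-normalize, and multiply through the layers. A minimal NumPy sketch of that recursion, independent of the repository's own implementation:

```python
import numpy as np

def attention_rollout(attentions):
    """Compute attention rollout from per-layer attention matrices.

    attentions: list of arrays, one per transformer layer, each of
    shape (num_heads, tokens, tokens) with attention weights.
    Returns a (tokens, tokens) map of how much each output token
    attends to each input token across the whole network.
    """
    num_tokens = attentions[0].shape[-1]
    rollout = np.eye(num_tokens)
    for attn in attentions:
        attn = attn.mean(axis=0)                         # average over heads
        attn = attn + np.eye(num_tokens)                 # residual connection
        attn = attn / attn.sum(axis=-1, keepdims=True)   # re-normalize rows
        rollout = attn @ rollout                         # propagate through layer
    return rollout
```

Since each per-layer matrix is row-stochastic after normalization, the resulting rollout map is also row-stochastic; each row can be reshaped to the image patch grid for visualization.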
Installation
- Create a new conda environment with Python 3.10.
  ```bash
  conda create -n v1t python=3.10
  ```
- Activate the `v1t` virtual environment.
  ```bash
  conda activate v1t
  ```
- We have created a `setup.sh` script to install the relevant `conda` and `pip` packages for macOS and Ubuntu devices.
  ```bash
  sh setup.sh
  ```
- Alternatively, you can install PyTorch 2.0 and all the relevant packages with:
  ```bash
  # install PyTorch
  conda install -c pytorch pytorch=2.0 torchvision torchaudio -y
  # install V1T package
  pip install -e .
  ```
Train model
- An example command to train a V1T core and Gaussian readout on the Sensorium+ dataset:
  ```bash
  python train.py --dataset data/sensorium --output_dir runs/v1t_model --core vit --readout gaussian2d --behavior_mode 3 --batch_size 16
  ```
- Use the `--help` flag to see all available options.
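The `--readout gaussian2d` option selects a Gaussian readout, which pools the core's output feature maps around each neuron's learned receptive-field position and weights the pooled features per neuron. A minimal NumPy sketch of that idea (the function name, shapes, and a shared scalar width are illustrative assumptions, not the `v1t` package's API):

```python
import numpy as np

def gaussian_readout(features, mu, sigma, weights):
    """Pool feature maps with a per-neuron Gaussian mask.

    features: (C, H, W) core output feature maps.
    mu:       (N, 2) learned (y, x) receptive-field centres in pixels.
    sigma:    scalar Gaussian width (illustrative; could be per-neuron).
    weights:  (N, C) per-neuron feature weights.
    Returns (N,) pre-activation response predictions.
    """
    C, H, W = features.shape
    ys, xs = np.mgrid[0:H, 0:W]
    responses = np.empty(len(mu))
    for n, (cy, cx) in enumerate(mu):
        # Gaussian spatial mask centred on the neuron's receptive field
        mask = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
        mask /= mask.sum()
        pooled = (features * mask).sum(axis=(1, 2))  # (C,) pooled features
        responses[n] = weights[n] @ pooled           # per-neuron feature weighting
    return responses
```

In practice the centres and widths are learned jointly with the core, and a non-negative output non-linearity is applied on top of the returned values.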
Visualize training performance
- The training code `train.py` uses both TensorBoard and Weights & Biases to log training information.
- TensorBoard
  - Use the following command to monitor training performance with TensorBoard:
    ```bash
    tensorboard --logdir runs/v1t_model --port 6006
    ```
  - Visit localhost:6006 in your browser.
- Weights & Biases
  - Use `--use_wandb` and (optional) `--wandb_group <group name>` to enable `wandb` logging.
Owner
- Name: Bryan M. Li
- Login: bryanlimy
- Kind: user
- Location: Edinburgh
- Website: https://bryanli.io
- Twitter: bryanlimy
- Repositories: 44
- Profile: https://github.com/bryanlimy
Citation (CITATION.bib)
@article{li2023vt,
title = {V1T: large-scale mouse V1 response prediction using a Vision Transformer},
author = {Bryan M. Li and Isabel Maria Cornacchia and Nathalie Rochefort and Arno Onken},
journal = {Transactions on Machine Learning Research},
issn = {2835-8856},
year = {2023},
url = {https://openreview.net/forum?id=qHZs2p4ZD4},
note = {}
}
GitHub Events
Total
- Watch event: 1
- Fork event: 1
Last Year
- Watch event: 1
- Fork event: 1
Issues and Pull Requests
Last synced: 10 months ago
All Time
- Total issues: 6
- Total pull requests: 19
- Average time to close issues: 2 months
- Average time to close pull requests: 2 days
- Total issue authors: 2
- Total pull request authors: 1
- Average comments per issue: 0.33
- Average comments per pull request: 0.74
- Merged pull requests: 17
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 1
- Pull requests: 1
- Average time to close issues: about 7 hours
- Average time to close pull requests: less than a minute
- Issue authors: 1
- Pull request authors: 1
- Average comments per issue: 1.0
- Average comments per pull request: 0.0
- Merged pull requests: 1
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- bryanlimy (5)
- Zhiyi12 (1)
Pull Request Authors
- bryanlimy (20)