v1t

Code for "V1T: Large-scale mouse V1 response prediction using a Vision Transformer"

https://github.com/bryanlimy/v1t

Science Score: 41.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org, nature.com
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (13.5%) to scientific vocabulary

Keywords

deep-learning neural-response pytorch vision-transformer vit
Last synced: 6 months ago

Repository

Code for "V1T: Large-scale mouse V1 response prediction using a Vision Transformer"

Basic Info
Statistics
  • Stars: 20
  • Watchers: 4
  • Forks: 6
  • Open Issues: 0
  • Releases: 0
Topics
deep-learning neural-response pytorch vision-transformer vit
Created over 3 years ago · Last pushed over 1 year ago
Metadata Files
Readme License Citation

README.md

V1T: Large-scale mouse V1 response prediction using a Vision Transformer

Code for the TMLR 2023 paper "V1T: Large-scale mouse V1 response prediction using a Vision Transformer".

Authors: Bryan M. Li, Isabel M. Cornacchia, Nathalie L. Rochefort, Arno Onken

```bibtex
@article{li2023vt,
  title   = {V1T: large-scale mouse V1 response prediction using a Vision Transformer},
  author  = {Bryan M. Li and Isabel Maria Cornacchia and Nathalie Rochefort and Arno Onken},
  journal = {Transactions on Machine Learning Research},
  issn    = {2835-8856},
  year    = {2023},
  url     = {https://openreview.net/forum?id=qHZs2p4ZD4},
  note    = {}
}
```

Acknowledgement

We sincerely thank Willeke et al. for organizing the Sensorium challenge and, along with Franke et al., for making their high-quality large-scale mouse V1 recordings publicly available. This codebase is inspired by sinzlab/sensorium, sinzlab/neuralpredictors and sinzlab/nnfabrik.

File structure

The repository has the following structure; check .gitignore for the ignored files.

```
sensorium2022/
  data/
    sensorium/
      static21067-10-18-GrayImageNet-94c6ff995dac583098847cfecd43e7b6.zip
      ...
    franke2022/
      static25311-4-6-ColorImageNet-104e446ed0128d89c639eef0abe4655b.zip
      ...
    README.md
  misc/
  src/
    v1t/
      ...
  .gitignore
  README.md
  setup.sh
  demo.ipynb
  submission.py
  sweep.py
  train.py
  ...
```

  • demo.ipynb demonstrates how to load the best V1T model, run inference on the Sensorium+ test set, and extract the attention rollout maps.
  • sweep.py performs hyperparameter tuning using Weights & Biases.
  • train.py contains the model training procedure.
  • data stores the datasets; please check data/README.md for more information.
  • misc contains scripts and notebooks that generate the plots and figures used in the paper.
  • src/v1t contains the code for the main Python package.
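The attention rollout maps extracted in demo.ipynb refer to the attention rollout technique of Abnar & Zuidema (2020), which propagates attention through the transformer layers while accounting for residual connections. A minimal NumPy sketch of the idea (illustrative only, not the repository's implementation; shapes and names are assumptions):

```python
import numpy as np

def attention_rollout(attentions):
    """Attention rollout: multiply head-averaged, residual-adjusted
    attention matrices across layers.

    attentions: list of per-layer matrices, each (heads, tokens, tokens),
    with rows summing to 1. Returns a (tokens, tokens) rollout matrix.
    """
    n_tokens = attentions[0].shape[-1]
    rollout = np.eye(n_tokens)
    for attn in attentions:
        a = attn.mean(axis=0)                    # average over heads
        a = a + np.eye(n_tokens)                 # add residual connection
        a = a / a.sum(axis=-1, keepdims=True)    # re-normalize rows
        rollout = a @ rollout                    # accumulate layer by layer
    return rollout

# toy example: 2 layers, 4 heads, 5 tokens (e.g. 1 class token + 4 patches)
rng = np.random.default_rng(0)
attns = [rng.random((4, 5, 5)) for _ in range(2)]
attns = [a / a.sum(axis=-1, keepdims=True) for a in attns]  # row-stochastic
rollout = attention_rollout(attns)
cls_map = rollout[0, 1:]  # how much the first token attends to each patch
```

Reshaping `cls_map` back to the patch grid gives the spatial attention map; the notebook's actual extraction may differ in details such as head fusion.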

Installation

  • Create a new conda environment with Python 3.10.
    ```bash
    conda create -n v1t python=3.10
    ```
  • Activate the v1t virtual environment.
    ```bash
    conda activate v1t
    ```
  • We have created a setup.sh script to install the relevant conda and pip packages on macOS and Ubuntu devices.
    ```bash
    sh setup.sh
    ```
  • Alternatively, you can install PyTorch 2.0 and all the relevant packages with:
    ```bash
    # install PyTorch
    conda install -c pytorch pytorch=2.0 torchvision torchaudio -y
    # install V1T package
    pip install -e .
    ```

Train model

  • An example command to train a V1T core with a Gaussian readout on the Sensorium+ dataset:
    ```bash
    python train.py --dataset data/sensorium --output_dir runs/v1t_model --core vit --readout gaussian2d --behavior_mode 3 --batch_size 16
    ```
  • Use the --help flag to see all available options.
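For intuition, a gaussian2d readout in the style used for recent mouse V1 models learns, per neuron, a 2D location in the core's output feature map and a linear weight over channels; during training the location is sampled from a per-neuron Gaussian (hence the name). A minimal NumPy sketch that evaluates at the mean position (all names and shapes are hypothetical, not the repository's API):

```python
import numpy as np

def gaussian2d_readout(features, positions, weights, biases):
    """Read out per-neuron responses from a core feature map.

    features:  (C, H, W) feature map produced by the core.
    positions: (N, 2) per-neuron (x, y) in [-1, 1] grid coordinates.
    weights:   (N, C) per-neuron channel weights.
    biases:    (N,) per-neuron biases.
    Returns (N,) responses before any output nonlinearity.
    """
    C, H, W = features.shape
    # map [-1, 1] grid coordinates to pixel coordinates
    xs = (positions[:, 0] + 1) / 2 * (W - 1)
    ys = (positions[:, 1] + 1) / 2 * (H - 1)
    x0, y0 = np.floor(xs).astype(int), np.floor(ys).astype(int)
    x1, y1 = np.clip(x0 + 1, 0, W - 1), np.clip(y0 + 1, 0, H - 1)
    wx, wy = xs - x0, ys - y0
    # bilinear interpolation of the feature vector at each neuron's position
    f = ((1 - wx) * (1 - wy) * features[:, y0, x0]
         + wx * (1 - wy) * features[:, y0, x1]
         + (1 - wx) * wy * features[:, y1, x0]
         + wx * wy * features[:, y1, x1])        # shape (C, N)
    # per-neuron linear combination over channels
    return np.einsum("cn,nc->n", f, weights) + biases

# toy check: constant feature map, unit weights, zero biases
feats = np.ones((3, 4, 4))
pos = np.array([[0.0, 0.0], [-1.0, -1.0], [0.5, -0.25]])
resp = gaussian2d_readout(feats, pos, np.ones((3, 3)), np.zeros(3))
# each entry equals 3.0 (sum of three channels, each 1.0)
```

In practice the actual readout is implemented in PyTorch (e.g. via grid_sample) so the positions are trainable; this sketch only shows the forward computation.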

Visualize training performance

  • The training code train.py uses both TensorBoard and Weights & Biases to log training information.
    • TensorBoard
    • Use the following command to monitor training performance with TensorBoard:
      ```bash
      tensorboard --logdir runs/v1t_model --port 6006
      ```
    • Visit localhost:6006 in your browser.
    • Weights & Biases
    • Use --use_wandb and (optionally) --wandb_group <group name> to enable wandb logging.

Owner

  • Name: Bryan M. Li
  • Login: bryanlimy
  • Kind: user
  • Location: Edinburgh

Citation (CITATION.bib)

@article{li2023vt,
  title   = {V1T: large-scale mouse V1 response prediction using a Vision Transformer},
  author  = {Bryan M. Li and Isabel Maria Cornacchia and Nathalie Rochefort and Arno Onken},
  journal = {Transactions on Machine Learning Research},
  issn    = {2835-8856},
  year    = {2023},
  url     = {https://openreview.net/forum?id=qHZs2p4ZD4},
  note    = {}
}

GitHub Events

Total
  • Watch event: 1
  • Fork event: 1
Last Year
  • Watch event: 1
  • Fork event: 1

Issues and Pull Requests

Last synced: 10 months ago

All Time
  • Total issues: 6
  • Total pull requests: 19
  • Average time to close issues: 2 months
  • Average time to close pull requests: 2 days
  • Total issue authors: 2
  • Total pull request authors: 1
  • Average comments per issue: 0.33
  • Average comments per pull request: 0.74
  • Merged pull requests: 17
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 1
  • Pull requests: 1
  • Average time to close issues: about 7 hours
  • Average time to close pull requests: less than a minute
  • Issue authors: 1
  • Pull request authors: 1
  • Average comments per issue: 1.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 1
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • bryanlimy (5)
  • Zhiyi12 (1)
Pull Request Authors
  • bryanlimy (20)