bionet

Deep Neural Networks with Bio-inspired Convolutions

https://github.com/bdevans/bionet

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (8.1%) to scientific vocabulary
Last synced: 6 months ago

Repository

Deep Neural Networks with Bio-inspired Convolutions

Basic Info
  • Host: GitHub
  • Owner: bdevans
  • Language: Python
  • Default Branch: main
  • Size: 266 KB
Statistics
  • Stars: 6
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created about 5 years ago · Last pushed about 4 years ago

README.md

BioNet

Deep Convolutional Neural Networks with bio-inspired filters.

  1. Clone this BioNet repository
  2. Clone the CIFAR-10G generalisation test set
  3. Set your project_dir in the notebook and pass your data_dir, which contains the image sets (e.g. ln -s /shared/data/ data)

Expected directory structure

```
.
├── bionet
│   ├── config.py
│   ├── explain.py
│   ├── __init__.py
│   ├── plots.py
│   └── preparation.py
├── data
│   ├── CIFAR-10G
│   ├── ecoset
│   └── ecoset-cifar10
├── logs
├── models
├── notebooks
├── results
├── scripts
├── model.py
└── README.md
```

Training and testing the model

The main script for both training and testing is model.py in the project's root directory. If saved weights files are found in the models directory, training is skipped (unless the --clean flag is passed) and the script proceeds straight to testing.
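A minimal sketch of this weight-caching behaviour, assuming a hypothetical helper and an illustrative weights path (neither is BioNet's actual code):

```python
import os

def needs_training(weights_path, clean=False):
    """Return True when the model should be (re)trained: either no saved
    weights exist, or a clean run was requested via -c/--clean."""
    return clean or not os.path.exists(weights_path)

# Illustrative usage: train only when weights are missing or a clean run is forced
weights_file = os.path.join("models", "VGG-16_Gabor.h5")  # hypothetical path
if needs_training(weights_file, clean=False):
    pass  # train the model, then test it
else:
    pass  # skip straight to testing
```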

Arguments and usage

```
usage: model.py [-h] [--convolution CONVOLUTION] [--base BASE] [--pretrain]
                [--architecture ARCHITECTURE] [--interpolation INTERPOLATION]
                [--optimizer {SGD,RMSprop,Adagrad,Adadelta,Adam,Adamax,Nadam}]
                [--lr LR] [--decay DECAY] [--use_initializer]
                [--internalnoise INTERNALNOISE] [--trial TRIAL] [--label LABEL]
                [--seed SEED] [-t] [--recalculate_statistics]
                [--epochs EPOCHS] [--batch BATCH] [--imagepath IMAGEPATH]
                [--trainimagepath TRAINIMAGEPATH] [--test_generalisation]
                [--inverttestimages INVERTTESTIMAGES] [--test_perturbations]
                [--data_augmentation] [--extra_augmentation] [-c]
                [--skip_test] [-l] [--save_images] [-p] [--gpu GPU]
                [--projectdir PROJECTDIR] [-v VERBOSE]

optional arguments:
  -h, --help            show this help message and exit
  --convolution CONVOLUTION
                        Name of convolutional filter to use
  --base BASE           Name of model to use
  --pretrain            Flag to use pretrained ImageNet weights in the model
  --architecture ARCHITECTURE
                        Parameter file (JSON) to load
  --interpolation INTERPOLATION
                        Method to interpolate the images when upscaling.
                        Default: 0 ("nearest" i.e. no interpolation)
  --optimizer {SGD,RMSprop,Adagrad,Adadelta,Adam,Adamax,Nadam}
                        Name of optimizer to use: https://keras.io/optimizers/
  --lr LR, --learningrate LR
                        Learning rate for training
  --decay DECAY         Optimizer decay for training
  --use_initializer     Flag to use the weight initializer (then freeze
                        weights) for the Gabor filters
  --internalnoise INTERNALNOISE
                        Standard deviation for adding a Gaussian noise layer
                        after the first convolutional layer
  --trial TRIAL         Trial number for labeling different runs of the same
                        model
  --label LABEL         For labeling different runs of the same model
  --seed SEED           Random seed to use
  -t, --train           Flag to train the model
  --recalculate_statistics
                        Flag to recalculate normalisation statistics over the
                        training set
  --epochs EPOCHS       Number of epochs to train model
  --batch BATCH         Size of mini-batches passed to the network
  --imagepath IMAGEPATH
                        Path to image files to load
  --trainimagepath TRAINIMAGEPATH
                        Path to training image files to load
  --test_generalisation
                        Flag to test the model on sets of untrained images
  --inverttestimages INVERTTESTIMAGES
                        Flag to invert the luminance of the test images
  --test_perturbations  Flag to test the model on perturbed images
  --data_augmentation   Flag to train the model with data augmentation
  --extra_augmentation  Flag to train the model with additional data
                        augmentation
  -c, --clean           Flag to retrain model
  --skip_test           Flag to skip testing the model
  -l, --log             Flag to log training data
  --save_images         Flag to save preprocessed (perturbed) test images
  -p, --save_predictions
                        Flag to save category predictions
  --gpu GPU             GPU ID to run on
  --projectdir PROJECTDIR
                        Path to the root project directory
  -v VERBOSE, --verbose VERBOSE
                        Verbosity level
```
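For illustration, a handful of the options above can be approximated with a small argparse stub; the stub and its defaults are sketches, not the project's actual parser:

```python
import argparse

# Illustrative stub reproducing a few of model.py's documented options
parser = argparse.ArgumentParser(prog="model.py")
parser.add_argument("--convolution", default="Original",
                    help="Name of convolutional filter to use")
parser.add_argument("--base", default="ALL-CNN", help="Name of model to use")
parser.add_argument("--optimizer", default="RMSprop",
                    choices=["SGD", "RMSprop", "Adagrad", "Adadelta",
                             "Adam", "Adamax", "Nadam"])
parser.add_argument("--lr", "--learningrate", type=float, default=1e-4,
                    help="Learning rate for training")
parser.add_argument("-t", "--train", action="store_true",
                    help="Flag to train the model")
parser.add_argument("-c", "--clean", action="store_true",
                    help="Flag to retrain model")

# Parse an example command line rather than sys.argv
args = parser.parse_args(["--convolution", "Gabor", "--base", "VGG-16", "-t"])
```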

To train and test the models, the code below may be used and adapted as required.

```python
import os
import sys
import pprint
import subprocess
import random

from tqdm.notebook import tqdm
import tensorflow as tf
import tensorflow.keras
import tensorflow.keras.backend

project_root_dir = "/home/jovyan/work/BioNet"  # Change as necessary
print(f"Project directory: {project_root_dir}\n")
sys.path.append(project_root_dir)

print("\nTensorFlow:", tf.__version__)
print(f"Channel ordering: {tf.keras.backend.image_data_format()}")  # TensorFlow: Channels last order.
# gpus = tf.config.experimental.list_physical_devices('GPU')
gpus = tf.config.list_physical_devices('GPU')
pprint.pprint(gpus)

label = "paper"
image_path = ''  # Empty string defaults to CIFAR-10
# image_path = '/shared/data/ecoset-cifar10'

convolutions = ['Original', 'Low-pass', 'DoG', 'Gabor', 'Combined-trim']
bases = ['ALL-CNN', 'VGG-16', 'VGG-19', 'ResNet']
seed = 0
start_trial = 1
num_trials = 5
trials = range(start_trial, start_trial + num_trials)
train = True
pretrain = False
clean = False
epochs = 100
optimizer = "RMSprop"
lr = 1e-4
use_initializer = True
data_augmentation = True
extra_augmentation = False
internal_noise = 0
skip_test = False
save_images = False
save_predictions = True
test_generalisation = True
test_perturbations = True
interpolation = 4  # Lanczos
recalculate_statistics = False
verbose = 0
halton_error = False
gpu = 1

script = os.path.join(project_root_dir, "model.py")
flags = ['--log']
if train:
    flags.append('-t')
if clean:
    flags.append('-c')
if use_initializer:
    flags.append('--use_initializer')
if data_augmentation:
    flags.append('--data_augmentation')
if extra_augmentation:
    flags.append('--extra_augmentation')
if skip_test:
    flags.append('--skip_test')
if recalculate_statistics:
    flags.append('--recalculate_statistics')
if save_predictions:
    flags.append('--save_predictions')

optional_args = []
if image_path:
    optional_args.extend(['--imagepath', str(image_path)])
if test_perturbations:
    optional_args.append('--test_perturbations')
if test_generalisation:
    optional_args.append('--test_generalisation')
if pretrain:
    optional_args.append('--pretrain')
if internal_noise:
    optional_args.extend(['--internalnoise', str(internal_noise)])
if interpolation:
    optional_args.extend(['--interpolation', str(interpolation)])
if verbose:
    optional_args.extend(['--verbose', str(verbose)])

count = 1
for trial in tqdm(trials, desc='Trial'):
    if seed is None:
        seed = random.randrange(2**32)
    for base in tqdm(bases, desc='Model Base', leave=False):
        for conv in tqdm(convolutions, desc='Convolution', leave=False):
            cmd = [script, *flags]
            if save_images and count == 1:
                cmd.append('--save_images')
            cmd.extend(['--convolution', conv, '--base', base,
                        '--label', label, '--trial', str(trial),
                        '--seed', str(seed), '--optimizer', optimizer,
                        '--lr', str(lr), '--epochs', str(epochs),
                        '--gpu', str(gpu)])
            cmd.extend(optional_args)
            completed = subprocess.run(cmd, shell=False,
                                       capture_output=True, text=True)
            if completed.returncode != 0:
                print(completed.stdout)
                print(completed.stderr)
            count += 1
f'Finished job "{label}"!'
```

Notes

Retrieve trained models from the remote host:

```
rsync -vzhrLKe ssh --progress user@host:/storage/models/paper /shared/data/
```

Create symlinks to consolidate simulations:

```
find response -maxdepth 1 -mindepth 1 -type d -exec ln -s ../'{}' paper/ \;
```
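The find command above symlinks each top-level simulation directory under response/ into paper/. A Python equivalent (the directory names come from the command; the function name is illustrative) might look like:

```python
import os

def consolidate(response_dir="response", target_dir="paper"):
    # Symlink every top-level subdirectory of response_dir into target_dir,
    # using relative targets ("../response/<name>") as ln -s ../'{}' does.
    os.makedirs(target_dir, exist_ok=True)
    for name in sorted(os.listdir(response_dir)):
        if os.path.isdir(os.path.join(response_dir, name)):
            os.symlink(os.path.join("..", response_dir, name),
                       os.path.join(target_dir, name))
```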

Owner

  • Name: Ben Evans
  • Login: bdevans
  • Kind: user
  • Location: Brighton, UK
  • Company: University of Sussex

Computational Neuroscientist (s̶t̶i̶l̶l̶ ̶i̶n̶ ̶b̶e̶t̶a̶) and Python evangelist! Likes include open-science, machine learning and Jack Russell Terriers! 🐶

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite the accompanying article."
preferred-citation:
  type: article
  authors:
  - family-names: "Evans"
    given-names: "Benjamin D."
    orcid: "https://orcid.org/0000-0002-1734-6070"
  - family-names: "Malhotra"
    given-names: "Gaurav"
  - family-names: "Bowers"
    given-names: "Jeffrey S."
  doi: "10.1016/j.neunet.2021.12.005"
  journal: "Neural Networks"
  month: 2
  start: 96
  end: 110
  title: "Biological convolutions improve DNN robustness to noise and generalisation"
  volume: 148
  year: 2022

GitHub Events

Total
  • Watch event: 2
Last Year
  • Watch event: 2

Committers

Last synced: about 1 year ago

All Time
  • Total Commits: 274
  • Total Committers: 1
  • Avg Commits per committer: 274.0
  • Development Distribution Score (DDS): 0.0
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Ben Evans b****s@g****m 274

Issues and Pull Requests

Last synced: 9 months ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0