https://github.com/arashakbarinia/deepths

A framework to compute threshold sensitivity of deep networks to visual stimuli.

Science Score: 13.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.2%) to scientific vocabulary

Keywords

cognitive-neuroscience deep-learning deep-neural-networks explainable-ai human-machine-behavior linear-classifier linear-probing sensitivity-analysis vision-models
Last synced: 5 months ago

Repository

A framework to compute threshold sensitivity of deep networks to visual stimuli.

Basic Info
  • Host: GitHub
  • Owner: ArashAkbarinia
  • License: gpl-3.0
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 446 KB
Statistics
  • Stars: 1
  • Watchers: 3
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Topics
cognitive-neuroscience deep-learning deep-neural-networks explainable-ai human-machine-behavior linear-classifier linear-probing sensitivity-analysis vision-models
Created over 3 years ago · Last pushed over 1 year ago
Metadata Files
Readme License

README.md

DeepTHS

A framework to compute threshold sensitivity of deep networks to visual stimuli.

I have converted this project into a pip package called osculari. Please use the osculari package instead of this repository.

This repository provides an easy interface for training a linear classifier on top of features extracted from pretrained networks implemented in PyTorch. It includes:

  • Most models and pretrained networks from PyTorch's official website.
  • CLIP language-vision model.
  • Taskonomy networks.
  • Different architectures (e.g., CNN and ViT).
  • Different tasks (e.g., classification and segmentation).
  • Training/testing routines for 2AFC and 4AFC tasks.

Examples

2AFC Task

Creating a model

Let's create a linear classifier to perform a binary-classification 2AFC (two-alternative forced choice) task. This is easily achieved by inheriting from readout.ClassifierNet.

```python
import torch

from deepths.models import readout


class FeatureDiscrimination2AFC(readout.ClassifierNet):
    def __init__(self, classifier_kwargs, readout_kwargs):
        super(FeatureDiscrimination2AFC, self).__init__(
            input_nodes=2, num_classes=2, **classifier_kwargs, **readout_kwargs
        )

    def forward(self, x0, x1):
        x0 = self.do_features(x0)
        x1 = self.do_features(x1)
        x = torch.cat([x0, x1], dim=1)
        return self.do_classifier(x)
```

The parameter input_nodes specifies the number of images we extract features from. The parameter num_classes specifies the number of outputs of the linear classifier.
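To see why this wiring yields a two-way decision, here is a minimal pure-Python sketch of the same idea (illustrative names only, not the deepths API): features extracted from two images are concatenated and mapped by a single linear layer to two logits.

```python
def linear_2afc(features0, features1, weights, bias):
    """Toy 2AFC readout: concatenate two feature vectors (input_nodes=2)
    and apply one linear map with two outputs (num_classes=2)."""
    x = features0 + features1  # list concatenation stands in for torch.cat
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

logits = linear_2afc(
    [1.0, 2.0], [3.0, 4.0],
    weights=[[0.1, 0.0, 0.0, 0.0],   # logit for alternative 0
             [0.0, 0.0, 0.0, 0.1]],  # logit for alternative 1
    bias=[0.0, 0.0],
)
print(logits)  # [0.1, 0.4] -> the second alternative wins
```

In the real model the two feature vectors come from the shared pretrained backbone, so only the final linear map is trained.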

Instantiating

Let's use ResNet50 as our pretrained network and extract features from the layer area0.

```python
net_name = 'resnet50'
weights = 'resnet50'
target_size = 224
readout_kwargs = {
    'architecture': net_name,
    'target_size': target_size,
    'transfer_weights': [weights, 'area0'],
}
classifier_kwargs = {
    'classifier': 'nn',
    'pooling': None,
}
net = FeatureDiscrimination2AFC(classifier_kwargs, readout_kwargs)
```

The variable readout_kwargs specifies the details of the pretrained network:

  • architecture is the network's architecture (e.g., ResNet50 or ViT-B32).
  • transfer_weights defines the weights of the pretrained network and the layer(s) to use:
    • The first index must be either a path to the pretrained weights or PyTorch-supported weights (in this example we are using the default PyTorch weights of ResNet50).
    • The second index is the read-out (cut-off) layer. In this example, we extract features from area0.

The variable classifier_kwargs specifies the details of the linear classifier:

  • classifier specifies the type of linear classifier. A neural network (NN) is fully supported, with partial support for SVM.
  • pooling specifies whether to perform pooling over extracted features (without any new weights to learn). This is useful to reduce the dimensionality of the extracted features.

Let's print our network:

```
print(net)

FeatureDiscrimination2AFC(
  (backbone): Sequential(
    (0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace=True)
    (3): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  )
  (fc): Linear(in_features=401408, out_features=2, bias=True)
)
```

We can see that it consists of two modules: backbone and fc, corresponding to the pretrained network and the linear classifier, respectively.

Pooling

From the print above, we can observe that the dimensionality of the input to the linear classifier is too large (a vector of 401408 elements). It might be of interest to reduce this by means of pooling operations. For instance, we can make an instance of FeatureDiscrimination2AFC with 'pooling': 'avg_2_2' (i.e., average pooling over a 2-by-2 window). In the new instance the input to the linear layer is only 512 elements.
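The arithmetic behind these sizes can be checked directly (a sketch, assuming area0 outputs 64 channels at a 56-by-56 resolution for a 224-by-224 input, and that the 2AFC head concatenates features from two images):

```python
# Assumed shapes: ResNet50's 'area0' -> 64 channels at 56x56 for a
# 224x224 input; the 2AFC head concatenates features of two images.
channels, height, width, n_images = 64, 56, 56, 2

flat = channels * height * width * n_images
print(flat)    # 401408 -> in_features of fc without pooling

# 'avg_2_2' pools every channel down to a 2-by-2 window
pooled = channels * 2 * 2 * n_images
print(pooled)  # 512 -> in_features of fc with pooling
```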

```
FeatureDiscrimination2AFC(
  (backbone): Sequential(
    (0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (2): ReLU(inplace=True)
    (3): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  )
  (pool_avg): AdaptiveAvgPool2d(output_size=(2, 2))
  (fc): Linear(in_features=512, out_features=2, bias=True)
)
```

  • To use max pooling: 'pooling': 'max_2_2'.
  • To pool over a different window: e.g., 'pooling': 'max_5_3' pools over a 5-by-3 window.
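The pooling strings follow a `<type>_<rows>_<cols>` pattern; a hypothetical parser (not the deepths API) makes the convention explicit:

```python
def parse_pooling(spec):
    """Hypothetical helper: split a pooling spec such as 'avg_2_2' or
    'max_5_3' into its pooling type and window size."""
    pool_type, rows, cols = spec.split('_')
    if pool_type not in ('avg', 'max'):
        raise ValueError(f'unknown pooling type: {pool_type}')
    return pool_type, (int(rows), int(cols))

print(parse_pooling('avg_2_2'))  # ('avg', (2, 2))
print(parse_pooling('max_5_3'))  # ('max', (5, 3))
```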

Script

Contrast Sensitivity Function (CSF)

We have used this repository to measure the networks' CSF.

To train a contrast discriminator linear classifier:

```shell
python csf_train.py -aname $MODEL --transfer_weights $WEIGHTS $LAYER \
    --target_size 224 --classifier "nn" \
    -dname $DB --data_dir $DATA_DIR --train_samples 15000 --val_sample 1000 \
    --contrast_space "rgb" --colour_space "imagenet_rgb" --vision_type "trichromat" \
    -b 64 --experiment_name $EXPERIMENT_NAME --output_dir $OUT_DIR \
    -j 4 --gpu 0 --epochs 10
```

To measure the CSFs, $CONTRAST_SPACE can be one of "lum_ycc", "rg_ycc", or "yb_ycc", corresponding to the luminance, red-green, and yellow-blue channels:

```shell
python csf_test.py -aname $MODEL_PATH --contrast_space $CONTRAST_SPACE \
    --target_size 224 --classifier "nn" --mask_image "fixed_cycle" \
    --experiment_name $EXPERIMENT_NAME \
    --colour_space "imagenet_rgb" --vision_type "trichromat" \
    --print_freq 1000 --output_dir $OUT_DIR --gpu 0
```
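To sweep all three chromatic channels, the test command can be wrapped in a small loop (a sketch; the python invocation is shown commented out, with the same placeholder variables as above):

```shell
# Sketch: run the CSF test once per chromatic channel.
for CONTRAST_SPACE in lum_ycc rg_ycc yb_ycc; do
    echo "Measuring CSF for contrast space: $CONTRAST_SPACE"
    # python csf_test.py -aname $MODEL_PATH --contrast_space $CONTRAST_SPACE \
    #     ... (remaining flags as in the command above)
done
```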

Colour Discrimination

We have used this repository to measure the networks' colour discrimination.

To train a colour discriminator linear classifier:

```shell
python colour_discrimination.py -aname $MODEL --transfer_weights $WEIGHTS $LAYER \
    --target_size 224 --classifier "nn" \
    -dname $DB --data_dir $DATA_DIR --train_samples 15000 --val_sample 1000 \
    --colour_space "imagenet_rgb" \
    -b 64 --experiment_name $EXPERIMENT_NAME --output_dir $OUT_DIR \
    -j 4 --gpu 0 --epochs 10
```

To measure the sensitivity threshold, $TEST_FILE must be passed:

```shell
python colour_discrimination.py -aname $MODEL --test_net $MODEL_PATH \
    --target_size 224 --classifier "nn" \
    -dname $DB --data_dir $DATA_DIR \
    --test_file $TEST_FILE --background 128 \
    --colour_space "imagenet_rgb" \
    -b 64 --experiment_name $TEST_NAME --output_dir $OUT_DIR \
    -j 4 --gpu 0
```

Colour Categories

We have used this repository to measure the networks' colour categories.

```shell
python colour_cat_odd4.py -aname $MODEL --test_net $MODEL_PATH \
    --target_size 224 --classifier "nn" \
    -dname $DB --data_dir $DATA_DIR \
    --test_file $TEST_FILE --focal_file $FOCAL_FILE --background 128 \
    --colour_space "imagenet_rgb" \
    -b 64 --experiment_name $TEST_NAME --output_dir $OUT_DIR \
    -j 4 --gpu 0
```

Owner

  • Name: Arash Akbarinia
  • Login: ArashAkbarinia
  • Kind: user

GitHub Events

Total
  • Watch event: 1
Last Year
  • Watch event: 1