https://github.com/bethgelab/siamese-mask-rcnn

Siamese Mask R-CNN model for one-shot instance segmentation

Science Score: 10.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.9%) to scientific vocabulary

Keywords

few-shot-learning instance-segmentation object-detection one-shot-instance-segmentation one-shot-learning
Last synced: 5 months ago

Repository

Siamese Mask R-CNN model for one-shot instance segmentation

Basic Info
  • Host: GitHub
  • Owner: bethgelab
  • License: other
  • Language: Jupyter Notebook
  • Default Branch: master
  • Homepage:
  • Size: 84.8 MB
Statistics
  • Stars: 348
  • Watchers: 14
  • Forks: 60
  • Open Issues: 10
  • Releases: 0
Topics
few-shot-learning instance-segmentation object-detection one-shot-instance-segmentation one-shot-learning
Created over 7 years ago · Last pushed almost 6 years ago
Metadata Files
Readme License

README.md

Siamese Mask R-CNN

This is the official implementation of Siamese Mask R-CNN from One-Shot Instance Segmentation. It is based on the Mask R-CNN implementation by Matterport.

The repository includes:
- [x] Source code of Siamese Mask R-CNN
- [x] Training code for MS COCO
- [x] Evaluation on MS COCO metrics (AP)
- [x] Training and evaluation of one-shot splits of MS COCO
- [x] Training code to reproduce the results from the paper
- [x] Pre-trained weights for ImageNet
- [x] Pre-trained weights for all models from the paper
- [x] Code to evaluate all models from the paper
- [x] Code to generate result figures

One-Shot Instance Segmentation

One-shot instance segmentation can be summed up as: given a query image and a reference image showing an object of a novel category, we seek to detect and segment all instances of the corresponding category (in the image above, ‘person’ on the left, ‘car’ on the right). Note that no ground truth annotations of reference categories are used during training. This type of visual search task creates new challenges for computer vision algorithms, as methods from metric and few-shot learning have to be incorporated into the notoriously hard tasks of object identification and segmentation. Siamese Mask R-CNN extends Mask R-CNN, a state-of-the-art object detection and segmentation system, with a Siamese backbone and a matching procedure to perform this type of visual search.
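
At a high level, the matching step compares a pooled embedding of the reference image against every spatial location of the query's backbone features. Below is a minimal conceptual sketch of that idea in NumPy; the function name, the mean pooling, and the channel-stacking fusion are illustrative simplifications, not the repository's actual code (the paper fuses the L1 difference with a 1×1 convolution before the detection heads).

```python
import numpy as np

def siamese_match(query_features, reference_features):
    """Conceptual sketch of the Siamese matching step (simplified).

    query_features:     (H, W, C) backbone features of the query image
    reference_features: (h, w, C) backbone features of the reference image
    """
    # Pool the reference into a single C-dimensional embedding.
    ref_embedding = reference_features.mean(axis=(0, 1))      # (C,)
    # L1 distance between each query location and the embedding.
    l1 = np.abs(query_features - ref_embedding)               # (H, W, C)
    # Stack features and distances; the paper instead fuses them
    # with a 1x1 convolution before the RPN and box/mask heads.
    return np.concatenate([query_features, l1], axis=-1)      # (H, W, 2C)
```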

Installation

  1. Clone this repository
  2. Prepare COCO dataset as described below
  3. Run the install_requirements.ipynb notebook to install all relevant dependencies.

Requirements

Linux, Python 3.4+, TensorFlow, Keras 2.1.6, cython, scikit-image 0.13.1, h5py, imgaug and opencv-python
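
A quick way to confirm the pinned libraries are importable at the expected versions is a few print statements; this is a convenience sketch only, since install_requirements.ipynb handles the actual installation.

```python
# Sanity-check the pinned dependency versions (sketch only;
# install_requirements.ipynb performs the real installation).
import keras, skimage, tensorflow

print("tensorflow", tensorflow.__version__)   # >= 1.3.0 expected
print("keras", keras.__version__)             # 2.1.6 expected
print("scikit-image", skimage.__version__)    # 0.13.1 expected
```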

Prepare COCO dataset

The model requires MS COCO and the CocoAPI to be added to /data.

```
cd data
git clone https://github.com/cocodataset/cocoapi.git
```

It is recommended to symlink the dataset root of MS COCO:

```
ln -s $PATH_TO_COCO$/coco coco
```

If unsure, follow the instructions of the Matterport Mask R-CNN implementation.
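
Once the dataset and CocoAPI are in place, a short pycocotools check confirms the annotations load. The annotation path below assumes the standard COCO 2017 layout under data/coco and may differ on your machine.

```python
# Sanity check that the COCO data and CocoAPI are wired up correctly.
from pycocotools.coco import COCO

coco = COCO("data/coco/annotations/instances_train2017.json")
print("images:", len(coco.getImgIds()))
print("categories:", len(coco.getCatIds()))
```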

Get pretrained weights

Get the pretrained weights from the releases menu and save them to /checkpoints.
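
The Matterport code base vendored under lib/Mask_RCNN loads checkpoints with load_weights(..., by_name=True). The sketch below uses that vendored API purely to illustrate the loading pattern; this repository defines its own Siamese model classes, and the checkpoint filename is illustrative (use whichever file you saved to /checkpoints).

```python
# Sketch, assuming the vendored Matterport API under lib/Mask_RCNN;
# the repository's own Siamese model classes may differ.
import sys
sys.path.append("lib/Mask_RCNN")

from mrcnn.config import Config
import mrcnn.model as modellib

class SketchConfig(Config):
    NAME = "sketch"
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1
    NUM_CLASSES = 1 + 80  # background + COCO classes

model = modellib.MaskRCNN(mode="inference", config=SketchConfig(),
                          model_dir="logs")
# Filename is illustrative; use the checkpoint saved to /checkpoints.
model.load_weights("checkpoints/siamese_mrcnn_coco.h5", by_name=True)
```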

Training

To train Siamese Mask R-CNN on MS COCO, simply follow the instructions in the training.ipynb notebook. There are two model configs available: a small one, which runs on a single GPU with 12 GB of memory, and a large one, which needs 4 GPUs with 12 GB of memory each. The second config is the one used in our experiments.
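
In the Matterport convention these setups are expressed as Config subclasses that override a few attributes. A hedged sketch of how the small and large setups might differ follows; the class names and per-GPU batch sizes are illustrative, not the repository's actual values.

```python
# Hypothetical sketch of the two setups, following the Matterport
# convention of subclassing Config; values are illustrative.
from mrcnn.config import Config  # vendored under lib/Mask_RCNN

class SmallConfig(Config):
    NAME = "siamese_small"
    GPU_COUNT = 1          # fits a single 12 GB GPU
    IMAGES_PER_GPU = 1

class LargeConfig(Config):
    NAME = "siamese_large"
    GPU_COUNT = 4          # 4 x 12 GB GPUs, as used in the paper
    IMAGES_PER_GPU = 3     # illustrative per-GPU batch size
```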

To reproduce our results and train the models reported in the paper, run the notebooks provided in experiments. These models need 4 GPUs with 12 GB of memory each.

Our models are trained on the COCO 2017 training set, from which we remove the last 3000 images for validation.
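
The split described above can be sketched in a few lines with pycocotools; ordering by image id is an assumption made for illustration, and the notebooks define the actual split.

```python
# Sketch of the train/val split: hold out the last 3000 images of
# COCO 2017 train for validation (id ordering is an assumption).
from pycocotools.coco import COCO

coco = COCO("data/coco/annotations/instances_train2017.json")
img_ids = sorted(coco.getImgIds())
train_ids, val_ids = img_ids[:-3000], img_ids[-3000:]
print(len(train_ids), "train /", len(val_ids), "val")
```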

Evaluation

To evaluate and visualize a model's results, run the evaluation.ipynb notebook. Make sure to use the same config as was used for training the model.

To evaluate the models reported in the paper, run the evaluation notebook provided in experiments. Each model is evaluated 5 times to compensate for the stochastic effects introduced by randomly choosing the reference instances; the final result is the mean of those five runs.
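
The averaging protocol amounts to running the evaluation five times with different random seeds and reporting the mean. In the sketch below, `evaluate_once` is a hypothetical stand-in for the notebook's evaluation routine, not a function provided by this repository.

```python
import numpy as np

# `evaluate_once` is a hypothetical stand-in for the notebook's
# evaluation routine: one full pass with reference instances drawn
# under the given random seed, returning an AP score.
def evaluate_once(seed):
    raise NotImplementedError  # provided by the evaluation notebook

ap_runs = [evaluate_once(seed=s) for s in range(5)]
print("AP: %.3f +/- %.3f" % (np.mean(ap_runs), np.std(ap_runs)))
```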

We use the COCO 2017 val set for testing and the last 3000 images of the training set for validation.

Model description

Siamese Mask R-CNN is designed as a minimal variation of Mask R-CNN which can perform the visual search task described above. For more details please read the paper.

Citation

If you use this repository or want to reference our work, please cite our paper:

```
@article{michaelis_one-shot_2018,
  title   = {One-Shot Instance Segmentation},
  author  = {Michaelis, Claudio and Ustyuzhaninov, Ivan and Bethge, Matthias and Ecker, Alexander S.},
  year    = {2018},
  journal = {arXiv},
  url     = {http://arxiv.org/abs/1811.11507}
}
```

Owner

  • Name: Bethge Lab
  • Login: bethgelab
  • Kind: organization
  • Location: Tübingen
  • Description: Perceiving Neural Networks

GitHub Events

Total
  • Watch event: 5
Last Year
  • Watch event: 5

Dependencies

lib/Mask_RCNN/requirements.txt pypi
  • IPython *
  • Pillow *
  • cython *
  • h5py *
  • imgaug *
  • keras >=2.0.8
  • matplotlib *
  • numpy *
  • opencv-python *
  • scikit-image *
  • scipy *
  • tensorflow >=1.3.0