mmsegmentation-macvi
MMSegmentation with LaRS configs and dataloaders
Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references: not found
- ○ Academic publication links: not found
- ○ Academic email domains: not found
- ○ Institutional organization owner: not found
- ○ JOSS paper metadata: not found
- ○ Scientific vocabulary similarity: low similarity (14.2%) to scientific vocabulary
Repository
MMSegmentation with LaRS configs and dataloaders
Basic Info
- Host: GitHub
- Owner: lojzezust
- License: apache-2.0
- Language: Python
- Default Branch: master
- Size: 13.7 MB
Statistics
- Stars: 6
- Watchers: 2
- Forks: 3
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
LaRS Segmentation Starter Kit (MMSegmentation)
This repository is a fork of MMSegmentation 0.x. It provides a starting point for running semantic segmentation experiments on the LaRS dataset:
- Dataloader for LaRS
- Configs for a large number of segmentation methods
- Utilities for training and making predictions on LaRS
This document provides the basic information and steps to run simple training and inference tasks. For more complex use case scenarios, please refer to the official MMSegmentation repository.
Installation
Follow the instructions to install this version of MMSegmentation.
Step 1. Clone the repository:

```shell
git clone https://github.com/lojzezust/mmsegmentation-macvi.git
cd mmsegmentation-macvi
```

Step 2. Create a conda or virtualenv environment and install PyTorch following the official instructions, e.g.

```shell
pip3 install torch torchvision
```

Step 3. Install MMCV using MIM.

```shell
pip install -U openmim
mim install mmcv-full
```

Step 4. Install MMSegmentation (MaCVi) from source.

```shell
# "-v" means verbose (more output).
# "-e" installs the project in editable mode, so any local
# modifications to the code take effect without reinstallation.
pip install -v -e .
```

Step 5. Install additional requirements.

```shell
pip install -r requirements.txt
```

Step 6. Configure paths.

Download the LaRS dataset and update the path in the dataset config (configs/_base_/datasets/lars.py) to point to the location of the LaRS dataset.
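The exact contents of lars.py are not reproduced here, but MMSegmentation 0.x dataset configs conventionally expose a `data_root` variable inside a `data` dict. A hypothetical excerpt showing the field you would edit (the dataset class name and subdirectory layout are illustrative assumptions; check the actual file in the repository):

```python
# Hypothetical excerpt of configs/_base_/datasets/lars.py.
# Field names follow MMSegmentation 0.x conventions; only data_root
# normally needs to change.
dataset_type = 'LaRSDataset'   # assumed dataset class name
data_root = '/path/to/LaRS'    # <- point this at your local LaRS copy

data = dict(
    samples_per_gpu=4,         # batch size per GPU (default in these configs)
    workers_per_gpu=4,
    train=dict(
        type=dataset_type,
        data_root=data_root,
        img_dir='images',      # assumed subdirectory layout
        ann_dir='annotations',
    ),
)
```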
Getting started
Training methods
Use one of the provided training configs to train a method.
```shell
export CUDA_VISIBLE_DEVICES=0,1
python tools/train.py configs/fcn/fcn_r50-d8_512x1024_40k_lars.py
```
By default the configs use a batch size of 4 per GPU. You can change this in the dataset config (configs/_base_/datasets/lars.py).
Running inference
Use the tools/test.py script to run inference on the LaRS test set (for submission to macvi.org).
```shell
CONFIG=configs/fcn/fcn_r50-d8_512x1024_40k_lars.py
WEIGHTS=work_dirs/fcn_r50-d8_512x1024_40k_lars/latest.pth  # Weights path
OUT_DIR=output/fcn_r50-d8_512x1024_40k_lars                # Output dir

export CUDA_VISIBLE_DEVICES=0
python tools/test.py $CONFIG $WEIGHTS --show-dir $OUT_DIR
```
Use the --val flag to run on the validation set instead (for local evaluation).
```shell
python tools/test.py $CONFIG $WEIGHTS --show-dir $OUT_DIR --val
```
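Local evaluation on the validation set typically reports per-class IoU and mIoU, the standard semantic-segmentation metrics. As a refresher, a minimal sketch of per-class IoU over flat label arrays (illustrative only; this is not the repository's evaluation code, and the function name and inputs are assumptions):

```python
def iou_per_class(pred, gt, num_classes):
    """Per-class intersection-over-union for flat lists of class labels."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        # Undefined IoU (class absent in both) is reported as NaN.
        ious.append(inter / union if union else float("nan"))
    return ious

# Toy example with 3 classes (e.g. water / sky / obstacle).
print(iou_per_class([0, 0, 1, 1, 2, 2], [0, 1, 1, 1, 2, 0], 3))
```

mIoU is then simply the mean of the defined per-class values.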
Configs
The following LaRS configs are included in this repository:
| method     | backbone   | config                                                                  |
|------------|------------|-------------------------------------------------------------------------|
| FCN        | ResNet-50  | configs/fcn/fcn_r50-d8_512x1024_40k_lars.py                             |
| FCN        | ResNet-101 | configs/fcn/fcn_r101-d8_512x1024_40k_lars.py                            |
| UNet       | S5         | configs/unet/fcn_unet_s5-d16_4x4_512x1024_160k_lars.py                  |
| DeepLabv3  | ResNet-101 | configs/deeplabv3/deeplabv3_r101-d8_512x1024_40k_lars.py                |
| DeepLabv3+ | ResNet-101 | configs/deeplabv3plus/deeplabv3plus_r101-d8_512x1024_40k_lars.py        |
| BiSeNetv1  | ResNet-50  | configs/bisenetv1/bisenetv1_r50-d32_in1k-pre_4x4_1024x1024_160k_lars.py |
| BiSeNetv2  | -          | configs/bisenetv2/bisenetv2_fcn_4x4_1024x1024_160k_lars.py              |
| STDC 1     | -          | configs/stdc/stdc1_in1k-pre_512x1024_80k_lars.py                        |
| STDC 2     | -          | configs/stdc/stdc2_in1k-pre_512x1024_80k_lars.py                        |
| PointRend  | ResNet-101 | configs/pointrend/pointrend_r101_512x1024_80k_lars.py                   |
| SegFormer  | MiT-B2     | configs/segformer/segformer_mit-b2_8x1_1024x1024_160k_lars.py           |
| Segmenter  | ViT-B      | configs/segmenter/segmenter_vit-b_mask_8x1_512x512_160k_lars.py         |
| KNet       | Swin-T     | configs/knet/knet_s3_upernet_swin-t_8x2_512x512_adamw_80k_lars.py       |
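The config filenames above follow MMSegmentation 0.x naming conventions, roughly `{method}_{backbone/settings}_{crop size}_{iterations}_{dataset}.py`. A heuristic sketch that splits a config name into those parts (the field interpretation is the common MMSegmentation convention, not something documented in this repository):

```python
import re

def parse_config_name(path):
    """Heuristically split an MMSegmentation-style config filename.

    Assumes the common 0.x pattern:
      {method}_{backbone/settings}_{crop size}_{iterations}_{dataset}.py
    Illustrative sketch only, not an official parser.
    """
    name = path.rsplit("/", 1)[-1].removesuffix(".py")
    parts = name.split("_")
    info = {"method": parts[0], "dataset": parts[-1]}
    for p in parts[1:-1]:
        if re.fullmatch(r"\d+x\d+", p):
            info.setdefault("crop_size", p)  # e.g. 512x1024 (may also match GPUxbatch)
        elif re.fullmatch(r"\d+k", p):
            info["iterations"] = p           # e.g. 40k training iterations
    return info

print(parse_config_name("configs/fcn/fcn_r50-d8_512x1024_40k_lars.py"))
```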
Owner
- Name: Lojze Žust
- Login: lojzezust
- Kind: user
- Company: University of Ljubljana
- Repositories: 23
- Profile: https://github.com/lojzezust
Computer Vision and AI Researcher | PhD Student
Citation (CITATION.cff)
    cff-version: 1.2.0
    message: "If you use this software, please cite it as below."
    authors:
      - name: "MMSegmentation Contributors"
    title: "OpenMMLab Semantic Segmentation Toolbox and Benchmark"
    date-released: 2020-07-10
    url: "https://github.com/open-mmlab/mmsegmentation"
    license: Apache-2.0
GitHub Events
Total
- Watch event: 3
- Fork event: 1
Last Year
- Watch event: 3
- Fork event: 1
Dependencies
- actions/checkout v2 composite
- actions/setup-python v2 composite
- codecov/codecov-action v1.0.10 composite
- codecov/codecov-action v2 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
- pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
- pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
- docutils ==0.16.0
- myst-parser *
- sphinx ==4.0.2
- sphinx_copybutton *
- sphinx_markdown_tables *
- mmcls >=0.20.1
- mmcv-full >=1.4.4,<1.7.0
- cityscapesscripts *
- mmcv *
- prettytable *
- scipy *
- torch *
- torchvision *
- matplotlib *
- mmcls >=0.20.1
- numpy *
- opencv-python *
- packaging *
- prettytable *
- scipy *
- codecov * test
- flake8 * test
- interrogate * test
- pytest * test
- xdoctest >=0.10.0 test
- yapf * test