https://github.com/chris10m/real-time-semantic-segmentation
Science Score: 10.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ○ codemeta.json file
- ○ .zenodo.json file
- ○ DOI references
- ✓ Academic publication links (links to: arxiv.org)
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity, 8.9%, to scientific vocabulary)
Repository
Basic Info
Statistics
- Stars: 7
- Watchers: 2
- Forks: 1
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
MobileNetV3 real-time semantic segmentation
This repository contains the implementation of a dual-path network with a MobileNetV3-Small backbone. I have used a PSP module as the context aggregation block.
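For orientation, a PSP-style context aggregation block pools the feature map at several scales, reduces channels, upsamples, and concatenates. The sketch below is a minimal illustration in PyTorch; the class name, channel counts, and bin sizes are assumptions, not the repository's actual code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PSPModule(nn.Module):
    """Pyramid Scene Parsing context block: pool at several bin sizes,
    reduce channels with 1x1 convs, upsample, and concatenate."""
    def __init__(self, in_ch, out_ch, bins=(1, 2, 3, 6)):
        super().__init__()
        reduce_ch = in_ch // len(bins)
        self.stages = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(b),
                nn.Conv2d(in_ch, reduce_ch, 1, bias=False),
                nn.BatchNorm2d(reduce_ch),
                nn.ReLU(inplace=True),
            ) for b in bins
        ])
        # fuse the original features with all pooled branches
        self.bottleneck = nn.Conv2d(in_ch + reduce_ch * len(bins), out_ch, 1)

    def forward(self, x):
        h, w = x.shape[2:]
        pyramids = [x] + [
            F.interpolate(stage(x), size=(h, w), mode="bilinear",
                          align_corners=False)
            for stage in self.stages
        ]
        return self.bottleneck(torch.cat(pyramids, dim=1))
```

The multi-scale pooling is what gives the block its large receptive field at low cost, which suits a small backbone like MobileNetV3-Small.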
Requirements
The Cityscapes dataset, which can be downloaded here.
NOTE: The code has been tested on Ubuntu 18.04, and requirements.txt contains all the necessary packages.
Usage
Train
To train the model, run train.py:
python3 train.py --root Cityscapes_root_directory --model_path optional_param
The --model_path argument is optional; pass a saved checkpoint to resume training from it.
Evaluate
The trainer also evaluates the model at every save and logs the results, but if evaluation needs to be done for a particular model, run evaluate.py:
python3 evaluate.py --root Cityscapes_root_directory --model_path saved_model_path_to_evaluate
Evaluate Server
The evaluate_server.py script evaluates the model and stores the segmentation masks in a cityscapes_results folder created in the root path of the script. This is used for submitting the results to the Cityscapes evaluation server.
python3 evaluate_server.py --root Cityscapes_root_directory --model_path saved_model_path_to_evaluate
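The Cityscapes evaluation server expects PNG masks encoded with the dataset's label IDs rather than the train IDs a model typically predicts. A hypothetical helper illustrating that remapping (not code from this repository; the ID table follows cityscapesScripts' labels):

```python
import numpy as np
from PIL import Image

# Cityscapes train IDs -> label IDs expected by the evaluation server
# (the standard 19-class mapping from cityscapesScripts)
TRAIN_ID_TO_LABEL_ID = {0: 7, 1: 8, 2: 11, 3: 12, 4: 13, 5: 17, 6: 19,
                        7: 20, 8: 21, 9: 22, 10: 23, 11: 24, 12: 25,
                        13: 26, 14: 27, 15: 28, 16: 31, 17: 32, 18: 33}

def save_submission_mask(train_id_mask, path):
    """Map predicted train IDs to Cityscapes label IDs and save as PNG."""
    out = np.zeros_like(train_id_mask, dtype=np.uint8)
    for train_id, label_id in TRAIN_ID_TO_LABEL_ID.items():
        out[train_id_mask == train_id] = label_id
    Image.fromarray(out).save(path)
```

Pixels with no mapped train ID (e.g. ignore regions) end up as label ID 0, which the server treats as unlabeled.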
Demo
To visualize the results, run demo.py:
python3 demo.py --root Cityscapes_root_directory --model_path saved_model_path_to_run_demo
Demo Single Image
To run inference on a single image, run demo_single.py. Inference can be run on any image given by --img_path.
python3 demo_single.py --model_path saved_model_path_to_run_demo --img_path optional_param
The --img_path argument is optional and defaults to images/demo.png.
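Under the hood, single-image semantic segmentation inference usually amounts to one forward pass plus an argmax over the class dimension. A minimal sketch (the model interface here is an assumption, not taken from the repository):

```python
import torch

@torch.no_grad()
def predict_mask(model, image_tensor):
    """Run one (C, H, W) image through a segmentation model and
    return the (H, W) map of predicted class indices."""
    model.eval()  # disable dropout / use BatchNorm running stats
    logits = model(image_tensor.unsqueeze(0))  # (1, num_classes, H, W)
    return logits.argmax(dim=1).squeeze(0)     # (H, W) class indices
```

The returned index map can then be colorized with the Cityscapes palette for display.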
Result
The FPS metrics were measured on an RTX 2070, and evaluation was done with single-scale input images.
- Cityscapes
| Config | Params(M) | RES | FLOPS (G) | FP32(fps) | FP16(fps)| train-split | mIoU - val | mIoU - test | model |
| :-------: | :--: | :----: | :----: | :---: | :-------:| :------: | :------: | :------: | :------: |
| MV3-Small + PSP + FFM | 1.74 |2048x1024 | 11.63 | 40.85 | 54.50 | train | 0.662 | 0.6388 | file (6.86MB) |
| MV3-Small + PSP + FFM | 1.74 |1024x512 | 2.91 | 78.79 | 71.74 | train | 0.615 | - | file (6.86MB) |
| MV3-Small + PSP + FFM | 1.74 |2048x1024 | 11.63 | 40.85 | 54.50 | train + val | 0.717 | 0.6559 | file (6.86MB) |
| MV3-Small + PSP + FFM | 1.74 |1024x512 | 2.91 | 78.79 | 71.74 | train + val | 0.646 | - | file (6.86MB) |
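The FPS figures in the table translate directly to per-frame latency as 1000 / FPS; a small plain-Python helper that just restates the table's numbers:

```python
def latency_ms(fps):
    """Per-frame latency in milliseconds for a given throughput in FPS."""
    return 1000.0 / fps

# e.g. 40.85 FPS (2048x1024, FP32) is about 24.5 ms per frame,
# and 78.79 FPS (1024x512, FP32) is about 12.7 ms per frame.
```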
Note: Params and FLOPS were obtained using torchstat.
To Do
- [ ] Add mobilenetv3 large
- [ ] Improve performance.
- [ ] Add more configurations support.
Owner
- Name: Christen Millerdurai
- Login: Chris10M
- Kind: user
- Repositories: 25
- Profile: https://github.com/Chris10M
PhD & Researcher @ AV DFKI-Kaiserslautern.
Dependencies
- Pillow ==8.2.0
- cityscapesScripts ==2.2.0
- geffnet ==1.0.0
- imutils ==0.5.4
- numpy ==1.19.5
- opencv_contrib_python ==4.5.1.48
- scikit_learn ==0.24.1
- tabulate ==0.8.9
- torch ==1.7.1
- torchstat ==0.0.7
- torchvision ==0.8.2
- tqdm ==4.58.0