led-net

LED-Net: A lightweight and efficient dual-branch convolutional neural network designed to address the challenge of achieving high-performance tree branch and trunk semantic segmentation in resource-constrained mobile device environments.

https://github.com/ly27253/led-net

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.1%) to scientific vocabulary
Last synced: 6 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: ly27253
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Size: 61.9 MB
Statistics
  • Stars: 9
  • Watchers: 1
  • Forks: 0
  • Open Issues: 1
  • Releases: 0
Created about 1 year ago · Last pushed 6 months ago
Metadata Files
Readme Contributing License Code of conduct Citation

README.md

LED-Net

Figure: Comparison of branch segmentation accuracy, computational cost, and parameter size on the orchard dataset. Smaller circle radii indicate fewer parameters; our method achieves an optimal balance among these metrics.

Figure: (a) Overall network framework; (b) semantic segmentation head.

Figure: Qualitative comparison of branch segmentation results on real orchard tree branches. (a) Ground truth; (b) LED-Net; (c) PIDNet; (d) DDRNet; (e) SegNeXt; (f) BiSeNetV2; (g) STDC2; (h) STDC1; (i) HRNet; (j) BiSeNetV1.

1. Environment Setup

This project is built on top of MMSegmentation. To configure the environment, please follow the official installation guide:
- MMSegmentation Installation Guide

2. Dataset Structure

In addition to the configuration files provided for commonly used public datasets such as Cityscapes, ADE20K, COCO, and CamVid, we have developed custom dataset-loading scripts for our own Apple Branch Seg data dataset. The dataset is structured as follows:

Apple Branch Seg data
├── JPEGImages              # Original images of the dataset
├── SegmentationClassPNG    # Corresponding segmentation labels in PNG format
├── train.txt               # File defining the training data split
└── val.txt                 # File defining the validation data split
  • JPEGImages: Directory containing the original images for the dataset.
  • SegmentationClassPNG: Directory containing the segmentation labels in PNG format, corresponding to the original images.
  • train.txt & val.txt: Files that define the data splits for training and validation.
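The split files simply list one image basename per line. As a minimal sketch of how such files can be generated (the directory names match the layout above, but the validation ratio and file extensions are assumptions, not the repository's actual script):

```python
import os
import random

def write_splits(image_dir, out_dir, val_ratio=0.2, seed=0):
    """Write train.txt / val.txt listing image basenames (no extension)."""
    names = sorted(os.path.splitext(f)[0]
                   for f in os.listdir(image_dir)
                   if f.lower().endswith((".jpg", ".jpeg", ".png")))
    random.Random(seed).shuffle(names)          # deterministic shuffle
    n_val = int(len(names) * val_ratio)
    splits = {"val.txt": names[:n_val], "train.txt": names[n_val:]}
    for fname, subset in splits.items():
        with open(os.path.join(out_dir, fname), "w") as f:
            f.write("\n".join(subset) + "\n")
    return splits
```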

To integrate this dataset into your workflow, please update the dataset path and directory structure in the ../configs/_base_/datasets/pascal_voc12.py configuration file. Additionally, ensure that the classes and palette settings in the ../mmseg/datasets/voc.py file are adjusted to reflect the specific classes and image file extensions used in your dataset.
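In MMSegmentation 1.x, a dataset's class names and display colors live in a METAINFO dict on the dataset class. A hedged sketch of what the adjusted entry in ../mmseg/datasets/voc.py might look like for a two-class branch dataset (the class names, colors, and suffixes below are illustrative assumptions, not the repository's actual values):

```python
# Illustrative METAINFO for a binary branch-segmentation dataset.
# The class names and RGB palette are assumed, not the repo's values.
METAINFO = dict(
    classes=("background", "branch"),
    palette=[[0, 0, 0], [0, 255, 0]],  # one RGB triple per class
)

# Image/label suffixes must match the dataset files:
IMG_SUFFIX = ".jpg"       # files in JPEGImages/
SEG_MAP_SUFFIX = ".png"   # files in SegmentationClassPNG/

# Sanity check: every class needs a palette color.
assert len(METAINFO["classes"]) == len(METAINFO["palette"])
```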

Dataset Download:
You can download the Apple Branch Seg data dataset from NWPU-Apple Branch Seg data.

3. Model Training

To train the LED-Net model, use the configuration file located at:
../configs/LED_Net/LEDNet_80k_cityscapes-1024x1024.py

Specify the work directory where logs and model checkpoints will be saved with --work-dir; the default path is ../LEDNet_fordata_11g15.

Adjust additional training parameters according to the provided documentation to suit your specific requirements.
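Training is typically launched with python tools/train.py followed by the config path and --work-dir. Since MMSegmentation configs are plain Python files, recurring overrides can also be kept in a small derived config. The fragment below follows MMSegmentation 1.x conventions; the specific values are illustrative assumptions, not the repository's settings:

```python
# Hypothetical override config (MMSegmentation 1.x style); the values
# here are illustrative, not the repository's actual settings.
_base_ = ['./LEDNet_80k_cityscapes-1024x1024.py']

# Where logs and checkpoints are written (same role as --work-dir).
work_dir = '../LEDNet_fordata_11g15'

# Typical knobs to adjust for a custom dataset:
train_dataloader = dict(batch_size=4)
train_cfg = dict(type='IterBasedTrainLoop', max_iters=80000, val_interval=8000)
optim_wrapper = dict(optimizer=dict(lr=0.01))
```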

4. Model Testing

For testing the trained model, configure the following settings:
- Configuration file: ../configs/LED_Net/LEDNet_80k_cityscapes-1024x1024.py
- Checkpoint file: ../lednet_fordata_11g15/iter_80000.pth

You can download the pretrained model checkpoint iter_80000.pth from the download link provided in the repository.
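The headline metric reported by this kind of evaluation is mean intersection-over-union (mIoU). As a self-contained illustration of how it is computed from a pixel confusion matrix (a plain-Python sketch, not MMSegmentation's implementation):

```python
def miou(confusion):
    """Mean IoU from a square confusion matrix.

    confusion[i][j] = number of pixels with ground-truth class i
    predicted as class j. Per-class IoU = TP / (TP + FP + FN).
    """
    n = len(confusion)
    ious = []
    for c in range(n):
        tp = confusion[c][c]
        fp = sum(confusion[r][c] for r in range(n)) - tp
        fn = sum(confusion[c]) - tp
        denom = tp + fp + fn
        if denom:  # skip classes absent from both GT and prediction
            ious.append(tp / denom)
    return sum(ious) / len(ious)
```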

5. Model Performance (FLOPs) Testing

To evaluate the computational complexity of the model (FLOPs), utilize the get_flops.py script. Please set the appropriate parameters to obtain the required performance metrics.
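For orientation on what get_flops.py measures: the dominant cost in a convolutional network comes from the conv layers, whose operation count follows a closed formula. A small worked example (counting each multiply-accumulate as 2 FLOPs, one common convention; this is not the script itself):

```python
def conv2d_flops(c_in, c_out, k, h_out, w_out, groups=1):
    """FLOPs for one Conv2d layer, counting multiply and add separately.

    Each output element needs (c_in / groups) * k * k multiply-accumulates.
    """
    macs = c_out * h_out * w_out * (c_in // groups) * k * k
    return 2 * macs

# Example: a 3x3 conv, 64 -> 128 channels, on a 256x256 output feature map.
example = conv2d_flops(64, 128, 3, 256, 256)
```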

6. Inference Speed Benchmarking

For benchmarking the inference speed of the trained model, use the benchmark.py script. Adjust the script parameters as needed to assess the model’s efficiency under different conditions.
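Independent of any framework, the essence of such a benchmark is: warm up, then time repeated forward passes. A minimal sketch with a stand-in model (the real benchmark.py feeds actual images through the network; the sleep here merely simulates inference work):

```python
import time

def benchmark(model, inputs, warmup=10, iters=50):
    """Return (average latency in seconds, FPS) for a callable model."""
    for _ in range(warmup):              # warm-up: caches, JIT, GPU clocks
        model(inputs)
    start = time.perf_counter()
    for _ in range(iters):
        model(inputs)
    elapsed = time.perf_counter() - start
    latency = elapsed / iters
    return latency, 1.0 / latency

# Stand-in "model": sleeps ~1 ms per call to simulate inference.
fake_model = lambda x: time.sleep(0.001)
latency, fps = benchmark(fake_model, None, warmup=2, iters=20)
```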

Acknowledgements

We sincerely thank the authors of outstanding open-source projects such as MMSegmentation, DDRNet, PIDNet, and DSNet for the inspiration and technical foundation they provided for our research. Our project builds on these excellent works, and we are grateful to the open-source community for the contributions that allow us to keep advancing and innovating on existing achievements.

Owner

  • Name: Lydai
  • Login: ly27253
  • Kind: user

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - name: "MMSegmentation Contributors"
title: "OpenMMLab Semantic Segmentation Toolbox and Benchmark"
date-released: 2020-07-10
url: "https://github.com/open-mmlab/mmsegmentation"
license: Apache-2.0

GitHub Events

Total
  • Issues event: 1
  • Watch event: 12
  • Delete event: 2
  • Push event: 29
  • Create event: 4
Last Year
  • Issues event: 1
  • Watch event: 12
  • Delete event: 2
  • Push event: 29
  • Create event: 4

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 1
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 1
  • Total pull request authors: 0
  • Average comments per issue: 0.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 1
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 1
  • Pull request authors: 0
  • Average comments per issue: 0.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • GE-123-cpu (1)
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels

Dependencies

.github/workflows/deploy.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
.circleci/docker/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
docker/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
docker/serve/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
requirements/albu.txt pypi
  • albumentations >=0.3.2
requirements/docs.txt pypi
  • docutils ==0.16.0
  • myst-parser *
  • sphinx ==4.0.2
  • sphinx_copybutton *
  • sphinx_markdown_tables *
  • urllib3 <2.0.0
requirements/mminstall.txt pypi
  • mmcv >=2.0.0rc4,<2.2.0
  • mmengine >=0.5.0,<1.0.0
requirements/multimodal.txt pypi
  • ftfy *
  • regex *
requirements/optional.txt pypi
  • cityscapesscripts *
  • diffusers *
  • einops ==0.3.0
  • imageio ==2.9.0
  • imageio-ffmpeg ==0.4.2
  • invisible-watermark *
  • kornia ==0.6
  • nibabel *
  • omegaconf ==2.1.1
  • pudb ==2019.2
  • pytorch-lightning ==1.4.2
  • streamlit >=0.73.1
  • test-tube >=0.7.5
  • timm *
  • torch-fidelity ==0.3.0
  • torchmetrics ==0.6.0
  • transformers ==4.19.2
requirements/readthedocs.txt pypi
  • mmcv >=2.0.0rc1,<2.1.0
  • mmengine >=0.4.0,<1.0.0
  • prettytable *
  • scipy *
  • torch *
  • torchvision *
requirements/runtime.txt pypi
  • matplotlib *
  • numpy *
  • packaging *
  • prettytable *
  • scipy *
requirements/tests.txt pypi
  • codecov * test
  • flake8 * test
  • ftfy * test
  • interrogate * test
  • pytest * test
  • regex * test
  • xdoctest >=0.10.0 test
  • yapf * test
requirements.txt pypi
setup.py pypi