lazylabel

An image segmentation GUI that leverages SAM to prepare ML-ready tensors

https://github.com/dnzckn/lazylabel

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (13.1%) to scientific vocabulary

Keywords

computer-vision image-segmentation labeling-tool
Last synced: 6 months ago

Repository

An image segmentation GUI that leverages SAM to prepare ML-ready tensors

Basic Info
Statistics
  • Stars: 1
  • Watchers: 1
  • Forks: 1
  • Open Issues: 0
  • Releases: 11
Topics
computer-vision image-segmentation labeling-tool
Created 8 months ago · Last pushed 6 months ago
Metadata Files
Readme · License · Citation

README.md

LazyLabel


AI-Assisted Image Segmentation for Machine Learning Dataset Preparation

LazyLabel combines Meta's Segment Anything Model (SAM) with comprehensive manual annotation tools to accelerate the creation of pixel-perfect segmentation masks for computer vision applications.

LazyLabel Screenshot

Quick Start

```bash
pip install lazylabel-gui
lazylabel-gui
```

From source:

```bash
git clone https://github.com/dnzckn/LazyLabel.git
cd LazyLabel
pip install -e .
lazylabel-gui
```

Requirements: Python 3.10+, 8GB RAM, ~2.5GB disk space (for model weights)


Core Features

AI-Powered Segmentation

LazyLabel leverages Meta's SAM for intelligent object detection:

- Single-click object segmentation
- Interactive refinement with positive/negative points (a minimal point-prompt sketch follows this list)
- Support for both SAM 1.0 and SAM 2.1 models
- GPU acceleration with automatic CPU fallback
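Under the hood this is driven by the segment-anything package pinned in requirements.txt. A minimal point-prompt sketch using that library's public API, not LazyLabel's internal code; the checkpoint and image paths are placeholders:

```python
import cv2
import numpy as np
import torch
from segment_anything import sam_model_registry, SamPredictor

# Placeholder checkpoint path; LazyLabel downloads SAM 1.0 weights on first use.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
sam.to("cuda" if torch.cuda.is_available() else "cpu")  # GPU with CPU fallback

predictor = SamPredictor(sam)
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One positive click on the object and one negative click on the background.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[420, 310], [50, 60]]),
    point_labels=np.array([1, 0]),  # 1 = include, 0 = exclude
    multimask_output=False,
)
```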

Manual Annotation Tools

When precision matters:

- Polygon drawing with vertex-level editing
- Bounding box annotations for object detection
- Edit mode for adjusting existing segments
- Merge tool for combining related segments

Image Processing & Filtering

Advanced preprocessing capabilities:

- FFT filtering: Remove noise and enhance edges (illustrated after this list)
- Channel thresholding: Isolate objects by color
- Border cropping: Define crop regions that set pixels outside the area to zero in saved outputs
- View adjustments: Brightness, contrast, gamma correction
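As a rough illustration of what FFT filtering does, here is a generic NumPy low-pass example (not LazyLabel's implementation; the cutoff radius is an arbitrary assumption):

```python
import cv2
import numpy as np

# Grayscale keeps the example to a single channel.
img = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Forward FFT, shifting the zero-frequency component to the center.
spectrum = np.fft.fftshift(np.fft.fft2(img))

# Circular low-pass mask: keep frequencies within `radius` of the center.
rows, cols = img.shape
y, x = np.ogrid[:rows, :cols]
radius = 60  # arbitrary cutoff; smaller values smooth more aggressively
keep = (y - rows // 2) ** 2 + (x - cols // 2) ** 2 <= radius ** 2

# Inverse transform of the masked spectrum gives the denoised image.
filtered = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * keep)))
filtered = np.clip(filtered, 0, 255).astype(np.uint8)
```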

Multi-View Mode

Process multiple images efficiently:

- Annotate up to 4 images simultaneously
- Synchronized zoom and pan across views
- Mirror annotations to all linked images


Export Formats

NPZ Format (Semantic Segmentation)

One-hot encoded masks optimized for deep learning:

```python
import numpy as np

data = np.load('image.npz')
mask = data['mask']  # Shape: (height, width, num_classes)

# Each channel represents one class
sky = mask[:, :, 0]
boats = mask[:, :, 1]
cats = mask[:, :, 2]
dogs = mask[:, :, 3]
```
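If a training pipeline expects a single-channel label map instead of one-hot channels, the conversion is a single argmax (a usage sketch; it assumes pixels with no active channel may fall back to class 0):

```python
# Collapse one-hot channels to integer class IDs, shape (height, width).
label_map = np.argmax(mask, axis=-1).astype(np.uint8)
```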

YOLO Format (Object Detection)

Normalized polygon coordinates for YOLO training:

```
0 0.234 0.456 0.289 0.478 0.301 0.523 ...
1 0.567 0.123 0.598 0.145 0.612 0.189 ...
```
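Each line is a class ID followed by x, y pairs normalized to [0, 1]. A minimal parsing sketch (the label filename and image size are placeholders) that scales the points back to pixel coordinates:

```python
import numpy as np

img_h, img_w = 1080, 1920  # placeholder image dimensions

polygons = []
with open("image.txt") as f:  # exported YOLO label file
    for line in f:
        class_id, *coords = line.split()
        points = np.array(coords, dtype=float).reshape(-1, 2)
        points *= (img_w, img_h)  # de-normalize x by width, y by height
        polygons.append((int(class_id), points))
```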

Class Aliases (JSON)

Maintains consistent class naming across datasets:

```json
{
  "0": "background",
  "1": "person",
  "2": "vehicle"
}
```
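A short sketch for looking up those names when reading exported labels (the filename is a placeholder):

```python
import json

with open("class_aliases.json") as f:  # placeholder filename
    aliases = json.load(f)

class_id = 2
print(aliases[str(class_id)])  # "vehicle"
```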


Typical Workflow

  1. Open folder containing your images
  2. Click objects to generate AI masks (mode 1)
  3. Refine with additional points or manual tools
  4. Assign classes and organize in the class table
  5. Export as NPZ or YOLO format

Advanced Preprocessing Workflow

For challenging images:

  1. Apply FFT filtering to reduce noise
  2. Use channel thresholding to isolate color ranges (see the sketch after this list)
  3. Enable "Operate on View" to pass filtered images to SAM
  4. Fine-tune with manual tools
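Channel thresholding amounts to keeping only pixels whose color channels fall within a chosen range. A generic OpenCV sketch, not LazyLabel's implementation; the HSV bounds are arbitrary assumptions for isolating a reddish range:

```python
import cv2
import numpy as np

img = cv2.imread("example.jpg")  # BGR
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Keep pixels whose hue/saturation/value fall inside the chosen range.
lower = np.array([0, 80, 80])     # arbitrary lower bound (H, S, V)
upper = np.array([10, 255, 255])  # arbitrary upper bound
mask = cv2.inRange(hsv, lower, upper)

# Zero out everything outside the range before handing the view to SAM.
isolated = cv2.bitwise_and(img, img, mask=mask)
```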


Advanced Features

Multi-View Mode

Access via the "Multi" tab to process multiple images:

- 2-view (side-by-side) or 4-view (grid) layouts
- Annotations mirror across linked views automatically
- Synchronized zoom maintains alignment

SAM 2.1 Support

LazyLabel supports both SAM 1.0 (default) and SAM 2.1 models. SAM 2.1 offers improved segmentation accuracy and better handling of complex boundaries.

To use SAM 2.1 models:

  1. Install the SAM 2 package: `pip install git+https://github.com/facebookresearch/sam2.git`
  2. Download a SAM 2.1 model (e.g., sam2.1_hiera_large.pt) from the SAM 2 repository
  3. Place the model file in LazyLabel's models folder:
     - If installed via pip: ~/.local/share/lazylabel/models/ (or equivalent on your system)
     - If running from source: src/lazylabel/models/
  4. Select the SAM 2.1 model from the dropdown in LazyLabel's settings

Note: SAM 1.0 models are automatically downloaded on first use.
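To sanity-check that the sam2 package and the downloaded checkpoint work together before selecting the model in LazyLabel, a quick load test modeled on the SAM 2 repository's image-predictor example (the checkpoint path and config name are assumptions; match them to the files you downloaded and the sam2 version you installed):

```python
import os
import torch
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Assumed locations: adjust to wherever you placed the checkpoint, and use
# the config name that matches your checkpoint and sam2 release.
checkpoint = os.path.expanduser("~/.local/share/lazylabel/models/sam2.1_hiera_large.pt")
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"

device = "cuda" if torch.cuda.is_available() else "cpu"
predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint, device=device))
print("Loaded SAM 2.1 checkpoint on", device)
```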


Key Shortcuts

| Action | Key | Description |
|--------|-----|-------------|
| AI Mode | 1 | SAM point-click segmentation |
| Draw Mode | 2 | Manual polygon creation |
| Edit Mode | E | Modify existing segments |
| Accept AI Segment | Space | Confirm AI segment suggestion |
| Save | Enter | Save annotations |
| Merge | M | Combine selected segments |
| Pan Mode | Q | Enter pan mode |
| Pan | WASD | Navigate image |
| Delete | V/Delete | Remove segments |
| Undo/Redo | Ctrl+Z/Y | Action history |


Documentation


Owner

  • Name: Deniz N. Cakan
  • Login: dnzckn
  • Kind: user
  • Location: San Diego
  • Company: @fenning-research-group

Citation (CITATION.cff)

cff-version: 1.2.0
title: >-
  LazyLabel
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: >-
      Deniz N. Cakan
    email: deniz.n.cakan@gmail.com
    orcid: 'https://orcid.org/0000-0001-5177-8654'
url: "https://github.com/dnzckn/LazyLabel"
license: MIT

GitHub Events

Total
  • Release event: 18
  • Watch event: 1
  • Delete event: 25
  • Push event: 115
  • Public event: 1
  • Pull request event: 10
  • Fork event: 1
  • Create event: 18
Last Year
  • Release event: 18
  • Watch event: 1
  • Delete event: 25
  • Push event: 115
  • Public event: 1
  • Pull request event: 10
  • Fork event: 1
  • Create event: 18

Dependencies

requirements.txt pypi
  • PyQt6 ==6.9.1
  • numpy ==2.1.2
  • opencv-python ==4.11.0.86
  • pyqtdarktheme ==2.1.0
  • scipy ==1.15.3
  • segment-anything ==1.0
  • torch ==2.7.1
  • torchvision ==0.22.1