lazylabel
An image segmentation GUI that leverages SAM to prepare ML ready tensors
Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references: not found
- ○ Academic publication links: not found
- ○ Academic email domains: not found
- ○ Institutional organization owner: not found
- ○ JOSS paper metadata: not found
- ○ Scientific vocabulary similarity: low similarity (13.1%) to scientific vocabulary
Keywords
Repository
An image segmentation GUI that leverages SAM to prepare ML ready tensors
Basic Info
- Host: GitHub
- Owner: dnzckn
- License: MIT
- Language: Python
- Default Branch: main
- Homepage: https://pypi.org/project/lazylabel-gui/
- Size: 17.4 MB
Statistics
- Stars: 1
- Watchers: 1
- Forks: 1
- Open Issues: 0
- Releases: 11
Topics
Metadata Files
README.md
LazyLabel
AI-Assisted Image Segmentation for Machine Learning Dataset Preparation
LazyLabel combines Meta's Segment Anything Model (SAM) with comprehensive manual annotation tools to accelerate the creation of pixel-perfect segmentation masks for computer vision applications.
Quick Start
```bash
pip install lazylabel-gui
lazylabel-gui
```
From source:
```bash
git clone https://github.com/dnzckn/LazyLabel.git
cd LazyLabel
pip install -e .
lazylabel-gui
```
Requirements: Python 3.10+, 8GB RAM, ~2.5GB disk space (for model weights)
Core Features
AI-Powered Segmentation
LazyLabel leverages Meta's SAM for intelligent object detection:
- Single-click object segmentation
- Interactive refinement with positive/negative points
- Support for both SAM 1.0 and SAM 2.1 models
- GPU acceleration with automatic CPU fallback
Manual Annotation Tools
When precision matters:
- Polygon drawing with vertex-level editing
- Bounding box annotations for object detection
- Edit mode for adjusting existing segments
- Merge tool for combining related segments
Image Processing & Filtering
Advanced preprocessing capabilities:
- FFT filtering: remove noise and enhance edges
- Channel thresholding: isolate objects by color
- Border cropping: define crop regions that set pixels outside the area to zero in saved outputs
- View adjustments: brightness, contrast, gamma correction
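As an illustration of the FFT-filtering idea (a generic low-pass sketch with NumPy, not LazyLabel's actual implementation), high-frequency noise can be suppressed by zeroing FFT coefficients outside a centered radius:

```python
import numpy as np

def fft_lowpass(image, cutoff=0.1):
    """Suppress high-frequency noise by keeping only FFT coefficients
    within a centered circle; cutoff is a fraction of the image size."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    radius = cutoff * min(h, w)
    keep = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= radius ** 2
    filtered = np.fft.ifft2(np.fft.ifftshift(f * keep))
    return np.real(filtered)

# Synthetic test: a smooth gradient corrupted with Gaussian noise.
rng = np.random.default_rng(0)
clean = np.linspace(0, 1, 64)[None, :].repeat(64, axis=0)
noisy = clean + rng.normal(0, 0.2, clean.shape)
smooth = fft_lowpass(noisy, cutoff=0.15)
```

Low-pass filtering trades fine detail for noise suppression; edge enhancement would instead attenuate the low-frequency band.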
Multi-View Mode
Process multiple images efficiently:
- Annotate up to 4 images simultaneously
- Synchronized zoom and pan across views
- Mirror annotations to all linked images
Export Formats
NPZ Format (Semantic Segmentation)
One-hot encoded masks optimized for deep learning:
```python
import numpy as np

data = np.load('image.npz')
mask = data['mask']  # Shape: (height, width, num_classes)

# Each channel represents one class
sky = mask[:, :, 0]
boats = mask[:, :, 1]
cats = mask[:, :, 2]
dogs = mask[:, :, 3]
```
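Many training pipelines want a single class-index map rather than one-hot channels. A small sketch with toy data (not an actual LazyLabel export) showing the collapse via `argmax`:

```python
import numpy as np

# Toy one-hot mask in the same layout as the NPZ output above:
# 4x4 pixels, 3 classes.
mask = np.zeros((4, 4, 3), dtype=np.uint8)
mask[:2, :, 0] = 1   # class 0 in the top half
mask[2:, :2, 1] = 1  # class 1 in the bottom-left
mask[2:, 2:, 2] = 1  # class 2 in the bottom-right

# Collapse one-hot channels into a single (H, W) class-index map,
# the form that cross-entropy-style segmentation losses expect.
labels = np.argmax(mask, axis=-1)
```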
YOLO Format (Object Detection)
Normalized polygon coordinates for YOLO training:
```
0 0.234 0.456 0.289 0.478 0.301 0.523 ...
1 0.567 0.123 0.598 0.145 0.612 0.189 ...
```
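Each line is a class id followed by normalized x,y polygon vertices. A minimal parser (illustrative helper, not part of LazyLabel) that recovers pixel coordinates:

```python
def parse_yolo_polygon(line, img_w, img_h):
    """Parse one YOLO segmentation line: class id followed by
    normalized x,y pairs; return (class_id, pixel-space points)."""
    parts = line.split()
    class_id = int(parts[0])
    coords = [float(v) for v in parts[1:]]
    # Pair up alternating x and y values and scale to pixel space.
    points = [(x * img_w, y * img_h)
              for x, y in zip(coords[0::2], coords[1::2])]
    return class_id, points

cid, pts = parse_yolo_polygon("0 0.25 0.5 0.75 0.5 0.5 0.9", 640, 480)
```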
Class Aliases (JSON)
Maintains consistent class naming across datasets:
```json
{
  "0": "background",
  "1": "person",
  "2": "vehicle"
}
```
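Loading the alias file lets downstream code map numeric class ids (an NPZ channel index or a YOLO class column) back to stable names. A sketch with the JSON written inline for illustration:

```python
import json

# The alias mapping above, inlined here; LazyLabel writes it to a
# JSON file alongside the exports.
aliases_json = '{"0": "background", "1": "person", "2": "vehicle"}'

# JSON object keys are always strings, so convert them back to ints.
aliases = {int(k): v for k, v in json.loads(aliases_json).items()}

name = aliases[1]
```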
Typical Workflow
- Open folder containing your images
- Click objects to generate AI masks (mode 1)
- Refine with additional points or manual tools
- Assign classes and organize in the class table
- Export as NPZ or YOLO format
Advanced Preprocessing Workflow
For challenging images:
1. Apply FFT filtering to reduce noise
2. Use channel thresholding to isolate color ranges
3. Enable "Operate on View" to pass filtered images to SAM
4. Fine-tune with manual tools
Advanced Features
Multi-View Mode
Access via the "Multi" tab to process multiple images:
- 2-view (side-by-side) or 4-view (grid) layouts
- Annotations mirror across linked views automatically
- Synchronized zoom maintains alignment
SAM 2.1 Support
LazyLabel supports both SAM 1.0 (default) and SAM 2.1 models. SAM 2.1 offers improved segmentation accuracy and better handling of complex boundaries.
To use SAM 2.1 models:
1. Install the SAM 2 package:
```bash
pip install git+https://github.com/facebookresearch/sam2.git
```
2. Download a SAM 2.1 model (e.g., sam2.1_hiera_large.pt) from the SAM 2 repository
3. Place the model file in LazyLabel's models folder:
- If installed via pip: ~/.local/share/lazylabel/models/ (or equivalent on your system)
- If running from source: src/lazylabel/models/
4. Select the SAM 2.1 model from the dropdown in LazyLabel's settings
Note: SAM 1.0 models are automatically downloaded on first use.
Key Shortcuts
| Action | Key | Description |
|--------|-----|-------------|
| AI Mode | 1 | SAM point-click segmentation |
| Draw Mode | 2 | Manual polygon creation |
| Edit Mode | E | Modify existing segments |
| Accept AI Segment | Space | Confirm AI segment suggestion |
| Save | Enter | Save annotations |
| Merge | M | Combine selected segments |
| Pan Mode | Q | Enter pan mode |
| Pan | WASD | Navigate image |
| Delete | V/Delete | Remove segments |
| Undo/Redo | Ctrl+Z/Y | Action history |
Documentation
- Usage Manual - Comprehensive feature guide
- Architecture Guide - Technical implementation details
- GitHub Issues - Report bugs or request features
Owner
- Name: Deniz N. Cakan
- Login: dnzckn
- Kind: user
- Location: San Diego
- Company: @fenning-research-group
- Website: linkedin.com/in/dcakan/
- Repositories: 1
- Profile: https://github.com/dnzckn
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
title: >-
  LazyLabel
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: >-
      Deniz N. Cakan
    email: deniz.n.cakan@gmail.com
    orcid: 'https://orcid.org/0000-0001-5177-8654'
url: "https://github.com/dnzckn/LazyLabel"
license: MIT
```
GitHub Events
Total
- Release event: 18
- Watch event: 1
- Delete event: 25
- Push event: 115
- Public event: 1
- Pull request event: 10
- Fork event: 1
- Create event: 18
Last Year
- Release event: 18
- Watch event: 1
- Delete event: 25
- Push event: 115
- Public event: 1
- Pull request event: 10
- Fork event: 1
- Create event: 18
Dependencies
- PyQt6 ==6.9.1
- numpy ==2.1.2
- opencv-python ==4.11.0.86
- pyqtdarktheme ==2.1.0
- scipy ==1.15.3
- segment-anything ==1.0
- torch ==2.7.1
- torchvision ==0.22.1