Science Score: 54.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file (found)
- ✓ codemeta.json file (found)
- ✓ .zenodo.json file (found)
- ○ DOI references
- ✓ Academic publication links (links to arxiv.org)
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity, 8.7%, to scientific vocabulary)
Repository
Fork of CVHub520/X-AnyLabeling
Basic Info
- Host: GitHub
- Owner: JacksonH02
- License: gpl-3.0
- Language: Python
- Default Branch: main
- Size: 14 MB
Statistics
- Stars: 1
- Watchers: 1
- Forks: 1
- Open Issues: 0
- Releases: 0
Metadata Files
README.md

📄 Table of Contents
🥳 What's New ⏏️
- Nov. 2023:
- 🤗🤗🤗 Release the latest version 2.0.0.
- 🔥🔥🔥 Added support for Grounding-SAM, combining GroundingDINO with HQ-SAM to achieve sota zero-shot high-quality predictions!
- 🚀🚀🚀 Enhanced support for HQ-SAM model to achieve high-quality mask predictions.
- 🙌🙌🙌 Support the PersonAttribute and VehicleAttribute models for the multi-label classification task.
- 🆕🆕🆕 Introducing a new multi-label attribute annotation functionality.
- Release the latest version 1.1.0.
- Support pose estimation: YOLOv8-Pose.
- Support object-level tag with yolov5_ram.
- Add a new feature enabling batch labeling for arbitrary unknown categories based on Grounding-DINO.
- Oct. 2023:
- Release the latest version 1.0.0.
- Add a new feature for rotation box.
- Support YOLOv5-OBB with DroneVehicle and DOTA-v1.0/v1.5/v2.0 models.
- SOTA Zero-Shot Object Detection - GroundingDINO is released.
- SOTA Image Tagging Model - Recognize Anything is released.
- Support YOLOv5-SAM and YOLOv8-EfficientViT_SAM union task.
- Support YOLOv5 and YOLOv8 segmentation task.
- Release Gold-YOLO and DAMO-YOLO models.
- Release MOT algorithms: OC_Sort (CVPR'23).
- Add a new feature for small object detection using SAHI.
- Sep. 2023:
- Aug. 2023:
- Jul. 2023:
- Add label_converter.py script.
- Release RT-DETR model.
- Jun. 2023:
- Release YOLO-NAS model.
- Support instance segmentation: YOLOv8-seg.
- Add README_zh-CN.md of X-AnyLabeling.
- May. 2023:
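The SAHI-based small-object feature listed above works by slicing a large image into overlapping tiles, running the detector on each tile, and merging the results. A minimal, hypothetical sketch of the tiling step (not SAHI's or this tool's actual code; names are illustrative):

```python
def tile_starts(length, tile, step):
    """Start offsets so that tiles of size `tile` cover [0, length)."""
    starts = list(range(0, max(length - tile, 0) + 1, step))
    if starts[-1] + tile < length:  # ensure the far edge is covered
        starts.append(length - tile)
    return starts

def slice_boxes(img_w, img_h, tile=512, overlap=0.2):
    """Overlapping tile boxes (x1, y1, x2, y2) for sliced inference."""
    step = max(1, int(tile * (1 - overlap)))
    return [(x, y, min(x + tile, img_w), min(y + tile, img_h))
            for y in tile_starts(img_h, tile, step)
            for x in tile_starts(img_w, tile, step)]

print(slice_boxes(1024, 512))  # three tiles across, one row down
```

Detections from each tile are then shifted back by the tile's offset and deduplicated (e.g. with NMS) to form the full-image result.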
👋 Brief Introduction ⏏️
X-AnyLabeling is an exceptional annotation tool that draws inspiration from renowned projects like LabelImg, roLabelImg, Labelme, and Anylabeling. It transcends the realm of ordinary annotation tools, representing a significant stride into the future of automated data annotation. This cutting-edge tool not only simplifies the annotation process but also seamlessly integrates state-of-the-art AI models to deliver superior results. With a strong focus on practical applications, X-AnyLabeling is purpose-built to provide developers with an industrial-grade, feature-rich solution for automating annotation and data processing across a wide range of complex tasks.
🔥 Highlight ⏏️
🗝️Key Features
- Support for importing `images` and `videos`.
- `CPU` and `GPU` inference support with on-demand selection.
- Compatibility with multiple SOTA deep-learning algorithms.
- Single-frame prediction and `one-click` processing for all images.
- Export options for formats like `COCO-JSON`, `VOC-XML`, `YOLOv5-TXT`, `DOTA-TXT` and `MOT-CSV`.
- Integration with popular frameworks such as PaddlePaddle, OpenMMLab, timm, and others.
- Providing comprehensive `help documentation` along with active `developer community support`.
- Accommodation of various visual tasks such as `detection`, `segmentation`, `face recognition`, and so on.
- Modular design that empowers users to compile the system according to their specific needs and supports customization and further development.
- Image annotation capabilities for `polygons`, `rectangles`, `rotation`, `circles`, `lines`, and `points`, as well as `text detection`, `recognition`, and `KIE` annotations.
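As a concrete example of the export formats above, a `YOLOv5-TXT` label file stores one `class cx cy w h` line per box, with center and size normalized by the image dimensions. A minimal sketch of that conversion (illustrative only, not the tool's actual exporter):

```python
def rect_to_yolo_line(class_id, x1, y1, x2, y2, img_w, img_h):
    """One YOLOv5-TXT line: 'class cx cy w h', normalized to [0, 1]."""
    cx = (x1 + x2) / 2 / img_w
    cy = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

print(rect_to_yolo_line(0, 100, 200, 300, 400, 640, 480))
# → 0 0.312500 0.625000 0.312500 0.416667
```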
⛏️Model Zoo
| **2D Lane Detection** | **OCR** | **MOT** | **Instance Segmentation** |
| **Image Tagging** | **Grounding DINO** | **Recognition** | **Rotation** |
| **[SAM](https://segment-anything.com/)** | **BC-SAM** | **Skin-SAM** | **Polyp-SAM** |

*(Model preview images omitted.)*
For more details, please refer to [models_list](./docs/models_list.md).
📖 Tutorials ⏏️
🔜Quick Start
Download and run the GUI version directly from Release or Baidu Disk.
Note:
- For macOS: after installation, go to the Applications folder, right-click the application, and choose Open. From the second time onwards, you can open it normally from Launchpad.
- Due to the lack of necessary hardware, executables are currently only provided for `Windows` and `Linux`. If you require executables for other operating systems, e.g. `macOS`, please refer to the self-compilation steps below.
- To obtain more stable performance and feature support, it is strongly recommended to build from source code.
👨🏼💻Build from source
- Install the required libraries:

```bash
pip install -r requirements.txt
```

If you need to use GPU inference, install the corresponding `requirements-gpu.txt` file and download the appropriate version of `onnxruntime-gpu` based on your local CUDA and cuDNN versions. For more details, refer to the FAQ.

- Generate resources [optional]:

```bash
pyrcc5 -o anylabeling/resources/resources.py anylabeling/resources/resources.qrc
```

- Run the application:

```bash
python anylabeling/app.py
```
📦Build executable
```bash
# Windows-CPU
bash scripts/build_executable.sh win-cpu

# Windows-GPU
bash scripts/build_executable.sh win-gpu

# Linux-CPU
bash scripts/build_executable.sh linux-cpu

# Linux-GPU
bash scripts/build_executable.sh linux-gpu
```
Note:
1. Before compiling, modify the `__preferred_device__` parameter in the `anylabeling/app_info.py` file according to the appropriate GPU/CPU version.
2. If you need to compile the GPU version, install the corresponding environment using `pip install -r requirements-gpu*.txt`. Specifically, manually modify the `datas` list parameters in the `anylabeling-*-gpu.spec` file to include the relevant dynamic libraries (`*.dll` or `*.so`) of your local onnxruntime-gpu. Additionally, when downloading the onnxruntime-gpu package, ensure compatibility with your CUDA version; refer to the official [documentation](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html) for the compatibility table.
3. For macOS versions, you can make modifications by referring to the `anylabeling-win-*.spec` script.

📋 Usage ⏏️
📌Basic usage
- Build and launch using the instructions above.
- Click `Change Output Dir` in `Menu/File` to specify an output directory; otherwise, annotations are saved by default in the current image path.
- Click `Open`/`Open Dir`/`Open Video` to select a specific file, folder, or video.
- Click the `Start drawing xxx` button on the left-hand toolbar or the `Auto Labeling` control to initiate annotation.
- Click and release the left mouse button to select a region and annotate the rect box. Alternatively, press the "Run (i)" key for one-click processing.

Note: The annotation will be saved to the folder you specify, and you can refer to the hotkeys below to speed up your workflow.
🚀Advanced usage
- Select the `Auto Labeling` button on the left side or press the shortcut key "Ctrl + A" to activate auto labeling.
- Select one of the `Segment Anything-like` models from the `Model` dropdown menu, where `Quant` indicates the quantization of the model.
- Use the `Auto segmentation marking tools` to mark the object.
- +Point: Add a point that belongs to the object.
- -Point: Remove a point that you want to exclude from the object.
- +Rect: Draw a rectangle that contains the object. Segment Anything will automatically segment the object.
- Clear: Clear all auto segmentation markings.
- Finish Object (f): Finish the current marking. After finishing the object, you can enter the label name and save the object.
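Under the hood, these markings correspond to the point-prompt encoding used by Segment Anything's ONNX interface, where foreground points carry label 1, background points label 0, and the two box corners labels 2 and 3. A hypothetical sketch of that encoding (function name is illustrative, not X-AnyLabeling's actual code):

```python
def build_sam_prompt(pos_points, neg_points, box=None):
    """Encode clicks as SAM-style prompts: label 1 for '+Point',
    0 for '-Point', and 2/3 for the '+Rect' top-left/bottom-right corners."""
    points = list(pos_points) + list(neg_points)
    labels = [1] * len(pos_points) + [0] * len(neg_points)
    if box is not None:
        x1, y1, x2, y2 = box
        points += [(x1, y1), (x2, y2)]
        labels += [2, 3]
    return points, labels

points, labels = build_sam_prompt([(120, 80)], [(40, 40)], box=(10, 10, 200, 150))
print(labels)  # → [1, 0, 2, 3]
```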
📜Docs
🧷Hotkeys
| Shortcut | Function |
|----------|----------|
| d | Open next file |
| a | Open previous file |
| p | Create polygon |
| o | Create rotation |
| r | Create rectangle |
| i | Run model |
| + | `+point` of SAM mode |
| - | `-point` of SAM mode |
| g | Group selected shapes |
| u | Ungroup selected shapes |
| Ctrl + q | Quit |
| Ctrl + i | Open image file |
| Ctrl + o | Open video file |
| Ctrl + u | Load all images from a directory |
| Ctrl + e | Edit label |
| Ctrl + j | Edit polygon |
| Ctrl + d | Duplicate polygon |
| Ctrl + p | Toggle keep previous mode |
| Ctrl + y | Toggle auto use last label |
| Ctrl + m | Run all images at once |
| Ctrl + a | Enable auto annotation |
| Ctrl + s | Save current information |
| Ctrl + Shift + s | Change output directory |
| Ctrl + - | Zoom out |
| Ctrl + 0 | Zoom to original |
| Ctrl + +, Ctrl + = | Zoom in |
| Ctrl + f | Fit window |
| Ctrl + Shift + f | Fit width |
| Ctrl + z | Undo the last operation |
| Ctrl + Delete | Delete file |
| Delete | Delete polygon |
| Esc | Cancel the selected object |
| Backspace | Remove selected point |
| ↑→↓← | Keyboard arrows to move selected object |
| z, x, c, v | Keyboard keys to rotate selected rect box |

📧 Contact ⏏️
🤗 Enjoying this project? Please give it a star! 🤗
If you find this project helpful or interesting, consider starring it to show your support, and if you have any questions or encounter any issues while using this project, feel free to reach out for assistance using the following methods:
- Create an issue
- Email: cv_hub@163.com
- WeChat: `ww10874` (please include `X-AnyLabeling + a brief description of the issue` in your message)
✅ License ⏏️
This project is released under the GPL-3.0 license.
🏷️ Citing ⏏️
BibTeX
If you use this software in your research, please cite it as below:
@misc{X-AnyLabeling,
year = {2023},
author = {Wei Wang},
publisher = {Github},
organization = {CVHub},
journal = {Github repository},
title = {Advanced Auto Labeling Solution with Added Features},
howpublished = {\url{https://github.com/CVHub520/X-AnyLabeling}}
}
Owner
- Login: JacksonH02
- Kind: user
- Repositories: 2
- Profile: https://github.com/JacksonH02
Citation (CITATION.cff)
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors: "CVHub"
title: "Advanced Auto Labeling Solution with Added Features"
url: "https://github.com/CVHub520/X-AnyLabeling"
license: GPL-3
GitHub Events
Issues and Pull Requests
Last synced: almost 2 years ago
All Time
- Total issues: 0
- Total pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Total issue authors: 0
- Total pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- build * development
- pyinstaller * development
- twine * development
- build * development
- pyinstaller * development
- twine * development
- PyQt5 ==5.15.7
- PyYAML ==6.0
- filterpy *
- imgviz ==1.5.0
- lap ==0.4.0
- natsort ==8.1.0
- onnx ==1.13.1
- onnxruntime-gpu >=1.16.0
- opencv-contrib-python-headless ==4.7.0.72
- pyclipper *
- qimage2ndarray ==1.10.0
- scipy *
- shapely *
- termcolor ==1.1.0
- tokenizers *
- tqdm *
- build * development
- twine * development
- PyYAML ==6.0
- filterpy *
- imgviz ==1.5.0
- lap ==0.4.0
- natsort ==8.1.0
- onnx ==1.13.1
- onnxruntime >=1.16.0
- opencv-contrib-python-headless ==4.7.0.72
- pyclipper *
- qimage2ndarray ==1.10.0
- scipy *
- shapely *
- termcolor ==1.1.0
- tokenizers *
- tqdm *
- PyQt5 ==5.15.7
- PyYAML ==6.0
- filterpy *
- imgviz ==1.5.0
- lap ==0.4.0
- natsort ==8.1.0
- onnx ==1.13.1
- onnxruntime >=1.16.0
- opencv-contrib-python-headless ==4.7.0.72
- pyclipper *
- qimage2ndarray ==1.10.0
- scipy *
- shapely *
- termcolor ==1.1.0
- tokenizers *
- tqdm *
- Pillow >=2.8
- PyQt5 >=5.15.7
- PyYAML *
- filterpy *
- imgviz >=0.11
- lap ==0.4.0
- natsort >=7.1.0
- numpy *
- onnx ==1.13.1
- opencv-python-headless *
- pyclipper *
- qimage2ndarray ==1.10.0
- scipy *
- shapely *
- termcolor *
- tokenizers *
- tqdm *
