Annotate-Lab

Annotate-Lab: Simplifying Image Annotation - Published in JOSS (2024)

https://github.com/sumn2u/annotate-lab

Science Score: 93.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 4 DOI reference(s) in README and JOSS metadata
  • Academic publication links
    Links to: joss.theoj.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
    Published in Journal of Open Source Software

Keywords

annotation-tool classification collaborate computer-vision image-annotation image-labeling image-masking image-segmentation machine-learning python react segmentation

Keywords from Contributors

clade sequences benchmarking distributed parallelism mesh
Last synced: 4 months ago

Repository

Annotate-lab is an open-source image annotation tool for efficient dataset creation. With an intuitive interface and flexible export options, it streamlines your machine learning workflow. 🖼️✏️📑

Basic Info
Statistics
  • Stars: 117
  • Watchers: 4
  • Forks: 25
  • Open Issues: 29
  • Releases: 5
Topics
annotation-tool classification collaborate computer-vision image-annotation image-labeling image-masking image-segmentation machine-learning python react segmentation
Created over 1 year ago · Last pushed 4 months ago
Metadata Files
Readme Contributing License Code of conduct Security

README.md

annotate-lab

Annotate Lab - Simplifying Image Annotation

Annotate Lab is an open-source application designed for image annotation, comprising two main components: the client and the server. The client, a React application, provides the user interface where users perform annotations. The server, a Flask application, persists annotation changes and generates masked and annotated images, along with configuration settings. More information can be found in our documentation.



example

Table of Contents

Project Structure [documentation page]

```sh
annotation-lab/
├── client/
│   ├── public/
│   ├── src/
│   ├── package.json
│   ├── package-lock.json
│   └── ... (other React app files)
├── server/
│   ├── db/
│   ├── tests/
│   ├── venv/
│   ├── app.py
│   ├── requirements.txt
│   └── ... (other Flask app files)
└── README.md
```

Client

  • public/: Static files and the root HTML file.
  • src/: React components and other frontend code.
  • package.json: Contains client dependencies and scripts.

Server

  • db/: Database-related files and handlers.
  • venv/: Python virtual environment (not included in version control).
  • tests/: Contains test files.
  • app.py: Main Flask application file.
  • requirements.txt: Contains server dependencies.

Dependencies [documentation page]

Client

  • React
  • Axios
  • Other dependencies as listed in package.json

Server

  • Flask
  • Pandas
  • Other dependencies as listed in requirements.txt

Setup and Installation [documentation page]

Client Setup

  1. Navigate to the client directory:

     ```sh
     cd client
     ```

  2. Install the dependencies:

     ```sh
     npm install
     ```

Server Setup

  1. Navigate to the server directory:

     ```sh
     cd server
     ```

  2. Create and activate a virtual environment:

     ```sh
     python3 -m venv venv
     source venv/bin/activate  # On Windows use venv\Scripts\activate
     ```

  3. Install the dependencies:

     ```sh
     pip install -r requirements.txt
     ```

Running the Application

Running the Client

  1. Navigate to the client directory:

     ```sh
     cd client
     ```

  2. Start the application:

     ```sh
     npm start
     ```

The application should now be running on http://localhost:5173.

Running the Server

  1. Navigate to the server directory:

     ```sh
     cd server
     ```

  2. Activate the virtual environment:

     ```sh
     source venv/bin/activate  # On Windows use `venv\Scripts\activate`
     ```

  3. Start the Flask application:

     ```sh
     flask run
     ```

The server should now be running on http://localhost:5000.

Running using Docker

Navigate to the root directory and run the following commands to start the application:

```sh
docker-compose build
docker-compose up -d  # run in detached mode
```

The application should be running on http://localhost.

Running Tests [documentation page]

Client Tests

The client tests are located in the client/src directory and utilize .test.js extensions. They are built using Jest and React Testing Library.

Install Dependencies:

```bash
cd client
npm install
```

Run Tests:

```bash
npm test
```

This command launches the test runner in interactive watch mode. It runs all test files and provides feedback on test results.

Server Tests

The server tests are located in the server/tests directory and are implemented using unittest.

Install Dependencies:

```bash
cd ../server
python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`
pip install -r requirements.txt
```

Run Tests:

```bash
python3 -m unittest discover -s tests -p 'test_*.py'
```

This command discovers and runs all test files (test_*.py) in the server/tests directory using unittest.
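The server tests use the standard `test_*.py` naming that `unittest discover` expects. As a minimal, self-contained sketch of that pattern (the `bounding_box_area` helper is hypothetical, not part of Annotate-Lab's code), a file saved as `tests/test_example.py` would be picked up automatically:

```python
# tests/test_example.py -- minimal unittest sketch; `bounding_box_area`
# is a hypothetical helper, not part of Annotate-Lab's API.
import unittest


def bounding_box_area(xmin, ymin, xmax, ymax):
    """Return the pixel area of an axis-aligned bounding box."""
    return max(0, xmax - xmin) * max(0, ymax - ymin)


class TestBoundingBoxArea(unittest.TestCase):
    def test_positive_box(self):
        self.assertEqual(bounding_box_area(200, 100, 400, 300), 40000)

    def test_degenerate_box(self):
        self.assertEqual(bounding_box_area(10, 10, 10, 10), 0)


if __name__ == "__main__":
    unittest.main(exit=False)
```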

Code Formatting [documentation page]

Client-side (Vite React Application)

  • Code Formatter: Prettier
  • Configuration File: .prettierrc
  • Command: Run npm run format or yarn format to format client-side code using Prettier.

Server-side (Flask Application)

  • Code Formatter: Black
  • Configuration File: pyproject.toml
  • Command: Run black . to format server-side code using Black.

Usage

  1. Open your web browser and navigate to http://localhost:5173.
  2. Use the user interface to upload and annotate images.
  3. The annotations and other interactions will be handled by the Flask server running at http://localhost:5000.

Settings [documentation page]

From the settings, one can configure tools and tags, upload images, and much more.

configuration

Configurations (Optional) [documentation page]

You can customize various aspects of Annotate-Lab through configuration settings. To do this, modify the config.py file in the server directory or the config.js file in the client directory.

```python
# config.py
MASK_BACKGROUND_COLOR = (0, 0, 0)  # Black background for masks
SAM_MODEL_ENABLED = False  # Segment Anything Model for auto bounding box selection
```

```javascript
// config.js
const config = {
  SERVER_URL, // url of server
  UPLOAD_LIMIT: 500, // image upload limit
  OUTLINE_THICKNESS_CONFIG: { // outline thickness of tools
    POLYGON: 2,
    CIRCLE: 2,
    BOUNDING_BOX: 2
  },
  SAM_MODEL_ENABLED: false, // displays button that allows auto bounding box selection
  SHOW_CLASS_DISTRIBUTION: true // displays annotated class distribution bar chart
};
```

Demo V2.0

Annotate Lab

Auto Bounding Box Selection with Segment Anything Model (SAM) [documentation page]

Automatic bounding box selection is made possible with the Segment Anything Model (SAM). This feature can be toggled from both the server and client configuration. When enabled, a wand icon appears in the toolbar; clicking it initiates auto-annotation and displays the results.

auto_annotation

Outputs [documentation page]

A sample annotated image, along with its mask and settings, is shown below.

orange_annotation orange_annotation_mask


```json
{
  "orange.png": {
    "configuration": [
      {
        "image-name": "orange.png",
        "regions": [
          {
            "region-id": "13371375927088525",
            "image-src": "http://127.0.0.1:5000/uploads/orange.png",
            "class": "Print",
            "comment": "",
            "tags": "",
            "points": [
              [0.5863691595741748, 0.7210152721281337],
              [0.6782101128815677, 0.6587584627896123],
              [0.7155520389516067, 0.5731553499491453],
              [0.7286721751383771, 0.40065210740699225],
              [0.7518847237765094, 0.352662483541882],
              [0.6862840428426572, 0.2307428985872776],
              [0.6045355019866261, 0.1581099543590026],
              [0.533888614827093, 0.13476365085705708],
              [0.44204766151970004, 0.13476365085705708],
              [0.3441512607414899, 0.17886222413850975],
              [0.2957076809749529, 0.23852499975459276],
              [0.2523103074340969, 0.3163460114277445],
              [0.2129498988737856, 0.418810343464061],
              [0.20891293389324087, 0.5121955574718431],
              [0.22506079381541985, 0.6016897208959676],
              [0.2563472724146416, 0.6652435470957082],
              [0.30378161093604245, 0.7197182552669145],
              [0.3683730506247584, 0.7819750646054359],
              [0.4057149766947973, 0.8066183849686005],
              [0.46223248642242376, 0.776786997160559],
              [0.5308608910916844, 0.7586287611034903]
            ]
          }
        ],
        "color-map": {
          "Apple": [244, 67, 54],
          "Orange": [33, 150, 243]
        }
      }
    ]
  }
}
```
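The region `points` in this export are normalized to the [0, 1] range relative to the image width and height. A hedged sketch of converting them back to pixel coordinates (the `to_pixels` helper and the 640×640 size are illustrative assumptions, not part of Annotate-Lab):

```python
# Convert normalized region points from the exported JSON back to pixel
# coordinates. The 640x640 image size is an assumed example value.
def to_pixels(points, width, height):
    return [(round(x * width), round(y * height)) for x, y in points]

# First two points from the sample export above
points = [[0.5863691595741748, 0.7210152721281337],
          [0.6782101128815677, 0.6587584627896123]]
print(to_pixels(points, 640, 640))  # → [(375, 461), (434, 422)]
```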

YOLO Format [documentation page]

YOLO format is also supported by A.Lab. Below is an example of annotated ripe and unripe tomatoes. The entire dataset can be found on Kaggle. In this example, 0 represents ripe tomatoes and 1 represents unripe ones.

yolo_annotation_example

The labels for the above image are as follows:

```txt
0 0.213673 0.474717 0.310212 0.498856
0 0.554777 0.540507 0.306350 0.433638
1 0.378432 0.681239 0.223970 0.268879
```
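A small sketch of reading such label lines back into Python tuples of `(class_id, x_center, y_center, width, height)` (the `parse_yolo` helper is illustrative, not part of Annotate-Lab):

```python
# Parse YOLO label lines into (class_id, x_center, y_center, width, height).
def parse_yolo(lines):
    boxes = []
    for line in lines:
        class_id, *coords = line.split()
        boxes.append((int(class_id), *map(float, coords)))
    return boxes

labels = ["0 0.213673 0.474717 0.310212 0.498856",
          "1 0.378432 0.681239 0.223970 0.268879"]
for box in parse_yolo(labels):
    print(box)
```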

Applying the generated labels, we get the following results.

yolo_with_generated_labels

Normalization process of YOLO annotations [documentation page]

Example Conversion

To convert non-normalized bounding box coordinates (xmax, ymax, xmin, ymin) to YOLO format (x_center, y_center, width, height):

yolo-normalization

Image Credit: Leandro de Oliveira

```python
# Assuming `row` contains your bounding box coordinates
row = {'xmax': 400, 'xmin': 200, 'ymax': 300, 'ymin': 100}
class_id = 0  # Example class id (replace with actual class id)

# Image dimensions
WIDTH = 640   # annotated image width
HEIGHT = 640  # annotated image height

# Calculate width and height of the bounding box
width = row['xmax'] - row['xmin']
height = row['ymax'] - row['ymin']

# Calculate the center of the bounding box
x_center = row['xmin'] + (width / 2)
y_center = row['ymin'] + (height / 2)

# Normalize the coordinates
normalized_x_center = x_center / WIDTH
normalized_y_center = y_center / HEIGHT
normalized_width = width / WIDTH
normalized_height = height / HEIGHT

# Create the annotation string in YOLO format
content = f"{class_id} {normalized_x_center} {normalized_y_center} {normalized_width} {normalized_height}"
print(content)
```

The above conversion gives the following YOLO format string:

```txt
0 0.46875 0.3125 0.3125 0.3125
```
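For the reverse direction (recovering pixel-space corners from a YOLO line, e.g. to draw generated labels onto an image), here is a hedged sketch of the inverse conversion; the `yolo_to_box` helper is illustrative and assumes the same 640×640 example image:

```python
# Invert the YOLO normalization: recover (xmin, ymin, xmax, ymax) in
# pixels from normalized center/size values. Illustrative helper only.
def yolo_to_box(x_center, y_center, width, height, img_w, img_h):
    w = width * img_w
    h = height * img_h
    xmin = x_center * img_w - w / 2
    ymin = y_center * img_h - h / 2
    return (xmin, ymin, xmin + w, ymin + h)

print(yolo_to_box(0.46875, 0.3125, 0.3125, 0.3125, 640, 640))
# → (200.0, 100.0, 400.0, 300.0), matching the original `row` above
```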

Troubleshooting [documentation page]

  • Ensure that both the client and server are running.
  • Check the browser console and terminal for any errors and troubleshoot accordingly.
  • Verify that dependencies are correctly installed.

Contributing

If you would like to contribute to this project, please fork the repository and submit a pull request. For major changes, open an issue first to discuss your proposed changes. Additionally, please adhere to the code of conduct. More information about contributing can be found here.

License

This project is licensed under the MIT License.

Reporting Security Issues

If you find a security vulnerability in annotate-lab, please read our Security Policy for instructions on how to report it securely.

Acknowledgment

This project is a detached fork of idapgroup's react-image-annotate, which is licensed under the MIT license, and it uses some work from image_annotator.

Owner

  • Name: Suman Kunwar
  • Login: sumn2u
  • Kind: user
  • Location: Texas
  • Company: @LatanaTech

Co-Founder Mom's Store Nepal | Frontend Consultant @Latana

JOSS Publication

Annotate-Lab: Simplifying Image Annotation
Published
November 14, 2024
Volume 9, Issue 103, Page 7210
Authors
Suman Kunwar ORCID
Faculty of Computer Science, Selinus University of Sciences and Literature, Ragusa, Italy
Editor
Sébastien Boisgérault ORCID
Tags
Image Annotation Open-Source Tools Machine Learning Computer Vision Annotation Software

GitHub Events

Total
  • Create event: 20
  • Release event: 2
  • Issues event: 5
  • Watch event: 12
  • Delete event: 3
  • Issue comment event: 27
  • Push event: 23
  • Pull request review event: 2
  • Pull request event: 25
  • Fork event: 6
Last Year
  • Create event: 20
  • Release event: 2
  • Issues event: 5
  • Watch event: 12
  • Delete event: 3
  • Issue comment event: 27
  • Push event: 23
  • Pull request review event: 2
  • Pull request event: 25
  • Fork event: 6

Committers

Last synced: 5 months ago

All Time
  • Total Commits: 695
  • Total Committers: 24
  • Avg Commits per committer: 28.958
  • Development Distribution Score (DDS): 0.568
Past Year
  • Commits: 6
  • Committers: 2
  • Avg Commits per committer: 3.0
  • Development Distribution Score (DDS): 0.167
Top Committers
Name Email Commits
sumn2u s****u@g****m 300
seveibar s****r@g****m 263
semantic-release-bot s****t@m****t 48
Oleh Yasenytsky y****h@g****m 13
snyk-bot s****t@s****o 11
Tamay Eser Uysal t****l@g****m 8
Henry LIANG H****y@g****m 7
Emiliano Castellano e****a@g****m 6
sreevardhanreddi s****i@g****m 5
DQ4443 d****3@g****m 5
Mykyta Holubakha h****o@g****m 4
Katsuhisa Yuasa b****n@g****m 3
dependabot[bot] 4****] 3
OmG2011 o****0@o****m 3
Mews 6****s 3
Severin Ibarluzea s****e@p****n 2
Josep de Cid j****d@g****m 2
Mohammed Eldadah m****h@g****m 2
linyers l****6@g****m 2
HoangHN m****m@g****m 1
Joey Figaro j****y@j****m 1
Shahidul Islam Majumder d****v@s****o 1
harith-hacky03 h****3@g****m 1
ThibautGeriz 4****z 1
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 4 months ago

All Time
  • Total issues: 23
  • Total pull requests: 87
  • Average time to close issues: 1 day
  • Average time to close pull requests: about 20 hours
  • Total issue authors: 4
  • Total pull request authors: 4
  • Average comments per issue: 2.09
  • Average comments per pull request: 1.05
  • Merged pull requests: 47
  • Bot issues: 0
  • Bot pull requests: 1
Past Year
  • Issues: 7
  • Pull requests: 37
  • Average time to close issues: 1 day
  • Average time to close pull requests: about 17 hours
  • Issue authors: 3
  • Pull request authors: 3
  • Average comments per issue: 0.14
  • Average comments per pull request: 1.03
  • Merged pull requests: 7
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • sumn2u (28)
  • leo-smi (10)
  • jpcbertoldo (4)
  • DQ4443 (3)
  • PetervanLunteren (1)
  • harith-hacky03 (1)
  • OmG2011 (1)
Pull Request Authors
  • sumn2u (218)
  • dependabot[bot] (4)
  • linyers (3)
  • boisgera (2)
  • DQ4443 (2)
  • Mews (2)
  • OmG2011 (2)
  • glenntfung (1)
  • harith-hacky03 (1)
Top Labels
Issue Labels
up for grabs (17) good first issue (15) enhancement (15) bug (5) feature-request (4) help wanted (1) documentation (1)
Pull Request Labels
dependencies (4) enhancement (3) good first issue (2) up for grabs (2) bug (1) feature-request (1)

Dependencies

.github/workflows/python-app.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v3 composite
.github/workflows/vite-app.yml actions
  • actions/checkout v2 composite
  • actions/setup-node v3 composite
client/Dockerfile docker
  • nginx alpine build
  • node 18-alpine build
docker-compose.yml docker
server/Dockerfile docker
  • python 3.9-slim build
client/package-lock.json npm
  • 1169 dependencies
client/package.json npm
  • @babel/preset-env ^7.24.7 development
  • @babel/preset-react ^7.24.7 development
  • @semantic-release/git ^10.0.1 development
  • @testing-library/jest-dom ^6.4.5 development
  • @testing-library/react ^16.0.0 development
  • @vitejs/plugin-react ^4.1.0 development
  • jest ^29.7.0 development
  • jest-environment-jsdom ^29.7.0 development
  • prettier ^3.2.5 development
  • vite ^5.2.11 development
  • vite-plugin-node-polyfills ^0.22.0 development
  • vite-raw-plugin ^1.0.2 development
  • @emotion/react ^11.11.4
  • @emotion/styled ^11.11.5
  • @fortawesome/fontawesome-svg-core ^6.5.2
  • @fortawesome/free-solid-svg-icons ^6.5.2
  • @fortawesome/react-fontawesome ^0.2.2
  • @mui/icons-material ^5.15.20
  • @mui/material ^5.15.21
  • @mui/x-charts ^7.8.0
  • axios ^1.7.4
  • cash-dom ^8.1.5
  • classnames ^2.3.2
  • color-alpha ^2.0.0
  • i18next ^23.11.5
  • i18next-browser-languagedetector ^8.0.0
  • lodash.debounce ^4.0.8
  • material-survey ^2.1.0
  • moment ^2.29.4
  • prop-types ^15.8.1
  • react-draggable ^4.4.6
  • react-dropzone ^14.2.3
  • react-hotkeys ^2.0.0
  • react-i18next ^14.1.2
  • react-remove-scroll ^2.5.10
  • react-select ^5.7.7
  • react-use ^17.4.0
  • seamless-immutable ^7.1.4
  • transformation-matrix-js ^2.7.6
  • use-event-callback ^0.1.0
server/pyproject.toml pypi
server/requirements.txt pypi
  • black *
  • flask-cors *
  • opencv-python *
  • pandas *
  • pillow *
  • requests *
  • supervision *
  • torch *
  • torchvision *
  • zipp >=3.19.1