labelme-plus
Adds module APIs for human pose labeling and human detection, based on labelme and Baidu APIs.
Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (11.3%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: JOKER-3-z
- License: other
- Language: Python
- Default Branch: pose
- Size: 45 MB
Statistics
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
LabelPose
A COCO-style pose labeling tool based on Labelme, with Baidu's pose API, Baidu EISEG, and other features.
Additional features
- Create a pose from a predefined template (Edit -> Create Pose, or Alt+P).
  Idea from https://www.youtube.com/watch?v=q17gqr0EIUQ&t=103s.
  The template can be changed via Edit -> Save Pose Template.
- Label with a skeleton, i.e. the lines between keypoints.
  The keypoints and their connections are defined in labelme/pose_config.py.
- Segmentation via EISEG.
- Template keypoint labeling.
For more information, refer to the video at ./docs/labelexample.webm: <video src="./docs/labelexample.webm" width="800px" height="600px" controls="controls"></video>
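The skeleton used for labeling is defined in labelme/pose_config.py. Below is a minimal sketch of what such a definition could look like, assuming COCO's 17-keypoint convention; the names `KEYPOINTS`, `SKELETON`, and `edge_names` are illustrative, not the actual contents of that file.

```python
# Hypothetical sketch of a COCO-style skeleton config such as
# labelme/pose_config.py might contain; names and values are assumptions.

# Ordered keypoint names (COCO's 17-keypoint convention).
KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

# Skeleton edges as pairs of keypoint indices; each edge is drawn as a
# line between its two keypoints when labeling with a skeleton.
SKELETON = [
    (5, 6), (5, 7), (7, 9), (6, 8), (8, 10),   # shoulders + arms
    (5, 11), (6, 12), (11, 12),                # torso
    (11, 13), (13, 15), (12, 14), (14, 16),    # legs
]

def edge_names(skeleton, keypoints):
    """Resolve index pairs to human-readable keypoint-name pairs."""
    return [(keypoints[a], keypoints[b]) for a, b in skeleton]
```

Keeping the connections as index pairs into a single ordered name list is what makes the annotations directly convertible to COCO, which uses the same convention.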
Label tips:
1. For small or crowded people, mark the instance as "crowd"; crowd instances are not used as training samples.
2. Label only the visible keypoints; do not guess occluded ones.
3. Each label keeps a confidence and a visibility level, but only visibility levels 2 and 0 are used for the COCO conversion.
4. The EISEG parameters are fixed and do not need to be changed.
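Tip 3 can be made concrete with a small conversion sketch. The helper `to_coco_keypoints` below is hypothetical (not part of this repo); it shows how only visibility levels 2 and 0 survive the conversion, given that COCO encodes each keypoint as a flat [x, y, v] triplet.

```python
# Hypothetical sketch of the visibility filtering described above. In
# COCO, v=0 means "not labeled", v=1 "labeled but not visible", and
# v=2 "labeled and visible".

def to_coco_keypoints(labeled):
    """labeled: list of (x, y, visibility) tuples, one per keypoint slot.

    Returns the flat COCO keypoints list and the count of labeled
    keypoints. Visibility 1 is zeroed out, since only levels 2 and 0
    are used for the COCO conversion.
    """
    flat = []
    num_labeled = 0
    for x, y, v in labeled:
        if v == 2:
            flat += [x, y, 2]
            num_labeled += 1
        else:  # v in (0, 1): treated as not labeled for the conversion
            flat += [0, 0, 0]
    return flat, num_labeled

kpts, n = to_coco_keypoints([(10, 20, 2), (0, 0, 0), (30, 40, 1)])
# kpts == [10, 20, 2, 0, 0, 0, 0, 0, 0], n == 1
```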
Scripts:
- Build the Docker image: `cd script && sh build_docker.sh`
- Start the container: `cd script && sh start_docker.sh`
Install the dependencies with `pip install -r requirements`. Note that paddlepaddle needs two manual patches:

1. In `site-packages/paddle/dataset/image.py`, lines 44-60, replace the subprocess-based cv2 probe with a plain import:

```python
import cv2
'''
interpreter = sys.executable
# Note(zhouwei): if use Python/C 'PyRunSimpleString', 'sys.executable'
# will be the C++ executable on Windows
if sys.platform == 'win32' and 'python.exe' not in interpreter:
    interpreter = sys.exec_prefix + os.sep + 'python.exe'
import_cv2_proc = subprocess.Popen(
    [interpreter, "-c", "import cv2"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE)
out, err = import_cv2_proc.communicate()
retcode = import_cv2_proc.poll()
if retcode != 0:
    cv2 = None
else:
    import cv2
'''
```

2. In `site-packages/paddle/fluid/proto/pass_desc_pb2.py`, line 16, replace the relative import:

```python
# import framework_pb2 as framework__pb2
import paddle.fluid.proto.framework_pb2 as framework__pb2
```
TODO:
- Build a standalone app.

Known issues:
1. No frame_pb2 module: use paddle 2.1.3, as paddle 2.2.1 has bugs.
2. The app hangs after building; see https://blog.csdn.net/u010674979/article/details/117291879 and patch paddle/dataset/image.py as described above.
3. terminate called after throwing an instance of 'paddle::platform::EnforceNotMet'
   what(): (NotFound) No allocator found for the place, CUDAPlace(0)
   [Hint: Expected iter != allocators.end(), but received iter == allocators.end().]

labelme
Image Polygonal Annotation with Python
Description
Labelme is a graphical image annotation tool inspired by http://labelme.csail.mit.edu.
It is written in Python and uses Qt for its graphical interface.

VOC dataset example of instance segmentation.

Other examples (semantic segmentation, bbox detection, and classification).

Various primitives (polygon, rectangle, circle, line, and point).
Features
- [x] Image annotation for polygon, rectangle, circle, line and point. (tutorial)
- [x] Image flag annotation for classification and cleaning. (#166)
- [x] Video annotation. (video annotation)
- [x] GUI customization (predefined labels / flags, auto-saving, label validation, etc). (#144)
- [x] Exporting VOC-format dataset for semantic/instance segmentation. (semantic segmentation, instance segmentation)
- [x] Exporting COCO-format dataset for instance segmentation. (instance segmentation)
Requirements
- Ubuntu / macOS / Windows
- Python2 / Python3
- PyQt4 / PyQt5
Installation
There are several options:
- Platform-agnostic installation: Anaconda, Docker
- Platform-specific installation: Ubuntu, macOS, Windows
- Pre-built binaries from the release section
Anaconda
You need to install Anaconda, then run the commands below:
```bash
# python2
conda create --name=labelme python=2.7
source activate labelme
conda install -c conda-forge pyside2
conda install pyqt
pip install labelme
# if you'd like to use the latest version, run below:
# pip install git+https://github.com/wkentaro/labelme.git

# python3
conda create --name=labelme python=3.6
source activate labelme
conda install -c conda-forge pyside2
conda install pyqt
pip install pyqt5  # pyqt5 can be installed via pip on python3
pip install labelme

# or you can install everything by conda command:
# conda install labelme -c conda-forge
```
Docker
You need to install Docker, then run the commands below:
```bash
# on macOS
socat TCP-LISTEN:6000,reuseaddr,fork UNIX-CLIENT:\"$DISPLAY\" &
docker run -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=docker.for.mac.host.internal:0 -v $(pwd):/root/workdir wkentaro/labelme

# on Linux
xhost +
docker run -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=:0 -v $(pwd):/root/workdir wkentaro/labelme
```
Ubuntu
```bash
# Ubuntu 14.04 / Ubuntu 16.04
# Python2
sudo apt-get install python-qt4   # PyQt4
sudo apt-get install python-pyqt5 # PyQt5
sudo pip install labelme
# Python3
sudo apt-get install python3-pyqt5  # PyQt5
sudo pip3 install labelme

# or install standalone executable from:
# https://github.com/wkentaro/labelme/releases
```
Ubuntu 19.10+ / Debian (sid)
```bash
sudo apt-get install labelme
```
macOS
```bash
brew install pyqt  # maybe pyqt5
pip install labelme  # both python2/3 should work
brew install wkentaro/labelme/labelme  # command line interface
brew install --cask wkentaro/labelme/labelme  # app

# or install standalone executable/app from:
# https://github.com/wkentaro/labelme/releases
```
Windows
Install Anaconda, then in an Anaconda Prompt run:
```bash
# python3
conda create --name=labelme python=3.6
conda activate labelme
pip install labelme
```
Usage
Run `labelme --help` for details.
The annotations are saved as a JSON file.
```bash
labelme  # just open gui

# tutorial (single image example)
cd examples/tutorial
labelme apc2016_obj3.jpg  # specify image file
labelme apc2016_obj3.jpg -O apc2016_obj3.json  # close window after the save
labelme apc2016_obj3.jpg --nodata  # not include image data but relative image path in JSON file
labelme apc2016_obj3.jpg \
  --labels highland_6539_self_stick_notes,mead_index_cards,kong_air_dog_squeakair_tennis_ball  # specify label list

# semantic segmentation example
cd examples/semantic_segmentation
labelme data_annotated/  # Open directory to annotate all images in it
labelme data_annotated/ --labels labels.txt  # specify label list with a file
```
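Since the annotations are plain JSON, they are easy to post-process. Below is a minimal sketch of reading polygon shapes back from an annotation; the field names follow labelme's JSON output format, and the inline `annotation` dict stands in for a file loaded from disk.

```python
import json

# Minimal sketch of reading a labelme annotation. The field names
# (shapes, label, points, shape_type, imagePath, ...) follow labelme's
# JSON output format; this inline dict stands in for a real file.
annotation = {
    "version": "4.5.6",
    "flags": {},
    "shapes": [
        {
            "label": "person",
            "points": [[10.0, 10.0], [100.0, 10.0], [100.0, 200.0], [10.0, 200.0]],
            "group_id": None,
            "shape_type": "polygon",
            "flags": {},
        }
    ],
    "imagePath": "apc2016_obj3.jpg",
    "imageData": None,
    "imageHeight": 300,
    "imageWidth": 400,
}

def iter_polygons(data):
    """Yield (label, points) for each polygon shape in a labelme JSON dict."""
    for shape in data["shapes"]:
        if shape["shape_type"] == "polygon":
            yield shape["label"], shape["points"]

# Round-trip through JSON as if loading from disk.
data = json.loads(json.dumps(annotation))
polys = list(iter_polygons(data))
```

In real use you would replace the round-trip with `json.load(open("apc2016_obj3.json"))` on a file produced by labelme.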
For more advanced usage, please refer to the examples:
- Tutorial (Single Image Example)
- Semantic Segmentation Example
- Instance Segmentation Example
- Video Annotation Example
Command Line Arguments
- `--output` specifies the location that annotations will be written to. If the location ends with `.json`, a single annotation will be written to this file; only one image can be annotated if a location is specified with `.json`. If the location does not end with `.json`, the program will assume it is a directory. Annotations will be stored in this directory with a name that corresponds to the image that the annotation was made on.
- The first time you run labelme, it will create a config file in `~/.labelmerc`. You can edit this file and the changes will be applied the next time that you launch labelme. If you would prefer to use a config file from another location, you can specify this file with the `--config` flag.
- Without the `--nosortlabels` flag, the program will list labels in alphabetical order. When the program is run with this flag, it will display labels in the order that they are provided.
- Flags are assigned to an entire image. Example
- Labels are assigned to a single polygon. Example
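The `~/.labelmerc` config file mentioned above is YAML. A hedged sketch of what it might contain follows; the keys shown (`auto_save`, `sort_labels`, `labels`, `flags`) mirror options from labelme's default config, but treat the exact key names as assumptions and check `labelme/config/default_config.yaml` in the source tree for the authoritative list.

```yaml
# Example ~/.labelmerc (assumed keys; verify against default_config.yaml)
auto_save: true        # save the JSON file automatically after each change
sort_labels: true      # list labels alphabetically (see --nosortlabels)
labels: [person, car, bicycle]    # predefined label list shown in the GUI
flags: [occluded, truncated]      # image-level flags for classification/cleaning
```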
FAQ
- How to convert JSON file to numpy array? See examples/tutorial.
- How to load label PNG file? See examples/tutorial.
- How to get annotations for semantic segmentation? See examples/semantic_segmentation.
- How to get annotations for instance segmentation? See examples/instance_segmentation.
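For the JSON-to-array question, here is a hedged, standalone sketch that rasterizes a polygon annotation into a binary label mask with even-odd ray casting. labelme ships its own helpers for this (see examples/tutorial); this version uses plain lists so it has no dependencies, and you can wrap the result in `numpy.array()` to get the numpy mask the FAQ refers to.

```python
# Standalone even-odd ray-casting rasterizer for a polygon annotation.
# Illustrative only; labelme's own utilities are the supported path.

def polygon_to_mask(points, height, width):
    """Return a height x width list-of-lists of booleans; True = inside."""
    mask = [[False] * width for _ in range(height)]
    n = len(points)
    for y in range(height):
        for x in range(width):
            inside = False
            for i in range(n):
                x0, y0 = points[i]
                x1, y1 = points[(i + 1) % n]
                # Count polygon edges whose crossing of this pixel row
                # lies to the right of (x, y); an odd count means inside.
                if (y < y0) != (y < y1):
                    xcross = x0 + (y - y0) / (y1 - y0) * (x1 - x0)
                    if x < xcross:
                        inside = not inside
            mask[y][x] = inside
    return mask

# A 6x6 square inside a 10x10 image.
mask = polygon_to_mask([(2, 2), (8, 2), (8, 8), (2, 8)], 10, 10)
```

The triple loop is O(height * width * vertices), fine for a sketch; production code would use a scanline fill or an image library instead.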
Testing
```bash
pip install hacking pytest pytest-qt
flake8 .
pytest -v tests
```
Developing
```bash
git clone https://github.com/wkentaro/labelme.git
cd labelme

# Install anaconda3 and labelme
curl -L https://github.com/wkentaro/dotfiles/raw/main/local/bin/install_anaconda3.sh | bash -s .
source .anaconda3/bin/activate
pip install -e .
```
How to build standalone executable
Below shows how to build the standalone executable on macOS, Linux and Windows.
```bash
# Setup conda
conda create --name labelme python==3.6.0
conda activate labelme

# Build the standalone executable
pip install .
pip install pyinstaller
pyinstaller labelme.spec
dist/labelme --version
```
How to contribute
Make sure the tests below pass in your environment.
See .github/workflows/ci.yml for more detail.
```bash
pip install black hacking pytest pytest-qt
flake8 .
black --line-length 79 --check labelme/
MPLBACKEND='agg' pytest tests/ -m 'not gpu'
```
Acknowledgement
This repo is a fork of mpitid/pylabelme.
Owner
- Name: HBen
- Login: JOKER-3-z
- Kind: user
- Repositories: 1
- Profile: https://github.com/JOKER-3-z
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: "Wada"
    given-names: "Kentaro"
    orcid: "https://orcid.org/0000-0002-6347-5156"
title: "Labelme: Image Polygonal Annotation with Python"
doi: 10.5281/zenodo.5711226
url: "https://github.com/wkentaro/labelme"
license: GPL-3
```