glasses-detector
Glasses detection, classification and segmentation
Science Score: 49.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file (found codemeta.json file)
- ✓ .zenodo.json file (found .zenodo.json file)
- ✓ DOI references (found 1 DOI reference(s) in README)
- ✓ Academic publication links (links to: zenodo.org)
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity (4.7%) to scientific vocabulary)
Keywords
Repository
Glasses detection, classification and segmentation
Basic Info
- Host: GitHub
- Owner: mantasu
- License: mit
- Language: Python
- Default Branch: main
- Homepage: https://mantasu.github.io/glasses-detector/
- Size: 2.23 MB
Statistics
- Stars: 81
- Watchers: 6
- Forks: 9
- Open Issues: 6
- Releases: 6
Topics
Metadata Files
README.md
Glasses Detector
About
A package for processing images with different types of glasses and their parts. It provides a quick way to use pre-trained models for 3 kinds of tasks, each divided into multiple categories, for instance, classification of sunglasses or segmentation of glasses frames.
| Task           | Categories                                    |
|----------------|-----------------------------------------------|
| Classification | 👓 transparent 🕶️ opaque 🥽 any ➿ shadows     |
| Detection      | 🤓 worn 👓 standalone 👀 eye-area              |
| Segmentation   | 😎 full 🖼️ frames 🦿 legs 🔍 lenses 👥 shadows |
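For reference, the emoji labels above map onto plain identifiers used by the CLI (`--task <task>:<category>`) and the Python API (`kind=...`). The mapping below is only a sketch inferred from the examples and the data layout later in this README; verify it against the documentation:

```python
# Task and category identifiers, inferred from the examples and the data
# layout in this README; verify against the Glasses Detector docs.
TASK_CATEGORIES = {
    "classification": ["anyglasses", "eyeglasses", "sunglasses", "shadows"],
    "detection": ["worn", "solo", "eyes"],
    "segmentation": ["full", "frames", "legs", "lenses", "shadows", "smart"],  # "smart" appears only in the data layout
}

# These strings plug into the CLI as `--task <task>:<category>` and into the
# Python classes as `kind="<category>"`, e.g.:
print(f"--task classification:{TASK_CATEGORIES['classification'][0]}")  # --task classification:anyglasses
```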
Installation
> [!IMPORTANT]
> A minimum Python version of 3.12 is required. Also, you may want to install PyTorch in advance to select a specific configuration for your device and environment.
Pip Package
If you only need the library with pre-trained models, just install the pip package and see Quick Start for usage (also check Glasses Detector Installation for more details):
```bash
pip install glasses-detector
```
You can also install it from the source:
```bash
git clone https://github.com/mantasu/glasses-detector
cd glasses-detector && pip install .
```
Local Project
If you want to train your own models on the provided datasets (or on other datasets), clone the project and install the training requirements, then see the Running section for how to run training and testing.
```bash
git clone https://github.com/mantasu/glasses-detector
cd glasses-detector && pip install -r requirements.txt
```
You can create a virtual environment for the packages via venv; however, if you have conda, you can simply use it to create a new environment, for example:
```bash
conda create -n glasses-detector python=3.12
conda activate glasses-detector
```
To set up the datasets, refer to the Data section.
Quick Start
Command Line
You can run predictions via the command line. For example, classification of a single image and segmentation of images inside a directory can be performed by running:
```bash
glasses-detector -i path/to/img.jpg -t classification -d cuda -f int  # Prints 1 or 0
glasses-detector -i path/to/img_dir -t segmentation -f mask -e .jpg   # Generates masks
```
> [!TIP]
> You can also specify things like `--output-path`, `--size`, `--batch-size`, etc. Check the Glasses Detector CLI and Command Line Examples for more details.
Python Script
You can import the package and its models in a Python script for more flexibility. Here is an example of how to classify people wearing sunglasses:
```python
from glasses_detector import GlassesClassifier

# Generates a CSV with each line "<image name>,<True|False>"
classifier = GlassesClassifier(size="small", kind="sunglasses")
classifier.process_dir("path/to/dir", "path/to/preds.csv", format="bool")
```
And here is a more efficient way to process a directory for the detection task (only a single bbox per image is currently supported):
```python
from glasses_detector import GlassesDetector

# Generates dir_preds with bboxes as .txt for each img
detector = GlassesDetector(kind="eyes", device="cuda")
detector.process_dir("path/to/dir", ext=".txt", batch_size=64)
```
> [!TIP]
> Again, there are a lot more things that can be specified, for instance, `output_size` and `pbar`. It is also possible to directly output the results or save them in a variable. See Glasses Detector API and Python Script Examples for more details.
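The README shows Python examples only for classification and detection, so here is a minimal sketch for the segmentation task in the same spirit. The class name `GlassesSegmenter` is an assumption, and the keyword arguments are simply reused from the examples above; check the Glasses Detector API for the exact signatures:

```python
from glasses_detector import GlassesSegmenter  # class name assumed; check the API docs

# Generate a mask per image in a directory, mirroring the CLI example
# `glasses-detector -i path/to/img_dir -t segmentation -f mask -e .jpg`.
segmenter = GlassesSegmenter(size="small", kind="frames", device="cuda")
segmenter.process_dir("path/to/dir", ext=".jpg", batch_size=64)
```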
Demo
Feel free to play around with some demo image files. For example, after installing through pip, you can run:
```bash
git clone https://github.com/mantasu/glasses-detector && cd glasses-detector/data
glasses-detector -i demo -o demo_labels.csv --task classification:eyeglasses
```
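Once the command above finishes, the generated `demo_labels.csv` can be inspected with the standard library. This is just a generic CSV reader; the exact columns depend on the output format chosen:

```python
import csv

# Print each row of the CSV produced by the demo command above.
with open("demo_labels.csv", newline="") as f:
    for row in csv.reader(f):
        print(row)
```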
You can also check out the demo notebook, which can also be accessed via Google Colab.
Data
Before downloading the datasets, please install the unrar package. For example, if you're using Ubuntu (if you're using Windows, just install WinRAR):
```bash
sudo apt-get install unrar
```
Also, ensure the scripts are executable:
```bash
chmod +x scripts/*
```
Once you download all the datasets (or some that interest you), process them:
```bash
python scripts/preprocess.py --root data -f -d
```
> [!TIP]
> You can also specify only certain tasks, e.g., `--tasks classification segmentation` would ignore detection datasets. It is also possible to change the image size and val/test split fractions: use `--help` to see all the available CLI options.
After processing all the datasets, your data directory should have the following structure:
```bash
└── data                    # The data directory (root) under the project
    ├── classification
    │   ├── anyglasses      # Datasets with any glasses as positives
    │   ├── eyeglasses      # Datasets with transparent glasses as positives
    │   ├── shadows         # Datasets with visible glasses frame shadows as positives
    │   └── sunglasses      # Datasets with semi-transparent/opaque glasses as positives
    │
    ├── detection
    │   ├── eyes            # Datasets with bounding boxes for the eye area
    │   ├── solo            # Datasets with bounding boxes for standalone glasses
    │   └── worn            # Datasets with bounding boxes for worn glasses
    │
    └── segmentation
        ├── frames          # Datasets with masks for glasses frames
        ├── full            # Datasets with masks for full glasses (frames + lenses)
        ├── legs            # Datasets with masks for glasses legs (part of frames)
        ├── lenses          # Datasets with masks for glasses lenses
        ├── shadows         # Datasets with masks for shadows cast by eyeglasses frames
        └── smart           # Datasets with masks for glasses frames, and lenses if opaque
```
Almost every dataset will have `train`, `val` and `test` sub-directories. For classification datasets, these splits are further divided into `<category>` and `no_<category>` sub-sub-directories; for detection, into `images` and `annotations`; and for segmentation, into `images` and `masks`. By default, all the images are 256x256.
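As a quick sanity check after preprocessing, a small script like the following (an illustrative sketch, not part of the repository) can count the images in each split of the layout described above:

```python
from pathlib import Path

# Count image files per train/val/test split under data/<task>/<category>/<dataset>/.
# Purely illustrative; adjust the root and extensions to your setup.
IMG_EXTS = {".jpg", ".jpeg", ".png"}
root = Path("data")

for split_dir in sorted(root.glob("*/*/*/*")):
    if split_dir.is_dir() and split_dir.name in {"train", "val", "test"}:
        n_images = sum(1 for f in split_dir.rglob("*") if f.suffix.lower() in IMG_EXTS)
        # e.g. "classification/anyglasses/<dataset>/train: 1234 images"
        print(f"{split_dir.relative_to(root)}: {n_images} images")
```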
> [!NOTE]
> Instead of downloading the datasets manually one by one, there is a Kaggle Dataset you can download which already contains everything.
Download Instructions
Download the following files and _place them all_ inside the cloned project under the directory `data`, which will be your data `--root` (please note that for some datasets you need to have created a free [Kaggle](https://www.kaggle.com/) account):

**Classification** datasets:

1. From [CMU Face Images](http://archive.ics.uci.edu/dataset/124/cmu+face+images) download `cmu+face+images.zip`
2. From [Specs on Faces](https://sites.google.com/view/sof-dataset) download `original images.rar` and `metadata.rar`
3. From [Sunglasses / No Sunglasses](https://www.kaggle.com/datasets/amol07/sunglasses-no-sunglasses) download `archive.zip` and _rename_ to `sunglasses-no-sunglasses.zip`
4. From [Glasses and Coverings](https://www.kaggle.com/datasets/mantasu/glasses-and-coverings) download `archive.zip` and _rename_ to `glasses-and-coverings.zip`
5. From [Face Attributes Grouped](https://www.kaggle.com/datasets/mantasu/face-attributes-grouped) download `archive.zip` and _rename_ to `face-attributes-grouped.zip`
6. From [Face Attributes Extra](https://www.kaggle.com/datasets/mantasu/face-attributes-extra) download `archive.zip` and _rename_ to `face-attributes-extra.zip`
7. From [Glasses No Glasses](https://www.kaggle.com/datasets/jorgebuenoperez/datacleaningglassesnoglasses) download `archive.zip` and _rename_ to `glasses-no-glasses.zip`
8. From [Indian Facial Database](https://drive.google.com/file/d/1DPQQ2omEYPJDLFP3YG2h1SeXbh2ePpOq/view) download `An Indian facial database highlighting the Spectacle.zip`
9. From [Face Attribute 2](https://universe.roboflow.com/heheteam-g9fnm/faceattribute-2) download `FaceAttribute 2.v2i.multiclass.zip` (choose `v2` and `Multi Label Classification` format)
10. From [Glasses Shadows Synthetic](https://www.kaggle.com/datasets/mantasu/glasses-shadows-synthetic) download `archive.zip` and _rename_ to `glasses-shadows-synthetic.zip`

**Detection** datasets:

11. From [AI Pass](https://universe.roboflow.com/shinysky5166/ai-pass) download `AI-Pass.v6i.coco.zip` (choose `v6` and `COCO` format)
12. From [PEX5](https://universe.roboflow.com/pex-5-ylpua/pex5-gxq3t) download `PEX5.v4i.coco.zip` (choose `v4` and `COCO` format)
13. From [Sunglasses Glasses Detect](https://universe.roboflow.com/burhan-6fhqx/sunglasses_glasses_detect) download `sunglasses_glasses_detect.v1i.coco.zip` (choose `v1` and `COCO` format)
14. From [Glasses Detection](https://universe.roboflow.com/su-yee/glasses-detection-qotpz) download `Glasses Detection.v2i.coco.zip` (choose `v2` and `COCO` format)
15. From [Glasses Image Dataset](https://universe.roboflow.com/new-workspace-ld3vn/glasses-ffgqb) download `glasses.v1-glasses_2022-04-01-8-12pm.coco.zip` (choose `v1` and `COCO` format)
16. From [EX07](https://universe.roboflow.com/cam-vrmlm/ex07-o8d6m) download `Ex07.v1i.coco.zip` (choose `v1` and `COCO` format)
17. From [No Eyeglass](https://universe.roboflow.com/doms/no-eyeglass) download `no eyeglass.v3i.coco.zip` (choose `v3` and `COCO` format)
18. From [Kacamata-Membaca](https://universe.roboflow.com/uas-kelas-machine-learning-blended/kacamata-membaca) download `Kacamata-Membaca.v1i.coco.zip` (choose `v1` and `COCO` format)
19. From [Only Glasses](https://universe.roboflow.com/woodin-ixal8/onlyglasses) download `onlyglasses.v1i.coco.zip` (choose `v1` and `COCO` format)

**Segmentation** datasets:

20. From [CelebA Mask HQ](https://drive.google.com/file/d/1badu11NqxGf6qM3PTTooQDJvQbejgbTv/view) download `CelebAMask-HQ.zip` and from [CelebA Annotations](https://drive.google.com/file/d/1xd-d1WRnbt3yJnwh5ORGZI3g-YS-fKM9/view) download `annotations.zip`
21. From [Glasses Segmentation Synthetic Dataset](https://www.kaggle.com/datasets/mantasu/glasses-segmentation-synthetic-dataset) download `archive.zip` and _rename_ to `glasses-segmentation-synthetic.zip`
22. From [Face Synthetics Glasses](https://www.kaggle.com/datasets/mantasu/face-synthetics-glasses) download `archive.zip` and _rename_ to `face-synthetics-glasses.zip`
23. From [Eyeglass](https://universe.roboflow.com/azaduni/eyeglass-6wu5y) download `eyeglass.v10i.coco-segmentation.zip` (choose `v10` and `COCO Segmentation` format)
24. From [Glasses Lenses Segmentation](https://universe.roboflow.com/yair-etkes-iy1bq/glasses-lenses-segmentation) download `glasses lenses segmentation.v7-sh-improvments-version.coco.zip` (choose `v7` and `COCO` format)
25. From [Glasses Lens](https://universe.roboflow.com/yair-etkes-iy1bq/glasses-lens) download `glasses lens.v6i.coco-segmentation.zip` (choose `v6` and `COCO Segmentation` format)
26. From [Glasses Segmentation Cropped Faces](https://universe.roboflow.com/yair-etkes-iy1bq/glasses-segmentation-cropped-faces) download `glasses segmentation cropped faces.v2-segmentation_models_pytorch-s_1st_version.coco-segmentation.zip` (choose `v2` and `COCO Segmentation` format)
27. From [Spects Segmentation](https://universe.roboflow.com/teamai-wuk2z/spects-segementation) download `Spects Segementation.v3i.coco-segmentation.zip` (choose `v3` and `COCO Segmentation` format)
28. From [KINH](https://universe.roboflow.com/fpt-university-1tkhk/kinh) download `kinh.v1i.coco.zip` (choose `v1` and `COCO` format)
29. From [Capstone Mini 2](https://universe.roboflow.com/christ-university-ey6ms/capstone_mini_2-vtxs3) download `CAPSTONE_MINI_2.v1i.coco-segmentation.zip` (choose `v1` and `COCO Segmentation` format)
30. From [Sunglasses Color Detection](https://universe.roboflow.com/andrea-giuseppe-parial/sunglasses-color-detection-roboflow) download `Sunglasses Color detection roboflow.v2i.coco-segmentation.zip` (choose `v2` and `COCO Segmentation` format)
31. From [Sunglasses Color Detection 2](https://universe.roboflow.com/andrea-giuseppe-parial/sunglasses-color-detection-2) download `Sunglasses Color detection 2.v3i.coco-segmentation.zip` (choose `v3` and `COCO Segmentation` format)
32. From [Glass Color](https://universe.roboflow.com/snap-ml/glass-color) download `Glass-Color.v1i.coco-segmentation.zip` (choose `v1` and `COCO Segmentation` format)

The table below shows which datasets are used for which tasks and their categories. Feel free to pick only the ones that interest you.

Running
To run custom training and testing, it is first advised to familiarize yourself with how PyTorch Lightning works and to briefly check its CLI documentation. In particular, take into account which arguments are accepted by the Trainer class and how to customize your own optimizer and scheduler via the command line (a minimal sketch follows the prerequisites below). Prerequisites:
- Clone the repository
- Install the requirements
- Download and preprocess the data
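To make the `fit`/`test` commands below less opaque, here is a minimal, self-contained PyTorch Lightning CLI sketch in the spirit of `scripts/run.py`. The real script wires in the glasses-detector models and data, and `LightningCLI` requires the jsonargparse extra; this toy module only illustrates how subcommands and `--trainer.*` overrides are exposed:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from pytorch_lightning import LightningModule
from pytorch_lightning.cli import LightningCLI


class ToyModule(LightningModule):
    """Stand-in module so the CLI has something to run; not the repo's model."""

    def __init__(self, hidden: int = 8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(4, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.net(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

    def train_dataloader(self):
        x, y = torch.randn(64, 4), torch.randn(64, 1)
        return DataLoader(TensorDataset(x, y), batch_size=16)


if __name__ == "__main__":
    # Exposes `fit`/`test` subcommands and `--trainer.*` overrides, e.g.:
    #   python toy_cli.py fit --trainer.max_epochs 3
    LightningCLI(ToyModule)
```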
Training
You can run simple training as follows (these are the default settings):
```bash
python scripts/run.py fit --task classification:anyglasses --size medium
```
You can customize things like `--batch-size` and `--num-workers`, as well as trainer and checkpoint arguments:
```bash
python scripts/run.py fit --batch-size 64 --trainer.max_epochs 300 --checkpoint.dirname ckpt
```
It is also possible to overwrite the default optimizer and scheduler:
```bash
python scripts/run.py fit --optimizer Adam --optimizer.lr 1e-3 --lr_scheduler CosineAnnealingLR
```
Testing
To run testing, specify the trained model and the path to its checkpoint:
```bash
python scripts/run.py test -t classification:anyglasses -s small --ckpt_path path/to/model.ckpt
```
Alternatively, you can specify a `.pth` file to pre-load the model with weights:
```bash
python scripts/run.py test -t classification:anyglasses -s small -w path/to/weights.pth
```
If you get `UserWarning: No positive samples in targets, true positive value should be meaningless`, increase the batch size.
Credits
For references and citation, please see Glasses Detector Credits.
Owner
- Name: Mantas
- Login: mantasu
- Kind: user
- Location: UK
- Repositories: 3
- Profile: https://github.com/mantasu
Master's student at the University of Edinburgh
GitHub Events
Total
- Create event: 2
- Release event: 2
- Issues event: 4
- Watch event: 27
- Issue comment event: 5
- Push event: 2
- Fork event: 2
Last Year
- Create event: 2
- Release event: 2
- Issues event: 4
- Watch event: 27
- Issue comment event: 5
- Push event: 2
- Fork event: 2
Issues and Pull Requests
Last synced: 8 months ago
All Time
- Total issues: 20
- Total pull requests: 1
- Average time to close issues: about 1 month
- Average time to close pull requests: about 17 hours
- Total issue authors: 16
- Total pull request authors: 1
- Average comments per issue: 2.05
- Average comments per pull request: 2.0
- Merged pull requests: 1
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 7
- Pull requests: 0
- Average time to close issues: 1 day
- Average time to close pull requests: N/A
- Issue authors: 5
- Pull request authors: 0
- Average comments per issue: 1.57
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- masterofobzene (3)
- rakage (2)
- YoucanBaby (1)
- MMumtazSakho (1)
- homodigitus (1)
- ale-trevizoli (1)
- RamitPahwa (1)
- z-10 (1)
- godisme1220 (1)
- dhyey-dreamwave (1)
- duongvanbien13081999 (1)
- manadopeee (1)
- MobileMon-Majestic (1)
- kienld3049 (1)
Pull Request Authors
- Dobiasd (1)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 1
- Total downloads: 46,168 last month (pypi)
- Total dependent packages: 0
- Total dependent repositories: 0
- Total versions: 6
- Total maintainers: 1
pypi.org: glasses-detector
Glasses classification, detection, and segmentation.
- Homepage: https://github.com/mantasu/glasses-detector
- Documentation: https://mantasu.github.io/glasses-detector
- License: MIT
- Latest release: 1.0.3 (published 10 months ago)
Rankings
Maintainers (1)
Dependencies
- pytorch_lightning *
- scipy *
- tensorboard *
- torchsr *
- tqdm *
- actions/checkout v3 composite
- actions/setup-python v3 composite
- pypa/gh-action-pypi-publish 27b31702a0e7fc50959f5ad993c78deac1bdfc29 composite
- actions/checkout v3 composite
- actions/configure-pages v3 composite
- actions/deploy-pages v2 composite
- actions/setup-python v3 composite
- actions/upload-pages-artifact v1 composite
- sphinx *
- sphinx_copybutton *
- sphinx_rtd_theme *
- sphinx_toolbox *
- albumentations *
- torch *
- torchvision *
- tqdm *