Science Score: 67.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 12 DOI reference(s) in README
  • Academic publication links
    Links to: ieee.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (8.4%) to scientific vocabulary
Last synced: 6 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: LaniakeaDG
  • License: mit
  • Language: Python
  • Default Branch: main
  • Size: 52.8 MB
Statistics
  • Stars: 0
  • Watchers: 0
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created 9 months ago · Last pushed 9 months ago
Metadata Files
Readme Funding License Code of conduct Citation

README.md

deepface

[![Downloads](https://static.pepy.tech/personalized-badge/deepface?period=total&units=international_system&left_color=grey&right_color=blue&left_text=downloads)](https://pepy.tech/project/deepface) [![Stars](https://img.shields.io/github/stars/serengil/deepface?color=yellow&style=flat&label=%E2%AD%90%20stars)](https://github.com/serengil/deepface/stargazers) [![Pulls](https://img.shields.io/docker/pulls/serengil/deepface?logo=docker)](https://hub.docker.com/r/serengil/deepface) [![License](http://img.shields.io/:license-MIT-green.svg?style=flat)](https://github.com/serengil/deepface/blob/master/LICENSE) [![Tests](https://github.com/serengil/deepface/actions/workflows/tests.yml/badge.svg)](https://github.com/serengil/deepface/actions/workflows/tests.yml) [![DOI](http://img.shields.io/:DOI-10.17671/gazibtd.1399077-blue.svg?style=flat)](https://doi.org/10.17671/gazibtd.1399077) [![Blog](https://img.shields.io/:blog-sefiks.com-blue.svg?style=flat&logo=wordpress)](https://sefiks.com) [![YouTube](https://img.shields.io/:youtube-@sefiks-red.svg?style=flat&logo=youtube)](https://www.youtube.com/@sefiks?sub_confirmation=1) [![Twitter](https://img.shields.io/:follow-@serengil-blue.svg?style=flat&logo=x)](https://twitter.com/intent/user?screen_name=serengil) [![Patreon](https://img.shields.io/:become-patron-f96854.svg?style=flat&logo=patreon)](https://www.patreon.com/serengil?repo=deepface) [![GitHub Sponsors](https://img.shields.io/github/sponsors/serengil?logo=GitHub&color=lightgray)](https://github.com/sponsors/serengil) [![Buy Me a Coffee](https://img.shields.io/badge/-buy_me_a%C2%A0coffee-gray?logo=buy-me-a-coffee)](https://buymeacoffee.com/serengil)

DeepFace is a lightweight face recognition and facial attribute analysis (age, gender, emotion and race) framework for Python. It is a hybrid face recognition framework wrapping state-of-the-art models: VGG-Face, FaceNet, OpenFace, DeepFace, DeepID, ArcFace, Dlib, SFace, GhostFaceNet, Buffalo_L.

Experiments show that human beings have 97.53% accuracy on facial recognition tasks, whereas those models have already reached and surpassed that accuracy level.

Installation PyPI

The easiest way to install deepface is to download it from PyPI. This installs the library itself along with its prerequisites.

```shell
$ pip install deepface
```

Alternatively, you can also install deepface from its source code. The source code may have new features not yet published in the pip release.

```shell
$ git clone https://github.com/serengil/deepface.git
$ cd deepface
$ pip install -e .
```

Once the library is installed, you will be able to import it and use its functionality.

```python
from deepface import DeepFace
```

A Modern Facial Recognition Pipeline - Demo

A modern face recognition pipeline consists of 5 common stages: detect, align, normalize, represent and verify. DeepFace handles all of these stages in the background, so you don't need in-depth knowledge about the processes behind them. You can just call its verification, find or analysis function with a single line of code.

Face Verification - Demo

This function verifies face pairs as the same person or different persons. It expects exact image paths as inputs; passing numpy arrays or base64-encoded images is also supported. It returns a dictionary, and you should check just its verified key.

```python
result = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg")
```

Face recognition - Demo

Face recognition requires applying face verification many times. Herein, deepface has an out-of-the-box find function to handle this action. It looks for the identity of the input image in the database path and returns a list of pandas DataFrames as output. Meanwhile, facial embeddings of the database are stored in a pickle file so they can be searched faster next time. The result list has one entry per face appearing in the source image. Besides, target images in the database can contain many faces as well.

```python
dfs = DeepFace.find(img_path = "img1.jpg", db_path = "C:/my_db")
```

Embeddings - Demo

Face recognition models basically represent facial images as multi-dimensional vectors. Sometimes, you need those embedding vectors directly. DeepFace comes with a dedicated represent function, which returns a list of embeddings with one entry per face appearing in the image.

```python
embedding_objs = DeepFace.represent(img_path = "img.jpg")
```

Embeddings can be plotted as below. Each slot corresponds to one dimension of the vector, and its value is emphasized with colors. Similar to 2D barcodes, the vertical dimension stores no information in the illustration.
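The barcode-style illustration can be reproduced in a few lines. The sketch below is not part of DeepFace: it uses a random vector as a stand-in for a real embedding, assuming a 128-dimensional model, and builds the image with numpy alone.

```python
import numpy as np

# hypothetical 128-dimensional embedding (stand-in for DeepFace.represent output)
rng = np.random.default_rng(42)
embedding = rng.normal(size = 128)

# min-max normalize each dimension into [0, 255] so it can be rendered as a color
normalized = (embedding - embedding.min()) / (embedding.max() - embedding.min())
pixels = (normalized * 255).astype(np.uint8)

# repeat each value vertically - like a 2D barcode, rows carry no extra information
barcode = np.tile(pixels, (32, 1))
print(barcode.shape)  # (32, 128)
```

Passing `barcode` to any image viewer (e.g. matplotlib's `imshow`) then yields the striped illustration.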

Face recognition models - Demo

DeepFace is a hybrid face recognition package. It currently wraps many state-of-the-art face recognition models: VGG-Face, FaceNet, OpenFace, DeepFace, DeepID, ArcFace, Dlib, SFace, GhostFaceNet and Buffalo_L. The default configuration uses the VGG-Face model.

```python
models = [
  "VGG-Face",
  "Facenet",
  "Facenet512",
  "OpenFace",
  "DeepFace",
  "DeepID",
  "ArcFace",
  "Dlib",
  "SFace",
  "GhostFaceNet",
  "Buffalo_L",
]

result = DeepFace.verify(
  img1_path = "img1.jpg",
  img2_path = "img2.jpg",
  model_name = models[0]
)

dfs = DeepFace.find(
  img_path = "img1.jpg",
  db_path = "C:/my_db",
  model_name = models[1]
)

embeddings = DeepFace.represent(
  img_path = "img.jpg",
  model_name = models[2]
)
```

FaceNet, VGG-Face, ArcFace and Dlib are the top performers based on experiments - see BENCHMARKS for more details. You can find the measured scores of various models in DeepFace and the reported scores from their original studies in the following table.

| Model        | Measured Score | Declared Score |
| ------------ | -------------- | -------------- |
| Facenet512   | 98.4%          | 99.6%          |
| Human-beings | 97.5%          | 97.5%          |
| Facenet      | 97.4%          | 99.2%          |
| Dlib         | 96.8%          | 99.3%          |
| VGG-Face     | 96.7%          | 98.9%          |
| ArcFace      | 96.7%          | 99.5%          |
| GhostFaceNet | 93.3%          | 99.7%          |
| SFace        | 93.0%          | 99.5%          |
| OpenFace     | 78.7%          | 92.9%          |
| DeepFace     | 69.0%          | 97.3%          |
| DeepID       | 66.5%          | 97.4%          |

Conducting experiments with those models within DeepFace may reveal disparities compared to the original studies, owing to the adoption of distinct detection or normalization techniques. Furthermore, some models have been released solely with their backbones, lacking pre-trained weights. Thus, we are utilizing their re-implementations instead of the original pre-trained weights.

Similarity - Demo

Face recognition models are regular convolutional neural networks, responsible for representing faces as vectors. We expect a face pair of the same person to be more similar than a face pair of different persons.

Similarity can be calculated with different metrics such as cosine similarity, Euclidean distance or L2-normalized Euclidean distance. The default configuration uses cosine similarity. According to experiments, no distance metric clearly outperforms the others.

```python
metrics = ["cosine", "euclidean", "euclidean_l2"]

result = DeepFace.verify(
  img1_path = "img1.jpg",
  img2_path = "img2.jpg",
  distance_metric = metrics[1]
)

dfs = DeepFace.find(
  img_path = "img1.jpg",
  db_path = "C:/my_db",
  distance_metric = metrics[2]
)
```
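For intuition, the three metrics reduce to short vector formulas. The following numpy sketch is illustrative only, not DeepFace's internal implementation (which also applies model-specific tuned thresholds):

```python
import numpy as np

def cosine_distance(a, b):
    # 1 - cosine similarity; 0 means identical direction
    a, b = np.asarray(a, dtype = float), np.asarray(b, dtype = float)
    return 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def euclidean_distance(a, b):
    return np.linalg.norm(np.asarray(a, dtype = float) - np.asarray(b, dtype = float))

def euclidean_l2_distance(a, b):
    # Euclidean distance after L2-normalizing each vector
    a, b = np.asarray(a, dtype = float), np.asarray(b, dtype = float)
    return np.linalg.norm(a / np.linalg.norm(a) - b / np.linalg.norm(b))

u, v = [1.0, 2.0, 2.0], [2.0, 4.0, 4.0]
print(cosine_distance(u, v))        # 0.0 - same direction
print(euclidean_distance(u, v))     # 3.0
print(euclidean_l2_distance(u, v))  # 0.0 - identical after normalization
```

Note how `u` and `v` differ only in magnitude: cosine and L2-normalized Euclidean report them as identical, while plain Euclidean does not.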

Facial Attribute Analysis - Demo

DeepFace also comes with a strong facial attribute analysis module, predicting age, gender, facial expression (angry, fear, neutral, sad, disgust, happy and surprise) and race (asian, white, middle eastern, indian, latino and black). The result list has one entry per face appearing in the source image.

```python
objs = DeepFace.analyze(
  img_path = "img4.jpg",
  actions = ['age', 'gender', 'race', 'emotion']
)
```

The age model achieved ±4.65 MAE; the gender model achieved 97.44% accuracy, 96.29% precision and 95.05% recall, as mentioned in its tutorial.
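Each element of the returned list describes one detected face. The snippet below reads the commonly used fields from a hand-written stand-in result: the key names follow the DeepFace output schema, but the values here are invented for illustration.

```python
# stand-in for one element of the list returned by DeepFace.analyze(...)
obj = {
    "age": 31,
    "dominant_gender": "Woman",
    "gender": {"Woman": 99.1, "Man": 0.9},
    "dominant_race": "asian",
    "dominant_emotion": "happy",
    "emotion": {"happy": 96.2, "neutral": 2.1, "sad": 0.7},
    "region": {"x": 230, "y": 120, "w": 36, "h": 45},  # bounding box of the face
}

# the dominant_* keys give the top prediction; the dicts give per-class scores
summary = f"{obj['dominant_gender']}, {obj['age']}, {obj['dominant_emotion']}"
print(summary)  # Woman, 31, happy
```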

Face Detection and Alignment - Demo

Face detection and alignment are important early stages of a modern face recognition pipeline. Experiments show that detection increases the face recognition accuracy up to 42%, while alignment increases it up to 6%. OpenCV, Ssd, Dlib, MtCnn, Faster MtCnn, RetinaFace, MediaPipe, Yolo, YuNet and CenterFace detectors are wrapped in deepface.

All deepface functions accept optional detector_backend and align input arguments. You can switch among those detectors and alignment modes with these arguments. OpenCV is the default detector, and alignment is on by default.

```python
backends = [
  'opencv',
  'ssd',
  'dlib',
  'mtcnn',
  'fastmtcnn',
  'retinaface',
  'mediapipe',
  'yolov8',
  'yolov11s',
  'yolov11n',
  'yolov11m',
  'yunet',
  'centerface',
]
detector = backends[3]
align = True

obj = DeepFace.verify(
  img1_path = "img1.jpg",
  img2_path = "img2.jpg",
  detector_backend = detector,
  align = align
)

dfs = DeepFace.find(
  img_path = "img.jpg",
  db_path = "my_db",
  detector_backend = detector,
  align = align
)

embedding_objs = DeepFace.represent(
  img_path = "img.jpg",
  detector_backend = detector,
  align = align
)

demographies = DeepFace.analyze(
  img_path = "img4.jpg",
  detector_backend = detector,
  align = align
)

face_objs = DeepFace.extract_faces(
  img_path = "img.jpg",
  detector_backend = detector,
  align = align
)
```

Face recognition models are actually CNN models and expect standard-sized inputs, so resizing is required before representation. To avoid deformation, deepface adds black padding pixels according to the target size argument after detection and alignment.
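This letterboxing idea can be sketched as follows. The sketch illustrates black-padding resizing in general, not DeepFace's internal code; centering the padding and the nearest-neighbor resize are assumptions made here for self-containment.

```python
import numpy as np

def resize_with_padding(img, target_size):
    # scale so the image fits inside the target, then pad the rest with black
    th, tw = target_size
    h, w = img.shape[:2]
    scale = min(th / h, tw / w)
    nh, nw = int(h * scale), int(w * scale)
    # nearest-neighbor resize via index arithmetic (stand-in for cv2.resize)
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    # black canvas of the target size, resized image centered on it
    canvas = np.zeros((th, tw, img.shape[2]), dtype = img.dtype)
    top, left = (th - nh) // 2, (tw - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas

face = np.full((100, 50, 3), 255, dtype = np.uint8)  # tall white face crop
padded = resize_with_padding(face, (224, 224))
print(padded.shape)  # (224, 224, 3)
```

The aspect ratio of the face is preserved; only the left and right margins are filled with zeros.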

RetinaFace and MtCnn seem to perform best in the detection and alignment stages, but they are much slower. If the speed of your pipeline is more important, you should use opencv or ssd. If accuracy matters more, you should use retinaface or mtcnn.

The performance of RetinaFace is very satisfactory even in crowds, as seen in the following illustration. Besides, it comes with incredible facial landmark detection performance. The highlighted red points show some facial landmarks such as eyes, nose and mouth. That's why the alignment score of RetinaFace is high as well.


The Yellow Angels - Fenerbahce Women's Volleyball Team

You can find out more about RetinaFace on this repo.

Real Time Analysis - Demo, React Demo part-i, React Demo part-ii

You can run deepface on real-time video as well. The stream function accesses your webcam and applies both face recognition and facial attribute analysis. It starts analyzing a frame once it can focus on a face for 5 consecutive frames, then shows the results for 5 seconds.

```python
DeepFace.stream(db_path = "C:/database")
```

Even though face recognition is based on one-shot learning, you can use multiple face pictures of a person as well. You should rearrange your directory structure as illustrated below.

```bash
user
├── database
│   ├── Alice
│   │   ├── Alice1.jpg
│   │   ├── Alice2.jpg
│   ├── Bob
│   │   ├── Bob.jpg
```

If you intend to perform face verification or analysis tasks directly from your browser, deepface-react-ui is a separate repository built with ReactJS, depending on the deepface API.

Face Anti Spoofing - Demo

DeepFace also includes an anti-spoofing analysis module to determine whether a given image is real or fake. To activate this feature, set the anti_spoofing argument to True in any DeepFace task.

```python
# anti spoofing test in face detection
face_objs = DeepFace.extract_faces(img_path = "dataset/img1.jpg", anti_spoofing = True)
assert all(face_obj["is_real"] is True for face_obj in face_objs)

# anti spoofing test in real time analysis
DeepFace.stream(db_path = "C:/database", anti_spoofing = True)
```

API - Demo, Docker Demo

DeepFace serves an API as well - see the api folder for more details. You can clone the deepface source code and run the API with the following command. It will use a gunicorn server to spin up a REST service, so you can call deepface from an external system such as a mobile app or a web site.

```shell
cd script

# run the service directly
./service.sh

# run the service via docker
./dockerize.sh
```

Face recognition, facial attribute analysis and vector representation functions are covered in the API. You are expected to call these functions as HTTP POST methods. Default service endpoints will be http://localhost:5005/verify for face recognition, http://localhost:5005/analyze for facial attribute analysis, and http://localhost:5005/represent for vector representation. The API accepts images as file uploads (via form data), or as exact image paths, URLs, or base64-encoded strings (via either JSON or form data), providing versatile options for different client requirements. Here, you can find a postman project to find out how these methods should be called.
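As a sketch, a verification request with base64-encoded images could be assembled like this. The field names mirror the SDK's argument names and should be checked against the linked postman project for the exact schema; the bytes below are synthetic stand-ins for real image files.

```python
import base64
import json

def to_base64_uri(image_bytes):
    # the API also accepts plain file paths and URLs; base64 keeps the call self-contained
    return "data:image/jpeg;base64," + base64.b64encode(image_bytes).decode("ascii")

# stand-in bytes; in practice: open("img1.jpg", "rb").read()
img1 = to_base64_uri(b"fake-image-1")
img2 = to_base64_uri(b"fake-image-2")

payload = json.dumps({"img1_path": img1, "img2_path": img2, "model_name": "VGG-Face"})

# with the service running locally, the request would look something like:
# import requests
# resp = requests.post("http://localhost:5005/verify", data = payload,
#                      headers = {"Content-Type": "application/json"})
print(json.loads(payload)["model_name"])  # VGG-Face
```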

Large Scale Facial Recognition - Playlist

If your task requires facial recognition on large datasets, you should combine DeepFace with a vector index or vector database. This setup will perform approximate nearest neighbor searches instead of exact ones, allowing you to identify a face in a database containing billions of entries within milliseconds. Common vector index solutions include Annoy, Faiss, Voyager, NMSLIB, ElasticSearch. For vector databases, popular options are Postgres with its pgvector extension and RediSearch.
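To see why approximate search is fast, here is a toy random-projection (LSH-style) sketch in plain numpy. It is only an illustration of the idea, not what Annoy or Faiss do internally in full, and the database is made of random stand-in vectors rather than real embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_db, n_bits = 128, 2000, 8

# hypothetical embedding database (stand-ins for DeepFace.represent output)
db = rng.normal(size = (n_db, dim))
query = db[1234] + rng.normal(scale = 0.01, size = dim)  # noisy copy of entry 1234

# random hyperplanes hash each vector into an n_bits binary signature
planes = rng.normal(size = (n_bits, dim))
def signature(v):
    return tuple((planes @ v > 0).astype(int))

# bucket every database vector by its signature
buckets = {}
for i, v in enumerate(db):
    buckets.setdefault(signature(v), []).append(i)

# search only the query's bucket (fall back to a full scan if it is empty)
candidates = buckets.get(signature(query)) or range(n_db)

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

best = max(candidates, key = lambda i: cos(db[i], query))
print(best)  # likely 1234, found while scanning only a small fraction of the database
```

Nearby vectors tend to fall on the same side of each random hyperplane, so the query usually lands in the same bucket as its true match; real libraries refine this idea with trees, graphs or quantization.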

Conversely, if your task involves facial recognition on small to moderate-sized databases, you can use relational databases such as Postgres or SQLite, or NoSQL databases like Mongo, Redis or Cassandra, to perform exact nearest neighbor search.
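A minimal exact-search sketch with stdlib SQLite: embeddings are stored as BLOBs and every row is ranked by cosine similarity. The three-dimensional vectors are toy stand-ins for real embeddings.

```python
import math
import sqlite3
import struct

def pack(vec):
    # serialize an embedding to a float32 BLOB for storage
    return struct.pack(f"{len(vec)}f", *vec)

def unpack(blob):
    return list(struct.unpack(f"{len(blob) // 4}f", blob))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE faces (name TEXT, embedding BLOB)")

# hypothetical embeddings; in practice DeepFace.represent(...)[0]["embedding"]
people = {"alice": [0.9, 0.1, 0.0], "bob": [0.0, 0.2, 0.9]}
for name, vec in people.items():
    conn.execute("INSERT INTO faces VALUES (?, ?)", (name, pack(vec)))

# exact nearest neighbor: scan every row and rank by cosine similarity
query = [0.8, 0.2, 0.1]
rows = conn.execute("SELECT name, embedding FROM faces").fetchall()
best = max(rows, key = lambda r: cosine(query, unpack(r[1])))
print(best[0])  # alice
```

The full scan is exact but linear in the database size, which is why the approximate indexes above become necessary at scale.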

Encrypt Embeddings - Demo with PHE, Tutorial for PHE, Demo with FHE, Tutorial for FHE

Even though vector embeddings are not reversible to original images, they still contain sensitive information such as fingerprints, making their security critical. Encrypting embeddings is essential for higher security applications to prevent adversarial attacks that could manipulate or extract sensitive information. Traditional encryption methods like AES are very safe but limited in securely utilizing cloud computational power for distance calculations. Herein, homomorphic encryption, allowing calculations on encrypted data, offers a robust alternative.

```python
from lightphe import LightPHE

# build an additively homomorphic cryptosystem (e.g. Paillier) on-prem
cs = LightPHE(algorithm_name = "Paillier", precision = 19)

# define plain vectors for source and target
alpha = DeepFace.represent("img1.jpg")[0]["embedding"]
beta = DeepFace.represent("target.jpg")[0]["embedding"]

# encrypt source embedding on-prem - private key not required
encrypted_alpha = cs.encrypt(alpha)

# dot product of encrypted & plain embedding in cloud - private key not required
encrypted_cosine_similarity = encrypted_alpha @ beta

# decrypt similarity on-prem - private key required
calculated_similarity = cs.decrypt(encrypted_cosine_similarity)[0]

# verification, where threshold is the pre-tuned distance threshold of the model
print("same person" if calculated_similarity >= 1 - threshold else "different persons")

# proof of work
assert abs(calculated_similarity - sum(x * y for x, y in zip(alpha, beta))) < 1e-2
```

In this scheme, we leverage the computational power of the cloud to compute encrypted cosine similarity. However, the cloud has no knowledge of the actual calculations it performs. That's the magic of homomorphic encryption! Only the secret key holder on the on-premises side can decrypt the encrypted cosine similarity and determine whether the pair represents the same person or different individuals. Check out LightPHE library to find out more about partially homomorphic encryption.

Contribution

Pull requests are more than welcome! If you are planning to contribute a large patch, please create an issue first to get any upfront questions or design decisions out of the way.

Before creating a PR, you should run the unit tests and linting locally with the make test && make lint command. Once a PR is sent, the GitHub test workflow runs automatically, and the unit test and linting jobs will be available in GitHub Actions before approval.

Support

There are many ways to support a project - starring⭐️ the GitHub repo is just one 🙏

If you do like this work, then you can support it financially on Patreon, GitHub Sponsors or Buy Me a Coffee. Also, your company's logo will be shown on README on GitHub if you become a sponsor in gold, silver or bronze tiers.

Citation

Please cite deepface in your publications if it helps your research.

S. Serengil and A. Ozpinar, "A Benchmark of Facial Recognition Pipelines and Co-Usability Performances of Modules", Journal of Information Technologies, vol. 17, no. 2, pp. 95-107, 2024.

```BibTeX
@article{serengil2024lightface,
  title     = {A Benchmark of Facial Recognition Pipelines and Co-Usability Performances of Modules},
  author    = {Serengil, Sefik and Ozpinar, Alper},
  journal   = {Journal of Information Technologies},
  volume    = {17},
  number    = {2},
  pages     = {95-107},
  year      = {2024},
  doi       = {10.17671/gazibtd.1399077},
  url       = {https://dergipark.org.tr/en/pub/gazibtd/issue/84331/1399077},
  publisher = {Gazi University}
}
```

S. I. Serengil and A. Ozpinar, "LightFace: A Hybrid Deep Face Recognition Framework", 2020 Innovations in Intelligent Systems and Applications Conference (ASYU), 2020, pp. 23-27.

```BibTeX
@inproceedings{serengil2020lightface,
  title        = {LightFace: A Hybrid Deep Face Recognition Framework},
  author       = {Serengil, Sefik Ilkin and Ozpinar, Alper},
  booktitle    = {2020 Innovations in Intelligent Systems and Applications Conference (ASYU)},
  pages        = {23-27},
  year         = {2020},
  doi          = {10.1109/ASYU50717.2020.9259802},
  url          = {https://ieeexplore.ieee.org/document/9259802},
  organization = {IEEE}
}
```

S. I. Serengil and A. Ozpinar, "HyperExtended LightFace: A Facial Attribute Analysis Framework", 2021 International Conference on Engineering and Emerging Technologies (ICEET), 2021, pp. 1-4.

```BibTeX
@inproceedings{serengil2021lightface,
  title        = {HyperExtended LightFace: A Facial Attribute Analysis Framework},
  author       = {Serengil, Sefik Ilkin and Ozpinar, Alper},
  booktitle    = {2021 International Conference on Engineering and Emerging Technologies (ICEET)},
  pages        = {1-4},
  year         = {2021},
  doi          = {10.1109/ICEET53442.2021.9659697},
  url          = {https://ieeexplore.ieee.org/document/9659697},
  organization = {IEEE}
}
```

Also, if you use deepface in your GitHub projects, please add deepface in the requirements.txt.

Licence

DeepFace is licensed under the MIT License - see LICENSE for more details.

DeepFace wraps some external face recognition models: VGG-Face, Facenet (both 128d and 512d), OpenFace, DeepFace, DeepID, ArcFace, Dlib, SFace, GhostFaceNet and Buffalo_L. Besides, the age, gender and race / ethnicity models were trained on the backbone of VGG-Face with transfer learning. Similarly, DeepFace wraps many face detectors: OpenCv, Ssd, Dlib, MtCnn, Fast MtCnn, RetinaFace, MediaPipe, YuNet, Yolo and CenterFace. Finally, DeepFace optionally uses face anti-spoofing to determine whether given images are real or fake. License types are inherited when you intend to utilize those models. Please check the license types of those models for production purposes.

DeepFace logo is created by Adrien Coquet and it is licensed under Creative Commons: By Attribution 3.0 License.


Owner

  • Name: guagua
  • Login: LaniakeaDG
  • Kind: user

Citation (CITATION.md)

## Cite DeepFace Papers

Please cite deepface in your publications if it helps your research. Here are its BibTex entries:

### Facial Recognition

If you use deepface in your research for facial recognition purposes, please cite these publications:

```BibTeX
@article{serengil2024lightface,
  title         = {A Benchmark of Facial Recognition Pipelines and Co-Usability Performances of Modules},
  author        = {Serengil, Sefik Ilkin and Ozpinar, Alper},
  journal       = {Bilisim Teknolojileri Dergisi},
  volume        = {17},
  number        = {2},
  pages         = {95-107},
  year          = {2024},
  doi           = {10.17671/gazibtd.1399077},
  url           = {https://dergipark.org.tr/en/pub/gazibtd/issue/84331/1399077},
  publisher     = {Gazi University}
}
```

```BibTeX
@inproceedings{serengil2020lightface,
  title        = {LightFace: A Hybrid Deep Face Recognition Framework},
  author       = {Serengil, Sefik Ilkin and Ozpinar, Alper},
  booktitle    = {2020 Innovations in Intelligent Systems and Applications Conference (ASYU)},
  pages        = {23-27},
  year         = {2020},
  doi          = {10.1109/ASYU50717.2020.9259802},
  url          = {https://ieeexplore.ieee.org/document/9259802},
  organization = {IEEE}
}
```

### Facial Attribute Analysis

If you use deepface in your research for facial attribute analysis purposes such as age, gender, emotion or ethnicity prediction, please cite this publication.

```BibTeX
@inproceedings{serengil2021lightface,
  title        = {HyperExtended LightFace: A Facial Attribute Analysis Framework},
  author       = {Serengil, Sefik Ilkin and Ozpinar, Alper},
  booktitle    = {2021 International Conference on Engineering and Emerging Technologies (ICEET)},
  pages        = {1-4},
  year         = {2021},
  doi          = {10.1109/ICEET53442.2021.9659697},
  url          = {https://ieeexplore.ieee.org/document/9659697/},
  organization = {IEEE}
}
```

### Additional Papers

We have additionally released these papers within the DeepFace project for a multitude of purposes.

```BibTeX
@misc{serengil2025cipherface,
   title     = {CipherFace: A Fully Homomorphic Encryption-Driven Framework for Secure Cloud-Based Facial Recognition}, 
   author    = {Serengil, Sefik and Ozpinar, Alper},
   year      = {2025},
   publisher = {arXiv},
   url       = {https://arxiv.org/abs/2502.18514},
   doi       = {10.48550/arXiv.2502.18514}
}
```

```BibTeX
@misc{serengil2023db,
  title         = {An evaluation of sql and nosql databases for facial recognition pipelines},
  author        = {Serengil, Sefik Ilkin and Ozpinar, Alper},
  year          = {2023},
  archivePrefix = {Cambridge Open Engage},
  doi           = {10.33774/coe-2023-18rcn},
  url           = {https://www.cambridge.org/engage/coe/article-details/63f3e5541d2d184063d4f569}
}
```

### Repositories

Also, if you use deepface in your GitHub projects, please add `deepface` in the `requirements.txt`. Thereafter, your project will be listed in its [dependency graph](https://github.com/serengil/deepface/network/dependents).

GitHub Events

Total
  • Delete event: 1
  • Push event: 2
  • Create event: 1
Last Year
  • Delete event: 1
  • Push event: 2
  • Create event: 1

Dependencies

.github/workflows/tests.yml actions
  • actions/checkout v4 composite
  • actions/setup-python v5 composite
Dockerfile docker
  • python 3.8.12 build
requirements.txt pypi
  • Flask >=1.1.2
  • Pillow >=5.2.0
  • fire >=0.4.0
  • flask_cors >=4.0.1
  • gdown >=3.10.1
  • gunicorn >=20.1.0
  • keras >=2.2.0
  • mtcnn >=0.1.0
  • numpy >=1.14.0
  • opencv-python >=4.5.5.64
  • pandas >=0.23.4
  • requests >=2.27.1
  • retina-face >=0.0.14
  • tensorflow >=1.9.0
  • tqdm >=4.30.0
requirements_additional.txt pypi
  • albumentations *
  • dlib >=19.20.0
  • facenet-pytorch >=2.5.3
  • insightface >=0.7.3
  • mediapipe >=0.8.7.3
  • onnxruntime >=1.9.0
  • opencv-contrib-python >=4.3.0.36
  • pydantic *
  • tf-keras *
  • torch >=2.1.2
  • typing-extensions *
  • ultralytics >=8.0.122
setup.py pypi