ai-attendance-system

An AI-powered attendance system using YOLOv5 for face detection and recognition.

https://github.com/yassine-mhirsi/ai-attendance-system

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (8.3%) to scientific vocabulary
Last synced: 6 months ago

Repository

An AI-powered attendance system using YOLOv5 for face detection and recognition.

Basic Info
  • Host: GitHub
  • Owner: Yassine-Mhirsi
  • License: MIT
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 69.4 MB
Statistics
  • Stars: 0
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created over 1 year ago · Last pushed 11 months ago
Metadata Files
Readme Contributing License Citation

README.md

AI Attendance System

An AI-powered attendance system using YOLOv5 for face detection and recognition. This project is designed to automate attendance tracking efficiently with high accuracy.


How to Run the Project

1. Install Python

Ensure Python 3.10.* is installed on your system. You can download it from python.org.

2. Install Dependencies

Install the required libraries using the provided requirements.txt:

```bash
pip install -r requirements.txt
```

3. Prepare the Dataset

Organize your dataset in the dataset/ folder with the following structure:

```bash
dataset/
├── images/      # Extract all images here
├── labels/      # Already prepared
├── test/        # Optional test data
└── data.yaml    # Configuration file for YOLOv5
```
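The repository does not show the contents of data.yaml, but a YOLOv5 dataset config typically looks like the minimal sketch below. The paths, class count, and class names here are assumptions for illustration; adjust them to match your actual dataset splits:

```yaml
# Hypothetical data.yaml — paths and class names are assumptions, not taken from this repo
train: dataset/images        # directory of training images
val: dataset/images          # validation images (often a separate split)
nc: 1                        # number of classes
names: ["face"]              # class names; this project detects faces
```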

4. Testing the Model

Model Location:
The model is pre-trained and located in runs/train/exp15/weights/:

- best.pt: The best weights achieved during training (optimal performance on validation data).
- last.pt: The weights from the last training epoch (useful for continued training or testing).

For Live Video Feed (e.g., webcam):

Run the following command to use the webcam as the input source:

```bash
python detect.py --weights runs/train/exp15/weights/last.pt --img 640 --source 0
```

- --source 0: Indicates live video feed from your webcam.

For Specific Images:

To test the model on a folder of images, run the following command:

```bash
python detect.py --weights runs/train/exp15/weights/last.pt --img 640 --source dataset/images/
```

- --source dataset/images/: Specifies the folder containing images for testing.
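The commands above only visualize detections; the README does not show how detections become attendance records. As a hedged sketch: if detect.py is rerun with YOLOv5's --save-txt flag, it writes one label file per image where each line starts with a class index, and assuming one class per known person (an assumption, not confirmed by this repository), those files could be post-processed like this:

```python
from pathlib import Path

def mark_attendance(labels_dir, class_names):
    """Read YOLOv5 --save-txt label files and return the set of
    recognized identities. Assumes one class per known person."""
    present = set()
    for label_file in Path(labels_dir).glob("*.txt"):
        for line in label_file.read_text().splitlines():
            cls_idx = int(line.split()[0])  # first column is the class index
            if 0 <= cls_idx < len(class_names):
                present.add(class_names[cls_idx])
    return present

if __name__ == "__main__":
    # Demo with a hypothetical labels directory and class list
    import tempfile
    tmp = Path(tempfile.mkdtemp())
    (tmp / "frame1.txt").write_text("0 0.5 0.5 0.2 0.3\n1 0.1 0.2 0.1 0.1\n")
    print(sorted(mark_attendance(tmp, ["alice", "bob", "carol"])))  # -> ['alice', 'bob']
```

The class-index-to-name mapping would come from the names list in data.yaml; a real system would also filter by confidence and timestamp each sighting.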

Owner

  • Name: Yassine Mhirsi
  • Login: Yassine-Mhirsi
  • Kind: user

a gamer

Citation (CITATION.cff)

cff-version: 1.2.0
preferred-citation:
  type: software
  message: If you use YOLOv5, please cite it as below.
  authors:
  - family-names: Jocher
    given-names: Glenn
    orcid: "https://orcid.org/0000-0001-5950-6979"
  title: "YOLOv5 by Ultralytics"
  version: 7.0
  doi: 10.5281/zenodo.3908559
  date-released: 2020-5-29
  license: AGPL-3.0
  url: "https://github.com/ultralytics/yolov5"

GitHub Events

Total
  • Public event: 1
  • Push event: 5
  • Fork event: 1
Last Year
  • Public event: 1
  • Push event: 5
  • Fork event: 1