dl4pude

The framework automatically detects pushing behavior from videos of crowded event entrances.

https://github.com/pedestriandynamics/dl4pude

Science Score: 59.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 3 DOI reference(s) in README
  • Academic publication links
    Links to: scholar.google, mdpi.com, zenodo.org
  • Committers with academic emails
    1 of 2 committers (50.0%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (10.8%) to scientific vocabulary

Keywords

artificial-intelegence convolutional-neural-networks crowd-behavior-analysis data-visualization deep-learning deep-optical-flow machine-learning
Last synced: 6 months ago

Repository

The framework automatically detects pushing behavior from videos of crowded event entrances.

Basic Info
  • Host: GitHub
  • Owner: PedestrianDynamics
  • License: other
  • Language: Jupyter Notebook
  • Default Branch: main
  • Homepage:
  • Size: 307 MB
Statistics
  • Stars: 1
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 5
Topics
artificial-intelegence convolutional-neural-networks crowd-behavior-analysis data-visualization deep-learning deep-optical-flow machine-learning
Created about 4 years ago · Last pushed over 2 years ago
Metadata Files
Readme · License · Citation

README.md

DL4PuDe: A hybrid framework of deep learning and visualization for pushing behavior detection in pedestrian dynamics

Badges: DOI · License · Python 3.7 | 3.8 · GPU · RAM 16 GB

This repository hosts the DL4PuDe framework, which accompanies the following published paper: Alia, Ahmed, Mohammed Maree, and Mohcine Chraibi. 2022. "A Hybrid Deep Learning and Visualization Framework for Pushing Behavior Detection in Pedestrian Dynamics." Sensors 22, no. 11: 4040.

Content

  1. Framework aim.
  2. Framework motivation.
  3. Pushing behavior definition.
  4. Framework architecture.
  5. How to install and use the framework.
  6. Demo.
  7. Experiment videos.
  8. CNN-based classifiers.
  9. List of papers that cited this work.

Aim of the DL4PuDe Framework

DL4PuDe aims to automatically detect and annotate pushing behavior at the patch level in video recordings of human crowds.

Motivation of the DL4PuDe Framework

DL4PuDe is intended to assist researchers in the field of crowd dynamics in gaining a better understanding of pushing dynamics, which is crucial for managing crowds comfortably and safely.

Pushing Behavior Definition

In this work, pushing is defined as a behavior that pedestrians use to reach a target faster.

An example of a pushing strategy: entering the event faster.

The Architecture of DL4PuDe

DL4PuDe relies mainly on an EfficientNet-B0-based classifier, the RAFT optical flow method, and wheel visualization.

Kindly note that we use the RAFT repository for optical flow estimation in our project.
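The wheel visualization step renders the RAFT motion field as a color image before classification. As a rough illustration, here is a standard optical-flow color-wheel rendering with OpenCV; the function name is ours, and DL4PuDe's own visualization may differ in detail.

```python
import cv2
import numpy as np

def flow_to_wheel_image(flow):
    """Render a dense optical-flow field (H, W, 2, float32) as a color
    image: hue encodes direction, brightness encodes magnitude.
    Illustrative stand-in for the paper's wheel visualization."""
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros((*flow.shape[:2], 3), dtype=np.uint8)
    hsv[..., 0] = (ang * 180 / np.pi / 2).astype(np.uint8)  # direction -> hue
    hsv[..., 1] = 255                                       # full saturation
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255,
                                cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```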

Example

Input video and output video pair. The framework detects pushing patches every 12 frames (12/25 s); the red boxes mark the pushing patches (see the annotation sketch below).
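To illustrate the annotation step, the sketch below splits a frame into the rows × cols patch grid (3 × 3 in the demo) and outlines the patches flagged as pushing in red. The helper name and the boolean mask are hypothetical, not part of the DL4PuDe API.

```python
import cv2

def annotate_pushing_patches(frame, pushing, rows=3, cols=3):
    """Draw red boxes around grid patches flagged as pushing.
    `pushing` is a (rows, cols) boolean array; names are hypothetical."""
    h, w = frame.shape[:2]
    ph, pw = h // rows, w // cols
    for r in range(rows):
        for c in range(cols):
            if pushing[r][c]:
                x0, y0 = c * pw, r * ph
                cv2.rectangle(frame, (x0, y0), (x0 + pw, y0 + ph),
                              color=(0, 0, 255), thickness=2)  # red in BGR
    return frame
```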

Installation

  1. Clone the repository into your directory:
     git clone https://github.com/PedestrianDynamics/DL4PuDe.git
  2. Install the required libraries:
     pip install -r libraries.txt
  3. Run the framework:
     python3 run.py --video [input video path] --roi [x1 y1 x2 y2] --patch [rows cols] --ratio [scale of video] --angle [rotation angle in degrees]
     Here --roi takes the x and y coordinates of the top-left and bottom-right corners of the region of interest, and --angle rotates the input video so that the crowd flows from left to right.
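To make these options concrete, here is our reading of what they do to each frame, sketched with OpenCV; this mirrors the flag descriptions above and is not DL4PuDe's own preprocessing code.

```python
import cv2

def preprocess_frame(frame, roi, ratio, angle):
    """Rotate so the crowd flows left to right, crop to the ROI, and
    rescale; an interpretation of --angle/--roi/--ratio, not DL4PuDe code."""
    h, w = frame.shape[:2]
    if angle:
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        frame = cv2.warpAffine(frame, m, (w, h))        # rotate about center
    x1, y1, x2, y2 = roi
    frame = frame[y1:y2, x1:x2]                         # crop region of interest
    return cv2.resize(frame, None, fx=ratio, fy=ratio)  # scale the frame
```

The demo command below would correspond to preprocess_frame(frame, (380, 128, 1356, 1294), ratio=0.5, angle=0).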
Demo

Run the following command

python3 run.py --video ./videos/150.mp4 --roi 380 128 1356 1294 --patch 3 3 --ratio 0.5 --angle 0

The framework will then display its progress details.

When the framework finishes processing, it generates the annotated video in the framework directory. Please note that the annotated version of video 150 is available in the repository root as 150-demo.mp4.

Experiment Videos

The original experiment videos used in this work are available through the Pedestrian Dynamics Data Archive hosted by Forschungszentrum Juelich. The undistorted videos are also available via this link.

CNN-based Classifiers

We build and evaluate our classifier using four CNN architectures: EfficientNet-B0, MobileNet, InceptionV3, and ResNet50. The source code for building, training, and evaluating the CNN-based classifiers, as well as the trained classifiers, is available at the links below (a sketch of such a classifier follows the list).

  1. Source code for building and training the CNN-based classifiers:
     • EfficientNet-B0-based classifier.
     • MobileNet-based classifier.
     • InceptionV3-based classifier.
     • ResNet50-based classifier.
  2. Trained CNN-based classifiers.
  3. CNN-based classifier evaluation:
     • Patch-based medium RAFT-MIM12 dataset.
     • Patch-based medium RAFT-MIM25 dataset.
     • Patch-based small RAFT-MIM12 dataset.
     • Patch-based small RAFT-MIM25 dataset.
     • Patch-based medium FB-MIM12 dataset.
     • Frame-based RAFT-MIM12 dataset.
     • Frame-based RAFT-MIM25 dataset.
  4. Patch-based MIM test sets.
  5. MIM training and validation sets are available from the corresponding authors upon request.
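For orientation, here is a minimal transfer-learning sketch of what an EfficientNet-B0-based binary pushing / non-pushing patch classifier can look like in Keras. The input size, head layers, and training settings are illustrative assumptions, not the paper's configuration.

```python
import tensorflow as tf

def build_pushing_classifier(input_shape=(224, 224, 3)):
    """Hypothetical EfficientNet-B0 transfer-learning setup for the
    binary pushing / non-pushing patch task (not the authors' code)."""
    base = tf.keras.applications.EfficientNetB0(
        include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = False  # freeze the backbone; fine-tune later if desired
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # pushing probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

The same head could be placed on MobileNet, InceptionV3, or ResNet50 backbones for the comparison above.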

List of papers that cited this work

To access the list of papers citing this work, kindly click on this link.

Citation

If you use this framework or the generated dataset in your work, please cite: Alia, Ahmed, Mohammed Maree, and Mohcine Chraibi. 2022. "A Hybrid Deep Learning and Visualization Framework for Pushing Behavior Detection in Pedestrian Dynamics." Sensors 22, no. 11: 4040.

Acknowledgments

  • This work was funded by the German Federal Ministry of Education and Research (BMBF, funding number 01DH16027) within the Palestinian-German Science Bridge project framework, and partially by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), project number 491111487.

  • Thanks to Forschungszentrum Juelich, Institute for Advanced Simulation (IAS-7), for making the Pedestrian Dynamics Data Archive publicly accessible under the CC Attribution 4.0 International license.

  • Thanks to Anna Sieben, Helena Lügering, and Ezel Üsten for developing the rating system and annotating the pushing behavior in the video experiments.

  • Thanks to the authors of the paper "RAFT: Recurrent All-Pairs Field Transforms for Optical Flow" for making the RAFT source code available.

Owner

  • Name: Pedestrian Dynamics
  • Login: PedestrianDynamics
  • Kind: organization
  • Location: Germany

Committers

Last synced: 7 months ago

All Time
  • Total Commits: 132
  • Total Committers: 2
  • Avg Commits per committer: 66.0
  • Development Distribution Score (DDS): 0.008 (1 − 131/132; nearly all commits come from a single committer)
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
  • abualia4 (a****4@n****u): 131 commits
  • Ozaq (O****q): 1 commit
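For reference, the DDS above follows directly from these counts, assuming the common definition DDS = 1 − (commits by the top committer / total commits):

```python
commits = {"abualia4": 131, "Ozaq": 1}
dds = 1 - max(commits.values()) / sum(commits.values())
print(f"DDS = {dds:.3f}")  # 0.008: development is concentrated in one committer
```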

Issues and Pull Requests

Last synced: 8 months ago

All Time
  • Total issues: 0
  • Total pull requests: 1
  • Average time to close issues: N/A
  • Average time to close pull requests: about 4 hours
  • Total issue authors: 0
  • Total pull request authors: 1
  • Average comments per issue: 0
  • Average comments per pull request: 0.0
  • Merged pull requests: 1
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors: none
Pull Request Authors
  • Ozaq (1)