VCNN4PuDe
VCNN4PuDe is a framework for identifying the persons who engage in pushing within videos of crowds
Science Score: 49.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file (found)
- ✓ .zenodo.json file (found)
- ✓ DOI references (found 3 DOI reference(s) in README)
- ✓ Academic publication links (links to: zenodo.org)
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity, 10.9%, to scientific vocabulary)
Keywords
Repository
Basic Info
Statistics
- Stars: 1
- Watchers: 2
- Forks: 0
- Open Issues: 0
- Releases: 4
Topics
Metadata Files
README.md
VCNN4PuDe: A Novel Voronoi-based CNN Framework for Pushing Person Detection in Crowd Videos
This repository accompanies the submitted paper:
Alia, Ahmed, et al. "A Novel Voronoi-based Convolutional Neural Network Framework
for Pushing Person Detection in Crowd Videos". 2023
Table of Contents
- Goal
- Motivation
- Architecture of VCNN4PuDe
- Codes of VCNN4PuDe
- Samples
- Installing the Framework
- Running the Framework
- Codes for CNN Architectures and Training
- Trained CNN models
- Test Sets
- Codes for Trained CNN Models Evaluation
Goal
The main goal of this article is to introduce a framework (VCNN4PuDe) for identifying the persons who engage in pushing within videos of crowds.
Motivation
Detecting pushing persons within videos of crowded event entrances is crucial for understanding pushing dynamics, thereby designing and managing more comfortable and safer entrances.
Architecture of VCNN4PuDe Framework
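As a rough illustration of the Voronoi idea underlying the framework, the sketch below partitions a small grid among pedestrians by nearest position (a discrete Voronoi partition). The positions are invented for illustration; the framework's actual Voronoi construction and CNN stages are described in the paper.

```python
# Minimal sketch of the Voronoi idea behind VCNN4PuDe: assign each point
# of a coarse grid to its nearest pedestrian, yielding a discrete Voronoi
# partition of the space. Positions below are illustrative only.
positions = {1: (0.5, 1.2), 2: (2.1, 0.9), 3: (1.3, 2.4)}

def nearest_pedestrian(px, py):
    """Return the id of the pedestrian closest to grid point (px, py)."""
    return min(positions, key=lambda i: (positions[i][0] - px) ** 2
                                        + (positions[i][1] - py) ** 2)

# Count grid cells per pedestrian over a 3m x 3m area at 0.5m resolution;
# a pedestrian's cell count reflects the free space around them.
cells = {i: 0 for i in positions}
for gx in range(6):
    for gy in range(6):
        cells[nearest_pedestrian(gx * 0.5, gy * 0.5)] += 1

print(sum(cells.values()))  # 36 grid points partitioned
```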
Codes of VCNN4PuDe Framework
Samples
Input video with its trajectory data
You can access them via this link.
Note: They were taken from Pedestrian Dynamics Data Archive hosted by FZJ.
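As an illustration only: trajectory files in the Pedestrian Dynamics Data Archive are commonly plain text with one row per detection, listing pedestrian id, frame number, and coordinates (the exact column layout varies per dataset, so treat this as an assumption). A minimal loader sketch under that assumption:

```python
# Sketch: parse whitespace-separated "id frame x y z" trajectory rows
# into per-pedestrian tracks. The column order is an assumption based
# on common Pedestrian Dynamics Data Archive exports.
from collections import defaultdict

def load_trajectories(lines):
    """Return {pedestrian_id: [(frame, x, y), ...]} from text lines."""
    tracks = defaultdict(list)
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):  # skip comments and blanks
            continue
        pid, frame, x, y = line.split()[:4]
        tracks[int(pid)].append((int(frame), float(x), float(y)))
    return dict(tracks)

sample = """# id frame x y z
1 0 0.50 1.20 1.70
1 1 0.55 1.22 1.70
2 0 2.10 0.90 1.65
"""
tracks = load_trajectories(sample.splitlines())
print(len(tracks))     # 2 pedestrians
print(len(tracks[1]))  # 2 frames for pedestrian 1
```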
Annotated video produced by the **VCNN4PuDe framework**
Installing VCNN4PuDe on Google Colab
- Create a directory named VCNN4PuDe on your drive.
- Access VCNN4PuDe directory.
- Add a new notebook and run the following commands:
a. Mount Google Drive
a. Mount Google Drive

```
from google.colab import drive
drive.mount('/content/drive')
```

b. Access the VCNN4PuDe directory

```
%cd /content/drive/My Drive/VCNN4PuDe/
```

c. Clone the VCNN4PuDe framework

```
!git clone https://github.com/abualia4/VCNN4PuDe.git
```

d. Install the keras-preprocessing module

```
!pip install keras-preprocessing
```
Running VCNN4PuDe

Open the run notebook and follow the instructions in it; the annotated Video.mp4 will be stored in the annotated folder.
Note: If additional libraries are required to run the framework, install them from a notebook cell with:

!pip install <module/library name>
Codes for CNN Architectures and Training
Trained Models
All trained models produced in this article are available at this link
Test Sets
Two test sets are available at this link
Codes for Trained Models Evaluation
The codes for evaluating the trained models are available at this link
Acknowledgement
- Thanks to the authors of voronoi_finite_polygons_2d function.
- Thanks to the author of Createrandompolygon class.
Citation
Coming soon.
Owner
- Name: Pedestrian Dynamics
- Login: PedestrianDynamics
- Kind: organization
- Location: Germany
- Repositories: 5
- Profile: https://github.com/PedestrianDynamics
Pedestrian Dynamics
GitHub Events
Total
Last Year
Issues and Pull Requests
Last synced: 10 months ago
All Time
- Total issues: 0
- Total pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Total issue authors: 0
- Total pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0