ebpersons
A Dataset for Person Detection at the Edges of Buildings
Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (11.6%) to scientific vocabulary
Repository
A Dataset for Person Detection at the Edges of Buildings
Basic Info
- Host: GitHub
- Owner: BiBiKo219
- License: apache-2.0
- Language: Python
- Default Branch: main
- Size: 3.4 MB
Statistics
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
EBPersons: A Dataset for Person Detection at the Edges of Buildings
Introduction
EBPersons is a novel dataset specifically designed for Person Detection at the Edges of Buildings (PDEB). As buildings become ever more prevalent, falls from height have grown more frequent, making accurate detection of individuals at the edges of buildings crucial for timely intervention and accident prevention. This dataset provides a rich and challenging benchmark for PDEB research, comprising 1,314 videos captured across more than 300 building scenes under diverse lighting conditions.
Examples
Below are examples of images from the EBPersons dataset, showcasing the diversity in illumination, scene, and human pose. The captured person instances are usually small and occluded.

Data Collection
The EBPersons dataset was constructed using a two-pronged data collection strategy:
1. Original footage: captured by staging scenarios with volunteers performing various actions near the edges of buildings, using wide-dynamic-range cameras with CMOS sensors and dot-matrix LED infrared lamps.
2. Publicly available videos: selected from platforms such as YouTube and Baidu, depicting individuals at the edges of buildings, filmed from tilt-upward angles.
Data Annotation
Each person in an image is annotated with two bounding boxes: one for the visible region of the person’s body and the other for the full body. Annotators estimate the location of occluded body parts when drawing the full bounding box. People depicted in posters, statues, mannequins, and reflections are marked as ignored regions and are not annotated.
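The dual-box annotation scheme above can be sketched in code. This is a minimal illustration, not the dataset's actual schema: the field names (`bbox_visible`, `bbox_full`, `ignore`) and the `occlusion_ratio` helper are hypothetical, chosen only to show how a visible-region box and a full-body box relate.

```python
def occlusion_ratio(visible_box, full_box):
    """Fraction of the full-body box area not covered by the visible box.

    Boxes are (x, y, w, h) in the COCO convention. A higher value means
    more of the person is occluded (illustrative metric, not official).
    """
    vis_area = visible_box[2] * visible_box[3]
    full_area = full_box[2] * full_box[3]
    if full_area == 0:
        return 0.0
    return max(0.0, 1.0 - vis_area / full_area)

# Hypothetical per-person record: one box for the visible region,
# one for the estimated full body (including occluded parts).
person = {
    "bbox_visible": [120, 80, 30, 45],
    "bbox_full":    [115, 80, 40, 90],
    "ignore": False,  # True for posters, statues, mannequins, reflections
}

ratio = occlusion_ratio(person["bbox_visible"], person["bbox_full"])
print(f"occluded fraction: {ratio:.2f}")
```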

Download Dataset
The EBPersons dataset and baseline code are publicly available for non-commercial research use under the CC BY-NC-SA 4.0 license. You can access the dataset and download it from the official website: EBPersons Dataset Download
How to Use
To use the EBPersons dataset, follow these steps:
1. Download the dataset from the provided link.
2. Extract the contents of the downloaded archive. The folder structure is as follows:
├── data
│   ├── cocovid_all
│   │   ├── Data
│   │   │   ├── train
│   │   │   ├── val
│   │   ├── annotations
There are two JSON files in data/cocovid_all/annotations:
- coco_vid_annotations_train.json: contains the annotation information for the training set of the EBPersons dataset.
- coco_vid_annotations_val.json: contains the annotation information for the validation set of the EBPersons dataset.
3. Follow the instructions in run.sh and quick_run for specific usage guidelines and any additional requirements. The weights of the baseline method can be downloaded from the baseline weights link.
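The steps above can be sketched as a small loader. Since the annotation files follow the COCO-VID convention used by MMTracking, a typical file has top-level `videos`, `images`, `annotations`, and `categories` lists; the sample below is a hand-built stand-in for `json.load()` on the real coco_vid_annotations_train.json, and its exact field names should be verified against the actual file.

```python
import json
from collections import defaultdict

# Hand-built sample mimicking a COCO-VID annotation file; in practice you
# would use: data = json.load(open("data/cocovid_all/annotations/"
#                                  "coco_vid_annotations_train.json"))
sample = {
    "videos": [{"id": 1, "name": "scene_001"}],
    "images": [{"id": 10, "video_id": 1, "frame_id": 0,
                "file_name": "scene_001/000000.jpg"}],
    "annotations": [{"id": 100, "image_id": 10, "category_id": 1,
                     "bbox": [50, 60, 20, 40]}],
    "categories": [{"id": 1, "name": "person"}],
}
data = json.loads(json.dumps(sample))

# Group annotations by image so each frame can be paired with its boxes.
anns_per_image = defaultdict(list)
for ann in data["annotations"]:
    anns_per_image[ann["image_id"]].append(ann)

for img in data["images"]:
    boxes = [a["bbox"] for a in anns_per_image[img["id"]]]
    print(img["file_name"], boxes)
```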
Ethical Considerations
All individuals appearing in the videos we captured are volunteers who provided informed consent after receiving a full explanation of the project's purpose and data collection procedures. Necessary authorization was obtained from the relevant organizations to conduct video shooting in the vicinity of the buildings. All corresponding authorization documentation is available on the dataset website.
Owner
- Login: BiBiKo219
- Kind: user
- Repositories: 1
- Profile: https://github.com/BiBiKo219
Citation (CITATION.cff)
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - name: "MMTracking Contributors"
title: "OpenMMLab Video Perception Toolbox and Benchmark"
date-released: 2021-01-04
url: "https://github.com/open-mmlab/mmtracking"
license: Apache-2.0
GitHub Events
Total
- Push event: 11
- Create event: 2
Last Year
- Push event: 11
- Create event: 2
Dependencies
- asynctest *
- attributee *
- codecov *
- cython *
- dotty_dict *
- einops *
- flake8 *
- interrogate *
- isort ==4.3.21
- kwarray *
- lap *
- matplotlib *
- mmcls <1.0.0,>=0.16.0
- mmcv-full <1.7.0,>=1.6.1
- mmdet <3.0.0,>=2.19.1
- motmetrics *
- numpy *
- packaging *
- pandas <=1.3.5
- pycocotools *
- pytest *
- scipy <=1.7.3
- seaborn *
- terminaltables *
- tqdm *
- ubelt *
- xdoctest >=0.10.0
- yapf *
- cython *
- numpy *
- myst_parser *
- sphinx ==4.0.2
- sphinx-copybutton *
- sphinx_markdown_tables *
- mmcls >=0.16.0,<1.0.0
- mmcv-full >=1.6.1,<1.7.0
- mmdet >=2.19.1,<3.0.0
- mmcls *
- mmcv *
- mmdet *
- torch *
- torchvision *
- attributee *
- dotty_dict *
- einops *
- lap *
- matplotlib *
- mmcls >=0.16.0,<1.0.0
- motmetrics *
- packaging *
- pandas <=1.3.5
- pycocotools *
- scipy <=1.7.3
- seaborn *
- terminaltables *
- tqdm *
- asynctest * test
- codecov * test
- flake8 * test
- interrogate * test
- isort ==4.3.21 test
- kwarray * test
- pytest * test
- ubelt * test
- xdoctest >=0.10.0 test
- yapf * test