dual-attention-mechanism
Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: Found CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (6.6%) to scientific vocabulary
Last synced: 7 months ago
Repository
Basic Info
- Host: GitHub
- Owner: Hualpower
- License: agpl-3.0
- Language: Python
- Default Branch: main
- Size: 8.13 MB
Statistics
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 0
- Created: almost 2 years ago
- Last pushed: almost 2 years ago
Metadata Files
- Readme
- Contributing
- License
- Citation
README.md
This folder contains the source code and dataset for DAM, from the paper titled "Dual Attention Mechanism for Multi-scale Low-altitude UAV Detection".

BFAIRD Dataset:
You can download our dataset from this link: https://pan.baidu.com/s/1rYN0pNg8xV3U1zes2Z7WPw?pwd=63so
Environment requirements:
- Jinja2 == 3.1.3
- matplotlib == 3.7.4
- networkx == 3.1
- numpy == 1.24.4
- pandas == 2.0.3
- pillow == 10.2.0
- python == 3.8.18
- scipy == 1.10.1
- seaborn == 0.13.1
- torch == 2.1.2
- torchvision == 0.16.2
- tqdm == 4.66.1
- ultralytics == 8.1.3
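As a sketch, the pins above map to a pip requirements file (Python 3.8.18 itself is not pip-installable and must come from conda, pyenv, or a system package), installable with 'pip install -r requirements.txt':

```text
# requirements.txt — Python 3.8.18 itself comes from conda/pyenv, not pip
Jinja2==3.1.3
matplotlib==3.7.4
networkx==3.1
numpy==1.24.4
pandas==2.0.3
pillow==10.2.0
scipy==1.10.1
seaborn==0.13.1
torch==2.1.2
torchvision==0.16.2
tqdm==4.66.1
ultralytics==8.1.3
```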
Setup:
- Prepare the environment and download our datasets.
- Place the dataset images into the 'dataset/bvn/images' folder and the corresponding txt label files into the 'dataset/bvn/labels' folder. Create 'train' and 'val' subfolders under each, and split the files between them in whatever proportion you prefer, making sure each image stays paired with its txt file.
- Adjust the parameters in the main.py file as needed.
- Run 'python main.py' to start.
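The train/val split in the second step can be scripted. The sketch below is a hypothetical helper (not part of this repository) that assumes flat 'images' and 'labels' folders whose files share matching stems (e.g. 'images/0001.jpg' pairs with 'labels/0001.txt'):

```python
import random
import shutil
from pathlib import Path

def split_dataset(root, val_ratio=0.2, seed=0):
    """Move image/label pairs under `root` into train/ and val/ subfolders.

    Assumes `root` contains flat `images/` and `labels/` folders with
    matching file stems. Returns the number of pairs in each split.
    """
    images = sorted((Path(root) / "images").glob("*.jpg"))
    random.Random(seed).shuffle(images)          # reproducible shuffle
    n_val = int(len(images) * val_ratio)
    splits = {"val": images[:n_val], "train": images[n_val:]}
    for split, files in splits.items():
        for img in files:
            label = Path(root) / "labels" / (img.stem + ".txt")
            for src, kind in ((img, "images"), (label, "labels")):
                dst = Path(root) / kind / split / src.name
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.move(str(src), str(dst))  # keep image/label paired
    return {k: len(v) for k, v in splits.items()}
```

For example, calling split_dataset('dataset/bvn', val_ratio=0.2) on a folder of 10 pairs leaves 8 in 'train' and 2 in 'val', with each image's txt file moved alongside it.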
Owner
- Login: Hualpower
- Kind: user
- Repositories: 1
- Profile: https://github.com/Hualpower
Citation (CITATION.cff)
cff-version: 1.2.0
preferred-citation:
  type: software
  message: If you use this software, please cite it as below.
  authors:
    - family-names: Jocher
      given-names: Glenn
      orcid: "https://orcid.org/0000-0001-5950-6979"
    - family-names: Chaurasia
      given-names: Ayush
      orcid: "https://orcid.org/0000-0002-7603-6750"
    - family-names: Qiu
      given-names: Jing
      orcid: "https://orcid.org/0000-0003-3783-7069"
  title: "Ultralytics YOLO"
  version: 8.0.0
  # doi: 10.5281/zenodo.3908559 # TODO
  date-released: 2023-1-10
  license: AGPL-3.0
  url: "https://github.com/ultralytics/ultralytics"
Dependencies
examples/YOLOv8-ONNXRuntime-Rust/Cargo.toml
cargo
docker/Dockerfile
docker
- pytorch/pytorch 2.1.0-cuda12.1-cudnn8-runtime build
pyproject.toml
pypi
- matplotlib >=3.3.0
- numpy >=1.22.2
- opencv-python >=4.6.0
- pandas >=1.1.4
- pillow >=7.1.2
- psutil *
- py-cpuinfo *
- pyyaml >=5.3.1
- requests >=2.23.0
- scipy >=1.4.1
- seaborn >=0.11.0
- thop >=0.1.1
- torch >=1.8.0
- torchvision >=0.9.0
- tqdm >=4.64.0
ultralytics.egg-info/requires.txt
pypi
- albumentations >=1.0.3
- check-manifest *
- comet *
- coremltools >=7.0
- coverage *
- duckdb *
- dvclive >=2.12.0
- hub-sdk >=0.0.2
- ipython *
- lancedb *
- matplotlib >=3.3.0
- mkdocs-jupyter *
- mkdocs-material *
- mkdocs-redirects *
- mkdocs-ultralytics-plugin >=0.0.34
- mkdocstrings *
- numpy >=1.22.2
- onnx >=1.12.0
- opencv-python >=4.6.0
- openvino-dev >=2023.0
- pandas >=1.1.4
- pillow >=7.1.2
- pre-commit *
- psutil *
- py-cpuinfo *
- pycocotools >=2.0.6
- pytest *
- pytest-cov *
- pyyaml >=5.3.1
- requests >=2.23.0
- scipy >=1.4.1
- seaborn >=0.11.0
- streamlit *
- tensorboard >=2.13.0
- tensorflow <=2.13.1
- tensorflowjs >=3.9.0
- thop >=0.1.1
- torch >=1.8.0
- torchvision >=0.9.0
- tqdm >=4.64.0