Science Score: 67.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 3 DOI reference(s) in README
  • Academic publication links
    Links to: sciencedirect.com
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.6%) to scientific vocabulary
Last synced: 6 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: Object-Detection-01
  • License: agpl-3.0
  • Language: Python
  • Default Branch: main
  • Size: 9.46 MB
Statistics
  • Stars: 2
  • Watchers: 2
  • Forks: 0
  • Open Issues: 5
  • Releases: 0
Created over 2 years ago · Last pushed 7 months ago
Metadata Files
Readme Contributing License Citation

README.md

YOLO-DC: Enhancing object detection with deformable convolutions and contextual mechanism

Introduction

YOLO-DC outperforms numerous state-of-the-art (SOTA) algorithms, including YOLOv8, while maintaining a comparable level of computation and parameter count. For more details, please refer to our report on GitHub. The related paper has been accepted for publication in Signal Processing: Image Communication.

In the figure above, (a) and (b) compare the computational cost and parameter count, respectively, of the models on the COCO 2017 dataset.

Benchmark

| Model | Size | AP<sup>val</sup> 0.5:0.95 | AP<sup>val</sup> 0.5 | Params (M) | FLOPs (G) |
| ------ | :---: | :---: | :---: | :---: | :---: |
| YOLO-DC-N | 640 | 40.8 | 56.9 | 3.9 | 8.9 |
| YOLO-DC-S | 640 | 46.6 | 63.5 | 13.9 | 29.2 |
| YOLO-DC-M | 640 | 50.4 | 67.3 | 32.9 | 70.9 |

Table Notes

  • AP and speed results are evaluated on the COCO val2017 dataset at an input resolution of 640×640.
  • All experiments are run on an NVIDIA RTX 3090 GPU.
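The AP columns in the benchmark are COCO-style: a detection counts as a true positive when its intersection-over-union (IoU) with a ground-truth box exceeds a threshold, and AP 0.5:0.95 averages over thresholds from 0.5 to 0.95 in steps of 0.05. As a minimal illustration of the underlying matching criterion (this helper is not part of the repository), IoU for two axis-aligned boxes can be computed as:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) pixel coordinates."""
    # Intersection rectangle (empty if the boxes do not overlap)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of the two areas minus the intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # two half-overlapping boxes, IoU ≈ 0.333
```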

Environment

  • python requirements:

```shell
pip install -r requirements.txt
```

If you are prompted that a package is missing, install that package as prompted.

  • data:

Prepare the COCO dataset with YOLO-format labels, and specify the dataset paths in data.yaml (data.yaml is located at "./ultralytics/datasets/coco.yaml").
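YOLO-format labels store one line per object: the class index followed by the box center and size, each normalized to [0, 1] by the image dimensions. As an illustrative sketch (this helper is not part of the repository), converting a COCO-style box `[x_min, y_min, width, height]` in pixels looks like this:

```python
def coco_to_yolo(box, img_w, img_h):
    """Convert a COCO box [x_min, y_min, width, height] in pixels to a
    YOLO-format (x_center, y_center, width, height) tuple normalized to [0, 1]."""
    x, y, w, h = box
    return ((x + w / 2) / img_w,   # normalized center x
            (y + h / 2) / img_h,   # normalized center y
            w / img_w,             # normalized width
            h / img_h)             # normalized height

# Example: a 100x50 box whose top-left corner is at (200, 100) in a 640x480 image
print(coco_to_yolo([200, 100, 100, 50], 640, 480))
```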

Train

### 1. Command-line mode

See train.py for more information on how to use it.

```shell
python ./train.py --yaml ./ultralytics/models/v8/YOLO-DC.yaml --data ./ultralytics/datasets/coco.yaml --weight path --epoch 500 --device 0,1,2,3,4 --batch 128
```

### 2. Python

Write the Python code directly; a main.py, for example:

```python
from ultralytics import YOLO

def main():
    # Load the model
    model = YOLO("./ultralytics/models/v8/yolov8n-DC.yaml")  # build a new model from scratch
    # model = YOLO("./runs/detect/train_500/weights/best.pt")  # load a pretrained model (recommended for training)

    # Use the model
    model.train(data="./ultralytics/datasets/coco.yaml",
                epochs=500, device='cuda:0',
                batch=48,
                save_period=50,
                verbose=True,
                project="COCO",
                name="train_DC_n_500",
                profile=True)  # train the model
    metrics = model.val(name="val_DC_n_500")  # evaluate model performance on the validation set

if __name__ == '__main__':
    main()
```

Cite

If you feel our work is helpful to you, please cite the following papers:

```
@INPROCEEDINGS{YOLO-DC,
  author={Zhang, Dengyong and Xu, Chuanzhen and Chen, Jiaxin and Deng, Bin and Liao, Xin},
  booktitle={2024 Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)},
  title={YOLO-DC: Enhancing object detection with deformable convolutions and contextual mechanism},
  year={2024},
  pages={1-6},
  keywords={Convolutional codes;Deformable models;Convolution;Source coding;Data preprocessing;Object detection;Information processing;Feature extraction;Data models;Detection algorithms},
  doi={10.1109/APSIPAASC63619.2025.10848905}
}

@article{ZHANG2025117373,
  title = {YOLO-DC: Integrating deformable convolution and contextual fusion for high-performance object detection},
  journal = {Signal Processing: Image Communication},
  pages = {117373},
  year = {2025},
  issn = {0923-5965},
  doi = {10.1016/j.image.2025.117373},
  url = {https://www.sciencedirect.com/science/article/pii/S0923596525001195},
  author = {Dengyong Zhang and Chuanzhen Xu and Jiaxin Chen and Lei Wang and Bin Deng},
  keywords = {Object detection, YOLO, Deformable convolutions, Contextual mechanisms, Deep learning}
}
```

Acknowledgement

The implementation is based on YOLOv8. Thanks to its authors for their open-source code.

Owner

  • Name: xcz
  • Login: Object-Detection-01
  • Kind: user
  • Location: Changsha

Citation (CITATION.cff)

cff-version: 1.2.0
preferred-citation:
  type: software
  message: If you use this software, please cite it as below.
  authors:
  - family-names: Jocher
    given-names: Glenn
    orcid: "https://orcid.org/0000-0001-5950-6979"
  - family-names: Chaurasia
    given-names: Ayush
    orcid: "https://orcid.org/0000-0002-7603-6750"
  - family-names: Qiu
    given-names: Jing
    orcid: "https://orcid.org/0000-0003-3783-7069"
  title: "YOLO by Ultralytics"
  version: 8.0.0
  # doi: 10.5281/zenodo.3908559  # TODO
  date-released: 2023-1-10
  license: AGPL-3.0
  url: "https://github.com/ultralytics/ultralytics"

GitHub Events

Total
  • Watch event: 1
  • Push event: 10
Last Year
  • Watch event: 1
  • Push event: 10

Dependencies

docker/Dockerfile docker
  • pytorch/pytorch 2.0.0-cuda11.7-cudnn8-runtime build
requirements.txt pypi
  • Pillow >=7.1.2
  • PyYAML >=5.3.1
  • matplotlib >=3.2.2
  • opencv-python >=4.6.0
  • pandas >=1.1.4
  • psutil *
  • requests >=2.23.0
  • scipy >=1.4.1
  • seaborn >=0.11.0
  • torch >=1.7.0
  • torchvision >=0.8.1
  • tqdm >=4.64.0
setup.py pypi