global-and-dual-attention-mechanisms

YOLOv5 improvements: Enhanced Vehicle Detection in SAR Images via Global and Dual Attention Mechanisms

https://github.com/ynlsj/global-and-dual-attention-mechanisms

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (7.4%) to scientific vocabulary
Last synced: 6 months ago

Repository

YOLOv5 improvements: Enhanced Vehicle Detection in SAR Images via Global and Dual Attention Mechanisms

Basic Info
Statistics
  • Stars: 3
  • Watchers: 1
  • Forks: 1
  • Open Issues: 0
  • Releases: 0
Created about 1 year ago · Last pushed about 1 year ago
Metadata Files
Readme Contributing License Citation

README.md

Improvements to YOLOv5

Module 1

Refer to $A^2$-Nets, which can be regarded as an evolved version of SE (Squeeze-and-Excitation) attention.

Double Attention Method
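The double-attention idea from $A^2$-Nets can be sketched as two steps: gather global descriptors from the whole feature map via second-order attention pooling, then distribute them back to every position. The module below is a minimal illustrative sketch with hypothetical parameter names (`c_m`, `c_n`), not the repo's actual implementation.

```python
import torch
import torch.nn as nn


class DoubleAttention(nn.Module):
    """Sketch of A^2-Net double attention: gather then distribute."""

    def __init__(self, in_channels, c_m, c_n):
        super().__init__()
        self.conv_a = nn.Conv2d(in_channels, c_m, 1)  # features to gather
        self.conv_b = nn.Conv2d(in_channels, c_n, 1)  # gathering attention maps
        self.conv_v = nn.Conv2d(in_channels, c_n, 1)  # distribution attention
        self.proj = nn.Conv2d(c_m, in_channels, 1)    # back to input width

    def forward(self, x):
        b, _, h, w = x.shape
        A = self.conv_a(x).view(b, -1, h * w)                  # (b, c_m, h*w)
        B = self.conv_b(x).view(b, -1, h * w).softmax(dim=-1)  # attention over positions
        V = self.conv_v(x).view(b, -1, h * w).softmax(dim=1)   # distribution weights
        # Step 1: gather -> global descriptors G of shape (b, c_m, c_n)
        G = torch.bmm(A, B.transpose(1, 2))
        # Step 2: distribute the descriptors to every spatial position
        Z = torch.bmm(G, V).view(b, -1, h, w)                  # (b, c_m, h, w)
        return x + self.proj(Z)                                # residual connection
```

The gather step summarizes the whole image into a small set of global descriptors, which is why the method captures long-range context more cheaply than full pairwise self-attention.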

Module 2

The SPPF of YOLOv5 has been improved. While maintaining the same receptive field, the speed of the model is further enhanced.

SPPF

SPPFCSPC
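The speed claim above rests on a known property of SPPF: three cascaded 5×5 stride-1 max-pools cover the same 5/9/13 receptive fields as SPP's parallel pools while reusing intermediate results. The module below is a simplified sketch (the fusing convolutions of the real SPPF/SPPFCSPC blocks are reduced to a single 1×1 conv).

```python
import torch
import torch.nn as nn


class SPPF(nn.Module):
    """Sketch of YOLOv5-style SPPF: cascaded pools instead of parallel ones."""

    def __init__(self, channels, k=5):
        super().__init__()
        self.pool = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
        self.conv = nn.Conv2d(channels * 4, channels, 1)  # fuse concatenated maps

    def forward(self, x):
        y1 = self.pool(x)   # receptive field 5
        y2 = self.pool(y1)  # effective receptive field 9
        y3 = self.pool(y2)  # effective receptive field 13
        return self.conv(torch.cat([x, y1, y2, y3], dim=1))
```

The equivalence can be checked directly: applying a 5×5 max-pool twice yields exactly the same tensor as one 9×9 max-pool, so the cascade trades no receptive field for its speedup.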

Module 3

Following improvements to YOLO published by others on GitHub, a "GAM_Attention" module is applied to the feature maps at the different levels of the prediction head. This attention method can, to some extent, alleviate problems such as occlusion and cross-overlap:

import torch
import torch.nn as nn


class GAM_Attention(nn.Module):
    def __init__(self, in_channels, out_channels, rate=4):
        super(GAM_Attention, self).__init__()

        # Channel attention: a bottleneck MLP applied to each pixel's channel vector
        self.channel_attention = nn.Sequential(
            nn.Linear(in_channels, int(in_channels / rate)),
            nn.ReLU(inplace=True),
            nn.Linear(int(in_channels / rate), in_channels)
        )

        # Spatial attention: 7x7 convolutions produce a per-position weight map
        self.spatial_attention = nn.Sequential(
            nn.Conv2d(in_channels, int(in_channels / rate), kernel_size=7, padding=3),
            nn.BatchNorm2d(int(in_channels / rate)),
            nn.ReLU(inplace=True),
            nn.Conv2d(int(in_channels / rate), out_channels, kernel_size=7, padding=3),
            nn.BatchNorm2d(out_channels)
        )

    def forward(self, x):
        b, c, h, w = x.shape
        # Flatten the spatial dims so the MLP sees one (h*w, c) matrix per image
        x_permute = x.permute(0, 2, 3, 1).view(b, -1, c)
        x_att_permute = self.channel_attention(x_permute).view(b, h, w, c)
        x_channel_att = x_att_permute.permute(0, 3, 1, 2)

        x = x * x_channel_att  # channel-weighted features

        # Sigmoid squashes the spatial response into (0, 1) before weighting
        x_spatial_att = self.spatial_attention(x).sigmoid()
        out = x * x_spatial_att

        return out


Brief overview of the implementation above. Channel attention: a multi-layer perceptron (MLP) learns a set of per-channel weights, which are multiplied with the original input. Spatial attention: a convolutional network maps the channel-weighted features, the sigmoid activation squashes the result into the range 0-1, and this weight map is multiplied with the input, letting the model focus on the regions of interest.
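The two weighting steps can be walked through numerically on illustrative tensors. The sketch below mirrors the shapes used in the forward pass above (the tensor sizes and layer widths are arbitrary, chosen only for demonstration).

```python
import torch
import torch.nn as nn

# Illustrative shapes only; mirrors the GAM_Attention forward pass.
b, c, h, w = 1, 8, 4, 4
x = torch.randn(b, c, h, w)

# Channel attention: an MLP over the channel vector of every pixel.
mlp = nn.Sequential(nn.Linear(c, c // 4), nn.ReLU(), nn.Linear(c // 4, c))
x_perm = x.permute(0, 2, 3, 1).reshape(b, -1, c)   # (b, h*w, c): one row per pixel
ch_att = mlp(x_perm).view(b, h, w, c).permute(0, 3, 1, 2)
x_weighted = x * ch_att                            # channel-weighted features

# Spatial attention: conv map -> sigmoid -> per-position weights in (0, 1).
conv = nn.Conv2d(c, c, kernel_size=7, padding=3)
sp_att = conv(x_weighted).sigmoid()
out = x_weighted * sp_att                          # same shape as the input
```

Because the sigmoid bounds every spatial weight in (0, 1), the second step can only attenuate features, never amplify them; the channel MLP carries no such bound in this variant.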

Overall Improvements

Explanation of Improvement Locations

Authors: Shengjie LEI, Zhiyong WEI, Yulian ZHANG, Meihua FANG, Baowen WU. Paper: "Enhanced Vehicle Detection in SAR Images via Global and Dual Attention Mechanisms", The Visual Computer.

Owner

  • Login: ynlsj
  • Kind: user

Citation (CITATION.cff)

cff-version: 1.2.0
preferred-citation:
  type: software
  message: If you use YOLOv5, please cite it as below.
  authors:
  - family-names: Jocher
    given-names: Glenn
    orcid: "https://orcid.org/0000-0001-5950-6979"
  title: "YOLOv5 by Ultralytics"
  version: 7.0
  doi: 10.5281/zenodo.3908559
  date-released: 2020-05-29
  license: AGPL-3.0
  url: "https://github.com/ultralytics/yolov5"

GitHub Events

Total
  • Watch event: 3
  • Push event: 8
  • Fork event: 1
  • Create event: 3
Last Year
  • Watch event: 3
  • Push event: 8
  • Fork event: 1
  • Create event: 3