triple-i-net-tinet
Official code for "Illumination-guided RGBT Object Detection with Inter- and Intra-modality Fusion"
Science Score: 57.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ✓ DOI references: found 1 DOI reference(s) in README
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (8.5%) to scientific vocabulary
Repository
Official code for "Illumination-guided RGBT Object Detection with Inter- and Intra-modality Fusion"
Basic Info
- Host: GitHub
- Owner: NNNNerd
- License: apache-2.0
- Language: Python
- Default Branch: main
- Size: 15.3 MB
Statistics
- Stars: 17
- Watchers: 1
- Forks: 2
- Open Issues: 5
- Releases: 0
Metadata Files
README.md
Triple-I Net (TINet)
Official code for "Illumination-guided RGBT Object Detection with Inter- and Intra-modality Fusion"

Installation
This project is built on MMDetection 2.x. For installation, please refer to https://github.com/open-mmlab/mmdetection/tree/2.x
Results
Below is the ablation study for our TINet on FLIR-aligned (the training and testing splits follow the official splits). Note that the results in our paper were obtained with a different train/test split. If you intend to compare against our results, please make sure the data splits match.
| IGFW | Inter-MA | Intra-MA | AP50  | mAP   |
|------|----------|----------|-------|-------|
|      |          |          | 75.19 | 35.88 |
| √    |          |          | 74.94 | 36.41 |
|      | √        |          | 75.00 | 36.21 |
|      |          | √        | 74.96 | 36.07 |
| √    | √        |          | 75.27 | 36.70 |
| √    |          | √        | 75.42 | 36.61 |
|      | √        | √        | 75.32 | 36.06 |
| √    | √        | √        | 76.07 | 36.54 |
Dataset
For the KAIST dataset we use the sanitized annotations provided by Li et al. in "Illumination-aware Faster R-CNN for Robust Multispectral Pedestrian Detection". Since the original link is invalid, we have uploaded them to our Google Drive; cleaned JSON-format annotations are also on our Google Drive. The FLIR-aligned dataset, provided by Zhang et al. in "Multispectral Fusion for Object Detection with Cyclic Fuse-and-Refine Blocks", can be downloaded at http://shorturl.at/ahAY4. The COCO-JSON format annotation file is on our Google Drive.
DayNight Labels
We insert the DayNight labels into the filename of every visible image in the training set. "1" stands for daytime images and "3" stands for nighttime images. During image loading, we read the filename and extract the label. Below is the script we used to rename the FLIR-aligned training set. Note that it is written for a Windows system.
```
import glob
import os
import xml.etree.ElementTree as ET


def ill_rename():
    img_files = glob.glob(r'F:\data\FLIR\FLIRaligned\format\train\visible\*.jpeg')
    xml_files = glob.glob(r'F:\data\FLIR\FLIRaligned\format\train\annotation\*.xml')
    # Frame indices of daytime and nighttime images in the FLIR-aligned training set
    day_ids = list(range(0, 70)) + list(range(84, 2245)) + list(range(2367, 3476)) + \
        list(range(3583, 3675)) + list(range(4085, 4129))
    night_ids = list(range(70, 84)) + list(range(2245, 2367)) + list(range(3476, 3583)) + \
        list(range(3675, 4085))

    def rename_sample(idx, label):
        img_file = img_files[idx]
        xml_file = xml_files[idx]
        filename = os.path.split(xml_file)[-1]
        # Insert the DayNight label after the first four characters of the filename
        new_filename = filename[:4] + label + filename[4:]
        fname = os.path.splitext(filename)[0]
        # Rename the visible and thermal images to carry the label
        os.rename(img_file,
                  os.path.join('train', 'visible', new_filename.replace('.xml', '.jpeg')))
        os.rename(os.path.join('train', 'thermal', filename.replace('.xml', '.jpeg')),
                  os.path.join('train', 'thermal', new_filename.replace('.xml', '.jpeg')))
        # Update the filename field inside the annotation, then rename the XML file
        xml_path = os.path.join('train', 'annotation', fname + '.xml')
        tree = ET.parse(xml_path)
        root = tree.getroot()
        root[3].text = new_filename.replace('.xml', '.jpeg')
        tree.write(xml_path)
        os.rename(xml_path, os.path.join('train', 'annotation', new_filename))

    for day_id in day_ids:
        rename_sample(day_id, '0')
    for night_id in night_ids:
        rename_sample(night_id, '3')
```
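During data loading, the label can then be recovered from the renamed filename. A minimal sketch of how this might look (`daynight_label` is a hypothetical helper, not part of the released code; it simply mirrors the slicing used by the renaming script, which inserts the label after the first four characters):

```python
import os


def daynight_label(path):
    """Recover the DayNight character from a renamed filename.

    The renaming script builds names as filename[:4] + label + filename[4:],
    so the label always sits at index 4 of the new filename.
    """
    name = os.path.split(path)[-1]
    return name[4]


# Example with a hypothetical renamed file:
print(daynight_label('FLIR3_00070.jpeg'))  # -> '3'
```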
Citation
@ARTICLE{tinet,
author={Zhang, Yan and Yu, Huai and He, Yujie and Wang, Xinya and Yang, Wen},
journal={IEEE Transactions on Instrumentation and Measurement},
title={Illumination-Guided RGBT Object Detection With Inter- and Intra-Modality Fusion},
year={2023},
volume={72},
number={},
pages={1-13},
doi={10.1109/TIM.2023.3251414}}
Owner
- Name: CarolineZ
- Login: NNNNerd
- Kind: user
- Repositories: 1
- Profile: https://github.com/NNNNerd
Citation (CITATION.cff)
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - name: "MMDetection Contributors"
title: "OpenMMLab Detection Toolbox and Benchmark"
date-released: 2018-08-22
url: "https://github.com/open-mmlab/mmdetection"
license: Apache-2.0
GitHub Events
Total
- Issues event: 1
- Watch event: 1
- Issue comment event: 1
Last Year
- Issues event: 1
- Watch event: 1
- Issue comment event: 1
Dependencies
- actions/checkout v2 composite
- actions/setup-python v2 composite
- codecov/codecov-action v1.0.10 composite
- codecov/codecov-action v2 composite
- actions/checkout v2 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- actions/stale v4 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
- pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
- pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
- albumentations >=0.3.2
- cython *
- numpy *
- docutils ==0.16.0
- markdown >=3.4.0
- myst-parser *
- sphinx ==5.3.0
- sphinx-copybutton *
- sphinx_markdown_tables >=0.0.17
- sphinx_rtd_theme *
- mmcv-full >=1.3.17
- cityscapesscripts *
- imagecorruptions *
- scikit-learn *
- mmcv *
- scipy *
- torch *
- torchvision *
- matplotlib *
- numpy *
- pycocotools *
- scipy *
- six *
- terminaltables *
- asynctest * test
- codecov * test
- flake8 * test
- interrogate * test
- isort ==4.3.21 test
- kwarray * test
- onnx ==1.7.0 test
- onnxruntime >=1.8.0 test
- protobuf <=3.20.1 test
- pytest * test
- ubelt * test
- xdoctest >=0.10.0 test
- yapf * test