https://github.com/chen-yang-liu/promptcc
PyTorch implementation of 'A Decoupling Paradigm With Prompt Learning for Remote Sensing Image Change Captioning'
Science Score: 36.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ○ .zenodo.json file
- ✓ DOI references: found 1 DOI reference(s) in README
- ✓ Academic publication links: links to arxiv.org, scholar.google, ieee.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (10.9%) to scientific vocabulary
Keywords
Repository
PyTorch implementation of 'A Decoupling Paradigm With Prompt Learning for Remote Sensing Image Change Captioning'
Basic Info
Statistics
- Stars: 29
- Watchers: 2
- Forks: 2
- Open Issues: 6
- Releases: 0
Topics
Metadata Files
README.md
A Decoupling Paradigm with Prompt Learning for Remote Sensing Image Change Captioning
**[Chenyang Liu](https://chen-yang-liu.github.io/), [Rui Zhao](https://ruizhaocv.github.io), [Jianqi Chen](https://windvchen.github.io/), [Zipeng Qi](https://scholar.google.com/citations?user=KhMtmBsAAAAJ), [Zhengxia Zou](https://scholar.google.com.hk/citations?hl=en&user=DzwoyZsAAAAJ), and [Zhenwei Shi*✉](https://scholar.google.com.hk/citations?hl=en&user=kNhFWQIAAAAJ)**

Welcome to our repository!
This repository contains the PyTorch implementation of our PromptCC model in the paper: "A Decoupling Paradigm with Prompt Learning for Remote Sensing Image Change Captioning".
For more information, please see our published paper in [IEEE] (accepted by TGRS 2023).
🥳 New
🔥 Our survey "Remote Sensing Temporal Vision-Language Models: A Comprehensive Survey": arXiv || GitHub 🔥
Overview
- Considering the specificity of the RSICC task, PromptCC employs a novel decoupling paradigm and deeply integrates prompt learning and pre-trained large language models.
- This repository covers all aspects of our code, including training, inference, computation of the evaluation metrics, as well as the tokenization and word mapping used in our work.
Installation and Dependencies
```bash
git clone https://github.com/Chen-Yang-Liu/PromptCC.git
cd PromptCC
conda create -n PromptCC_env python=3.9
conda activate PromptCC_env
pip install -r requirements.txt
```
Data preparation
First, download the image pairs of the LEVIR-CC dataset from the [Repository]. Extract the image pairs and put them in `./data/LEVIR_CC/` as follows:
```text
./data/LEVIR_CC:
├─LevirCCcaptions_v1.json (a new JSON file with changeflag, different from the old version at the download link above)
├─images
│  ├─train
│  │  ├─A
│  │  ├─B
│  ├─val
│  │  ├─A
│  │  ├─B
│  ├─test
│  │  ├─A
│  │  ├─B
```
Then preprocess the dataset as follows:

```bash
python create_input_files.py
```

After that, you will find the resulting .pkl files in `./data/LEVIR_CC/`.
Alternatively, you can directly use the .pkl files we provide on [Hugging face].
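To sanity-check the preprocessed data, here is a minimal loading sketch; the commented-out filename is hypothetical, so check `./data/LEVIR_CC/` for the actual names produced by `create_input_files.py`:

```python
import pickle

def load_split(path):
    """Load one preprocessed data split from a .pkl file."""
    with open(path, "rb") as f:
        return pickle.load(f)

# Hypothetical filename -- check ./data/LEVIR_CC/ for the actual
# names produced by create_input_files.py:
# data = load_split("./data/LEVIR_CC/train.pkl")
```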
NOTE
Please modify the source code of the 'CLIP' package: change `CLIP.model.VisionTransformer.forward()` like [this].
Inference Demo
You can download our pretrained model here: [Hugging face]
After downloading the models, put `cls_model.pth.tar` in `./checkpoints/classification_model/` and `BEST_checkpoint_ViT-B_32.pth.tar` in `./checkpoints/cap_model/`.
Then, run a demo to get started as follows:

```bash
python caption_beams.py
```
Train
Make sure you have completed the data preparation above. Then start training as follows:

```bash
python train.py
```
Evaluate
```bash
python eval2.py
```
We recommend training 5 times to get an average score.
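The per-run scores can be aggregated with a short snippet like the following; the five values here are made-up placeholders, not results from the paper:

```python
from statistics import mean, stdev

# Placeholder scores from five independent training runs -- replace
# with the metric values reported by eval2.py for each run.
runs = [0.621, 0.634, 0.628, 0.619, 0.630]
print(f"score: {mean(runs):.4f} +/- {stdev(runs):.4f}")
```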
Note:
- Before model training and evaluation, each sentence must undergo tokenization and mapping of words to indices. For instance, GPT tokenizes the word "difference" as ['diff', 'erence'] using its subword-based tokenization mechanism and maps these pieces to [26069, 1945] using its word mapping. Different tokenizations and word mappings influence the evaluation-metric scores, so to ensure a fair performance comparison, the same tokenization and word mapping must be used when calculating evaluation metrics for all comparison methods.
- For all comparison methods, we have retrained the models and evaluated their performance using the publicly available tokenizer and word mapping of GPT, which are comprehensive and widely acknowledged. We recommend that future researchers follow this practice.
- Comparison with SOTA:
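The two-step tokenize-then-map process described in the note can be illustrated with a toy greedy subword splitter. The two-entry vocabulary below is a hypothetical stand-in that mirrors only the "difference" example; in practice GPT's real tokenizer supplies both the subword split and the word mapping:

```python
# Tiny hypothetical vocabulary mirroring the "difference" example;
# GPT's real tokenizer has a vocabulary of roughly 50k subwords.
TOKEN_TO_ID = {"diff": 26069, "erence": 1945}

def tokenize(word, vocab):
    """Greedy longest-prefix subword split against a fixed vocabulary."""
    pieces = []
    while word:
        for end in range(len(word), 0, -1):
            if word[:end] in vocab:
                pieces.append(word[:end])
                word = word[end:]
                break
        else:
            raise ValueError("no subword covers: " + word)
    return pieces

subwords = tokenize("difference", TOKEN_TO_ID)  # ['diff', 'erence']
ids = [TOKEN_TO_ID[s] for s in subwords]        # [26069, 1945]
```

Note that this greedy split is only an illustration; GPT actually uses byte-pair encoding, which merges frequent character pairs rather than scanning for longest prefixes.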
Citation & Acknowledgments
If you find this paper useful in your research, please consider citing:
```bibtex
@ARTICLE{10271701,
  author={Liu, Chenyang and Zhao, Rui and Chen, Jianqi and Qi, Zipeng and Zou, Zhengxia and Shi, Zhenwei},
  journal={IEEE Transactions on Geoscience and Remote Sensing},
  title={A Decoupling Paradigm With Prompt Learning for Remote Sensing Image Change Captioning},
  year={2023},
  volume={61},
  number={},
  pages={1-18},
  doi={10.1109/TGRS.2023.3321752}
}
```
Owner
- Name: Liu Chenyang
- Login: Chen-Yang-Liu
- Kind: user
- Location: Beijing
- Website: https://Chen-Yang-Liu.github.io
- Repositories: 15
- Profile: https://github.com/Chen-Yang-Liu
GitHub Events
Total
- Issues event: 5
- Watch event: 3
- Issue comment event: 11
- Push event: 1
- Fork event: 1
Last Year
- Issues event: 5
- Watch event: 3
- Issue comment event: 11
- Push event: 1
- Fork event: 1
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 2
- Total pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Total issue authors: 2
- Total pull request authors: 0
- Average comments per issue: 0.0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 2
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 2
- Pull request authors: 0
- Average comments per issue: 0.0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- cassiopeiaip (1)
- yhyyds521 (1)
- w-k-art (1)
- Mickey-Boy (1)
- Serendi-hi (1)
- Cemm23333 (1)
- ruzcko (1)
- mynameismahu (1)
Pull Request Authors
- Serendi-hi (1)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- 2to3 ==1.0
- Pillow ==9.2.0
- PyYAML ==6.0
- Pygments ==2.13.0
- Shapely ==1.8.5.post1
- asttokens ==2.1.0
- async-generator ==1.10
- attrs ==22.2.0
- backcall ==0.2.0
- brotlipy ==0.7.0
- charset-normalizer ==2.1.1
- click ==8.1.3
- cog ==0.0.3
- cytoolz ==0.11.0
- decorator ==5.1.1
- exceptiongroup ==1.1.0
- executing ==1.2.0
- filelock ==3.8.0
- fonttools ==4.25.0
- ftfy ==6.1.1
- h11 ==0.14.0
- huggingface-hub ==0.9.1
- idna ==3.4
- imgaug ==0.4.0
- ipython ==8.6.0
- jedi ==0.18.1
- jieba ==0.42.1
- joblib ==1.2.0
- matplotlib-inline ==0.1.6
- mkl-fft ==1.3.1
- mkl-service ==2.4.0
- munkres ==1.1.4
- numpy ==1.22.0
- opencv-python ==4.8.0.74
- outcome ==1.2.0
- pandas ==2.0.3
- parso ==0.8.3
- pickleshare ==0.7.5
- prompt-toolkit ==3.0.31
- ptflops ==0.7
- pure-eval ==0.2.2
- pynvml ==11.4.1
- pytz ==2023.3
- regex ==2022.9.13
- sacremoses ==0.0.53
- scikit-image ==0.18.1
- selenium ==4.7.2
- sniffio ==1.3.0
- sortedcontainers ==2.4.0
- stack-data ==0.6.0
- thop ==0.1.1.post2209072238
- tokenizers ==0.10.3
- torch ==1.12.1
- torchaudio ==0.12.1
- torchinfo ==1.8.0
- torchstat ==0.0.7
- torchsummary ==1.5.1
- torchvision ==0.13.1
- tqdm ==4.64.1
- traitlets ==5.5.0
- transformers ==4.10.3
- trio ==0.22.0
- trio-websocket ==0.9.2
- tzdata ==2023.3
- urllib3 ==1.26.12
- wcwidth ==0.2.5
- wincertstore ==0.2
- wsproto ==1.2.0