Science Score: 49.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ✓ DOI references: found 15 DOI reference(s) in README
- ✓ Academic publication links: links to zenodo.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (4.3%) to scientific vocabulary
Last synced: 6 months ago
Repository
Basic Info
- Host: GitHub
- Owner: oOAmanOo
- Language: Jupyter Notebook
- Default Branch: main
- Size: 3.65 GB
Statistics
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 0
- Created: over 1 year ago
- Last pushed: 6 months ago
Metadata Files
Citation
https://github.com/oOAmanOo/GAMC/blob/main/
# GAMC - Adaptable Advertising Meme Caption Generation Model
## Data
### 1. Oxford_HIC
[DOI](https://doi.org/10.1109/ICCV51070.2023.01856)
[GitHub](https://github.com/runjiali-rl/Oxford_HIC)
[Google Drive](https://drive.google.com/drive/folders/1BDuUcMeaWrFD8TwgHLhFPkuAwmoHaVNQ)
### 2. Instagram
[instagram.ipynb](Data/Instagram/instagram.ipynb)
[Data/Instagram/](../Data/Instagram/)
* `Original_File/Home_xxx.csv` >> Instagram posts from the home page (text == image alt text)
* `Original_File/Done_xxx.csv` >> Instagram posts from the post page (text == caption)
* `CaptionID_xxx.csv` >> full data with humor scores
* `Filter_xxx.csv` >> rows with missing images removed
* `Generate_xxx.csv` >> data after caption augmentation
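The `CaptionID_xxx.csv` → `Filter_xxx.csv` step above (dropping rows whose images are missing) can be sketched as below. This is a hypothetical illustration: the column names `img`, `caption`, and `humor_score` are assumptions, not taken from the repository's actual CSV schema.

```python
import csv
import io

def filter_missing_images(rows, available_images):
    """Keep only rows whose image file is actually present.

    `rows` is a list of dicts (one per CSV row); `available_images` is the
    set of image filenames that exist on disk. Column names are assumed.
    """
    return [r for r in rows if r["img"] in available_images]

# Toy stand-in for CaptionID_xxx.csv (full data with humor scores).
caption_csv = io.StringIO(
    "img,caption,humor_score\n"
    "a.jpg,funny cat,0.9\n"
    "missing.jpg,lost meme,0.5\n"
)
rows = list(csv.DictReader(caption_csv))

# Only a.jpg exists, so the missing.jpg row is dropped, as in Filter_xxx.csv.
kept = filter_missing_images(rows, {"a.jpg"})
```

In the real pipeline the `available_images` set would come from scanning the downloaded image directory (e.g. via `os.listdir`) rather than being hard-coded.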
### 3. Emotion-Sentiment-Humor Description by MiniGPT4
[DOI](https://doi.org/10.48550/arXiv.2304.10592)
[GitHub](https://github.com/Vision-CAIR/MiniGPT-4.git)
[demo_getdata.py](Citations/minigpt4/demo_getdata.py)
## Code
### GAMC
* Step 0 : `Preprocess`
* Data reformat & Augmentation:
* [tool.ipynb](Main/ipynb_tools/tool.ipynb)
* [test_model.ipynb](Main/ipynb_tools/test_model.ipynb)
* Data parsing:
* [parse_oxford.py](Data/Oxford_HIC/parse_oxford.py)
* [parse_ins.py](Data/Instagram/parse_ins.py)
* [oxford_hic_dataset.zip](Main/oxford_hic_dataset.zip)
* Step 1 : `Main model`
* code: [oxford_train_BLEU_minigpt4.py](Main/oxford_train_BLEU_minigpt4.py)
* checkpoint: [checkpoint-004.pt](Main/Model/final/GAMC/20250421_oxford_3000_only1_300_82_ESH_bert_cross_concat/checkpoint-004.pt)
* Step 2 : `Adaptation`
* code: [oxford_train_BLEU_adapter_minigpt4.py](Main/oxford_train_BLEU_adapter_minigpt4.py)
* checkpoint-MC: [checkpoint-002.pt](Main/Model/final/GAMC/MC/100up_only200_lessNotFunImg_53_171_passlength_12_MC_/all/checkpoint-002.pt)
* checkpoint-SD: [checkpoint-011.pt](Main/Model/final/GAMC/SD/100up_only200_lessNotFunImg_169_55_passlength_10_SD_/all/checkpoint-011.pt)
* Step 3 : `Test result`
* code: [oxford_predict_minigpt4.py](Main/oxford_predict_minigpt4.py)
* organization: [result.ipynb](Main/ipynb_tools/result.ipynb)
### Evaluation
* `Humor Score` >> [humor_score.py](Main/humor_score.py)
* `Benign Score` >> Vilio [DOI](https://doi.org/10.48550/arXiv.2012.07788), [GitHub](https://github.com/Muennighoff/vilio)
* `Fluency Score` >> Parrot [GitHub](https://github.com/PrithivirajDamodaran/Parrot_Paraphraser)
* `Diversity Score` >> [result.ipynb](Main/ipynb_tools/result.ipynb) (cosine similarity between CLIP embeddings of image-caption pairs)
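The diversity metric above can be sketched as follows: lower average pairwise cosine similarity between caption embeddings means higher diversity. This is a minimal illustration assuming a `1 - mean similarity` formulation; the toy 3-d vectors stand in for real CLIP embeddings, which `result.ipynb` would compute with an actual CLIP model.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def diversity(embeddings):
    """1 - mean pairwise cosine similarity over all embedding pairs.

    Higher values indicate more diverse captions (assumed formulation).
    """
    n = len(embeddings)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    mean_sim = sum(cosine(embeddings[i], embeddings[j]) for i, j in pairs) / len(pairs)
    return 1.0 - mean_sim

# Toy stand-ins for CLIP caption embeddings (real ones are 512-d or larger).
embs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]]
score = diversity(embs)
```

With real CLIP embeddings, the vectors would come from a text encoder (e.g. Hugging Face's `CLIPModel.get_text_features`), but the pairwise-similarity arithmetic is the same.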
## Framework

## Baseline
### 1. ClipCap
[DOI](https://doi.org/10.48550/arXiv.2111.09734)
[GitHub](https://github.com/rmokady/CLIP_prefix_caption.git)
### 2. BITA
[DOI](https://doi.org/10.1109/TGRS.2024.3359316)
[GitHub](https://github.com/yangcong356/BITA.git)
Note: due to its package dependencies, BITA can only be run on Linux.
Owner
- Login: oOAmanOo
- Kind: user
- Repositories: 2
- Profile: https://github.com/oOAmanOo
GitHub Events
Total
- Delete event: 4
- Push event: 7
Last Year
- Delete event: 4
- Push event: 7