image-captioning-mobilenet-llama3
Image Captioning With MobileNet-LLaMA 3
https://github.com/reshalfahsi/image-captioning-mobilenet-llama3
Science Score: 54.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references
- ✓ Academic publication links: arxiv.org
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (10.0%) to scientific vocabulary
Keywords
Repository
Image Captioning With MobileNet-LLaMA 3
Basic Info
Statistics
- Stars: 6
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 0
Topics
Metadata Files
README.md
Image Captioning With MobileNet-LLaMA 3
MobileNet V3 + LLaMA 3 architecture.
Image captioning is a computer vision problem that involves two modalities: image and text. Given an image, a caption describing it is generated automatically. A CNN-based architecture can readily extract a numerical representation from the image, while handling the text calls for a method that captures long-range dependencies. Motivated by the recent success of LLaMA 3, this project adopts its computational block, the LLaMA 3 Transformer block, which comprises RMSNorm, grouped multi-query attention, a SwiGLU feed-forward network, and rotary position embedding. In the original implementation, the Transformer block served only as the decoder; in this project, it is used as both the encoder and the decoder. On the encoder side, before the image data enters the Transformer, a CNN backbone, MobileNet-V3, extracts features from it, playing a role analogous to the text embedding. Hence, the architecture is dubbed MobileNet-LLaMA 3. To gauge the performance of the model, the Flickr8k dataset is used, split into train, validation, and test sets with an 80-10-10 ratio. Quantitatively, the performance of the model is measured via the ROUGE score, specifically the ROUGE-1 F-measure.
Experiment
See this notebook to walk through the project's code line by line and resolve any questions about the implementation.
Result
Quantitative Result
The performance of MobileNet-LLaMA 3 on the test set is reported in the table below.
| Test Metric       | Score  |
| ----------------- | ------ |
| ROUGE-1 F-measure | 36.69% |
Loss Curve
Loss curves of the MobileNet-LLaMA 3 model on the train and validation sets.
Qualitative Result
The following image shows the qualitative results of MobileNet-LLaMA 3 on the test set.
The image-caption pairs yielded from MobileNet-LLaMA 3.
The MobileNet-LLaMA 3 model is also assessed in the wild.
The result of MobileNet-LLaMA 3 in the wild.
Citation
Feel free to cite this repository:
@misc{mobilenet-llama3,
    title = {Image Captioning With MobileNet-LLaMA 3},
    url = {https://github.com/reshalfahsi/image-captioning-mobilenet-llama3},
    author = {Resha Dwika Hefni Al-Fahsi},
}
Credit
- Introducing Meta Llama 3: The most capable openly available LLM to date
- Llama 2: Open Foundation and Fine-Tuned Chat Models
- Root Mean Square Layer Normalization
- GLU Variants Improve Transformer
- RoFormer: Enhanced Transformer with Rotary Position Embedding
- GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints
- Efficiently Scaling Transformer Inference
- Transformers Optimization: Part 1 - KV Cache
- Searching for MobileNetV3
- ROUGE: A Package for Automatic Evaluation of Summaries
- torchtune
- Exploring and building the LLaMA 3 Architecture: A Deep Dive into Components, Coding, and Inference Techniques
- LLaMA 2 from scratch 🦙
- aladdinpersson's Image Captioning
- Keras' Image Captioning
- Framing Image Description as a Ranking Task: Data, Models and Evaluation Metrics (Extended Abstract)
- jbrownlee's Flickr8k Dataset
- PyTorch Lightning
Owner
- Name: Resha Dwika Hefni Al-Fahsi
- Login: reshalfahsi
- Kind: user
- Location: Yogyakarta, Indonesia
- Website: reshalfahsi.github.io
- Twitter: reshalfahsi
- Repositories: 80
- Profile: https://github.com/reshalfahsi
Experienced Tensorbender Strolling in the Latent Space
Citation (CITATION.cff)
cff-version: 1.2.0
message: "Feel free to cite this repository:"
title: "Image Captioning With MobileNet-LLaMA 3"
authors:
- family-names: Al-Fahsi
given-names: Resha Dwika Hefni
url: https://github.com/reshalfahsi/image-captioning-mobilenet-llama3
GitHub Events
Total
- Watch event: 2
Last Year
- Watch event: 2
Issues and Pull Requests
Last synced: 4 months ago
All Time
- Total issues: 0
- Total pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Total issue authors: 0
- Total pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0