visualroberta

The first public Vietnamese visual linguistic foundation model(s)

https://github.com/dinhanhx/visualroberta

Science Score: 67.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 1 DOI reference(s) in README
  • Academic publication links
    Links to: springer.com
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.3%) to scientific vocabulary

Keywords

image-captioning image-text python python-3 python3 vietnamese-nlp visual-linguistic visual-question-answering
Last synced: 4 months ago

Repository

The first public Vietnamese visual linguistic foundation model(s)

Basic Info
  • Host: GitHub
  • Owner: dinhanhx
  • License: mit
  • Language: Python
  • Default Branch: main
  • Size: 98.6 KB
Statistics
  • Stars: 3
  • Watchers: 1
  • Forks: 2
  • Open Issues: 1
  • Releases: 0
Topics
image-captioning image-text python python-3 python3 vietnamese-nlp visual-linguistic visual-question-answering
Created over 3 years ago · Last pushed about 2 years ago
Metadata Files
Readme · License · Citation

README.md

REFACTOR IN PROGRESS

No, I'm serious. Don't touch this.

VisualRoBERTa

Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC)

Introduction

The first public Vietnamese visual linguistic foundation model(s). This work was carried out solely by myself under the supervision of Dr Pham Quang Nhat Minh @ Aimesoft and Dr Tran Giang Son @ USTH. Thanks to Mr Nguyen Anh Duong @ VietAI for TPU support.

Keywords: computer vision, natural language processing, visual linguistic, image text, pretrain, Vietnamese, foundation, multi-modal, machine learning

Results

On UIT-ViIC test set

|            | BLEU 1 | BLEU 2 | BLEU 3 | BLEU 4 | RougeL |
|------------|--------|--------|--------|--------|--------|
| Baseline 1 | 0.7100 | 0.5750 | 0.4760 | 0.3940 | 0.6260 |
| Baseline 2 | 0.6820 | 0.5610 | 0.4110 | 0.3270 | 0.5990 |
| IC model   | 0.8764 | 0.7943 | 0.7247 | 0.6685 | 0.6320 |

Baseline models are the best models in the UIT-ViIC paper.

On VQA test set

|           |  Acc   | BLEU 1 | BLEU 2 | BLEU 3 | BLEU 4 | RougeL |
|:---------:|:------:|:------:|:------:|:------:|:------:|:------:|
| Baseline  | 0.3496 |   -    |   -    |   -    |   -    |   -    |
| VQA model | 0.3449 | 0.4526 | 0.4082 | 0.3997 | 0.4173 | 0.4390 |

Baseline model is the best model in the ViVQA paper.

Citation

To cite this repo, the models' weights, or the theory:

```bibtex
@software{dinhanhx_VisualRoBERTa_2022,
  title = {{VisualRoBERTa}},
  author = {dinhanhx},
  year = 2022,
  month = 9,
  url = {https://github.com/dinhanhx/VisualRoBERTa}
}
```

⚠ This entry will be updated when the white paper is published or released to the public.

Setup Dependencies

  • For TPU, you can just `pip install -r requirements.txt`.
  • For GPU, besides reading requirements.txt, you have to remove any requirement related to TPU or XLA, then follow the official PyTorch installation docs.

Download Dataset

In the training (run) files (such as run_pretrain.py), paths to the data folders are hardcoded.

TranslateCOCO2017 also contains json files from UIT-ViIC.

Download links:
  • MS COCO
  • Translate COCO 2017 (this work)
  • ViVQA
  • UIT-ViIC

You are encouraged to read src/data.py to understand the dataset structure, and to rename the paths to something suitable for your system, as sketched below.
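A hypothetical illustration of the kind of hardcoded paths involved (the real constant and class names live in src/data.py and the exp/run_*.py files; the names below are made up):

```python
from pathlib import Path

# Hypothetical path constants; the real names are in src/data.py / exp/run_*.py.
COCO_ROOT = Path("/data/mscoco")                       # MS COCO images
TRANSLATE_COCO_ROOT = Path("/data/TranslateCOCO2017")  # translated captions + UIT-ViIC json
VIVQA_ROOT = Path("/data/ViVQA")                       # ViVQA question/answer files

# Fail fast if a folder has not been downloaded or renamed yet.
for p in (COCO_ROOT, TRANSLATE_COCO_ROOT, VIVQA_ROOT):
    assert p.exists(), f"Missing dataset folder: {p}"
```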

Train models

It's quite simple, just go with:

```bash
python -m exp.run_<task_name_go_here>
```

for example, `python -m exp.run_pretrain` will pretrain the model.

You are encouraged to read these files to understand what they do before training.

  • For TPU, just run it as normal.
  • For GPU, you have to remove/modify anything related to TPU, such as xla, tpu, xm, xla_spawn_debug, DistributedSampler... One typical edit is sketched below.
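As a rough illustration of what that edit looks like (a hedged sketch only; the actual code in the exp/ scripts may differ), swapping the XLA device for a CUDA one typically amounts to:

```python
import torch
import torch.nn as nn

# TPU version (torch_xla), roughly what the scripts use:
#   import torch_xla.core.xla_model as xm
#   device = xm.xla_device()

# GPU replacement:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(8, 2).to(device)  # stand-in for the actual model
batch = torch.randn(4, 8, device=device)
print(model(batch).shape)           # torch.Size([4, 2])
```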

⚠ Hardcoded file paths might be updated.

Kill leftover processes:

```bash
pgrep -f "python -m exp.run_pretrain" | xargs kill -9
```

Evaluate models

It's also simple, just go with:

```bash
python -m exp.eval_<dataset_go_here>
```

for example, `python -m exp.eval_vqa` will run inference with the models to produce answers, NOT to compute metrics.
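Because the eval scripts stop at producing answers, the metrics in the Results section have to be computed in a separate step. A minimal sketch of that step, assuming the nltk and rouge-score packages and made-up example strings (the repo does not ship this script):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

# Made-up reference/prediction pair; in practice, load the eval script's output.
references = ["một người đàn ông đang chơi bóng đá"]
predictions = ["người đàn ông chơi bóng đá"]

smooth = SmoothingFunction().method1
scorer = rouge_scorer.RougeScorer(["rougeL"])

for ref, hyp in zip(references, predictions):
    ref_toks, hyp_toks = ref.split(), hyp.split()
    bleu1 = sentence_bleu([ref_toks], hyp_toks, weights=(1, 0, 0, 0),
                          smoothing_function=smooth)
    bleu4 = sentence_bleu([ref_toks], hyp_toks, weights=(0.25, 0.25, 0.25, 0.25),
                          smoothing_function=smooth)
    rouge_l = scorer.score(ref, hyp)["rougeL"].fmeasure
    print(f"BLEU-1={bleu1:.4f}  BLEU-4={bleu4:.4f}  RougeL={rouge_l:.4f}")
```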

You are encouraged to read these files to understand what they do before evaluation.

⚠ Hardcoded file paths might be updated.

Owner

  • Name: dinhanhx
  • Login: dinhanhx
  • Kind: user
  • Location: Hanoi, Vietnam

A Python dev :/

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this source code or model weights or theory, please cite it as below."
authors:
- given-names: "dinhanhx"
title: "VisualRoBERTa"
date-released: 2022-09-30
url: "https://github.com/dinhanhx/VisualRoBERTa"

Committers

Last synced: about 1 year ago

All Time
  • Total Commits: 70
  • Total Committers: 1
  • Avg Commits per committer: 70.0
  • Development Distribution Score (DDS): 0.0
Past Year
  • Commits: 3
  • Committers: 1
  • Avg Commits per committer: 3.0
  • Development Distribution Score (DDS): 0.0
Top Committers

| Name     | Email         | Commits |
|----------|---------------|---------|
| dinhanhx | d****x@g****m | 70      |

Issues and Pull Requests

Last synced: 8 months ago

All Time
  • Total issues: 1
  • Total pull requests: 1
  • Average time to close issues: N/A
  • Average time to close pull requests: 1 minute
  • Total issue authors: 1
  • Total pull request authors: 1
  • Average comments per issue: 0.0
  • Average comments per pull request: 1.0
  • Merged pull requests: 1
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • dinhanhx (1)
Pull Request Authors
  • dinhanhx (1)