https://github.com/amazon-science/mix-generation

MixGen: A New Multi-Modal Data Augmentation

Science Score: 10.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.2%) to scientific vocabulary

Keywords

data-augmentation data-efficiency multimodal pretraining vision-language
Last synced: 5 months ago

Repository

MixGen: A New Multi-Modal Data Augmentation

Basic Info
  • Host: GitHub
  • Owner: amazon-science
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 9.09 MB
Statistics
  • Stars: 124
  • Watchers: 3
  • Forks: 7
  • Open Issues: 2
  • Releases: 0
Topics
data-augmentation data-efficiency multimodal pretraining vision-language
Created over 3 years ago · Last pushed about 3 years ago
Metadata Files
Readme · Contributing · License · Code of conduct

README.md

MixGen: A New Multi-Modal Data Augmentation

This is the official PyTorch implementation of MixGen, a joint data augmentation technique for vision-language representation learning that improves data efficiency.

Here are some example image-text pairs generated by MixGen:

How to use

MixGen is an input-level data augmentation technique that can be plugged into existing vision-language learning methods with minimal code changes.

Here we use ALBEF (NeurIPS'21) as an illustrative example. We only need to add one line between the dataloader and the model forward pass.

That is, change from

```python
for i, (image, text) in enumerate(metric_logger.log_every(data_loader, print_freq, header)):
    optimizer.zero_grad()
```

to

```python
import mixgen as mg

for i, (image, text) in enumerate(metric_logger.log_every(data_loader, print_freq, header)):
    image, text = mg.mixgen(image, text, num=16)
    optimizer.zero_grad()
```
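To illustrate the idea behind the augmentation, here is a minimal sketch: pairs of images are mixed by linear interpolation and their captions are concatenated. This uses NumPy arrays in place of PyTorch tensors, and the fixed mixing coefficient `lam=0.5` is an assumption for illustration; the actual `mg.mixgen` in this repository's `mixgen.py` may differ in its defaults and details.

```python
import numpy as np

def mixgen_sketch(image, text, num=16, lam=0.5):
    """Sketch of MixGen: for the first `num` samples in a batch,
    interpolate each image with a partner image and concatenate
    the two captions into one longer caption."""
    for i in range(num):
        # image mixup: convex combination of two images
        image[i] = lam * image[i] + (1 - lam) * image[i + num]
        # text concatenation: join the two captions
        text[i] = text[i] + " " + text[i + num]
    return image, text

# toy batch: four 2x2 "images" and four captions
images = np.arange(16, dtype=float).reshape(4, 2, 2)
texts = ["a cat", "a dog", "a bird", "a fish"]
mixed_images, mixed_texts = mixgen_sketch(images, texts, num=2)
print(mixed_texts[0])  # -> "a cat a bird"
```

Because the augmentation only touches the batch in place, it composes with whatever image transforms and tokenization the underlying method already applies.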

And that's it! No further changes are needed. You can kick off training just as ALBEF does:

```shell
python -m torch.distributed.launch --nproc_per_node=8 --use_env Pretrain.py
```

Citation

If you find MixGen useful in your research, please consider citing the following paper:

```bibtex
@InProceedings{Hao_2023_WACV,
    author    = {Hao, Xiaoshuai and Zhu, Yi and Appalaraju, Srikar and Zhang, Aston and Zhang, Wanqian and Li, Bo and Li, Mu},
    title     = {MixGen: A New Multi-Modal Data Augmentation},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) Workshops},
    month     = {January},
    year      = {2023},
    pages     = {379-389}
}
```

Security

See CONTRIBUTING for more information.

License

This project is licensed under the Apache-2.0 License.

Owner

  • Name: Amazon Science
  • Login: amazon-science
  • Kind: organization

GitHub Events

Total
  • Watch event: 10
Last Year
  • Watch event: 10

Issues and Pull Requests

Last synced: 8 months ago

All Time
  • Total issues: 4
  • Total pull requests: 2
  • Average time to close issues: 14 days
  • Average time to close pull requests: 9 minutes
  • Total issue authors: 4
  • Total pull request authors: 1
  • Average comments per issue: 0.75
  • Average comments per pull request: 0.0
  • Merged pull requests: 2
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • overwhelmedxh (1)
  • 1024er (1)
  • 4fee8fea (1)
  • Aki-Tomoya (1)
Pull Request Authors
  • bryanyzhu (2)
Top Labels
Issue Labels
Pull Request Labels