glip

Centered Masking for Language-Image Pre-training

https://github.com/mingliangliang3/glip

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (6.3%) to scientific vocabulary

Keywords

deep-learning foundation-models image-masking multimodal-learning representation-learning zero-shot-classification
Last synced: 6 months ago

Repository

Centered Masking for Language-Image Pre-training

Basic Info
Statistics
  • Stars: 1
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Topics
deep-learning foundation-models image-masking multimodal-learning representation-learning zero-shot-classification
Created almost 2 years ago · Last pushed 11 months ago
Metadata Files
Readme Changelog License Citation

README.md

GLIP: Centered Masking for Language-Image Pre-Training


Our paper has been accepted at ECML 2024.

Abstract

We introduce Gaussian masking for Language-Image Pre-Training (GLIP), a novel, straightforward, and effective technique for masking image patches during pre-training of a vision-language model. GLIP builds on Fast Language-Image Pre-Training (FLIP), which randomly masks image patches while training a CLIP model. GLIP replaces random masking with centered masking, which uses a Gaussian distribution and is inspired by the importance of image patches at the center of the image. GLIP retains the same computational savings as FLIP, while improving performance across a range of downstream datasets and tasks, as demonstrated by our experimental results. We show the benefits of GLIP to be easy to obtain, requiring no delicate tuning of the Gaussian, and also applicable to datasets containing images without an obvious center focus.

Method

*(Figure: example masks under (a) random masking and Gaussian masking with (b) σ = 0.1, (c) σ = 0.2, and (d) σ = 0.8.)*

Comparison of random and Gaussian masking strategies. Image (a) demonstrates a random masking strategy with uniform masking probability. Images (b), (c), and (d) illustrate Gaussian masking with increasing standard deviations ($\sigma$), showing masking that is concentrated at the center and gradually spreads toward the edges.
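A minimal sketch of the sampling idea (not the repository's implementation): patch keep-weights come from a 2D Gaussian centered on the patch grid, and the kept patches are drawn without replacement, falling back to uniform weights for random masking. The function names, the 14x14 grid (ViT-B/16 at 224x224), and σ = 0.2 are illustrative choices.

```python
# Sketch: centered (Gaussian) vs. random patch selection for masking.
# Illustrative only; not the code used in this repository.
from typing import Optional

import torch


def gaussian_patch_weights(grid_size: int, sigma: float) -> torch.Tensor:
    """Per-patch weights from a 2D Gaussian centered on the patch grid."""
    coords = torch.linspace(-0.5, 0.5, grid_size)             # normalized patch centers
    y, x = torch.meshgrid(coords, coords, indexing="ij")
    weights = torch.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return weights.flatten()                                  # shape: (grid_size**2,)


def sample_kept_patches(grid_size: int, keep_ratio: float,
                        sigma: Optional[float] = None) -> torch.Tensor:
    """Indices of kept patches: uniform if sigma is None, else center-weighted."""
    num_patches = grid_size * grid_size
    num_keep = max(1, int(num_patches * keep_ratio))
    if sigma is None:                                         # random masking (FLIP-style)
        weights = torch.ones(num_patches)
    else:                                                     # centered masking (GLIP-style)
        weights = gaussian_patch_weights(grid_size, sigma)
    return torch.multinomial(weights, num_keep, replacement=False)


# Example: ViT-B/16 at 224x224 gives a 14x14 patch grid; keep 50% of the patches.
kept_random = sample_kept_patches(14, keep_ratio=0.5)               # uniform
kept_centered = sample_kept_patches(14, keep_ratio=0.5, sigma=0.2)  # biased to the center
```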

Results

We pre-train the model based on the code and settings of Open_clip.

Zero-shot accuracy on ImageNet1K classification. We pre-trained the model for 30 epochs on the CC12M dataset with different image patch masking ratios, using ViT-B/16 as the image encoder. We then fine-tuned FLIP and GLIP for one additional epoch.

| Method | Masking Ratio | Inference | Unmasking | After Tuning |
|--------|---------------|-----------|-----------|--------------|
| CLIP   | -             | -         | 36.5      | -            |
| RECLIP | 160 x 160     | -         | 36.6      | 37.4         |
| FLIP   | 50%           | 33.9      | 36.1      | 36.2         |
| GLIP   | 50%           | 35.4      | 37.1      | 37.2         |
| RECLIP | 112 x 112     | -         | 33.0      | 33.4         |
| FLIP   | 75%           | 28.2      | 32.0      | 32.6         |
| GLIP   | 75%           | 30.8      | 33.9      | 34.0         |
| RECLIP | 64 x 64       | -         | 24.9      | 25.6         |
| FLIP   | 91.8%         | 17.9      | 23.5      | 24.5         |
| GLIP   | 91.8%         | 22.1      | 18.6      | 28.0         |

Pre-trained on LAION-400M

We pre-trained the model for 6 epochs on the LAION-400M dataset with a 91.8% image patch masking ratio, using ViT-B/16 as the image encoder. We then fine-tuned RECLIP, FLIP, and GLIP for 0.4 epochs. Of the LAION-400M dataset, we successfully downloaded 297M samples. We pre-trained and fine-tuned the models on 4 H100 GPUs with amp_bf16 precision.

| Method | Masking Ratio | Image Tokens | Inference | Unmasking | After Tuning |
|--------|---------------|--------------|-----------|-----------|--------------|
| RECLIP | 64 x 64       | 17           | -         | 49.1      | 56.8         |
| FLIP   | 91.8%         | 17           | 34.2      | 42.5      | 52.7         |
| GLIP   | 91.8%         | 17           | 38.7      | 23.5      | 56.6         |

GLIP can also be applied to small images; the settings are otherwise the same as above.

| Method        | Image Size | Samples Seen | Masking Ratio | Image Tokens | Before Tuning | After Tuning |
|---------------|------------|--------------|---------------|--------------|---------------|--------------|
| RECLIP        | 112 x 112  | 1.9B         | 0%            | 50           | 57.4          | 61.2         |
| RECLIP + FLIP | 112 x 112  | 1.9B         | 50%           | 25           | 55.0          | 58.7         |
| RECLIP + GLIP | 112 x 112  | 1.9B         | 50%           | 25           | 55.7          | 59.7         |

Pre-training

Follow the instructions of OpenCLIP to pre-train the model with Patch Dropout.
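As a rough illustration of where such a change sits in training, the sketch below drops image tokens with center-weighted probability instead of uniformly. The class and parameter names are hypothetical; this is neither open_clip's PatchDropout module nor the exact implementation behind the `--normal-masking` flag used below.

```python
# Hedged sketch of a center-weighted patch-dropout step; illustrative names,
# not open_clip's PatchDropout and not this repository's --normal-masking code.
import torch
import torch.nn as nn


class CenteredPatchDropout(nn.Module):
    """Keeps a subset of patch tokens, sampled with Gaussian center weighting."""

    def __init__(self, drop_ratio: float, grid_size: int, sigma: float = 0.2):
        super().__init__()
        self.num_keep = max(1, int(round(grid_size * grid_size * (1.0 - drop_ratio))))
        coords = torch.linspace(-0.5, 0.5, grid_size)
        y, x = torch.meshgrid(coords, coords, indexing="ij")
        # Flattened Gaussian weight per patch, in the same order as the tokens.
        self.register_buffer(
            "weights", torch.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)).flatten())

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_patches, dim); a real implementation would also
        # exclude the class token from dropping.
        if not self.training:
            return tokens                                   # no masking at inference
        batch, _, dim = tokens.shape
        probs = self.weights.expand(batch, -1)              # (batch, num_patches)
        keep_idx = torch.multinomial(probs, self.num_keep, replacement=False)
        keep_idx = keep_idx.unsqueeze(-1).expand(-1, -1, dim)
        return torch.gather(tokens, dim=1, index=keep_idx)


# Example: 50% masking on a ViT-B/16 patch grid (14x14 = 196 tokens).
dropout = CenteredPatchDropout(drop_ratio=0.5, grid_size=14, sigma=0.2).train()
tokens = torch.randn(2, 196, 768)
print(dropout(tokens).shape)  # torch.Size([2, 98, 768])
```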

Pre-training FLIP

```bash
cd open_clip/src
torchrun --nproc_per_node=4 \
    -m training.main \
    --train-data '/data/cc12m/cc12m-train-{0000..2175}.tar' \
    --train-num-samples 10968539 \
    --dataset-type webdataset \
    --model=ViT-B-16 \
    --aug-cfg scale='(0.50, 1.0)' \
    --batch-size 320 \
    --force-patch-dropout 0.50 \
    --lr 1e-3 \
    --precision amp \
    --workers 4 \
    --imagenet-val /data/imagenet/validation/
```

Pre-training RECLIP

```bash
cd open_clip/src
torchrun --nproc_per_node=4 \
    -m training.main \
    --train-data '/data/cc12m/cc12m-train-{0000..2175}.tar' \
    --train-num-samples 10968539 \
    --dataset-type webdataset \
    --model=ViT-B-16 \
    --aug-cfg scale='(0.50, 1.0)' \
    --batch-size 320 \
    --force-image-size 160 \
    --lr 1e-3 \
    --precision amp \
    --workers 4 \
    --imagenet-val /data/imagenet/validation/
```

Pre-training GLIP

```bash
cd open_clip/src
torchrun --nproc_per_node=4 \
    -m training.main \
    --train-data '/data/cc12m/cc12m-train-{0000..2175}.tar' \
    --train-num-samples 10968539 \
    --dataset-type webdataset \
    --model=ViT-B-16 \
    --aug-cfg scale='(0.50, 1.0)' \
    --batch-size 320 \
    --force-patch-dropout 0.50 \
    --lr 1e-3 \
    --normal-masking \
    --precision amp \
    --workers 4 \
    --imagenet-val /data/imagenet/validation/
```

Unmasked tuning

```bash
cd open_clip/src
torchrun --nproc_per_node=4 \
    -m training.main \
    --train-data '/data/cc12m/cc12m-train-{0000..2175}.tar' \
    --train-num-samples 10968539 \
    --dataset-type webdataset \
    --model=ViT-B-16 \
    --aug-cfg scale='(0.50, 1.0)' \
    --pretrained /path/to/checkpoints/epoch_K.pt \
    --batch-size 160 \
    --lr 1e-5 \
    --precision amp \
    --workers 4 \
    --imagenet-val /data/imagenet/validation/
```

Evaluation

We use CLIP_benchmark to evaluate CLIP, FLIP, and GLIP on a standard set of datasets and tasks.
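For a quick sanity check outside CLIP_benchmark, zero-shot classification can also be run directly with the open_clip Python API. In the sketch below the checkpoint path, the image file, and the class-name list are placeholders, not files shipped with this repository.

```python
# Zero-shot classification sketch with the open_clip Python API.
# The checkpoint path, image file, and class names are placeholders.
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-16", pretrained="/path/to/checkpoints/epoch_K.pt")  # FLIP/GLIP checkpoint
tokenizer = open_clip.get_tokenizer("ViT-B-16")
model.eval()

class_names = ["tench", "goldfish", "great white shark"]        # placeholder labels
text = tokenizer([f"a photo of a {name}" for name in class_names])
image = preprocess(Image.open("example.jpg")).unsqueeze(0)      # placeholder image

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(class_names, probs[0].tolist())))
```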

Owner

  • Name: Mingliang Liang
  • Login: MingliangLiang3
  • Kind: user
  • Location: Netherlands
  • Company: Radboud University

Researcher in Multimedia retrieval

Citation (CITATION.cff)

cff-version: 1.1.0
message: If you use this software, please cite it as below.
authors:
  - family-names: Ilharco
    given-names: Gabriel
  - family-names: Wortsman
    given-names: Mitchell
  - family-names: Wightman
    given-names: Ross
  - family-names: Gordon
    given-names: Cade   
  - family-names: Carlini
    given-names: Nicholas
  - family-names: Taori
    given-names: Rohan
  - family-names: Dave
    given-names: Achal
  - family-names: Shankar
    given-names: Vaishaal
  - family-names: Namkoong
    given-names: Hongseok
  - family-names: Miller
    given-names: John
  - family-names: Hajishirzi
    given-names: Hannaneh
  - family-names: Farhadi
    given-names: Ali
  - family-names: Schmidt
    given-names: Ludwig
title: OpenCLIP
version: v0.1
doi: 10.5281/zenodo.5143773
date-released: 2021-07-28

GitHub Events

Total
  • Watch event: 2
  • Push event: 9
Last Year
  • Watch event: 2
  • Push event: 9