Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.2%) to scientific vocabulary
Last synced: 6 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: Shuichiro-labo
  • License: apache-2.0
  • Language: Python
  • Default Branch: nomura/main
  • Size: 13 MB
Statistics
  • Stars: 0
  • Watchers: 0
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created 8 months ago · Last pushed 8 months ago
Metadata Files
Readme License Citation Support

README.md

Enabling the finetuning of the latest Large Multimodal Models

Active maintainer: Yuqian Hong

Initial maintainers: Jingyang Zhang, Yueqian Lin

About

More and more large multimodal models (LMMs) are being released, but finetuning them is not always straightforward. This codebase aims to provide a unified, minimal structure for LMM finetuning. Key design ideas include:

  • the components of the finetuning process (e.g., model loading, data collating) are abstracted, so the latest LMMs can be integrated into this codebase and finetuned with minimal effort;
  • for all LMMs the official 🤗huggingface implementation is used, so that after finetuning one can run inference and everything else in exactly the same way as with the original HF model;
  • the codebase is kept as simple and lightweight as possible, so that it is easy to understand and modify.
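The "abstracted components" idea can be pictured as a small registry mapping a model ID to its loader and collator, so that adding a new LMM means registering one more entry. The snippet below is a hypothetical sketch of that pattern; `ModelSpec`, `REGISTRY`, and `register` are invented names for illustration, not this repo's actual API.

```python
# Hypothetical sketch of the registry pattern described above; ModelSpec,
# REGISTRY, and register() are invented names, not lmms-finetune's real API.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class ModelSpec:
    hf_path: str                          # HuggingFace repo the weights come from
    load_model: Callable[[], object]      # builds the model + processor
    build_collator: Callable[[], object]  # batches samples for this model


REGISTRY: Dict[str, ModelSpec] = {}


def register(model_id: str, spec: ModelSpec) -> None:
    """Adding a new LMM is just registering one more spec."""
    REGISTRY[model_id] = spec


register("llava-1.5-7b", ModelSpec(
    hf_path="llava-hf/llava-1.5-7b-hf",
    load_model=lambda: "model stub",
    build_collator=lambda: "collator stub",
))

spec = REGISTRY["llava-1.5-7b"]
print(spec.hf_path)  # llava-hf/llava-1.5-7b-hf
```

With this shape, the training loop only ever talks to a `ModelSpec`, which is why new models can be slotted in with minimal effort.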

The codebase is quite flexible. It supports finetuning various types of LMMs, including:

  • :city_sunrise: single-image models: LLaVA-1.5, LLaVA-1.6/NeXT, Phi-3-Vision, Llama-3.2-Vision
  • :bookmark_tabs: multiple/interleaved-image models: Qwen-VL-Chat, Qwen2-VL-Instruct, LLaVA-NeXT-Interleave, Qwen2.5-VL-Instruct
  • :movie_camera: video models: LLaVA-NeXT-Video
  • :rocket: unified models: LLaVA-Onevision

See supported_models.md for the full list of supported models. As for training strategy, 1) full finetuning, 2) LoRA, and 3) Q-LoRA are supported for the LLM component, while 1) full finetuning and 2) LoRA are supported for the vision encoder/backbone.
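As a rough, self-contained illustration of why LoRA (and Q-LoRA) shrink the trainable footprint (this is generic LoRA arithmetic, not code from this repo): instead of updating a full d × k weight matrix, a rank-r adapter trains two factors B (d × r) and A (r × k), i.e., r·(d + k) parameters.

```python
# Generic LoRA parameter-count arithmetic (not code from lmms-finetune):
# full finetuning trains the whole d x k matrix, while a rank-r LoRA
# adapter trains only two low-rank factors, r * (d + k) parameters.

def full_finetune_params(d: int, k: int) -> int:
    """Trainable parameters when updating the full weight matrix."""
    return d * k


def lora_params(d: int, k: int, r: int) -> int:
    """Trainable parameters with a rank-r LoRA adapter."""
    return r * (d + k)


# Example: a 4096 x 4096 projection, typical of 7B-scale LLMs.
full = full_finetune_params(4096, 4096)  # 16,777,216
lora = lora_params(4096, 4096, r=16)     # 131,072
print(f"LoRA trains {lora / full:.2%} of the full matrix")  # 0.78%
```

Q-LoRA keeps the same adapter arithmetic but additionally quantizes the frozen base weights, lowering memory further.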

What's different from other training frameworks, e.g., LLaMA-Factory, xtuner, swift? Those are great projects/frameworks with large scale and a high degree of optimization. However, due to their scale and complexity, they can be less transparent and harder to get started with (e.g., I personally felt quite lost when trying to use those frameworks, with a bunch of questions like "how should I format my data?"). This codebase (lmms-finetune) is instead designed to be lightweight and simple, meaning that you are much more likely to get started quickly and be able to know almost every detail of the training process if you want. In other words, this is a minimal workable codebase that supports LMM finetuning, while facilitating quick experiments, flexible modifications, and easy integration of new models.

News

  • 2025/01/27: Qwen2.5 family is supported in the transformers-4.49.0.dev0 branch. At the moment you would need to install the latest transformers from github.
  • 2024/12/16: Thanks to the contribution from lavinal712 (Yuqian), training with Llama-3.2-Vision is now supported. Also there is a useful script merge_lora_weights.py added.
  • 2024/10/16: We added LLaVA-Onevision. See a caveat when using LLaVA-Onevision here. Also we updated the collators to stay in line with the new processing of LLaVA models in transformers.
  • 2024/08/28: Finetuning with gradio webui interface is supported. Try python webui.py.
  • 2024/07/30: Finetuning of vision encoder and projector is now supported.
  • 2024/07/25: Several things are improved. We have 1) released a colab notebook demonstrating a full, successful training run with LLaVA-NeXT-Video-7B (happy to hear from people that they succeeded in their cases too); 2) supported having text-only samples in the training set (see this for one note).
  • 2024/07/20: Initial release of the codebase. More models and optimizations are coming soon. Stay tuned!

Installation

```bash
# clone this repo
git clone https://github.com/zjysteven/lmms-finetune.git

# set up a conda environment
conda create -n lmms-finetune python=3.10 -y
conda activate lmms-finetune

# this will install the latest version of torch
# feel free to change it to a specific version
python -m pip install -r requirements.txt

# optionally install flash attention
python -m pip install --no-cache-dir --no-build-isolation flash-attn
```

Usage

A workable example training run (of LLaVA-NeXT-Video-7B) is showcased in this colab notebook, which is a good starting point to get a sense of how to use this codebase. The following sections provide a more detailed guide on how to finetune a model.

0. See if the model you want to finetune is supported. Browse [supported_models.md](docs/supported_models.md), or run `python supported_models.py`, which will show output like:

```
Supported models:
  Model ID                      : HuggingFace Path
  ------------------------------------------------
  llava-1.5-7b                  : llava-hf/llava-1.5-7b-hf
  llava-1.5-13b                 : llava-hf/llava-1.5-13b-hf
  llava-next-video-7b           : llava-hf/LLaVA-NeXT-Video-7B-hf
  llava-next-video-7b-32k       : llava-hf/LLaVA-NeXT-Video-7B-32K-hf
  llava-next-video-34b          : llava-hf/LLaVA-NeXT-Video-34B-hf
  llava-interleave-qwen-0.5b    : llava-hf/llava-interleave-qwen-0.5b-hf
  llava-interleave-qwen-7b      : llava-hf/llava-interleave-qwen-7b-hf
  llava-onevision-0.5b-ov       : llava-hf/llava-onevision-qwen2-0.5b-ov-hf
  llava-onevision-7b-ov         : llava-hf/llava-onevision-qwen2-7b-ov-hf
  llava-onevision-72b-ov        : llava-hf/llava-onevision-qwen2-72b-ov-hf
  qwen-vl-chat                  : Qwen/Qwen-VL-Chat
  phi3-v                        : microsoft/Phi-3-vision-128k-instruct
  qwen2-vl-2b-instruct          : Qwen/Qwen2-VL-2B-Instruct
  qwen2-vl-7b-instruct          : Qwen/Qwen2-VL-7B-Instruct
  llama-3.2-11b-vision-instruct : meta-llama/Llama-3.2-11B-Vision-Instruct
  llama-3.2-90b-vision-instruct : meta-llama/Llama-3.2-90B-Vision-Instruct
```

:raised_hand: Don't see the one you want? Check out this [guide](docs/add_new_model.md) for step-by-step instructions on how to add a new model.
1. Prepare your finetuning data. Similar to LLaVA, we expect the data to be in a json file containing a list of dictionaries, where each dictionary is a sample:

```json
[
  {
    "system_prompt": "You are a helpful assistant.",
    "video": "path/to/video1.mp4",
    "conversations": [
      {
        "from": "human",
        "value": "
```
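The sample above is cut off on this page. A complete entry following the LLaVA-style conversation format might look like the sketch below; the `<video>` placeholder and the exact keys this codebase expects are assumptions here, so consult the repo's own docs for the authoritative schema.

```json
[
  {
    "system_prompt": "You are a helpful assistant.",
    "video": "path/to/video1.mp4",
    "conversations": [
      {
        "from": "human",
        "value": "<video>\nWhat is happening in this video?"
      },
      {
        "from": "gpt",
        "value": "A person is slicing vegetables in a kitchen."
      }
    ]
  }
]
```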
2. Perform finetuning. Modify the sample training bash script, [example_video.sh](./example_scripts/example_video.sh) or [example_image.sh](example_image.sh) (there are no differences other than the model ID and dataset filepath), to specify arguments including the target model, data path, etc. There are comments that explain each argument's meaning. Then simply kick off the training by running the bash script `bash example_scripts/example_video.sh` or `bash example_scripts/example_image.sh`. Note that to exactly run the provided [example_video.sh](./example_scripts/example_video.sh), you will need to download the video clips from ShareGPT4Video; see [here](example_data/videos/ego4d/README.md) for instructions. :chart_with_upwards_trend: *If you prefer a graphical interface*, simply run `python webui.py` to launch the gradio interface for finetuning.
3. Inference with finetuned model The key here is to correctly load the finetuned model, after that everything is the same as how you would do inference with the corresponding model from huggingface. Refer to the [inference documentation](docs/inference.md) for more details, including how to use `merge_lora_weights.py` to easily obtain a standalone model. Again you can refer to [this colab](https://colab.research.google.com/drive/139XypY8_wdLgyLXYE_Zve7Hjd809fVpK?usp=sharing) for a complete example.
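Conceptually, merging a LoRA adapter back into the base weights (which is what a script like `merge_lora_weights.py` accomplishes, yielding a standalone model) amounts to computing W' = W + (alpha/r)·BA so that no adapter machinery is needed at inference time. The tiny pure-Python sketch below illustrates that arithmetic on 2×2 matrices; it is an illustration of the math, not the script itself.

```python
# Illustration of LoRA weight merging (not merge_lora_weights.py itself):
# folding the scaled low-rank product B @ A into the frozen base weight W
# yields a plain standalone weight matrix W'.

def matmul(B, A):
    """Multiply a d x r matrix by an r x k matrix (plain nested lists)."""
    d, r, k = len(B), len(A), len(A[0])
    return [[sum(B[i][t] * A[t][j] for t in range(r)) for j in range(k)]
            for i in range(d)]


def merge_lora(W, B, A, alpha: float, r: int):
    """Return W' = W + (alpha / r) * (B @ A)."""
    BA = matmul(B, A)
    scale = alpha / r
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]


# 2x2 identity base weight with a rank-1 adapter
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]   # d x r
A = [[0.5, 0.5]]     # r x k
merged = merge_lora(W, B, A, alpha=1.0, r=1)
print(merged)  # [[1.5, 0.5], [1.0, 2.0]]
```

After the merge, the result loads and runs exactly like the original HF model, which is why a merged checkpoint needs no PEFT wrapper at inference time.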

Acknowledgements

We want to thank the huggingface team for actively integrating the newest models into the transformers library. Also, the example finetuning scripts (e.g., this, this, and this) made by HF staff Niels Rogge and Raushan Turganbay are very helpful and lay the foundation for this codebase. We especially thank Raushan Turganbay for her generous discussions and feedback on this project.

The codebase borrows from, is inspired by, or builds upon the following code, repos, and/or libraries: LLaVA, Qwen, transformers, etc.

Citation

If you use lmms-finetune in your research/project, we'd be very happy if you could 1) give us a star, 2) share this repo with others, or 3) cite this codebase:

```bibtex
@software{Zhang_lmms-finetune,
  author = {Zhang, Jingyang and Lin, Yueqian},
  license = {Apache-2.0},
  title = {{lmms-finetune}},
  url = {https://github.com/zjysteven/lmms-finetune}
}
```

Owner

  • Name: Shuichiro NOMURA
  • Login: Shuichiro-labo
  • Kind: user

Gifu University

Citation (CITATION.cff)

# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!

cff-version: 1.2.0
title: lmms-finetune
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Jingyang
    family-names: Zhang
    email: zhjy227@gmail.com
    orcid: 'https://orcid.org/0000-0002-9771-5111'
  - given-names: Yueqian
    family-names: Lin
    email: yueqian.lin@duke.edu
    affiliation: Duke University
    orcid: 'https://orcid.org/0000-0003-1473-8981'
repository-code: 'https://github.com/zjysteven/lmms-finetune'
abstract: >-
  lmms-finetune is a lightweight, unified codebase for
  finetuning multiple latest multi-modal LLMs including
  llava-1.5/1.6/interleave/next-video/onevision,
  qwen-vl(-2), and phi3-v.
keywords:
  - LLM
  - foundation model
  - multi-modal LLM
license: Apache-2.0

GitHub Events

Total
  • Push event: 1
  • Create event: 2
Last Year
  • Push event: 1
  • Create event: 2

Dependencies

requirements.txt pypi
  • accelerate *
  • av *
  • bitsandbytes *
  • deepspeed ==0.14.4
  • gradio *
  • matplotlib *
  • peft *
  • protobuf *
  • sentencepiece *
  • tiktoken *
  • torch *
  • torchvision *
  • transformers ==4.45.2
  • transformers_stream_generator *
  • wandb *