Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.3%) to scientific vocabulary

Repository

Basic Info
  • Host: GitHub
  • Owner: liuxiangwin
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Size: 419 KB
Statistics
  • Stars: 0
  • Watchers: 0
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created 9 months ago · Last pushed 9 months ago
Metadata Files
Readme Contributing License Citation

README.md

tool icon LLM Compressor

llmcompressor is an easy-to-use library for optimizing models for deployment with vllm, including:

  • Comprehensive set of quantization algorithms for weight-only and activation quantization
  • Seamless integration with Hugging Face models and repositories
  • safetensors-based file format compatible with vllm
  • Large model support via accelerate

✨ Read the announcement blog here! ✨

[Figure: LLM Compressor Flow]

🚀 What's New!

Big updates have landed in LLM Compressor! Check out these exciting new features:

  • Preliminary FP4 Quantization Support: Quantize weights and activations to FP4 and seamlessly run the compressed model in vLLM. Model weights and activations are quantized following the NVFP4 configuration. See examples of weight-only quantization and fp4 activation support. Support is currently preliminary and additional support will be added for MoEs.
  • Axolotl Sparse Finetuning Integration: Easily finetune sparse LLMs through our seamless integration with Axolotl. Learn more here.
  • AutoAWQ Integration: Perform low-bit weight-only quantization efficiently using AutoAWQ, now part of LLM Compressor. Note: This integration should be considered experimental for now. Enhanced support, including for MoE models and improved handling of larger models via layer sequential pipelining, is planned for upcoming releases. See the details.
  • Day 0 Llama 4 Support: Meta utilized LLM Compressor to create the FP8-quantized Llama-4-Maverick-17B-128E, optimized for vLLM inference using compressed-tensors format.

Supported Formats

  • Activation Quantization: W8A8 (int8 and fp8)
  • Mixed Precision: W4A16, W8A16, NVFP4 (W4A4 and W4A16 support)
  • 2:4 Semi-structured and Unstructured Sparsity
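
To make the format names above concrete, here is a rough sketch of selecting one of them, dynamic FP8 W8A8, with the `oneshot` API shown in the Quick Tour below. The `FP8_DYNAMIC` scheme string and the `QuantizationModifier` arguments follow the project's published examples and should be verified against the installed version:

```python
# Sketch (not an official recipe): fp8 weights + dynamic fp8 activations (W8A8).
# The "FP8_DYNAMIC" scheme name is assumed from the project's examples.
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

recipe = QuantizationModifier(
    targets="Linear",        # quantize all Linear layers
    scheme="FP8_DYNAMIC",    # fp8 weights, per-token dynamic fp8 activations
    ignore=["lm_head"],      # keep the output head in higher precision
)

# No calibration dataset is needed for dynamic activation quantization.
oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-FP8-Dynamic",
)
```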

Supported Algorithms

  • Simple PTQ
  • GPTQ
  • AWQ
  • SmoothQuant
  • SparseGPT
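
The sparsity formats pair with SparseGPT. Below is a hedged sketch of 2:4 semi-structured pruning; the modifier's import path and argument names are taken from the project's sparsification examples and may differ across releases:

```python
# Sketch: prune to 2:4 semi-structured sparsity with SparseGPT.
# Module path and argument names are assumptions based on published examples.
from llmcompressor import oneshot
from llmcompressor.modifiers.obcq import SparseGPTModifier

recipe = SparseGPTModifier(
    sparsity=0.5,            # 50% of weights pruned overall
    mask_structure="2:4",    # 2 nonzero values in every block of 4
    ignore=["lm_head"],
)

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",       # calibration data for the pruning solve
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-2of4-sparse",
    num_calibration_samples=512,
)
```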

When to Use Which Optimization

Please refer to docs/schemes.md for detailed information about available optimization schemes and their use cases.

Installation

```bash
pip install llmcompressor
```

Get Started

End-to-End Examples

Applying quantization with llmcompressor:

  • Activation quantization to int8
  • Activation quantization to fp8
  • Activation quantization to fp4
  • Weight only quantization to fp4
  • Weight only quantization to int4 using GPTQ
  • Weight only quantization to int4 using AWQ
  • Quantizing MoE LLMs
  • Quantizing Vision-Language Models
  • Quantizing Audio-Language Models

User Guides

Deep dives into advanced usage of llmcompressor:

  • Quantizing with large models with the help of accelerate (see the sketch below)
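
A minimal sketch of that large-model workflow, assuming the standard transformers + accelerate loading path (`device_map="auto"`) rather than any llmcompressor-specific loader; the model ID and recipe here are illustrative only:

```python
# Sketch: shard a large model across available devices with accelerate,
# then quantize it with oneshot as usual. Model ID and recipe are examples.
from transformers import AutoModelForCausalLM
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

MODEL_ID = "meta-llama/Meta-Llama-3-70B-Instruct"   # hypothetical large model

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",   # accelerate spreads layers across GPUs / CPU offload
    torch_dtype="auto",
)

oneshot(
    model=model,
    dataset="open_platypus",
    recipe=GPTQModifier(scheme="W4A16", targets="Linear", ignore=["lm_head"]),
    output_dir="Meta-Llama-3-70B-Instruct-W4A16",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```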

Quick Tour

Let's quantize TinyLlama with 8-bit weights and activations using the GPTQ and SmoothQuant algorithms.

Note that the model can be swapped for a local or remote HF-compatible checkpoint and the recipe may be changed to target different quantization algorithms or formats.

Apply Quantization

Quantization is applied by selecting an algorithm and calling the oneshot API.

```python
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor import oneshot

# Select quantization algorithm. In this case, we:
#   * apply SmoothQuant to make the activations easier to quantize
#   * quantize the weights to int8 with GPTQ (static per channel)
#   * quantize the activations to int8 (dynamic per token)
recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),
    GPTQModifier(scheme="W8A8", targets="Linear", ignore=["lm_head"]),
]

# Apply quantization using the built-in open_platypus dataset.
#   * See examples for demos showing how to pass a custom calibration set
oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    dataset="open_platypus",
    recipe=recipe,
    output_dir="TinyLlama-1.1B-Chat-v1.0-INT8",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```

Inference with vLLM

The checkpoints created by llmcompressor can be loaded and run in vllm:

Install:

```bash
pip install vllm
```

Run:

```python
from vllm import LLM

model = LLM("TinyLlama-1.1B-Chat-v1.0-INT8")
output = model.generate("My name is")
```
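
Generation can be tuned with vLLM's `SamplingParams`; a short illustration (the parameter values here are arbitrary):

```python
# Illustration: control decoding with vLLM's SamplingParams.
from vllm import LLM, SamplingParams

model = LLM("TinyLlama-1.1B-Chat-v1.0-INT8")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
outputs = model.generate(["My name is"], params)
print(outputs[0].outputs[0].text)
```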

Questions / Contribution

  • If you have any questions or requests, open an issue and we will add an example or documentation.
  • We appreciate contributions to the code, examples, integrations, and documentation as well as bug reports and feature requests! Learn how here.

Citation

If you find LLM Compressor useful in your research or projects, please consider citing it:

```bibtex
@software{llmcompressor2024,
  title  = {{LLM Compressor}},
  author = {Red Hat AI and vLLM Project},
  year   = {2024},
  month  = {8},
  url    = {https://github.com/vllm-project/llm-compressor},
}
```

Owner

  • Name: alanliuxiang
  • Login: liuxiangwin
  • Kind: user
  • Location: Beijing

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - name: Red Hat AI
  - name: vLLM Project
title: "LLM Compressor"
date-released: 2024-08-08
url: https://github.com/vllm-project/llm-compressor

GitHub Events

Total
  • Push event: 1
  • Create event: 1
Last Year
  • Push event: 1
  • Create event: 1

Dependencies

llmcompressor/requirements (2).txt pypi
  • kfp *
  • kfp-kubernetes *
pyproject.toml pypi
setup.py pypi
  • accelerate >=0.20.3,
  • compressed-tensors ==0.10.1
  • datasets *
  • loguru *
  • numpy >=1.17.0,<2.0
  • pillow *
  • pynvml *
  • pyyaml >=5.0.0
  • requests >=2.0.0
  • torch >=1.7.0
  • tqdm >=4.0.0
  • transformers >4.0,<5.0