openllm-cljs

ClojureScript frontend for OpenLLM. All backend credit goes to the folks over at BentoML and their contributors.

https://github.com/gutzufusss/openllm-cljs

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.1%) to scientific vocabulary
Last synced: 8 months ago

Repository

ClojureScript frontend for OpenLLM. All backend credit goes to the folks over at BentoML and their contributors.

Basic Info
Statistics
  • Stars: 0
  • Watchers: 0
  • Forks: 0
  • Open Issues: 6
  • Releases: 0
Created over 2 years ago · Last pushed over 1 year ago
Metadata Files
Readme Changelog License Citation Codeowners Security

README.md

Banner for OpenLLM

🦾 OpenLLM


An open platform for operating large language models (LLMs) in production.
Fine-tune, serve, deploy, and monitor any LLMs with ease.

📖 Introduction

With OpenLLM, you can run inference with any open-source large-language models, deploy to the cloud or on-premises, and build powerful AI apps.

🚂 State-of-the-art LLMs: built-in support for a wide range of open-source LLMs and model runtimes, including Llama 2, StableLM, Falcon, Dolly, Flan-T5, ChatGLM, StarCoder, and more.

🔥 Flexible APIs: serve LLMs over RESTful API or gRPC with one command, query via WebUI, CLI, our Python/JavaScript client, or any HTTP client.

⛓️ Freedom To Build: First-class support for LangChain, BentoML and Hugging Face that allows you to easily create your own AI apps by composing LLMs with other models and services.

🎯 Streamline Deployment: Automatically generate your LLM server Docker Images or deploy as serverless endpoint via ☁️ BentoCloud.

🤖️ Bring your own LLM: Fine-tune any LLM to suit your needs with LLM.tuning(). (Coming soon)

Gif showing OpenLLM Intro


🏃 Getting Started

To use OpenLLM, you need to have Python 3.8 (or newer) and pip installed on your system. We highly recommend using a Virtual Environment to prevent package conflicts.

You can install OpenLLM using pip as follows:

```bash
pip install openllm
```

To verify if it's installed correctly, run:

```bash
$ openllm -h
Usage: openllm [OPTIONS] COMMAND [ARGS]...

  [ASCII-art OpenLLM banner]

  An open platform for operating large language models in production.
  Fine-tune, serve, deploy, and monitor any LLMs with ease.
```

Starting an LLM Server

To start an LLM server, use openllm start. For example, to start an OPT server, run the following:

```bash
openllm start opt
```

Following this, a Web UI will be accessible at http://localhost:3000 where you can experiment with the endpoints and sample input prompts.

OpenLLM provides a built-in Python client, allowing you to interact with the model. In a different terminal window or a Jupyter Notebook, create a client to start interacting with the model:

```python
import openllm

client = openllm.client.HTTPClient('http://localhost:3000')
client.query('Explain to me the difference between "further" and "farther"')
```

You can also use the openllm query command to query the model from the terminal:

```bash
export OPENLLM_ENDPOINT=http://localhost:3000
openllm query 'Explain to me the difference between "further" and "farther"'
```

Visit http://localhost:3000/docs.json for OpenLLM's API specification.
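
The same specification can also be fetched programmatically. The sketch below is a minimal, unofficial example using httpx (already a dependency of OpenLLM) against a locally running server; the info/paths keys are standard OpenAPI fields rather than anything OpenLLM-specific.

```python
# Hedged sketch: download the generated OpenAPI spec and list the exposed endpoints.
# Assumes an OpenLLM server started with `openllm start` is listening on port 3000.
import httpx

spec = httpx.get("http://localhost:3000/docs.json").json()
print(spec.get("info", {}).get("title"))   # service title, if present
print(sorted(spec.get("paths", {})))       # endpoint paths, e.g. /v1/embeddings
```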

OpenLLM seamlessly supports many models and their variants. Users can also specify a different variant of the model to be served by providing the --model-id argument, e.g.:

```bash
openllm start flan-t5 --model-id google/flan-t5-large
```

Note that openllm also supports fine-tuned weights, custom model paths, and quantized weights for any of the supported models, as long as they can be loaded with the model's architecture. Refer to the supported models section for each model's architecture.

Use the openllm models command to see the list of models and their variants supported in OpenLLM.

🧩 Supported Models

The following models are currently supported in OpenLLM. By default, OpenLLM doesn't include dependencies to run all models. The extra model-specific dependencies can be installed with the instructions below:

| Model | Architecture | Installation |
|-------|--------------|--------------|
| chatglm | ChatGLMForConditionalGeneration | `pip install "openllm[chatglm]"` |
| dolly-v2 | GPTNeoXForCausalLM | `pip install openllm` |
| falcon | FalconForCausalLM | `pip install "openllm[falcon]"` |
| flan-t5 | T5ForConditionalGeneration | `pip install "openllm[flan-t5]"` |
| gpt-neox | GPTNeoXForCausalLM | `pip install openllm` |
| llama | LlamaForCausalLM | `pip install "openllm[llama]"` |
| mpt | MPTForCausalLM | `pip install "openllm[mpt]"` |
| opt | OPTForCausalLM | `pip install "openllm[opt]"` |
| stablelm | GPTNeoXForCausalLM | `pip install openllm` |
| starcoder | GPTBigCodeForCausalLM | `pip install "openllm[starcoder]"` |
| baichuan | BaiChuanForCausalLM | `pip install "openllm[baichuan]"` |

Runtime Implementations (Experimental)

Different LLMs may have multiple runtime implementations. For instance, a model might run on PyTorch (pt), TensorFlow (tf), or Flax (flax).

If you wish to specify a particular runtime for a model, you can do so by setting the OPENLLM_{MODEL_NAME}_FRAMEWORK={runtime} environment variable before running openllm start.

For example, to use the TensorFlow (tf) implementation for the flan-t5 model, use the following command:

```bash
OPENLLM_FLAN_T5_FRAMEWORK=tf openllm start flan-t5
```
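
If you launch servers from Python rather than a shell, the same runtime selection can be passed through the child process environment. This is a minimal sketch under the assumption that the openllm executable is on your PATH; it is just a thin wrapper around the CLI, not an official API.

```python
# Hedged sketch: run `openllm start flan-t5` with the TensorFlow runtime selected
# via the OPENLLM_FLAN_T5_FRAMEWORK environment variable.
import os
import subprocess

env = dict(os.environ, OPENLLM_FLAN_T5_FRAMEWORK="tf")
subprocess.run(["openllm", "start", "flan-t5"], env=env, check=True)
```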

Note: For GPU support with Flax, refer to JAX's installation guide to make sure you have JAX support for the corresponding CUDA version.

Quantisation

OpenLLM supports quantisation with bitsandbytes and GPTQ:

```bash
openllm start mpt --quantize int8
```

To run inference with gptq, simply pass --quantize gptq:

```bash
openllm start falcon --model-id TheBloke/falcon-40b-instruct-GPTQ --quantize gptq --device 0
```

Note: to run GPTQ, make sure to install with pip install "openllm[gptq]". The weights of all supported models should be quantized before serving. See GPTQ-for-LLaMa for more information on GPTQ quantisation.

Fine-tuning support (Experimental)

One can serve OpenLLM models with any PEFT-compatible adapter layers using --adapter-id:

```bash
openllm start opt --model-id facebook/opt-6.7b --adapter-id aarnphm/opt-6-7b-quotes
```

It also supports adapters from custom paths:

```bash
openllm start opt --model-id facebook/opt-6.7b --adapter-id /path/to/adapters
```

To use multiple adapters, use the following format:

```bash
openllm start opt --model-id facebook/opt-6.7b --adapter-id aarnphm/opt-6.7b-lora --adapter-id aarnphm/opt-6.7b-lora:french_lora
```

By default, the first adapter-id will be the default LoRA layer, but users can optionally change which LoRA layer to use for inference via /v1/adapters:

```bash
curl -X POST http://localhost:3000/v1/adapters --json '{"adapter_name": "vn_lora"}'
```

Note that when multiple adapter-name and adapter-id pairs are provided, it is recommended to set the default adapter before sending inference requests, to avoid any performance degradation.
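
For reference, the same adapter switch can be issued from Python. The sketch below simply mirrors the curl request above using httpx; the adapter name french_lora is illustrative and must correspond to one of the names registered via --adapter-id.

```python
# Hedged sketch: select the active LoRA adapter over HTTP before sending inference.
# The request body mirrors the curl example above; "french_lora" is a placeholder name.
import httpx

resp = httpx.post(
    "http://localhost:3000/v1/adapters",
    json={"adapter_name": "french_lora"},
)
resp.raise_for_status()
```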

To include this in the Bento, you can also provide an --adapter-id to openllm build:

```bash
openllm build opt --model-id facebook/opt-6.7b --adapter-id ...
```

Note: We will gradually roll out support for fine-tuning all models. The following models currently support fine-tuning: OPT, Falcon, LLaMA.

Integrating a New Model

OpenLLM encourages contributions by welcoming users to incorporate their custom LLMs into the ecosystem. Check out Adding a New Model Guide to see how you can do it yourself.

Embeddings

OpenLLM tentatively provides an embeddings endpoint for supported models. It can be accessed via /v1/embeddings.

To use it from the CLI, simply call openllm embed:

```bash
openllm embed --endpoint http://localhost:3000 "I like to eat apples" -o json
{
  "embeddings": [
    0.006569798570126295,
    -0.031249752268195152,
    -0.008072729222476482,
    0.00847396720200777,
    -0.005293501541018486,
    ...<many embeddings>...
    -0.002078012563288212,
    -0.00676426338031888,
    -0.002022686880081892
  ],
  "num_tokens": 9
}
```

To invoke this endpoint, use client.embed from the Python SDK:

```python
import openllm

client = openllm.client.HTTPClient("http://localhost:3000")

client.embed("I like to eat apples")
```

Note: Currently, the following model families support embeddings: Llama, T5 (Flan-T5, FastChat, etc.), ChatGLM.

⚙️ Integrations

OpenLLM is not just a standalone product; it's a building block designed to integrate with other powerful tools easily. We currently offer integration with BentoML, LangChain, and Transformers Agents.

BentoML

OpenLLM models can be integrated as a Runner in your BentoML service. These runners have a generate method that takes a string as a prompt and returns a corresponding output string. This will allow you to plug and play any OpenLLM models with your existing ML workflow.

```python
import bentoml
import openllm
from bentoml.io import Text

model = "opt"

llm_config = openllm.AutoConfig.for_model(model)
llm_runner = openllm.Runner(model, llm_config=llm_config)

svc = bentoml.Service(name="llm-opt-service", runners=[llm_runner])

@svc.api(input=Text(), output=Text())
async def prompt(input_text: str) -> str:
    answer = await llm_runner.generate(input_text)
    return answer
```

LangChain

To quickly start a local LLM with langchain, simply do the following:

```python
from langchain.llms import OpenLLM

llm = OpenLLM(model_name="dolly-v2", model_id='databricks/dolly-v2-7b', device_map='auto')

llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")
```

langchain.llms.OpenLLM can also interact with a remote OpenLLM server. Given an OpenLLM server deployed elsewhere, you can connect to it by specifying its URL:

```python
from langchain.llms import OpenLLM

llm = OpenLLM(server_url='http://44.23.123.1:3000', server_type='grpc')
llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")
```

To integrate a LangChain agent with BentoML, you can do the following:

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenLLM
import bentoml
from bentoml.io import Text

llm = OpenLLM(
    model_name='flan-t5',
    model_id='google/flan-t5-large',
    embedded=False,
)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)

svc = bentoml.Service("langchain-openllm", runners=[llm.runner])

@svc.api(input=Text(), output=Text())
def chat(input_text: str):
    return agent.run(input_text)
```

Note You can find out more examples under the examples folder.

Transformers Agents

OpenLLM seamlessly integrates with Transformers Agents.

Warning: The Transformers Agent is still at an experimental stage. It is recommended to install OpenLLM with pip install -r nightly-requirements.txt to get the latest API updates for the Hugging Face agent.

```python
import transformers

agent = transformers.HfAgent("http://localhost:3000/hf/agent")  # URL that runs the OpenLLM server

agent.run("Is the following text positive or negative?", text="I don't like how this models is generate inputs")
```

Note: Only starcoder is currently supported with Agent integration. The example above was run on four T4s on an EC2 g4dn.12xlarge instance.

If you want to use the OpenLLM client to ask questions to the running agent, you can also do so:

```python
import openllm

client = openllm.client.HTTPClient("http://localhost:3000")

client.ask_agent(
    task="Is the following text positive or negative?",
    text="What are you thinking about?",
)
```

Gif showing Agent integration

🚀 Deploying to Production

There are several ways to deploy your LLMs:

🐳 Docker container

  1. Building a Bento: With OpenLLM, you can easily build a Bento for a specific model, like dolly-v2, using the build command:

```bash
openllm build dolly-v2
```

A Bento, in BentoML, is the unit of distribution. It packages your program's source code, models, files, artefacts, and dependencies.

  2. Containerize your Bento:

```bash
bentoml containerize <name:version>
```

This generates an OCI-compatible docker image that can be deployed anywhere Docker runs. For the best scalability and reliability of your LLM service in production, we recommend deploying with BentoCloud.

☁️ BentoCloud

Deploy OpenLLM with BentoCloud, the serverless cloud for shipping and scaling AI applications.

  1. Create a BentoCloud account: sign up here for early access

  2. Log into your BentoCloud account:

```bash
bentoml cloud login --api-token <your-api-token> --endpoint <bento-cloud-endpoint>
```

Note: Replace <your-api-token> and <bento-cloud-endpoint> with your specific API token and the BentoCloud endpoint respectively.

  3. Building a Bento: With OpenLLM, you can easily build a Bento for a specific model, such as dolly-v2:

```bash
openllm build dolly-v2
```

  4. Pushing a Bento: Push your freshly-built Bento service to BentoCloud via the push command:

```bash
bentoml push <name:version>
```

  5. Deploying a Bento: Deploy your LLMs to BentoCloud with a single bentoml deployment create command following the deployment instructions.

👥 Community

Engage with like-minded individuals passionate about LLMs, AI, and more on our Discord!

OpenLLM is actively maintained by the BentoML team. Feel free to reach out and join us in our pursuit to make LLMs more accessible and easy to use 👉 Join our Slack community!

🎁 Contributing

We welcome contributions! If you're interested in enhancing OpenLLM's capabilities or have any questions, don't hesitate to reach out in our Discord channel.

Check out our Developer Guide if you wish to contribute to OpenLLM's codebase.

🍇 Telemetry

OpenLLM collects usage data to enhance user experience and improve the product. We only report OpenLLM's internal API calls and ensure maximum privacy by excluding sensitive information. We will never collect user code, model data, or stack traces. For usage tracking, check out the code.

You can opt out of usage tracking by using the --do-not-track CLI option:

```bash
openllm [command] --do-not-track
```

Or by setting the environment variable OPENLLM_DO_NOT_TRACK=True:

```bash
export OPENLLM_DO_NOT_TRACK=True
```
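
When driving OpenLLM from Python, the same opt-out can be applied in-process. A minimal sketch, assuming the variable is set before OpenLLM reads it:

```python
# Hedged sketch: opt out of usage tracking for the current process.
import os

os.environ["OPENLLM_DO_NOT_TRACK"] = "True"

import openllm  # imported after the opt-out so subsequent calls pick up the setting
```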

📔 Citation

If you use OpenLLM in your research, we provide a citation to use:

```bibtex
@software{Pham_OpenLLM_Operating_LLMs_2023,
  author = {Pham, Aaron and Yang, Chaoyu and Sheng, Sean and Zhao, Shenyang and Lee, Sauyon and Jiang, Bo and Dong, Fog and Guan, Xipeng and Ming, Frost},
  license = {Apache-2.0},
  month = jun,
  title = {{OpenLLM: Operating LLMs in production}},
  url = {https://github.com/bentoml/OpenLLM},
  year = {2023}
}
```

Owner

  • Login: GutZuFusss
  • Kind: user
  • Location: Aachen, Germany
  • Company: Full time GitHub influencer artist


Citation (CITATION.cff)

cff-version: 1.2.0
title: 'OpenLLM: Operating LLMs in production'
message: >-
  If you use this software, please cite it using these
  metadata.
type: software
authors:
  - given-names: Aaron
    family-names: Pham
    email: aarnphm@bentoml.com
    orcid: 'https://orcid.org/0009-0008-3180-5115'
  - given-names: Chaoyu
    family-names: Yang
    email: chaoyu@bentoml.com
  - given-names: Sean
    family-names: Sheng
    email: ssheng@bentoml.com
  - given-names: Shenyang
    family-names: ' Zhao'
    email: larme@bentoml.com
  - given-names: Sauyon
    family-names: Lee
    email: sauyon@bentoml.com
  - given-names: Bo
    family-names: Jiang
    email: jiang@bentoml.com
  - given-names: Fog
    family-names: Dong
    email: fog@bentoml.com
  - given-names: Xipeng
    family-names: Guan
    email: xipeng@bentoml.com
  - given-names: Frost
    family-names: Ming
    email: frost@bentoml.com
repository-code: 'https://github.com/bentoml/OpenLLM'
url: 'https://bentoml.com/'
abstract: >-
  OpenLLM is an open platform for operating large language
  models (LLMs) in production. With OpenLLM, you can run
  inference with any open-source large-language models,
  deploy to the cloud or on-premises, and build powerful AI
  apps. It has built-in support for a wide range of
  open-source LLMs and model runtime, including StableLM,
  Falcon, Dolly, Flan-T5, ChatGLM, StarCoder and more.
  OpenLLM helps serve LLMs over RESTful API or gRPC with one
  command or query via WebUI, CLI, our Python/Javascript
  client, or any HTTP client. It provides first-class
  support for LangChain, BentoML and Hugging Face that
  allows you to easily create your own AI apps by composing
  LLMs with other models and services. Last but not least,
  it automatically generates LLM server OCI-compatible
  Container Images or easily deploys as a serverless
  endpoint via BentoCloud.
keywords:
  - MLOps
  - LLMOps
  - LLM
  - Infrastructure
  - Transformers
  - LLM Serving
  - Model Serving
  - Serverless Deployment
license: Apache-2.0
date-released: '2023-06-13'

GitHub Events

Total
  • Delete event: 3
  • Issue comment event: 3
  • Pull request event: 4
  • Create event: 3
Last Year
  • Delete event: 3
  • Issue comment event: 3
  • Pull request event: 4
  • Create event: 3

Issues and Pull Requests

Last synced: about 2 years ago

All Time
  • Total issues: 0
  • Total pull requests: 44
  • Average time to close issues: N/A
  • Average time to close pull requests: 8 days
  • Total issue authors: 0
  • Total pull request authors: 2
  • Average comments per issue: 0
  • Average comments per pull request: 0.14
  • Merged pull requests: 41
  • Bot issues: 0
  • Bot pull requests: 41
Past Year
  • Issues: 0
  • Pull requests: 44
  • Average time to close issues: N/A
  • Average time to close pull requests: 8 days
  • Issue authors: 0
  • Pull request authors: 2
  • Average comments per issue: 0
  • Average comments per pull request: 0.14
  • Merged pull requests: 41
  • Bot issues: 0
  • Bot pull requests: 41
Top Authors
Issue Authors
Pull Request Authors
  • dependabot[bot] (79)
  • GutZuFusss (5)
Top Labels
Issue Labels
Pull Request Labels
dependencies (79) github_actions (68) javascript (6) python (5)

Dependencies

.github/actions/setup-repo/action.yml actions
  • actions/cache v3 composite
  • actions/setup-python v4 composite
  • rlespinasse/github-slug-action v4.4.1 composite
.github/workflows/auto-bot.yml actions
  • lewagon/wait-on-check-action v1.3.1 composite
.github/workflows/binary-releases.yml actions
  • ./.github/actions/setup-repo * composite
  • actions/checkout v3 composite
  • actions/download-artifact v3 composite
  • actions/setup-python v4 composite
  • actions/upload-artifact v3 composite
  • dtolnay/rust-toolchain stable composite
  • taiki-e/install-action v2 composite
.github/workflows/build.yml actions
  • aarnphm/ec2-github-runner main composite
  • actions/checkout v3 composite
  • aquasecurity/trivy-action master composite
  • aws-actions/configure-aws-credentials v2 composite
  • docker/build-push-action v4 composite
  • docker/login-action v2.2.0 composite
  • docker/metadata-action v4.6.0 composite
  • docker/setup-buildx-action v2.9.1 composite
  • docker/setup-qemu-action v2.2.0 composite
  • github/codeql-action/upload-sarif v2 composite
  • rlespinasse/github-slug-action v4.4.1 composite
  • sigstore/cosign-installer v3.1.1 composite
.github/workflows/ci.yml actions
  • ./.github/actions/setup-repo * composite
  • actions/checkout v3 composite
  • actions/download-artifact v3 composite
  • actions/upload-artifact v3 composite
  • marocchino/sticky-pull-request-comment v2 composite
  • pre-commit/action v3.0.0 composite
  • re-actors/alls-green release/v1 composite
.github/workflows/cleanup.yml actions
  • actions/checkout v3 composite
.github/workflows/create-releases.yml actions
  • ./.github/actions/setup-repo * composite
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
  • crazy-max/ghaction-import-gpg v5 composite
  • pypa/gh-action-pypi-publish release/v1 composite
.github/workflows/release-notes.yml actions
  • ./.github/actions/setup-repo * composite
  • actions/checkout v3 composite
  • actions/download-artifact v3 composite
  • softprops/action-gh-release v1 composite
.github/workflows/update-changelog.yml actions
  • ./.github/actions/setup-repo * composite
  • actions/checkout v3 composite
  • crazy-max/ghaction-import-gpg v5 composite
  • peter-evans/create-pull-request v5 composite
contrib/clojure-ui/Dockerfile docker
  • node 18.16.1 build
src/openllm/bundle/oci/Dockerfile docker
  • base-container latest build
  • debian bullseye-slim build
  • kernel-builder latest build
  • nvidia/cuda 11.8.0-cudnn8-runtime-ubuntu22.04 build
  • pytorch-install latest build
contrib/clojure-ui/package.json npm
  • autoprefixer ^10.4.12 development
  • cssnano ^6.0.0 development
  • npm-run-all ^4.1.5 development
  • postcss ^8.4.23 development
  • postcss-cli ^10.1.0 development
  • shadow-cljs 2.25.2 development
  • tailwindcss ^3.3.2 development
  • @emotion/react ^11.10.6
  • @emotion/styled ^11.10.6
  • @mui/base 5.0.0-alpha.120
  • @mui/icons-material 5.11.16
  • @mui/material 5.11.12
  • @mui/x-data-grid 6.0.0
  • @mui/x-date-pickers 6.0.0
  • @tailwindcss/forms ^0.5.3
  • create-react-class 15.7.0
  • cross-env ^7.0.3
  • highlight.js 11.5.1
  • react ^18.2.0
  • react-dom ^18.2.0
  • react-transition-group ^4.4.5
package.json npm
  • pyright ^1.1.310 development
  • typescript ^5.0.4 development
  • turbo ^1.9.3
src/openllm_js/package.json npm
examples/langchain-chains-demo/requirements.txt pypi
  • BeautifulSoup4 *
  • langchain >=0.0.212
  • openllm *
  • pydantic *
examples/langchain-tools-demo/requirements.txt pypi
  • google-search-results *
  • langchain *
  • openllm *
nightly-requirements-gpu.txt pypi
nightly-requirements.txt pypi
pyproject.toml pypi
  • GitPython *
  • attrs >=23.1.0
  • bentoml [grpc,io]>=1.0.25
  • bitsandbytes <0.42
  • cattrs >=23.1.0
  • click >=8.1.6
  • cuda-python platform_system!="Darwin"
  • httpx *
  • inflection *
  • optimum *
  • orjson *
  • safetensors *
  • tabulate [widechars]>=0.9.0
  • transformers [torch,tokenizers,accelerate]>=4.29.0
  • typing_extensions *