https://github.com/automatic1111/llm-launcher

Gradio UI for launching llama.cpp/TabbyAPI

Science Score: 26.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file: found
  • .zenodo.json file: found
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity: low (6.5%)
Last synced: 6 months ago

Repository

Gradio UI for launching llama.cpp/TabbyAPI

Basic Info
  • Host: GitHub
  • Owner: AUTOMATIC1111
  • Language: Python
  • Default Branch: master
  • Size: 54.7 KB
Statistics
  • Stars: 5
  • Watchers: 0
  • Forks: 1
  • Open Issues: 0
  • Releases: 0
Created 7 months ago · Last pushed 7 months ago
Metadata Files
  • Readme: README.md

LLM Launcher

This is a Gradio UI that lets you launch a server process for llama.cpp or TabbyAPI, track its stats, restart it if it crashes, select which model to use from the UI, view model properties (including the layer list and templates), and download models from Hugging Face.

Note: The code is for my personal use and there is no support. llama.cpp and/or TabbyAPI are assumed to be already installed.
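The launch-and-restart behavior described above can be sketched in a few lines. This is an illustrative sketch only, not the project's code: `supervise` and its parameters are hypothetical names, and a real launcher would additionally capture logs and expose server stats in the UI.

```python
# Illustrative sketch of restart-on-crash supervision for a server process
# (e.g. a llama.cpp or TabbyAPI server). Not the project's actual code.
import subprocess
import time

def supervise(cmd, max_restarts=3, poll_interval=0.5):
    """Launch `cmd` and relaunch it each time it exits, up to `max_restarts` times.

    Returns the number of restarts performed.
    """
    restarts = 0
    proc = subprocess.Popen(cmd)
    while restarts < max_restarts:
        if proc.poll() is not None:       # process has exited (crash or clean exit)
            restarts += 1
            proc = subprocess.Popen(cmd)  # relaunch with the same arguments
        time.sleep(poll_interval)
    proc.wait()
    return restarts
```

A production version would also want a backoff between restarts so a server that crashes instantly (e.g. a bad model path) does not relaunch in a tight loop.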

Installation

  • Have Python and Git installed.
  • Clone the repository and change into its directory.
  • Install the dependencies: pip install -r requirements.txt
  • Start the program: python main.py
  • After running for the first time, go to the Settings tab and set the paths for llama.cpp/TabbyAPI.
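Assuming Python and Git are already on your PATH, the steps above look like this in a shell (the repository URL is taken from the header of this page):

```shell
# Clone the launcher and enter its directory
git clone https://github.com/automatic1111/llm-launcher
cd llm-launcher

# Install the pinned dependencies, then start the Gradio UI
pip install -r requirements.txt
python main.py
```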

Owner

  • Login: AUTOMATIC1111
  • Kind: user

GitHub Events

Total
  • Watch event: 2
  • Push event: 5
  • Create event: 2
Last Year
  • Watch event: 2
  • Push event: 5
  • Create event: 2

Issues and Pull Requests

Last synced: 7 months ago


Dependencies

requirements.txt (PyPI)
  • Jinja2 ==3.1.6
  • gguf_parser ==0.1.1
  • gradio ==5.39.0
  • requests ==2.32.4
  • safetensors ==0.5.3