https://github.com/floneum/floneum
Instant, controllable, local pre-trained AI models in Rust
Science Score: 36.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ✓ Committers with academic emails: 1 of 16 committers (6.3%) from academic institutions
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (11.7%) to scientific vocabulary
Repository
Instant, controllable, local pre-trained AI models in Rust
Basic Info
- Host: GitHub
- Owner: floneum
- License: apache-2.0
- Language: Rust
- Default Branch: main
- Homepage: http://floneum.com/kalosm
- Size: 259 MB
Statistics
- Stars: 2,005
- Watchers: 26
- Forks: 111
- Open Issues: 51
- Releases: 5
Metadata Files
README.md
Floneum
Floneum is an ecosystem of crates that make it easy to develop applications that use local or remote AI models. There are three main projects in this repo:
- Kalosm: A simple interface for pre-trained models in Rust
- Floneum Editor (preview): A graphical editor for local AI workflows. See the user documentation or plugin documentation for more information.
- Fusor: A runtime for quantized ML inference. Fusor uses WGPU to run models on any accelerator, natively or in the browser.
Kalosm
Kalosm is a simple interface for pre-trained models in Rust that backs Floneum. It makes it easy to interact with pre-trained language, audio, and image models.
Model Support
Kalosm supports a variety of models. Here is a list of the models that are currently supported:
| Model | Modality | Size | Description | Quantized | CUDA + Metal Accelerated | Example |
| ---------------- | -------- | ---------- | -------------------------------------- | --------- | ------------------------ | ---------------------------- |
| Llama | Text | 1b-70b | General purpose language model | ✅ | ✅ | llama 3 chat |
| Mistral | Text | 7-13b | General purpose language model | ✅ | ✅ | mistral chat |
| Phi | Text | 2b-4b | Small reasoning focused language model | ✅ | ✅ | phi 3 chat |
| Whisper | Audio | 20MB-1GB | Audio transcription model | ✅ | ✅ | live whisper transcription |
| RWuerstchen | Image | 5gb | Image generation model | ❌ | ✅ | rwuerstchen image generation |
| TrOcr | Image | 3gb | Optical character recognition model | ❌ | ✅ | Text Recognition |
| Segment Anything | Image | 50MB-400MB | Image segmentation model | ❌ | ❌ | Image Segmentation |
| Bert | Text | 100MB-1GB | Text embedding model | ❌ | ✅ | Semantic Search |
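To make the table concrete, here is a minimal sketch of producing a text embedding with the Bert model listed above. The `Bert::new` and `embed` names are assumptions based on the Kalosm documentation, so check the rbert/kalosm docs on docs.rs before relying on them:

```rust
use kalosm::language::*;

#[tokio::main]
async fn main() {
    // Load a pre-trained Bert embedding model
    // (Bert::new and embed are assumed item names; verify against the docs)
    let bert = Bert::new().await.unwrap();

    // Embed a sentence; the resulting vector can be stored in a vector
    // database and compared against other embeddings for semantic search
    let _embedding = bert
        .embed("Kalosm runs local AI models in Rust")
        .await
        .unwrap();
}
```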
Utilities
Kalosm also supports a variety of utilities around pre-trained models. These include:
- Extracting, formatting, and retrieving context for LLMs: extract context from txt/html/docx/md/pdf files, chunk that context, then search for relevant chunks with vector database integrations
- Transcribing audio from your microphone or a file (a minimal sketch follows this list)
- Crawling and scraping content from web pages
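For the audio utility, here is the sketch mentioned above: live microphone transcription in a few lines. The item names (`Whisper`, `MicInput`, `transcribe`, `to_std_out`) are assumptions based on the Kalosm documentation and may differ between versions; the project's live whisper transcription example linked in the table above is the authoritative version:

```rust
use kalosm::sound::*;

#[tokio::main]
async fn main() {
    // Load a pre-trained Whisper model (weights are downloaded on first use)
    let model = Whisper::new().await.unwrap();

    // Stream audio from the default microphone
    let mic = MicInput::default();
    let audio = mic.stream();

    // Transcribe the stream and print text chunks as they are produced
    audio.transcribe(model).to_std_out().await.unwrap();
}
```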
Performance
Kalosm uses the candle machine learning library to run models in pure Rust. It supports quantized and accelerated models with performance on par with llama.cpp:
Mistral 7b

| Accelerator | Kalosm | llama.cpp |
| ----------- | ------ | --------- |
| Metal (M2)  | 39 t/s | 27 t/s    |
Structured Generation
Kalosm supports structured generation with arbitrary parsers. It uses a custom parser engine and sampler with structure-aware acceleration to make structured generation even faster than uncontrolled text generation. You can take any Rust type and add #[derive(Parse, Schema)] to make it usable with structured generation:
```rust
use kalosm::language::*;

/// A fictional character
#[derive(Parse, Schema, Clone, Debug)]
struct Character {
    /// The name of the character
    #[parse(pattern = "[A-Z][a-z]{2,10} [A-Z][a-z]{2,10}")]
    name: String,
    /// The age of the character
    #[parse(range = 1..=100)]
    age: u8,
    /// A description of the character
    #[parse(pattern = "[A-Za-z ]{40,200}")]
    description: String,
}

#[tokio::main]
async fn main() {
    // First create a model. Chat models tend to work best with structured generation
    let model = Llama::phi3().await.unwrap();
    // Then create a task with the parser as constraints
    let task = model
        .task("You generate realistic JSON placeholders for characters")
        .typed();
    // Finally, run the task
    let mut stream = task(&"Create a list of random characters", &model);
    stream.to_std_out().await.unwrap();
    let characters: [Character; 10] = stream.await.unwrap();
    println!("{characters:?}");
}
```
https://github.com/user-attachments/assets/8900f57d-55c8-4d4a-a67b-73beab1e5155
In addition to regex, you can provide your own grammar to generate structured data. This lets you constrain the response to any structure you want including complex data structures like JSON, HTML, and XML.
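As a further illustration of constraining output to a fixed structure, here is a hedged sketch that reuses the same derives shown above on a plain enum; whether the Parse/Schema derives accept unit enums like this is an assumption, so treat it as a sketch rather than confirmed API:

```rust
use kalosm::language::*;

/// The sentiment of a piece of text.
/// A sketch: unit-enum support for these derives is assumed, not verified.
#[derive(Parse, Schema, Clone, Debug)]
enum Sentiment {
    Positive,
    Negative,
    Neutral,
}
```

A type like this could then be used with a `.typed()` task in the same way as the `Character` struct above.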
Kalosm Quickstart!
This quickstart will get you up and running with a simple chatbot. Let's get started!
A more complete guide for Kalosm is available on the Kalosm website, and examples are available in the examples folder.
- Install Rust
- Create a new project:
```sh
cargo new kalosm-hello-world
cd ./kalosm-hello-world
```
- Add Kalosm as a dependency
```sh
# You can use --features language,metal, --features language,cuda, or --features language,mkl if your machine supports an accelerator
cargo add kalosm --features language
cargo add tokio --features full
```
- Add this code to your main.rs file:
```rust, no_run
use kalosm::language::*;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a chat session backed by a local model
    let model = Llama::phi3().await?;
    let mut chat = model.chat();
    // Read a prompt from stdin, stream the response to stdout, repeat
    loop {
        chat(&prompt_input("\n> ")?).to_std_out().await?;
    }
}
```
- Run your application with:
```sh
cargo run --release
```
Fusor
⚠️ Fusor is still early in development and is not ready for production use. Fusor will serve as the backend for Kalosm and Floneum in the 0.5 release to enable web and AMD support
Fusor is a WGPU runtime for quantized ML inference. Fusor works with the gguf file format to load quantized models. It uses WebGPU to target many different accelerators, including Nvidia GPUs, AMD GPUs, and Metal. Most ML frameworks contain hand-optimized kernels that perform a series of operations together. Fusor instead uses a kernel fusion compiler to merge custom operation chains into an optimized kernel without dropping down to shader code. For example, this compiles to a single kernel:
```rust, ignore
fn exp_add_one(tensor: Tensor<2, f32>) -> Tensor<2, f32> {
    1. + (-tensor).exp()
}
```
Community
If you are interested in either project, you can join the discord to discuss the project and get help.
Contributing
- Report issues on our issue tracker.
- Help other users in the discord
- If you are interested in contributing, feel free to reach out on discord
Owner
- Name: floneum
- Login: floneum
- Kind: organization
- Repositories: 6
- Profile: https://github.com/floneum
GitHub Events
Total
- Create event: 60
- Commit comment event: 1
- Release event: 1
- Issues event: 61
- Watch event: 498
- Delete event: 49
- Issue comment event: 52
- Push event: 651
- Pull request review event: 4
- Pull request event: 113
- Fork event: 38
Last Year
- Create event: 60
- Commit comment event: 1
- Release event: 1
- Issues event: 61
- Watch event: 498
- Delete event: 49
- Issue comment event: 52
- Push event: 651
- Pull request review event: 4
- Pull request event: 113
- Fork event: 38
Committers
Last synced: 6 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Evan Almloff | e****f@g****m | 1,119 |
| ealmloff | e****f@u****m | 110 |
| flavio | n****a@h****t | 17 |
| Evan Almloff | e****1@s****u | 8 |
| KerfuffleV2 | k****e@k****e | 8 |
| dependabot[bot] | 4****]@u****m | 7 |
| Tuareg | c****d@g****m | 3 |
| Xin Hao | h****t@g****m | 3 |
| Alex Araujo | a****o@g****m | 2 |
| Yevgnen | Y****n@u****m | 2 |
| Alex Boehm | k****6@s****g | 1 |
| Daniel Frederico Lins Leite | x****j@h****m | 1 |
| Ikko Eltociear Ashimine | e****r@g****m | 1 |
| Matus Faro | m****o@u****m | 1 |
| Oscar T Giles | o****s@g****m | 1 |
| Vinay Narayana | n****r@g****m | 1 |
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 116
- Total pull requests: 358
- Average time to close issues: 22 days
- Average time to close pull requests: 2 days
- Total issue authors: 48
- Total pull request authors: 16
- Average comments per issue: 0.6
- Average comments per pull request: 0.12
- Merged pull requests: 314
- Bot issues: 0
- Bot pull requests: 22
Past Year
- Issues: 47
- Pull requests: 142
- Average time to close issues: 8 days
- Average time to close pull requests: 5 days
- Issue authors: 31
- Pull request authors: 11
- Average comments per issue: 0.47
- Average comments per pull request: 0.22
- Merged pull requests: 111
- Bot issues: 0
- Bot pull requests: 13
Top Authors
Issue Authors
- ealmloff (57)
- yujonglee (4)
- InAnYan (3)
- rectalogic (3)
- DaveyUS (2)
- usametov (2)
- prabirshrestha (2)
- joshka (2)
- dbkblk (2)
- tyressk (1)
- NewtonChutney (1)
- iganev (1)
- LuckyTurtleDev (1)
- aminnasiri (1)
- katopz (1)
Pull Request Authors
- ealmloff (304)
- dependabot[bot] (22)
- newfla (8)
- aaraujo (4)
- joshka (2)
- fourlexboehm (2)
- Liberxue (2)
- matusfaro (2)
- haoxins (2)
- OscartGiles (2)
- DogeDark (2)
- xunilrj (2)
- Yevgnen (1)
- LafCorentin (1)
- Vinay26k (1)
Packages
- Total packages: 20
- Total downloads: cargo 235,525 total
- Total dependent packages: 33 (may contain duplicates)
- Total dependent repositories: 0 (may contain duplicates)
- Total versions: 135
- Total maintainers: 1
crates.io: kalosm-model-types
Shared types for Kalosm models
- Documentation: https://docs.rs/kalosm-model-types/
- License: MIT/Apache-2.0
- Latest release: 0.4.0 (published about 1 year ago)
- Maintainers: 1
crates.io: kalosm-parse-macro
A macro to derive kalosm parsing traits
- Documentation: https://docs.rs/kalosm-parse-macro/
- License: MIT/Apache-2.0
- Latest release: 0.4.1 (published 12 months ago)
- Maintainers: 1
crates.io: kalosm-common
Helpers for kalosm downloads and candle utilities
- Documentation: https://docs.rs/kalosm-common/
- License: MIT/Apache-2.0
- Latest release: 0.4.0 (published about 1 year ago)
- Maintainers: 1
crates.io: kalosm-language-model
A common interface for language models/transformers
- Documentation: https://docs.rs/kalosm-language-model/
- License: MIT/Apache-2.0
- Latest release: 0.4.1 (published 12 months ago)
- Maintainers: 1
crates.io: rmistral
A simple interface for Mistral models
- Documentation: https://docs.rs/rmistral/
- License: MIT/Apache-2.0
- Latest release: 0.1.0 (published about 2 years ago)
- Maintainers: 1
crates.io: kalosm-ocr
A simple interface for pretrained OCR models
- Documentation: https://docs.rs/kalosm-ocr/
- License: MIT/Apache-2.0
- Latest release: 0.4.0 (published about 1 year ago)
- Maintainers: 1
crates.io: kalosm
A simple interface for pretrained AI models
- Documentation: https://docs.rs/kalosm/
- License: MIT/Apache-2.0
- Latest release: 0.4.0 (published about 1 year ago)
- Maintainers: 1
crates.io: kalosm-vision
A set of pretrained vision models
- Documentation: https://docs.rs/kalosm-vision/
- License: MIT/Apache-2.0
- Latest release: 0.4.0 (published about 1 year ago)
- Maintainers: 1
crates.io: rbert
A simple interface for Bert embeddings
- Documentation: https://docs.rs/rbert/
- License: MIT/Apache-2.0
- Latest release: 0.4.0 (published about 1 year ago)
- Maintainers: 1
crates.io: rwuerstchen
A simple interface for RWuerstchen image generation models
- Documentation: https://docs.rs/rwuerstchen/
- License: MIT/Apache-2.0
- Latest release: 0.4.0 (published about 1 year ago)
- Maintainers: 1
crates.io: kalosm-sound
A set of pretrained audio models
- Documentation: https://docs.rs/kalosm-sound/
- License: MIT/Apache-2.0
- Latest release: 0.4.0 (published about 1 year ago)
- Maintainers: 1
crates.io: kalosm-learning
A simplified machine learning library for building off of pretrained models.
- Documentation: https://docs.rs/kalosm-learning/
- License: MIT/Apache-2.0
- Latest release: 0.4.0 (published about 1 year ago)
- Maintainers: 1
crates.io: kalosm-sample
A common interface for token sampling and helpers for structured LLM sampling
- Documentation: https://docs.rs/kalosm-sample/
- License: MIT/Apache-2.0
- Latest release: 0.4.1 (published 12 months ago)
- Maintainers: 1
crates.io: kalosm-llama
A simple interface for Llama models
- Documentation: https://docs.rs/kalosm-llama/
- License: MIT/Apache-2.0
- Latest release: 0.4.3 (published 7 months ago)
- Maintainers: 1
crates.io: kalosm-learning-macro
A macro to derive kalosm learning traits
- Documentation: https://docs.rs/kalosm-learning-macro/
- License: MIT/Apache-2.0
- Latest release: 0.4.0 (published about 1 year ago)
- Maintainers: 1
crates.io: segment-anything-rs
A simple interface for Segment Anything models
- Documentation: https://docs.rs/segment-anything-rs/
- License: MIT/Apache-2.0
- Latest release: 0.4.0 (published about 1 year ago)
- Maintainers: 1
crates.io: rwhisper
A simple interface for Whisper transcription models in Rust
- Documentation: https://docs.rs/rwhisper/
- License: MIT/Apache-2.0
- Latest release: 0.4.1 (published 12 months ago)
- Maintainers: 1
crates.io: kalosm-streams
A set of streams for pretrained models in Kalosm
- Documentation: https://docs.rs/kalosm-streams/
- License: MIT/Apache-2.0
- Latest release: 0.4.0 (published about 1 year ago)
- Maintainers: 1
crates.io: rphi
A simple interface for Phi models
- Documentation: https://docs.rs/rphi/
- License: MIT/Apache-2.0
- Latest release: 0.3.2 (published over 1 year ago)
- Maintainers: 1
crates.io: kalosm-language
A set of pretrained language models
- Documentation: https://docs.rs/kalosm-language/
- License: MIT/Apache-2.0
- Latest release: 0.4.2 (published 7 months ago)