Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (12.8%) to scientific vocabulary
Repository
Light LLM
Basic Info
- Host: GitHub
- Owner: erdogant
- License: mit
- Language: Jupyter Notebook
- Default Branch: main
- Size: 8.66 MB
Statistics
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
- Releases: 4
Metadata Files
README.md
LLMlight
LLMlight is a Python package for running Large Language Models (LLMs) locally with minimal dependencies. It provides a simple interface to interact with various LLM models, including support for GGUF models and local API endpoints.
🌟 Key Features
- Local LLM Support: Run LLMs locally with minimal dependencies
- Full Prompt Control:
- Query
- Instructions
- System
- Context
- Response Format
- Automatic formatting
- Temperature
- Top P
- A Single Endpoint for All Local Models: Compatible with various models, including:
- Hermes-3-Llama-3.2-3B
- Mistral-7B-Grok
- OpenHermes-2.5-Mistral-7B
- Gemma-2-9B-IT
- Flexible Embedding Methods: Support for multiple embedding approaches:
- TF-IDF for structured documents
- Bag of Words (BOW)
- BERT for free text
- BGE-Small
- Advanced Retrieval Methods:
- Naive RAG with fixed chunking
- RSE (Relevant Segment Extraction)
- Advanced Preprocessing Methods: Reasoning capabilities for complex queries:
- Global reasoning
- Chunk-wise reasoning
- Local Memory:
- Knowledge base stored as a video (.mp4) file
- PDF Processing: Built-in support for reading and processing PDF documents
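The "Naive RAG with fixed chunking" retrieval method listed above rests on a simple idea: split the context into fixed-size, overlapping pieces before scoring them against a query. A minimal sketch of that chunking step — the size and overlap values are illustrative, not LLMlight's defaults:

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    step = size - overlap  # advance by size minus overlap each time
    for start in range(0, len(text), step):
        chunk = text[start:start + size]
        if chunk:
            chunks.append(chunk)
    return chunks

document = "word " * 100  # 500 characters of dummy text
chunks = chunk_text(document, size=200, overlap=50)
print(len(chunks), len(chunks[0]))  # → 4 200
```

The overlap keeps sentences that straddle a chunk boundary retrievable from at least one chunk, at the cost of some duplicated text.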
📚 Documentation & Resources
🚀 Quick Start
Installation
```bash
# Install from PyPI
pip install LLMlight

# Install from GitHub
pip install git+https://github.com/erdogant/LLMlight
```
Basic Usage with Endpoint
```python
from LLMlight import LLMlight

# Initialize with default settings
client = LLMlight(endpoint='http://localhost:1234/v1/chat/completions')

# Run a simple query
response = client.prompt('What is the capital of France?', context='The capital of France is Amsterdam.', instructions='Do not argue with the information in the context. Only return the information from the context.')
print(response)
# According to the provided context, the capital of France is Amsterdam.
```
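An endpoint such as `http://localhost:1234/v1/chat/completions` follows the OpenAI-style chat-completions schema, which servers like LM Studio expose locally. As a sketch of what a client sends to such an endpoint, here is a request body builder; the role/field layout follows the OpenAI schema, and the default system message is illustrative:

```python
import json

def build_chat_request(query: str, system: str = "You are a helpful assistant.",
                       temperature: float = 0.7) -> str:
    """Build an OpenAI-style chat-completions request body as a JSON string."""
    payload = {
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": query},
        ],
        "temperature": temperature,
    }
    return json.dumps(payload)

body = build_chat_request("What is the capital of France?")
print(body)
```

Posting this body with `Content-Type: application/json` to the endpoint returns a JSON response whose generated text sits under `choices[0].message.content` in the OpenAI schema.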
📊 Examples
1. Basic Usage with Local GGUF
```python
from LLMlight import LLMlight

# Use with a local GGUF client
client = LLMlight(endpoint='path/to/your/client.gguf')

# Run a simple query
response = client.prompt('What is the capital of France?', context='The capital of France is Amsterdam.', instructions='Do not argue with the information in the context. Only return the information from the context.')
print(response)
# According to the provided context, the capital of France is Amsterdam.
```
2. Using with LM Studio
```python
from LLMlight import LLMlight

# Initialize with LM Studio endpoint
client = LLMlight(endpoint="http://localhost:1234/v1/chat/completions")

# Run queries
response = client.prompt('Explain quantum computing in simple terms')
```
3. Check Available Models at Endpoint
```python
from LLMlight import LLMlight

# Initialize client
client = LLMlight(verbose='info')

# List the models available at the endpoint
modelnames = client.get_available_models(validate=False)
print(modelnames)
```
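OpenAI-compatible servers such as LM Studio list their loaded models at `GET /v1/models`, which is presumably what a model-discovery call queries under the hood. A sketch of parsing that response with the standard library; the sample body below is illustrative, not captured from a live server:

```python
import json

def parse_model_ids(response_text: str) -> list[str]:
    """Extract model ids from an OpenAI-style /v1/models response body."""
    data = json.loads(response_text)
    return [entry["id"] for entry in data.get("data", [])]

# Illustrative response body in the OpenAI /v1/models schema
sample = '{"object": "list", "data": [{"id": "hermes-3-llama-3.2-3b", "object": "model"}]}'
print(parse_model_ids(sample))  # → ['hermes-3-llama-3.2-3b']
```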
4. Query against PDF files
```python
from LLMlight import LLMlight

# Initialize client
client = LLMlight()

# Read the PDF
context = client.read_pdf(r'path/to/document.pdf', return_type='string')

# Query the document
response = client.prompt('Summarize the main points of this document', context=context)
print(response)
```
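Before prompting, the TF-IDF embedding method listed above can rank chunks of the extracted text against the query and keep only the best matches as context. A sketch with scikit-learn (a listed dependency of this package); the chunks and query are illustrative, and this stands in for LLMlight's actual retrieval pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative chunks, e.g. produced by splitting extracted PDF text
chunks = [
    "The report covers quarterly revenue and profit margins.",
    "Hyperspectral imaging captures many narrow spectral bands.",
    "The appendix lists all contributing authors.",
]
query = "What is hyperspectral imaging?"

# Fit TF-IDF over chunks plus query so they share one vocabulary
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(chunks + [query])

# Score each chunk against the query vector (last row)
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
best_chunk = chunks[scores.argmax()]
print(best_chunk)  # → Hyperspectral imaging captures many narrow spectral bands.
```

TF-IDF works well here because it rewards exact term overlap, which is why the README recommends it for structured documents; free text with paraphrasing benefits more from BERT-style embeddings.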
5. Global Reasoning
```python
from LLMlight import LLMlight

# Initialize client
client = LLMlight(preprocessing='global_reasoning')

# Read the PDF
context = client.read_pdf(r'path/to/document.pdf', return_type='string')

# Query about the document
response = client.prompt('Summarize the main points of this document', context=context, instructions='Do not argue with the information in the context. Only return the information from the context.')
print(response)
```
6. Creating Local Memory Database
```python
# Import library
from LLMlight import LLMlight

# Initialize with default settings
client = LLMlight(preprocessing=None, retrieval_method=None)

# Load existing video memory
client.memory_init(path_to_memory="knowledge_base.mp4")

# Append more documents: PDF/txt/etc. files
filepaths = [r'c://path_to_your_files//article1.pdf', r'c://path_to_your_files//myfile.txt']
client.memory_add(input_files=filepaths)

# Add text chunks if you like
client.memory_add(text=['Apes like USB sticks', 'Trees are mainly yellow'])

# Save memory to disk. You can either create a new one or overwrite an existing one.
client.memory_save(filepath="knowledge_base_with_more_data.mp4", overwrite=False)

# Run simple queries
response = client.prompt('What do apes like?', instructions='Only return the information from the context. Answer with a maximum of 3 words, and start with "Apes like: "')
print(response)

response = client.prompt('What is the capital of France?', context='The capital of France is Amsterdam.', instructions='Do not argue with the information in the context. Only return the information from the context.')
print(response)

response = client.prompt('Provide a summary of HyperSpectral from the pdf or text file.', instructions='Do not argue with the information in the context. Only return the information from the context.')
print(response)
```
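How a stored chunk like 'Apes like USB sticks' gets matched to the query 'What do apes like?' can be sketched with the bag-of-words embedding listed among the features: compare term-count vectors by cosine similarity. This is a simplified stand-in for the package's actual retrieval, using only the standard library:

```python
import math
from collections import Counter

def bow_cosine(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two strings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

# Chunks as stored in the memory example above
memory = ["Apes like USB sticks", "Trees are mainly yellow"]
query = "What do apes like"

# Pick the chunk most similar to the query
best = max(memory, key=lambda chunk: bow_cosine(query, chunk))
print(best)  # → Apes like USB sticks
```

The first chunk wins because it shares the tokens "apes" and "like" with the query, while the second shares none.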
7. Load Local Memory Database
```python
# Import library
from LLMlight import LLMlight

# Initialize with default settings
client = LLMlight(preprocessing=None, retrieval_method=None, path_to_memory="knowledge_base.mp4")

# Create queries
response = client.prompt('What do apes like?', instructions='Only return the information from the context. Answer with a maximum of 3 words, and start with "Apes like: "')
print(response)
```
🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
👥 Contributors
👨‍💻 Maintainer
- Erdogan Taskesen (@erdogant)
☕ Support
This library is free and open source. If you find it useful, consider supporting its development:
📝 License
This project is licensed under the MIT License - see the LICENSE file for details.
Owner
- Name: Erdogan
- Login: erdogant
- Kind: user
- Location: Den Haag
- Website: https://erdogant.github.io/
- Repositories: 51
- Profile: https://github.com/erdogant
Machine Learning | Statistics | Bayesian | D3js | Visualizations
Citation (CITATION.cff)
# YAML 1.2
---
authors:
  - family-names: Taskesen
    given-names: Erdogan
    orcid: "https://orcid.org/0000-0002-3430-9618"
cff-version: "1.1.0"
date-released: 2020-10-07
keywords:
- "python"
- "LLMlight"
license: "MIT"
message: "If you use this software, please cite it using these metadata."
repository-code: "https://github.com/erdogant/LLMlight"
title: "LLMlight"
version: "0.1.0"
...
GitHub Events
Total
- Release event: 3
- Push event: 17
- Create event: 3
Last Year
- Release event: 3
- Push event: 17
- Create event: 3
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 0
- Total pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Total issue authors: 0
- Total pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 1
- Total downloads: 31 last-month (PyPI)
- Total dependent packages: 0
- Total dependent repositories: 0
- Total versions: 4
- Total maintainers: 1
pypi.org: llmlight
LLMlight is a Python library for ...
- Homepage: https://erdogant.github.io/LLMlight
- Documentation: https://llmlight.readthedocs.io/
- License: MIT License Copyright (c) 2025 Erdogan Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
- Latest release: 0.3.0 (published 7 months ago)
Rankings
Maintainers (1)
Dependencies
- actions/checkout v2 composite
- github/codeql-action/analyze v1 composite
- github/codeql-action/autobuild v1 composite
- github/codeql-action/init v1 composite
- actions/checkout v3 composite
- actions/setup-python v4 composite
- json-repair *
- llama-cpp-python *
- pymupdf *
- scikit-learn *
- sentence_transformers *
- irelease * development
- numpy * development
- pytest * development
- rst2pdf * development
- sphinx * development
- sphinx_rtd_theme * development
- sphinxcontrib-fulltoc * development
- tabulate * development
- accelerate *
- bitsandbytes *
- flash_attn *
- json-repair *
- llama-cpp-python *
- pymupdf *
- scikit-learn *
- sentence_transformers *