llmorchestrator
Science Score: 44.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references: not detected
- ○ Academic publication links: not detected
- ○ Academic email domains: not detected
- ○ Institutional organization owner: not detected
- ○ JOSS paper metadata: not detected
- ○ Scientific vocabulary similarity: low similarity (10.5%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: trisanths
- License: MIT
- Language: Python
- Default Branch: main
- Size: 790 KB
Statistics
- Stars: 2
- Watchers: 1
- Forks: 1
- Open Issues: 6
- Releases: 0
Metadata Files
README.md
LLMOrchestrator
A powerful framework for orchestrating and enhancing Large Language Model (LLM) outputs through multi-model orchestration, chain-of-thought reasoning, and diverse perspectives.
Overview
LLMOrchestrator is designed to enhance LLM outputs by:
- Orchestrating multiple LLMs in various configurations
- Implementing chain-of-thought reasoning with multiple perspectives
- Leveraging different models for their specific strengths
- Introducing controlled randomness and diversity in outputs
- Providing robust verification and validation mechanisms
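The core generate-verify loop behind this design can be sketched in plain Python. This is a hypothetical illustration of the idea, not the library's actual implementation; `orchestrate`, `generate`, and `verify` here are stand-in names:

```python
from typing import Callable

def orchestrate(generate: Callable[[str], str],
                verify: Callable[[str], bool],
                prompt: str,
                max_iterations: int = 3) -> str:
    """Generate an answer, re-generating until a verifier model
    accepts it or the iteration budget is exhausted."""
    output = ""
    for _ in range(max_iterations):
        output = generate(prompt)
        if verify(output):  # verifier approves the draft
            break
        # Feed the rejected draft back so the next attempt can improve on it
        prompt = f"{prompt}\n\nPrevious attempt was rejected:\n{output}"
    return output

# Toy stand-ins: the "generator" returns a fixed string,
# the "verifier" checks for the word "step".
result = orchestrate(
    generate=lambda p: "step-by-step answer",
    verify=lambda o: "step" in o,
    prompt="Explain X",
)
```

Swapping the lambdas for real model calls (one model generating, a second one verifying) gives the generator/verifier pairing the Quick Start below uses.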
Features
- Multi-Model Orchestration: Chain different LLMs in various orders
- Chain-of-Thought Reasoning: Break down complex problems into steps
- Diverse Perspectives: Combine outputs from different models
- Adaptive Learning: Optimize prompts based on performance
- Parallel Processing: Handle multiple requests efficiently
- Caching: Improve response times for repeated queries
- Monitoring: Track performance and quality metrics
- Custom Verification: Implement domain-specific validation
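To make the Custom Verification feature concrete: any plain Python predicate can serve as a domain-specific validator. The function and section names below are purely illustrative, not part of the library API:

```python
def medical_report_validator(output: str) -> bool:
    """Reject drafts that are too short or missing required sections."""
    required_sections = ("Findings", "Recommendations")
    long_enough = len(output.split()) >= 20
    has_sections = all(s in output for s in required_sections)
    return long_enough and has_sections

draft = ("Findings: no anomalies detected in the scan results, all values "
         "within expected reference ranges across every measured category. "
         "Recommendations: routine follow-up in twelve months is advised.")
print(medical_report_validator(draft))  # True
```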
Installation
```bash
pip install llm-orchestrator
```
Quick Start
```python
from LLMOrchestrator.models import OpenAIModel, LocalModel
from LLMOrchestrator.controller import Controller, PromptTemplate

# Initialize models
generator = OpenAIModel(
    model_name="gpt-3.5-turbo",
    api_key="your-api-key"
)
verifier = LocalModel(
    model_name="facebook/opt-125m",
    device="cpu"
)

# Create a prompt template
template = PromptTemplate(
    "Please provide a detailed analysis of: {prompt}"
)

# Initialize controller
controller = Controller(
    generator=generator,
    verifier=verifier,
    max_iterations=3,
    max_verifications=2,
    parallel_processing=True,
    cache_enabled=True,
    adaptive_learning=True,
    monitoring_enabled=True,
    prompt_template=template
)

# Execute with a prompt
result = controller.execute(
    prompt="Analyze the impact of artificial intelligence on healthcare.",
    stop_early=False
)

# Get performance metrics
metrics = controller.get_validation_metrics()
print(f"Quality Score: {metrics.quality_score}")
print(f"Confidence: {metrics.confidence_score}")
print(f"Processing Time: {metrics.processing_time}s")
```
Advanced Usage
Chain-of-Thought Reasoning
```python
# Configure for complex reasoning tasks
controller = Controller(
    generator=generator,
    verifier=verifier,
    max_iterations=5,  # Allow more iterations for complex reasoning
    prompt_template=PromptTemplate(
        "Let's solve this step by step:\n"
        "1. First, let's analyze...\n"
        "2. Then, we can consider...\n"
        "3. Finally, we can conclude...\n\n"
        "Problem: {prompt}"
    )
)

result = controller.execute(
    prompt="Explain the relationship between quantum computing and cryptography",
    stop_early=False
)
```
Parallel Processing with Multiple Models
```python
# Process multiple prompts with different models
prompts = [
    "Generate a technical specification for a new API",
    "Create a user interface design document",
    "Write a security assessment report"
]

results = controller.execute_parallel(
    prompts=prompts,
    max_workers=2,
    stop_early=False
)

# Get performance report
report = controller.get_performance_report()
print(f"Total processing time: {report['total_time']}s")
print(f"Average quality score: {report['avg_quality_score']}")
```
Custom Controller Implementation
```python
from LLMOrchestrator.controller import CustomController

def custom_processing(output: str) -> str:
    # Add custom processing logic
    return output.upper()

controller = CustomController(
    custom_func=custom_processing,
    generator=generator,
    verifier=verifier,
    parallel_processing=True
)

result = controller.execute(
    prompt="Generate a response",
    n=3  # Generate 3 variations
)
```
Use Cases
LLMOrchestrator is particularly effective for:
- Complex Reasoning Tasks: Break down multi-step problems
- Content Generation: Combine different models for writing and editing
- Technical Documentation: Generate and validate technical content
- Research Analysis: Synthesize information from multiple perspectives
- Quality Assurance: Implement multiple verification layers
- Performance Optimization: Cache and adapt to improve response times
- Prompt Engineering: Optimize prompts through performance tracking
Configuration
Create a .env file with your API keys:
```
OPENAI_API_KEY=your-openai-key
```
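The declared python-dotenv dependency loads this file automatically via `load_dotenv()`. As a rough idea of what that does, a minimal stdlib-only equivalent might look like this (a sketch, not python-dotenv's actual code):

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Parse simple KEY=value lines into os.environ.

    Blank lines and comments are skipped; existing environment
    variables are not overridden (mirroring load_dotenv's default).
    """
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

After loading, the key is available as `os.getenv("OPENAI_API_KEY")`.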
Configure advanced settings in orchestration_config.json:
```json
{
  "max_iterations": 3,
  "max_verifications": 2,
  "parallel_processing": true,
  "cache_enabled": true,
  "adaptive_learning": true,
  "monitoring_enabled": true
}
```
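How these settings reach the Controller is not shown in the README. Assuming they map one-to-one onto the constructor keywords used in Quick Start, the glue code might look like this (hypothetical; `load_orchestration_config` is not a library function):

```python
import json

def load_orchestration_config(path: str = "orchestration_config.json") -> dict:
    """Read controller settings from the JSON config file."""
    with open(path) as fh:
        return json.load(fh)

# Hypothetical usage, spreading the settings into the constructor:
# controller = Controller(generator=generator, verifier=verifier,
#                         **load_orchestration_config())
```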
Testing
Run the test suite:
```bash
pytest tests/ -v
```
Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Run tests
- Submit a pull request
License
This project is licensed under the MIT License - see the LICENSE file for details.
Citation
If you use this software in your research, please cite:
```bibtex
@software{llmorchestrator2024,
  author = {Srinivasan, Trisanth and Patapati, Santosh},
  title = {LLMOrchestrator: A Multi-Model LLM Orchestration Framework for Reducing Bias and Iterative Reasoning},
  year = {2025},
  publisher = {GitHub},
  url = {https://github.com/builtbypyro/LLMOrchestrator}
}
```
Documentation
Full documentation is available in the docs directory.
Owner
- Name: trisanth
- Login: trisanths
- Kind: user
- Repositories: 1
- Profile: https://github.com/trisanths
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: Srinivasan
    given-names: Trisanth
    orcid: 0009-0009-7588-7498
title: "LLMOrchestrator: A Framework for Orchestrating and Optimizing Large Language Model Interactions"
version: 0.1.0
date-released: 2024-03-30
url: "https://github.com/builtbypyro/LLMOrchestrator"
license: MIT
type: software
```
GitHub Events
Total
- Delete event: 1
- Issue comment event: 2
- Push event: 5
- Pull request event: 3
- Create event: 2
Last Year
- Delete event: 1
- Issue comment event: 2
- Push event: 5
- Pull request event: 3
- Create event: 2
Dependencies
- EndBug/add-and-commit v9 composite
- actions/checkout v4 composite
- actions/upload-artifact v4 composite
- openjournals/openjournals-draft-action master composite
- actions/checkout v4 composite
- actions/setup-python v5 composite
- accelerate >=0.20.0
- openai >=1.0.0
- python-dotenv >=0.19.0
- torch >=2.0.0
- transformers >=4.30.0
- pytest >=7.0.0 test
- pytest-cov >=4.0.0 test
- pytest-mock >=3.10.0 test
- Jinja2 ==3.1.6
- MarkupSafe ==3.0.2
- PyYAML ==6.0.2
- annotated-types ==0.7.0
- anyio ==4.9.0
- certifi ==2025.1.31
- charset-normalizer ==3.4.1
- distro ==1.9.0
- exceptiongroup ==1.2.2
- filelock ==3.18.0
- fsspec ==2025.3.0
- h11 ==0.14.0
- httpcore ==1.0.7
- httpx ==0.28.1
- huggingface-hub ==0.29.3
- idna ==3.10
- jiter ==0.9.0
- mpmath ==1.3.0
- networkx ==3.2.1
- numpy ==2.0.2
- openai ==1.69.0
- packaging ==24.2
- pydantic ==2.11.1
- pydantic_core ==2.33.0
- pytest >=7.0.0
- pytest-cov >=4.0.0
- pytest-mock >=3.10.0
- regex ==2024.11.6
- requests ==2.32.3
- safetensors ==0.5.3
- sniffio ==1.3.1
- sympy ==1.13.1
- tokenizers ==0.21.1
- torch ==2.6.0
- tqdm ==4.67.1
- transformers ==4.50.3
- typing-inspection ==0.4.0
- typing_extensions ==4.13.0
- urllib3 ==2.3.0
- accelerate >=0.20.0
- cachetools >=5.0.0
- numpy >=1.21.0
- openai >=1.0.0
- prompt_toolkit >=3.0.0
- pydantic >=2.0.0
- pytest >=7.0.0
- pytest-cov >=4.0.0
- pytest-mock >=3.10.0
- python-dotenv >=0.19.0
- rich >=10.0.0
- scikit-learn >=1.0.0
- tenacity >=8.0.0
- torch >=2.0.0
- tqdm >=4.65.0
- transformers >=4.30.0