apicenter
apicenter - Alish Chhetri's senior thesis project, Allegheny College, 2024-2025
Science Score: 44.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (15.5%) to scientific vocabulary
Keywords
Repository
apicenter - Alish Chhetri's senior thesis project, Allegheny College, 2024-2025
Basic Info
Statistics
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 5
Topics
Metadata Files
README.md
APICenter
Universal Python interface for AI APIs. One consistent pattern for all your AI needs.
Overview
APICenter simplifies interactions with AI services by providing a standardized interface across multiple AI providers and modalities. Instead of learning different syntax for each API, you can use one consistent pattern regardless of whether you're working with OpenAI, Anthropic, or local models through Ollama.
Core Philosophy: Write once, use everywhere.
```python
# Generate text with OpenAI
text = apicenter.text(
    provider="openai",
    model="gpt-4",
    prompt="Write a poem about birds",
)

# Switch to Anthropic with the same interface
text = apicenter.text(
    provider="anthropic",
    model="claude-3-sonnet-20240229",
    prompt="Write a poem about birds",
)

# Use local models with Ollama
text = apicenter.text(
    provider="ollama",
    model="llama2",
    prompt="Write a poem about birds",
)
```
Features
- Unified API: Consistent pattern across all providers and modalities
- Multiple Modes:
- Text Generation: OpenAI, Anthropic, Ollama (local models)
- Image Generation: OpenAI DALL-E, Stability AI
- Audio Generation: ElevenLabs
- Local Model Support: Integrate with locally-hosted models via Ollama
- Flexible Design: Pass any provider-specific parameters via kwargs
- Simple Credential Management: Easy API key configuration
- Type Safety: Full type hints for better development experience
- Extensible: Easily add new providers and modes
Installation
APICenter is currently in development and not yet available on PyPI. To use it locally:
```bash
# Clone the repository
git clone https://github.com/alishchhetri/apicenter.git
cd apicenter

# Install using Poetry (recommended)
poetry install

# Or add as a local dependency in your existing Poetry project
poetry add path/to/apicenter
```
Requirements
- Python 3.12 or higher
- Required packages are listed in pyproject.toml
- For local model support: Ollama
Quick Start
APICenter follows a simple, consistent pattern for all API calls:
```python
from apicenter import apicenter

# Every call follows the same shape: mode, provider, model, prompt
response = apicenter.<mode>(
    provider="provider_name",
    model="model_name",
    prompt="your prompt",
    **kwargs,  # any provider-specific parameters
)
```
Text Generation
```python
from apicenter import apicenter

# OpenAI example
response = apicenter.text(
    provider="openai",
    model="gpt-4",
    prompt="Explain quantum computing in simple terms",
)
print(response)

# Anthropic example
response = apicenter.text(
    provider="anthropic",
    model="claude-3-sonnet-20240229",
    prompt="Write a short story about AI",
)
print(response)

# Ollama example (local model)
response = apicenter.text(
    provider="ollama",
    model="llama2",  # Any model you've pulled locally
    prompt="What is the capital of France?",
)
print(response)
```
Image Generation
```python
# OpenAI DALL-E (returns a single URL string)
image_url = apicenter.image(
    provider="openai",
    model="dall-e-3",
    prompt="A serene mountain lake at sunset",
    size="1024x1024",
)

# Download and save the image
import requests

response = requests.get(image_url)
with open("generated_image.png", "wb") as f:
    f.write(response.content)

# Stability AI (returns bytes directly)
image_bytes = apicenter.image(
    provider="stability",
    model="stable-diffusion-xl-1024-v1-0",
    prompt="A cyberpunk cityscape at night",
)

# Save image bytes directly
with open("stability_image.png", "wb") as f:
    f.write(image_bytes)
```
Audio Generation
```python
# ElevenLabs
audio_bytes = apicenter.audio(
    provider="elevenlabs",
    model="eleven_multilingual_v2",
    prompt="Hello! This is a text-to-speech test.",
    voice_id="Adam",  # Optional voice selection
)

# Save audio to file
with open("speech.mp3", "wb") as f:
    f.write(audio_bytes)
```
Configuration
APICenter uses a credentials.json file to store API keys. Place it in one of these locations:
- Current working directory
- Project root directory
- User's home directory (`~/.apicenter/credentials.json`)
- System config directory (`~/.config/apicenter/credentials.json`)
- Custom path specified by the `APICENTER_CREDENTIALS_PATH` environment variable
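The search order can be sketched as a first-match scan over candidate paths. This is a simplified illustration, not APICenter's actual code: project-root detection is omitted, and the helper name `find_credentials` is hypothetical.

```python
import os
from pathlib import Path

def find_credentials():
    """Return the first credentials.json found in the documented search order."""
    candidates = [
        Path.cwd() / "credentials.json",                             # current working directory
        Path.home() / ".apicenter" / "credentials.json",             # user's home directory
        Path.home() / ".config" / "apicenter" / "credentials.json",  # system config directory
    ]
    # A custom path from APICENTER_CREDENTIALS_PATH is also consulted if set
    env_path = os.environ.get("APICENTER_CREDENTIALS_PATH")
    if env_path:
        candidates.append(Path(env_path))
    for path in candidates:
        if path.is_file():
            return path
    return None
```

First match wins, so a `credentials.json` in the working directory shadows one in the home directory.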
Example credentials.json:
```json
{
  "modes": {
    "text": {
      "providers": {
        "openai": {
          "api_key": "your-openai-api-key",
          "organization": "your-org-id"
        },
        "anthropic": {
          "api_key": "your-anthropic-api-key"
        }
      }
    },
    "image": {
      "providers": {
        "openai": {
          "api_key": "your-openai-api-key",
          "organization": "your-org-id"
        },
        "stability": {
          "api_key": "your-stability-api-key"
        }
      }
    },
    "audio": {
      "providers": {
        "elevenlabs": {
          "api_key": "your-elevenlabs-api-key"
        }
      }
    }
  }
}
```
Local Models
For Ollama (local model provider), no API keys are needed. Simply:
- Install Ollama from ollama.ai
- Pull your desired model(s): `ollama pull llama2`
- Make sure the Ollama service is running
- Use with APICenter: `apicenter.text(provider="ollama", model="llama2", ...)`
Advanced Usage
Chat Conversations
For chat-based models, use message lists:
```python
# Chat conversation with OpenAI
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the distance to the Moon?"},
]

response = apicenter.text(
    provider="openai",
    model="gpt-4",
    prompt=messages,
)
print(response)

# Continue the conversation
messages.append({"role": "assistant", "content": response})
messages.append({"role": "user", "content": "How long would it take to travel there?"})

followup = apicenter.text(
    provider="openai",
    model="gpt-4",
    prompt=messages,
)
print(followup)
```
Advanced Chat with System Prompts
APICenter automatically handles the different message formats for each provider. You can use standard chat format with system prompts for all providers:
```python
# OpenAI with system prompt
response = apicenter.text(
    provider="openai",
    model="gpt-4",
    prompt=[
        {"role": "system", "content": "You are a helpful assistant specialized in science."},
        {"role": "user", "content": "Explain the theory of relativity in simple terms."},
    ],
    temperature=0.7,
)

# Anthropic with system prompt (automatically handled correctly)
response = apicenter.text(
    provider="anthropic",
    model="claude-3-sonnet-20240229",
    prompt=[
        {"role": "system", "content": "You are a helpful assistant that explains complex topics simply."},
        {"role": "user", "content": "Explain quantum computing to me like I'm 10 years old."},
    ],
    temperature=0.3,
    max_tokens=800,
)

# Ollama with conversation history
response = apicenter.text(
    provider="ollama",
    model="llama2",
    prompt=[
        {"role": "system", "content": "You are a friendly AI assistant."},
        {"role": "user", "content": "What are the three laws of robotics?"},
        {"role": "assistant", "content": "The three laws are..."},
        {"role": "user", "content": "Who created these laws?"},
    ],
)
```
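Anthropic's API takes the system prompt as a separate `system` parameter rather than as a message with `role: "system"`, which is one reason a translation layer is needed. The normalization can be sketched as follows; the helper `split_system_prompt` is a hypothetical illustration, not APICenter's internal code.

```python
def split_system_prompt(messages):
    """Separate a system message from the chat history, matching
    Anthropic's system-as-parameter convention."""
    system = None
    chat = []
    for msg in messages:
        if msg["role"] == "system":
            system = msg["content"]  # keep the last system message if several appear
        else:
            chat.append(msg)
    return system, chat
```

With this shape, the OpenAI-style list passes through unchanged, while an Anthropic call can receive `system=system` and `messages=chat` separately.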
Provider-Specific Parameters
Pass any provider-specific parameters directly using kwargs:
```python
# OpenAI with specific parameters
response = apicenter.text(
    provider="openai",
    model="gpt-4",
    prompt="Generate a poem about space",
    temperature=0.8,
    max_tokens=500,
)

# Image generation with specific parameters
image = apicenter.image(
    provider="stability",
    model="stable-diffusion-xl-1024-v1-0",
    prompt="A photorealistic portrait of a Viking warrior",
    steps=50,
    cfg_scale=7.0,
)
```
The flexibility of `**kwargs` lets you pass any provider-specific parameters without learning special syntax for each provider.
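The forwarding itself is ordinary Python: a unified entry point looks up the provider and passes `**kwargs` through untouched. A minimal sketch of that dispatch pattern follows; the provider functions here are stand-ins, not APICenter's internals.

```python
def call_openai(model, prompt, **kwargs):
    # Stand-in for a real OpenAI call; echoes what it received
    return f"openai/{model} temperature={kwargs.get('temperature')}"

def call_stability(model, prompt, **kwargs):
    # Stand-in for a real Stability AI call
    return f"stability/{model} steps={kwargs.get('steps')}"

PROVIDERS = {"openai": call_openai, "stability": call_stability}

def generate(provider, model, prompt, **kwargs):
    """Look up the provider and forward all extra kwargs unchanged."""
    if provider not in PROVIDERS:
        raise ValueError(f"Unknown provider: {provider}")
    return PROVIDERS[provider](model, prompt, **kwargs)
```

Because the dispatcher never inspects the extra keywords, each provider can accept whatever parameters its own API defines.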
Documentation
For more detailed documentation, see the docs directory:
- API Reference - Complete API documentation
- Configuration Guide - How to configure APICenter
- Models Reference - Information about supported AI models
- Providers Guide - Details about supported providers
- Release Management - Guide for versioning and releases
- Examples - Various usage examples
Testing
APICenter includes a comprehensive test suite to ensure reliability and stability. The tests use mock objects to simulate API calls, so you don't need actual API keys to run the tests.
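The mocking approach can be illustrated with the standard library's `unittest.mock`. This is a generic sketch of the technique; `fetch_completion` and the client object are hypothetical, not APICenter's actual test code.

```python
from unittest import mock

def fetch_completion(client, prompt):
    """Toy stand-in for a provider call that would normally hit the network."""
    return client.complete(prompt)

# A Mock replaces the real client, so no API key or network access is needed
fake_client = mock.Mock()
fake_client.complete.return_value = "mocked response"

result = fetch_completion(fake_client, "Hello")
```

The mock also records how it was called, so tests can assert on the arguments the code under test passed to the provider.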
Running Tests
You can run the tests using the provided run_tests.py script:
```bash
# Run all tests
python tests/run_tests.py

# Run tests with coverage reporting
python tests/run_tests.py --coverage

# Run tests and show slow tests (>0.1s)
python tests/run_tests.py --show-slow

# Run only specific tests matching a pattern
python tests/run_tests.py --pattern="test_text_*.py"
```
You can also run individual test files directly:
```bash
python -m unittest tests/test_apicenter.py
```
Testing with Ollama
Ollama tests require a local Ollama installation and are skipped in CI environments. To run these tests locally:
- Install Ollama from ollama.ai
- Pull the test model: `ollama pull llama2`
- Start the Ollama service
- Run the tests as normal
Test Coverage
For developers contributing to the project, we aim to maintain high test coverage. You can generate a coverage report by installing the coverage package and running the tests with the --coverage flag:
```bash
pip install coverage
python tests/run_tests.py --coverage
```
For more information about testing, see the tests/README.md file.
Contributing
We welcome contributions to APICenter! See CONTRIBUTING.md for guidelines on how to contribute.
Releases and Versioning
APICenter uses GitHub Actions to automatically create GitHub releases when a new version tag is pushed. The process follows semantic versioning:
Release Process
- Create a version tag:

```bash
# For a regular release (1.0.0, 2.1.0, etc.)
git tag v1.0.0
git push origin v1.0.0

# For pre-releases (alpha, beta, release candidate)
git tag v1.0.0-beta1
git push origin v1.0.0-beta1
```
Automatic GitHub Release:
When a tag matching `v*` is pushed, GitHub Actions will:
- Build the Python package
- Create a GitHub Release with the built packages as assets
- Generate automatic release notes from commit messages
Version determination:
The version is automatically determined from the Git tag using poetry-dynamic-versioning:
- `v1.0.0` becomes version `1.0.0`
- `v1.0.0-beta1` becomes version `1.0.0b1`
- `v1.0.0-alpha2` becomes version `1.0.0a2`
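The tag-to-version convention above can be sketched as a small transformation (an illustration of the PEP 440 mapping, not poetry-dynamic-versioning's actual code):

```python
import re

def tag_to_version(tag):
    """Convert a Git tag like v1.0.0-beta1 to a PEP 440 version like 1.0.0b1."""
    version = tag.lstrip("v")
    # alpha/beta/rc pre-release suffixes become PEP 440 a/b/rc segments
    version = re.sub(r"-alpha(\d+)", r"a\1", version)
    version = re.sub(r"-beta(\d+)", r"b\1", version)
    version = re.sub(r"-rc(\d+)", r"rc\1", version)
    return version
```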
For more information on the release process, see Release Management.
License
APICenter is licensed under the MIT License - see LICENSE for details.
Owner
- Login: AlishChhetri
- Kind: user
- Repositories: 1
- Profile: https://github.com/AlishChhetri
Citation (CITATION.cff)
cff-version: 0.1.0
message: "If you use this software, please cite it as below."
authors:
- family-names: "Chhetri"
given-names: "Alish"
email: "chhetri01@allegheny.edu"
title: "APICenter"
version: 0.1.0
date-released: 2024-10-11
url: "https://github.com/AlishChhetri/apicenter"
GitHub Events
Total
- Release event: 4
- Member event: 1
- Push event: 40
- Create event: 7
Last Year
- Release event: 4
- Member event: 1
- Push event: 40
- Create event: 7
Dependencies
- anthropic ^0.36.1
- openai ^1.51.2
- python ^3.12
- python-dotenv ^1.0.1
- rich ^13.9.2