Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (15.8%) to scientific vocabulary
Last synced: 6 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: Expedient
  • Language: Python
  • Default Branch: main
  • Size: 15.6 KB
Statistics
  • Stars: 0
  • Watchers: 0
  • Forks: 1
  • Open Issues: 0
  • Releases: 0
Created 7 months ago · Last pushed 7 months ago
Metadata Files
Readme Citation

README.md

Expedient AI API Examples

This repository contains examples and documentation for integrating with the Expedient AI Chat API. Our gateway provides access to models from multiple AI providers, including OpenAI, Anthropic (Claude), Google (Gemini), and Perplexity, through a unified, private, and secure interface.

Quick Start

  1. Get your API key from Expedient AI
  2. Create and activate a virtual environment:

```bash
# Create virtual environment
python -m venv venv

# Activate virtual environment
# On macOS/Linux:
source venv/bin/activate

# On Windows:
venv\Scripts\activate
```

  3. **Install dependencies:** `pip install -r requirements.txt`
  4. **Update the API key** in the example file that you are running
  5. **Run the examples:**

```bash
# Quick test (simplest)
python quick_start.py

# Standard streaming models
python example.py

# Reasoning models (for complex analysis)
python reasoning_example.py

# Web access models with citations (for live web search)
python citation_example.py
```

Or use the cURL commands directly (see curl.md):

```bash
curl -X POST "[ ENTER_CHAT_URL_HERE ]/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer [ ENTER_API_KEY_HERE ]" \
  -d '{"model": "gpt-4.1", "messages": [{"role": "user", "content": "Hello!"}]}'
```

Files in this Repository

📄 quick_start.py

Minimal example for immediate testing. Features:

  • Ultra-simple code (30 lines)
  • No comments or explanations
  • Just the essentials for quick API testing
  • Perfect for copy-paste and quick modifications

📄 example.py

Complete Python script demonstrating streaming AI responses with standard models. Features:

  • Real-time streaming responses
  • Standard AI models (GPT, Claude, Gemini, Perplexity)
  • Detailed comments and explanations
  • Production-ready streaming implementation

📄 reasoning_example.py

Advanced reasoning models with animated thinking indicators. Features:

  • Complex analytical problem solving
  • Step-by-step reasoning processes
  • Animated thinking dots while processing
  • Models: o4-mini, o3, Claude 4, Gemini 2.5

📄 citation_example.py

Perplexity web reasoning models with real-time data and citations. Features:

  • Real-time web search during AI responses
  • Current events and fact-checking capabilities
  • Automatic citation detection and reporting
  • Animated search indicators

📄 postman.md

Complete Postman collection documentation with:

  • 8 ready-to-use API request examples
  • All supported AI models and providers
  • Parameter explanations and usage tips
  • Import instructions for Postman

📄 curl.md

Command-line cURL examples for developers and scripters. Features:

  • 8 cURL commands matching all Postman examples
  • Advanced options and streaming processing
  • Environment variable setup for security
  • Batch processing and error handling examples

📄 .gitignore

Python-specific gitignore file that excludes:

  • Virtual environments
  • API keys and sensitive files
  • Python cache files
  • IDE configuration files

Supported AI Providers

Our unified API gateway supports models from:

  • OpenAI: GPT-4.1 (default), o4-mini (reasoning), GPT-4o, GPT-4o-mini, GPT-3.5-turbo
  • Anthropic: Claude-4-Sonnet (reasoning), Claude-3.7-Sonnet, Claude-3.5-Sonnet, Claude-3-Haiku, Claude-3-Opus
  • Google: Gemini-2.5-Pro (reasoning), Gemini-2.5-Flash (reasoning), Gemini-1.5-Pro, Gemini-1.5-Flash
  • Perplexity: Sonar-Pro, Sonar-Reasoning-Pro, Sonar (all web-connected with real-time data)

Key Features

Unified Interface - One API for multiple AI providers
Real-time Streaming - See responses as they're generated
Multiple Models - Choose the best model for your use case
Web Access & Citations - Real-time data with source transparency
Quick Start - Ultra-simple 30-line example for immediate testing
Enterprise Ready - Production-grade error handling
Easy Integration - Simple REST API with JSON

API Endpoint

[ ENTER_CHAT_URL_HERE ]/chat/completions

Authentication

All requests require a Bearer token in the Authorization header:

Authorization: Bearer [ ENTER_API_KEY_HERE ]

Basic Usage Example

```python
import requests
import json

# Simple streaming request
response = requests.post(
    "[ ENTER_CHAT_URL_HERE ]/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer [ ENTER_API_KEY_HERE ]",
    },
    json={
        "model": "gpt-4.1",
        "messages": [{"role": "user", "content": "Hello, world!"}],
        "stream": True,
    },
    stream=True,
)

# Process streaming response
for line in response.iter_lines(decode_unicode=True):
    if line and line.startswith("data: "):
        data_str = line[6:]
        if data_str.strip() == "[DONE]":
            break
        try:
            data = json.loads(data_str)
            if "choices" in data and data["choices"]:
                delta = data["choices"][0].get("delta", {})
                if "content" in delta:
                    print(delta["content"], end="", flush=True)
        except json.JSONDecodeError:
            continue
```

Getting Started

Prerequisites

  • Python 3.6 or higher
  • requests library (pip install requests)
  • Valid Expedient AI API key
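
The first two prerequisites can be sanity-checked with a short snippet before running any example (a sketch; the Python 3.6 floor comes from this README):

```python
import sys

# The examples assume Python 3.6 or higher (per the prerequisites above).
assert sys.version_info >= (3, 6), "Python 3.6 or higher is required"

# Check that the requests library is available.
try:
    import requests  # noqa: F401
    print("requests is installed")
except ImportError:
    print("Run: pip install requests")
```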

Installation

  1. Clone this repository
  2. Create and activate a virtual environment:

```bash
# Create virtual environment
python -m venv venv

# Activate virtual environment
# On macOS/Linux:
source venv/bin/activate

# On Windows:
venv\Scripts\activate
```

  3. Install dependencies:

```bash
pip install -r requirements.txt
```

  4. Copy your API key from the Expedient AI dashboard
  5. Update the `api_key` variable in the script you want to use
  6. Run the examples:

```bash
# Quick test
python quick_start.py

# Standard streaming models
python example.py

# Reasoning models
python reasoning_example.py

# Web search with citations
python citation_example.py
```

Model Selection Guide

Choose the right model for your needs:

| Use Case | Recommended Models |
|----------|-------------------|
| Advanced Reasoning & Logic | o4-mini, claude-sonnet-4-20250514, gemini/gemini-2.5-pro |
| Complex Analysis | gpt-4.1, gpt-4o, claude-3-7-sonnet-20250219, gemini/gemini-1.5-pro |
| Large Text Processing | gemini/gemini-2.5-flash (1M+ token context) |
| Creative Writing | claude-sonnet-4-20250514, claude-3-7-sonnet-20250219, gpt-4o |
| Code Generation | gpt-4.1, claude-sonnet-4-20250514, gpt-4o |
| Fast Responses | gpt-4o-mini, claude-3-haiku-20240307, gemini/gemini-1.5-flash |
| Current Events & Web Data | perplexity/sonar-reasoning-pro, perplexity/sonar-pro |
| General Purpose | gpt-4.1 (default), gpt-3.5-turbo |
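
Because every model goes through the same endpoint, switching models is just a matter of changing the `model` field in the request payload. A minimal sketch (`build_payload` is a helper name invented here, not part of the example scripts):

```python
def build_payload(model: str, prompt: str, stream: bool = True) -> dict:
    """Assemble a chat-completions payload; only the model name varies."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }

# Fast, inexpensive responses:
fast = build_payload("gpt-4o-mini", "Summarize this changelog.")

# Step-by-step reasoning on a harder problem:
deep = build_payload("o4-mini", "Analyze the trade-offs in this design.")
```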

Documentation

  • Postman Collection - Complete API testing examples
  • cURL Examples - Command-line examples for terminal and scripts
  • Quick Start - Minimal example for immediate testing
  • Standard Streaming Example - Detailed implementation with comments
  • Reasoning Example - Advanced reasoning with thinking animation
  • Citation Example - Web search with real-time citations
  • Live Swagger API Documentation - Visit [ ENTER_CHAT_URL_HERE ]/docs in your browser for interactive API documentation
  • API Documentation - Contact Expedient AI for detailed API docs

Support

For technical support and API access:

  • Website: Expedient AI
  • Documentation: See postman.md for comprehensive examples
  • Issues: Use this repository's issue tracker for code-related questions

Security Notes

🔒 Never commit API keys to version control
🔒 Use environment variables for production deployments
🔒 Rotate API keys regularly
🔒 Monitor API usage and costs
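
For instance, the hard-coded `api_key` in the scripts can be replaced with an environment lookup; `EXPEDIENT_AI_API_KEY` is a variable name chosen for this sketch, not one the examples require:

```python
import os

# Read the key from the environment instead of committing it to code.
# Set it first, e.g.:  export EXPEDIENT_AI_API_KEY="your-key-here"
api_key = os.environ.get("EXPEDIENT_AI_API_KEY", "")

if not api_key:
    print("Warning: EXPEDIENT_AI_API_KEY is not set; requests will fail.")

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}",
}
```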

License

This example code is provided for demonstration purposes. Please check with Expedient AI for licensing terms for the API service itself.

Owner

  • Name: Expedient
  • Login: Expedient
  • Kind: organization
  • Location: Pittsburgh, PA

Citation (citation_example.py)

"""
Perplexity Web Reasoning Example - Python Client
================================================

This script demonstrates how to use Perplexity's reasoning models with web access.
These models can search the web in real-time and provide citations for their answers,
making them perfect for current events, research, and fact-checking.

NOTE: Your AI account must have web access models enabled.

For complete API documentation see postman.md
For standard models see example.py
"""

# Import required libraries for HTTP requests and JSON processing
import requests  # For making HTTP API calls
import json  # For parsing JSON data from streaming responses
import re  # For processing citations
import threading  # For animated thinking dots
import time  # For timing the dot animation

# =============================================================================
# ANIMATED THINKING INDICATOR
# =============================================================================


def animate_thinking():
    """Show animated dots while AI is searching and thinking"""
    dots = [".", "..", "..."]
    i = 0
    while thinking_active:
        print(f"\r🔍 Searching web and thinking{dots[i % 3]}   ", end="", flush=True)
        time.sleep(0.5)  # Update every 0.5 seconds
        i += 1


# =============================================================================
# API CONFIGURATION
# =============================================================================

# Define the base API endpoint for the Expedient AI Chat service
api_endpoint = "[ ENTER_CHAT_URL_HERE ]"

# Set your API authentication key (replace with your actual key)
# This key authorizes your requests to the AI service
api_key = "[ ENTER_API_KEY_HERE ]"  # Replace with your actual API key

# Perplexity Web Reasoning Models
# These models search the web and provide citations

model = "perplexity/sonar-reasoning-pro"  # Web access with reasoning (recommended)

# Alternative web reasoning model:
# model = "perplexity/sonar-reasoning"     # Standard web reasoning model

# Define prompts that benefit from real-time web data and reasoning
# These questions require current information and analysis
prompts = [
    "What are the latest developments in AI regulation in 2025? Analyze the potential impact on enterprise adoption.",
    "Compare the current market performance of major cloud providers (AWS, Azure, GCP) in Q4 2024 and explain the trends.",
    "What are the most recent cybersecurity threats businesses should be aware of? Provide specific examples and mitigation strategies.",
    "Analyze the latest trends in remote work technology and their impact on enterprise productivity in 2025.",
]

# Select which prompt to use (0-3)
selected_prompt = 0
prompt = prompts[selected_prompt]

# =============================================================================
# REQUEST PREPARATION
# =============================================================================

# Set up HTTP headers required for the API request
headers = {"Content-Type": "application/json", "Authorization": f"Bearer {api_key}"}

# Create the request payload optimized for web reasoning
data = {
    "model": model,  # Perplexity reasoning model with web access
    "messages": [
        {"role": "user", "content": prompt}
    ],  # Question requiring current web data
    "max_tokens": 1500,  # Higher limit for detailed analysis with citations
    "temperature": 0.4,  # Balanced temperature for factual yet analytical responses
    "stream": True,  # Enable streaming for real-time responses
}

# Construct the complete API URL
api_full = f"{api_endpoint}/chat/completions"

# =============================================================================
# API REQUEST EXECUTION
# =============================================================================

print("🌐 Perplexity Web Reasoning Example")
print("=" * 60)
print(f"Model: {model}")
print("Capabilities: Web Access + Reasoning + Citations")
print("=" * 60)
print(f"\n📋 Query: {prompt}")
print("\n" + "=" * 60)

# Start animated thinking indicator for web search + reasoning
print()  # Add newline for clean animation
thinking_active = True
thinking_thread = threading.Thread(target=animate_thinking, daemon=True)
thinking_thread.start()

# Make the HTTP POST request with extended timeout for web search + reasoning
response = requests.post(api_full, headers=headers, json=data, stream=True, timeout=120)

# =============================================================================
# RESPONSE PROCESSING WITH CITATION HANDLING
# =============================================================================

if response.status_code == 200:
    full_response = ""  # Store complete response for citation processing
    content_started = False  # Track if we've started receiving content

    # Process the streaming response
    for line in response.iter_lines(decode_unicode=True):
        if line and line.startswith("data: "):
            data_str = line[6:]  # Remove 'data: ' prefix

            # Check for completion signal
            if data_str.strip() == "[DONE]":
                break

            # Parse and display content
            try:
                data = json.loads(data_str)
                if "choices" in data and data["choices"]:
                    choice = data["choices"][0]

                    # Handle streaming content
                    if "delta" in choice:
                        delta = choice["delta"]
                        if "content" in delta:
                            # Clear thinking indicator and show header on first content
                            if not content_started:
                                thinking_active = False  # Stop animation
                                time.sleep(0.1)  # Brief pause to let animation stop
                                print(
                                    "\r🔍 Searching web and thinking... Found sources!     "
                                )  # Clear line
                                print("\n🔍 Web Search + Reasoning (streaming):")
                                print("-" * 60)
                                content_started = True

                            content = delta["content"]
                            print(content, end="", flush=True)
                            full_response += content

                    # Check for completion with citations
                    if choice.get("finish_reason"):
                        break

            except json.JSONDecodeError:
                continue  # Skip malformed JSON

    # If no content was received, still show completion
    if not content_started:
        thinking_active = False  # Stop animation
        time.sleep(0.1)  # Brief pause to let animation stop
        print("\r🔍 Searching web and thinking... Complete!     ")  # Clear line
        print("\n🔍 Web Search + Reasoning completed (no content received)")
        print("-" * 60)

    # Process and display citations if present
    print("\n\n" + "=" * 60)
    print("📚 SOURCES & CITATIONS")
    print("=" * 60)

    # Look for citation patterns in the response
    citations = re.findall(r"\[(\d+)\]", full_response)
    if citations:
        print(f"✅ Found {len(set(citations))} citation(s) in the response")
        print("\n💡 Citations are embedded in the text above as [1], [2], etc.")
        print("   These refer to web sources the AI accessed during research.")
    else:
        print(
            "ℹ️  No explicit citations found, but response is based on current web data"
        )

    print("\n" + "=" * 60)
    print("🎯 Web reasoning analysis completed.")

else:
    thinking_active = False  # Stop animation on error
    time.sleep(0.1)  # Brief pause to let animation stop
    print(f"\r❌ Request failed: {response.status_code}")
    print(f"Error details: {response.text}")
    if response.status_code == 403:
        print("\n💡 Note: Web access models may require special account permissions")

print("\n" + "=" * 60)
print("💡 ABOUT PERPLEXITY WEB REASONING:")
print("=" * 60)
print("✅ Real-time web search during response generation")
print("✅ Reasoning through complex, current information")
print("✅ Built-in citation system for source transparency")
print("✅ Perfect for current events, research, and fact-checking")
print("\n🔄 Try different prompts by changing 'selected_prompt' (0-3)")

print("\n" + "=" * 60)
print("📋 OTHER EXAMPLE QUERIES:")
print("=" * 60)
for i, example_prompt in enumerate(prompts):
    marker = "👉" if i == selected_prompt else "  "
    print(f"{marker} {i}: {example_prompt}")

GitHub Events

Total
  • Push event: 1
  • Create event: 2
Last Year
  • Push event: 1
  • Create event: 2

Dependencies

requirements.txt pypi
  • requests *