https://github.com/copyleftdev/tundr
A high-performance optimization server implementing the Model Context Protocol (MCP) for mathematical optimization tasks, with a focus on Bayesian Optimization using Gaussian Processes. Designed for reliability, scalability, and ease of integration in production environments.
Science Score: 36.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file (found)
- ✓ .zenodo.json file (found)
- ○ DOI references
- ✓ Academic publication links (links to: arxiv.org)
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity to scientific vocabulary: 13.7%)
Keywords
Repository
A high-performance optimization server implementing the Model Context Protocol (MCP) for mathematical optimization tasks, with a focus on Bayesian Optimization using Gaussian Processes. Designed for reliability, scalability, and ease of integration in production environments.
Basic Info
Statistics
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
- Releases: 0
Topics
Metadata Files
README.md
TUNDR MCP Optimization Server
[Go Report Card](https://goreportcard.com/report/github.com/copyleftdev/TUNDR) · [Go Reference](https://pkg.go.dev/github.com/copyleftdev/TUNDR) · [License: AGPL v3](https://www.gnu.org/licenses/agpl-3.0) · [Build Status](https://github.com/copyleftdev/TUNDR/actions) · [Coverage Status](https://coveralls.io/github/copyleftdev/TUNDR?branch=main)

A high-performance optimization server implementing the Model Context Protocol (MCP) for mathematical optimization tasks, with a focus on Bayesian Optimization using Gaussian Processes. Designed for reliability, scalability, and ease of integration in production environments.

🌟 Features
🎯 Key Features
Bayesian Optimization
- Multiple kernel support (Matern 5/2, RBF, custom; see the kernel sketch after this list)
- Parallel evaluation of multiple points
- Constrained optimization support
- Efficient global optimization of expensive black-box functions
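To make the first item concrete, here is a minimal, self-contained sketch of a Matern 5/2 kernel evaluation in Go. The function name and hyperparameters are illustrative, not the server's internal API; the formula is the standard Matern 5/2 covariance.

```go
package main

import (
	"fmt"
	"math"
)

// matern52 evaluates the Matern 5/2 kernel between x and y:
// k(r) = variance * (1 + sqrt(5)*r/l + 5*r^2/(3*l^2)) * exp(-sqrt(5)*r/l),
// where r is the Euclidean distance and l the length scale.
func matern52(x, y []float64, lengthScale, variance float64) float64 {
	var sumSq float64
	for i := range x {
		d := x[i] - y[i]
		sumSq += d * d
	}
	r := math.Sqrt(sumSq)
	s := math.Sqrt(5) * r / lengthScale
	return variance * (1 + s + s*s/3) * math.Exp(-s)
}

func main() {
	// Correlation decays smoothly with distance.
	fmt.Println(matern52([]float64{0, 0}, []float64{1, 1}, 1.0, 1.0))
}
```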
Real-World Use Cases
Hyperparameter Tuning
- Optimize machine learning model hyperparameters with minimal trials
- Supports both continuous and categorical parameters
- Ideal for deep learning, XGBoost, and other ML frameworks
Engineering Design Optimization
- Optimize product designs with multiple competing objectives
- Handle physical and operational constraints
- Applications in aerospace, automotive, and manufacturing
Scientific Research
- Optimize experimental parameters in chemistry and physics
- Minimize cost function evaluations in computationally expensive simulations
- Adaptive experimental design
Financial Modeling
- Portfolio optimization under constraints
- Algorithmic trading parameter optimization
- Risk management parameter tuning
Industrial Process Optimization
- Optimize manufacturing processes
- Energy consumption minimization
- Yield improvement in production lines
- Expected Improvement acquisition function (with support for Probability of Improvement and UCB; see the sketch after this list)
- Support for both minimization and maximization problems
- Parallel evaluation of multiple points
- Constrained optimization support
- MCP-compliant API endpoints
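For intuition, here is a minimal sketch of the Expected Improvement computation for a minimization problem, assuming a Gaussian posterior with mean mu and standard deviation sigma at a candidate point. The function is illustrative, not the server's internal implementation.

```go
package main

import (
	"fmt"
	"math"
)

// expectedImprovement computes EI for minimization at a point with posterior
// mean mu and standard deviation sigma, given the best observed value fBest
// and an exploration parameter xi:
//   z  = (fBest - mu - xi) / sigma
//   EI = (fBest - mu - xi) * CDF(z) + sigma * PDF(z)
func expectedImprovement(mu, sigma, fBest, xi float64) float64 {
	if sigma <= 0 {
		return 0
	}
	z := (fBest - mu - xi) / sigma
	pdf := math.Exp(-0.5*z*z) / math.Sqrt(2*math.Pi) // standard normal PDF
	cdf := 0.5 * math.Erfc(-z/math.Sqrt2)            // standard normal CDF
	return (fBest-mu-xi)*cdf + sigma*pdf
}

func main() {
	// EI at a point with mean 0.8 and std 0.3, best observed value 1.0.
	fmt.Println(expectedImprovement(0.8, 0.3, 1.0, 0.01))
}
```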
🛠️ Robust Implementation
- Comprehensive test coverage
- Graceful error handling and recovery
- Detailed structured logging with zap
- Context-aware cancellation and timeouts
- Memory-efficient matrix operations
- MCP protocol compliance
🚀 Performance Optimizations
- Fast matrix operations with gonum
- Efficient memory management with object pooling
- Optimized Cholesky decomposition with fallback to SVD
- Parallel batch predictions
📊 Monitoring & Observability
- Prometheus metrics endpoint
- Structured logging in JSON format
- Distributed tracing support (OpenTelemetry)
- Health check endpoints
- Performance profiling endpoints
Features
Bayesian Optimization with Gaussian Processes
- Multiple kernel support (Matern 5/2, RBF)
- Expected Improvement acquisition function
- Numerical stability with Cholesky decomposition and SVD fallback
- Support for both minimization and maximization problems
- Parallel evaluation of multiple points
Robust Implementation
- Comprehensive test coverage (>85%)
- Graceful error handling and recovery
- Detailed logging with structured logging (zap)
- Context-aware cancellation
API & Integration
- JSON-RPC 2.0 over HTTP/2 interface
- RESTful endpoints for common operations
- OpenAPI 3.0 documentation
- gRPC support (planned)
Monitoring & Observability
- Prometheus metrics endpoint
- Structured logging
- Distributed tracing (OpenTelemetry)
- Health checks
Scalability
- Stateless design
- Horizontal scaling support
- Multiple storage backends (SQLite, PostgreSQL)
- Caching layer (Redis)
🚀 Quick Start
MCP Protocol Support
This server implements the Model Context Protocol (MCP) for optimization tasks. The MCP provides a standardized way to:
- Define optimization problems
- Submit optimization tasks
- Monitor optimization progress
- Retrieve optimization results
The server exposes MCP-compatible endpoints for seamless integration with other MCP-compliant tools and services.
Prerequisites
- Go 1.21 or later
- Git (for version control)
- Make (for development tasks)
- (Optional) Docker and Docker Compose for containerized deployment
Installation
```bash
# Clone the repository
git clone https://github.com/copyleftdev/TUNDR.git
cd TUNDR

# Install dependencies
go mod download

# Build the server
go build -o bin/server ./cmd/server
```
Running the Server
```bash
# Start the server with default configuration
./bin/server

# Or with custom configuration
CONFIG_FILE=config/local.yaml ./bin/server
```
Using Docker
```bash
# Build the Docker image
docker build -t tundr/mcp-optimization-server .

# Run the container
docker run -p 8080:8080 tundr/mcp-optimization-server
```
📚 Documentation
MCP Integration
The server implements the following MCP-compatible endpoints:
REST API
- POST /api/v1/optimize - Submit a new optimization task
- GET /api/v1/status/{id} - Check the status of an optimization task
- DELETE /api/v1/optimization/{id} - Cancel a running optimization task
JSON-RPC 2.0 Endpoint
- POST /rpc - Unified endpoint for all JSON-RPC 2.0 operations
Available JSON-RPC Methods
optimization.start - Start a new optimization

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "optimization.start",
  "params": [{
    "objective": "minimize",
    "parameters": [
      {"name": "x", "type": "float", "bounds": [0, 10]},
      {"name": "y", "type": "float", "bounds": [0, 10]}
    ]
  }]
}
```

optimization.status - Get status of an optimization

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "optimization.status",
  "params": ["optimization_id"]
}
```

optimization.cancel - Cancel an optimization

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "optimization.cancel",
  "params": ["optimization_id"]
}
```
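For example, the status method can be invoked from the command line (assuming the server listens on localhost:8080):

```bash
curl -X POST http://localhost:8080/rpc \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc": "2.0", "id": 2, "method": "optimization.status", "params": ["optimization_id"]}'
```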
Error Responses
All endpoints return errors in the following format:
REST API
```json
{
  "error": {
    "code": 400,
    "message": "Invalid input parameters"
  }
}
```
JSON-RPC 2.0
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32602,
    "message": "Invalid params"
  }
}
```
Common error codes:
- -32600 - Invalid Request
- -32601 - Method not found
- -32602 - Invalid params
- -32603 - Internal error
- -32000 to -32099 - Server error
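A client can handle these codes programmatically. Here is a minimal Go sketch of decoding a JSON-RPC 2.0 error response; the struct names are illustrative, while the field layout follows the JSON-RPC 2.0 spec shown above.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// rpcError mirrors the JSON-RPC 2.0 error object shown above.
type rpcError struct {
	Code    int    `json:"code"`
	Message string `json:"message"`
}

// rpcResponse holds either a result or an error, per JSON-RPC 2.0.
type rpcResponse struct {
	JSONRPC string          `json:"jsonrpc"`
	ID      int             `json:"id"`
	Result  json.RawMessage `json:"result,omitempty"`
	Error   *rpcError       `json:"error,omitempty"`
}

func main() {
	body := []byte(`{"jsonrpc": "2.0", "id": 1, "error": {"code": -32602, "message": "Invalid params"}}`)
	var resp rpcResponse
	if err := json.Unmarshal(body, &resp); err != nil {
		panic(err)
	}
	if resp.Error != nil {
		fmt.Printf("RPC error %d: %s\n", resp.Error.Code, resp.Error.Message)
	}
}
```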
API Reference
Check out the API Documentation for detailed information about the available methods and types.
Example: Basic Usage
```go
package main

import (
	"context"
	"fmt"
	"math"

	"github.com/copyleftdev/TUNDR/internal/optimization"
	"github.com/copyleftdev/TUNDR/internal/optimization/bayesian"
)

func main() {
	// Define the objective function (to be minimized)
	objective := func(x []float64) (float64, error) {
		// Example: Rosenbrock function
		return math.Pow(1-x[0], 2) + 100*math.Pow(x[1]-x[0]*x[0], 2), nil
	}
	// Define parameter bounds
	bounds := [][2]float64{{-5, 5}, {-5, 5}}

	// Create optimizer configuration
	config := optimization.OptimizerConfig{
		Objective:      objective,
		Bounds:         bounds,
		MaxIterations:  50,
		NInitialPoints: 10,
	}

	// Create and run the optimizer
	optimizer, err := bayesian.NewBayesianOptimizer(config)
	if err != nil {
		panic(fmt.Sprintf("Failed to create optimizer: %v", err))
	}

	// Run the optimization
	result, err := optimizer.Optimize(context.Background(), config)
	if err != nil {
		panic(fmt.Sprintf("Optimization failed: %v", err))
	}

	// Print results
	fmt.Printf("Optimal parameters: %v\n", result.BestSolution.Parameters)
	fmt.Printf("Optimal value: %f\n", result.BestSolution.Value)
	fmt.Printf("Number of iterations: %d\n", result.Iterations)
	fmt.Printf("Converged: %v\n", result.Converged)
}
```
Configuration
Create a config.yaml file to customize the server behavior:
```yaml
server:
  port: 8080
  env: development
  timeout: 30s

logging:
  level: info
  format: json
  output: stdout

optimization:
  max_concurrent: 4
  default_kernel: "matern52"
  default_acquisition: "ei"

storage:
  type: "memory"  # or "postgres"
  dsn: ""         # Only needed for postgres

metrics:
  enabled: true
  path: "/metrics"
  namespace: "tundr"

tracing:
  enabled: false
  service_name: "mcp-optimization-server"
  endpoint: "localhost:4317"
```
🧪 Testing
Run the test suite:
```bash
# Run all tests
go test ./...

# Run tests with coverage
go test -coverprofile=coverage.out ./... && go tool cover -html=coverage.out

# Run benchmarks
go test -bench=. -benchmem ./...
```
🤝 Contributing
Contributions are welcome! Please read our Contributing Guidelines for details on how to submit pull requests, report issues, or suggest new features.
📄 License
This project is part of the CopyleftDev ecosystem and is licensed under the GNU Affero General Public License v3.0 - see the LICENSE file for details.
📚 Resources
- Bayesian Optimization: A Tutorial
- Gaussian Processes for Machine Learning
- Model Context Protocol Specification (Coming Soon)
📬 Contact
For questions or support, please open an issue or contact the maintainers at [email protected]
Installation
Clone the repository:
```bash
git clone https://github.com/tundr/mcp-optimization-server.git
cd mcp-optimization-server
```

Install dependencies:

```bash
make deps
```

Build the binary:

```bash
make build
```
This will create a tundr binary in the bin directory.
Configuration
Environment Variables
Create a .env file in the project root with the following variables:
```env
# Application
ENV=development
LOG_LEVEL=info
HTTP_PORT=8080

# Database
DB_TYPE=sqlite  # sqlite or postgres
DB_DSN=file:data/tundr.db?cache=shared&_fk=1

# Authentication
JWT_KEY=your-secure-key-change-in-production

# Optimization
MAX_CONCURRENT_JOBS=10
JOB_TIMEOUT=30m

# Monitoring
METRICS_ENABLED=true
METRICS_PORT=9090
```
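These variables can be parsed into a struct with github.com/caarlos0/env (a listed dependency). A minimal sketch, assuming field names that mirror the variables above; the Config struct here is illustrative, not the server's actual configuration type.

```go
package main

import (
	"fmt"

	"github.com/caarlos0/env/v10"
)

// Config mirrors a subset of the environment variables above.
type Config struct {
	Env      string `env:"ENV" envDefault:"development"`
	LogLevel string `env:"LOG_LEVEL" envDefault:"info"`
	HTTPPort int    `env:"HTTP_PORT" envDefault:"8080"`
	DBType   string `env:"DB_TYPE" envDefault:"sqlite"`
}

func main() {
	var cfg Config
	if err := env.Parse(&cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg)
}
```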
Configuration File
For more complex configurations, you can use a YAML configuration file (default: config/config.yaml):
```yaml
server:
  env: development
  port: 8080
  shutdown_timeout: 30s

database:
  type: sqlite
  dsn: file:data/tundr.db?cache=shared&_fk=1
  max_open_conns: 25
  max_idle_conns: 5
  conn_max_lifetime: 5m

optimization:
  max_concurrent_jobs: 10
  job_timeout: 30m
  default_algorithm: bayesian

  bayesian:
    default_kernel: matern52
    default_noise: 1e-6
    max_observations: 1000

  cmaes:
    population_size: auto  # auto or number
    max_generations: 1000

monitoring:
  metrics:
    enabled: true
    port: 9090
    path: /metrics

  tracing:
    enabled: false
    endpoint: localhost:4317
    sample_rate: 0.1

logging:
  level: info
  format: json
  enable_caller: true
  enable_stacktrace: true
```
Running the Server
Development Mode
For development with hot reload:
```bash
make dev
```
Production Mode
Build and run the server:
```bash
make build
./bin/tundr serve --config config/production.yaml
```
Using Docker
```bash
# Build the Docker image
docker build -t tundr-optimization .

# Run the container
docker run -p 8080:8080 -v $(pwd)/data:/app/data tundr-optimization
```
The server will be available at http://localhost:8080
Usage Examples
Bayesian Optimization Example
```go
package main

import (
	"context"
	"fmt"
	"log"
	"math"

	"github.com/tundr/mcp-optimization-server/internal/optimization"
	"github.com/tundr/mcp-optimization-server/internal/optimization/bayesian"
)

func main() {
	// Define the objective function (to be minimized)
	objective := func(x []float64) (float64, error) {
		// Example: Rosenbrock function
		return math.Pow(1-x[0], 2) + 100*math.Pow(x[1]-x[0]*x[0], 2), nil
	}
	// Define parameter bounds
	bounds := []optimization.Parameter{
		{Name: "x1", Min: -5.0, Max: 10.0},
		{Name: "x2", Min: -5.0, Max: 10.0},
	}

	// Create optimizer configuration
	config := optimization.OptimizerConfig{
		Objective:      objective,
		Parameters:     bounds,
		NInitialPoints: 10,
		MaxIterations:  50,
		Verbose:        true,
	}

	// Create and configure the optimizer
	optimizer, err := bayesian.NewBayesianOptimizer(config)
	if err != nil {
		log.Fatalf("Failed to create optimizer: %v", err)
	}

	// Run the optimization
	result, err := optimizer.Optimize(context.Background())
	if err != nil {
		log.Fatalf("Optimization failed: %v", err)
	}

	// Print results
	fmt.Printf("Best solution: %+v\n", result.BestSolution)
	fmt.Printf("Best value: %f\n", result.BestSolution.Value)
	fmt.Printf("Number of iterations: %d\n", len(result.History))
}
```
REST API Example
Start a new optimization job:
```bash
curl -X POST http://localhost:8080/api/v1/optimize \
  -H "Content-Type: application/json" \
  -d '{
    "algorithm": "bayesian",
    "objective": "minimize",
    "parameters": [
      {"name": "x1", "type": "float", "bounds": [-5.0, 10.0]},
      {"name": "x2", "type": "float", "bounds": [-5.0, 10.0]}
    ],
    "max_iterations": 100,
    "n_initial_points": 20,
    "metadata": {
      "name": "rosenbrock-optimization",
      "tags": ["test", "demo"]
    }
  }'
```
Check optimization status:
```bash
curl http://localhost:8080/api/v1/status/<job_id>
```
Configuration Reference
Bayesian Optimization Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| kernel | string | "matern52" | Kernel type ("matern52" or "rbf") |
| length_scale | float | 1.0 | Length scale parameter |
| noise | float | 1e-6 | Observation noise |
| xi | float | 0.01 | Exploration-exploitation trade-off |
| n_initial_points | int | 10 | Number of initial random points |
| max_iterations | int | 100 | Maximum number of iterations |
| random_seed | int | 0 | Random seed (0 for time-based) |
Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| ENV | development | Application environment |
| LOG_LEVEL | info | Logging level |
| HTTP_PORT | 8080 | HTTP server port |
| DB_TYPE | sqlite | Database type (sqlite or postgres) |
| DB_DSN | file:data/tundr.db | Database connection string |
| JWT_KEY | | Secret key for JWT authentication |
| MAX_CONCURRENT_JOBS | 10 | Maximum concurrent optimization jobs |
| JOB_TIMEOUT | 30m | Maximum job duration |
| METRICS_ENABLED | true | Enable Prometheus metrics |
| METRICS_PORT | 9090 | Metrics server port |
Advanced Usage
Custom Kernels
You can implement custom kernel functions by implementing the kernels.Kernel interface:
```go
type Kernel interface {
	Eval(x, y []float64) float64
	Hyperparameters() []float64
	SetHyperparameters(params []float64) error
	Bounds() [][2]float64
}
```
Example custom kernel:
```go
type MyCustomKernel struct {
	lengthScale float64
	variance    float64
}

func (k MyCustomKernel) Eval(x, y []float64) float64 {
	// Implement your custom kernel function, here a squared-exponential form:
	// variance * exp(-0.5 * ||x - y||^2 / lengthScale^2)
	sumSq := 0.0
	for i := range x {
		diff := x[i] - y[i]
		sumSq += diff * diff
	}
	return k.variance * math.Exp(-0.5*sumSq/(k.lengthScale*k.lengthScale))
}

// Implement other required methods...
```
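For completeness, a minimal sketch of the remaining interface methods, continuing the snippet above (assumes fmt is imported); the hyperparameter ordering and bounds values are illustrative assumptions, not the library's conventions:

```go
func (k MyCustomKernel) Hyperparameters() []float64 {
	return []float64{k.lengthScale, k.variance}
}

// Pointer receiver so the update actually mutates the kernel.
func (k *MyCustomKernel) SetHyperparameters(params []float64) error {
	if len(params) != 2 {
		return fmt.Errorf("expected 2 hyperparameters, got %d", len(params))
	}
	k.lengthScale, k.variance = params[0], params[1]
	return nil
}

func (k MyCustomKernel) Bounds() [][2]float64 {
	// Illustrative search bounds for (lengthScale, variance).
	return [][2]float64{{1e-3, 1e3}, {1e-3, 1e3}}
}
```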
Parallel Evaluation
The optimizer supports parallel evaluation of multiple points:
```go
config := optimization.OptimizerConfig{
	Objective:      objective,
	Parameters:     bounds,
	NInitialPoints: 10,
	MaxIterations:  50,
	NJobs:          4, // Use 4 parallel workers
}
```
Callbacks
You can register callbacks to monitor the optimization process:
```go
optimizer, err := bayesian.NewBayesianOptimizer(config)
if err != nil {
	log.Fatalf("Failed to create optimizer: %v", err)
}

// Add a callback that's called after each iteration
optimizer.AddCallback(func(result *optimization.OptimizationResult) {
	fmt.Printf("Iteration %d: Best value = %f\n", len(result.History), result.BestSolution.Value)
})
```
API Documentation
REST API
Start Optimization
```
POST /api/v1/optimize
Content-Type: application/json

{
  "algorithm": "bayesian",
  "objective": "minimize",
  "parameters": [
    {"name": "x", "type": "float", "bounds": [0, 10], "prior": "uniform"},
    {"name": "y", "type": "float", "bounds": [-5, 5], "prior": "normal", "mu": 0, "sigma": 1}
  ],
  "constraints": [
    {"type": "ineq", "expr": "x + y <= 10"}
  ],
  "options": {
    "max_iterations": 100,
    "n_initial_points": 20,
    "acquisition": "ei",
    "xi": 0.01,
    "kappa": 0.1
  },
  "metadata": {
    "name": "example-optimization",
    "tags": ["test"],
    "user_id": "user123"
  }
}
```
Get Optimization Status
GET /api/v1/status/:id
Response:
```json
{
  "id": "job-123",
  "status": "running",
  "progress": 0.45,
  "best_solution": {
    "parameters": {"x": 1.2, "y": 3.4},
    "value": 0.123
  },
  "start_time": "2025-06-30T10:00:00Z",
  "elapsed_time": "1h23m45s",
  "iterations": 45,
  "metadata": {
    "name": "example-optimization",
    "tags": ["test"]
  }
}
```
JSON-RPC 2.0 API
The server also supports JSON-RPC 2.0 for more advanced use cases:
```
POST /rpc
Content-Type: application/json

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "optimization.start",
  "params": [
    {
      "algorithm": "bayesian",
      "objective": "minimize",
      "parameters": [
        {"name": "x", "type": "float", "bounds": [0, 10]},
        {"name": "y", "type": "float", "bounds": [-5, 5]}
      ],
      "options": {
        "max_iterations": 100,
        "n_initial_points": 20,
        "acquisition": "ei",
        "xi": 0.01
      }
    }
  ]
}
```
Performance Tuning
Memory Usage
For large-scale problems, you may need to adjust the following parameters:
- Batch Size: Process points in batches to limit memory usage
- GP Model: Use a sparse approximation for large datasets (>1000 points)
- Cholesky Decomposition: The default solver uses Cholesky decomposition with SVD fallback (see the sketch below)
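To illustrate the Cholesky-with-SVD-fallback pattern, here is a minimal gonum sketch. The function name and rank tolerance are illustrative; this shows the general numerical pattern, not the server's internal solver.

```go
package main

import (
	"errors"
	"fmt"

	"gonum.org/v1/gonum/mat"
)

// solveSPD solves A x = b for a symmetric positive (semi-)definite A.
// It tries a Cholesky factorization first and falls back to an
// SVD-based least-squares solve if A is not numerically SPD.
func solveSPD(a *mat.SymDense, b *mat.VecDense) (*mat.VecDense, error) {
	var chol mat.Cholesky
	if chol.Factorize(a) {
		var x mat.VecDense
		if err := chol.SolveVecTo(&x, b); err == nil {
			return &x, nil
		}
	}

	// Fallback: SVD tolerates near-singular covariance matrices.
	var svd mat.SVD
	if !svd.Factorize(a, mat.SVDThin) {
		return nil, errors.New("svd factorization failed")
	}

	// Estimate the numerical rank from the singular values.
	vals := svd.Values(nil)
	tol := 1e-12 * vals[0]
	rank := 0
	for _, v := range vals {
		if v > tol {
			rank++
		}
	}

	var x mat.Dense
	svd.SolveTo(&x, b, rank)
	return mat.VecDenseCopyOf(x.ColView(0)), nil
}

func main() {
	a := mat.NewSymDense(2, []float64{2, 0.5, 0.5, 1})
	b := mat.NewVecDense(2, []float64{1, 2})
	x, err := solveSPD(a, b)
	if err != nil {
		panic(err)
	}
	fmt.Printf("x = %v\n", mat.Formatted(x))
}
```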
Parallelism
You can control the number of parallel workers:
go
config := optimization.OptimizerConfig{
// ... other options ...
NJobs: runtime.NumCPU(), // Use all available CPUs
}
Caching
Enable caching of kernel matrix computations:
go
kernel := kernels.NewMatern52Kernel(1.0, 1.0)
kernel.EnableCache(true) // Enable kernel cache
Monitoring and Observability
The server exposes Prometheus metrics at /metrics:
- optimization_requests_total: Total optimization requests
- optimization_duration_seconds: Duration of optimization jobs
- optimization_iterations_total: Number of iterations per optimization
- optimization_errors_total: Number of optimization errors
- gp_fit_duration_seconds: Duration of GP model fitting
- acquisition_evaluations_total: Number of acquisition function evaluations
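A minimal Prometheus scrape configuration for these metrics might look like the following; the job name and target are illustrative and should match your METRICS_PORT setting:

```yaml
scrape_configs:
  - job_name: "tundr"
    metrics_path: /metrics
    static_configs:
      - targets: ["localhost:9090"]
```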
Logging
Logs are structured in JSON format by default. The following log levels are available:
- debug: Detailed debug information
- info: General operational information
- warn: Non-critical issues
- error: Critical errors
Contributing
1. Fork the repository
2. Create a feature branch (git checkout -b feature/amazing-feature)
3. Commit your changes (git commit -m 'Add some amazing feature')
4. Push to the branch (git push origin feature/amazing-feature)
5. Open a Pull Request
Development Workflow
```bash
# Run tests
make test

# Run linters
make lint

# Run benchmarks
make benchmark

# Format code
make fmt

# Generate documentation
make docs
```
License
Apache 2.0 - See LICENSE for details.
Acknowledgments
- Gonum - Numerical computing libraries for Go
- Zap - Blazing fast, structured, leveled logging
- Chi - Lightweight, composable router for Go HTTP services
- Testify - Toolkit with common assertions and mocks
Development
Building
```bash
make build
```
Testing
```bash
make test
```
Linting
```bash
make lint
```
Deployment
Docker
```bash
docker build -t tundr/mcp-optimization-server .
docker run -p 8080:8080 --env-file .env tundr/mcp-optimization-server
```
Kubernetes
See the deploy/kubernetes directory for example Kubernetes manifests.
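As a starting point, a minimal Deployment manifest might look like this; the image tag, replica count, and labels are illustrative, and the actual manifests live in deploy/kubernetes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tundr
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tundr
  template:
    metadata:
      labels:
        app: tundr
    spec:
      containers:
        - name: server
          image: tundr/mcp-optimization-server:latest
          ports:
            - containerPort: 8080
```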
Owner
- Name: Donald Johnson
- Login: copyleftdev
- Kind: user
- Location: Los Angeles
- Repositories: 39
- Profile: https://github.com/copyleftdev
GitHub Events
Total
- Watch event: 1
- Push event: 4
- Create event: 1
Last Year
- Watch event: 1
- Push event: 4
- Create event: 1
Packages
- Total packages: 1
- Total downloads: unknown
- Total dependent packages: 0
- Total dependent repositories: 0
- Total versions: 0
proxy.golang.org: github.com/copyleftdev/TUNDR
- Homepage: https://github.com/copyleftdev/TUNDR
- Documentation: https://pkg.go.dev/github.com/copyleftdev/TUNDR#section-documentation
- License: other
Rankings
Dependencies
- github.com/beorn7/perks v1.0.1
- github.com/caarlos0/env/v10 v10.0.0
- github.com/cespare/xxhash/v2 v2.3.0
- github.com/davecgh/go-spew v1.1.1
- github.com/go-chi/chi/v5 v5.2.2
- github.com/google/go-cmp v0.7.0
- github.com/klauspost/compress v1.18.0
- github.com/kr/pretty v0.3.1
- github.com/kr/text v0.2.0
- github.com/kylelemons/godebug v1.1.0
- github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822
- github.com/pmezard/go-difflib v1.0.0
- github.com/prometheus/client_golang v1.22.0
- github.com/prometheus/client_model v0.6.1
- github.com/prometheus/common v0.62.0
- github.com/prometheus/procfs v0.15.1
- github.com/rogpeppe/go-internal v1.10.0
- github.com/stretchr/testify v1.10.0
- go.uber.org/goleak v1.3.0
- go.uber.org/multierr v1.10.0
- go.uber.org/zap v1.27.0
- golang.org/x/sys v0.30.0
- golang.org/x/tools v0.26.0
- gonum.org/v1/gonum v0.16.0
- google.golang.org/protobuf v1.36.5
- gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405
- gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c
- gopkg.in/yaml.v3 v3.0.1