Science Score: 57.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 3 DOI reference(s) in README
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (13.0%) to scientific vocabulary
Last synced: 6 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: SimiaCryptus
  • License: mit
  • Language: HTML
  • Default Branch: master
  • Size: 352 MB
Statistics
  • Stars: 0
  • Watchers: 0
  • Forks: 0
  • Open Issues: 1
  • Releases: 0
Created 8 months ago · Last pushed 6 months ago
Metadata Files
Readme License Citation

README.md

QQN Optimizer: Quadratic-Quasi-Newton Optimization Algorithm


A comprehensive optimization library implementing the Quadratic-Quasi-Newton (QQN) algorithm, alongside a rigorous benchmarking framework for optimization algorithm evaluation.

📄 Read the Academic Paper — complete mathematical foundation and theoretical analysis:

http://dx.doi.org/10.13140/RG.2.2.15200.19206


Overview

The QQN Optimizer introduces a novel optimization algorithm that combines gradient descent and L-BFGS directions through quadratic interpolation. Unlike traditional approaches that choose between optimization directions or solve expensive subproblems, QQN constructs a smooth parametric path that guarantees descent while adaptively balancing first-order and second-order information.

Key Innovation: QQN constructs a quadratic path d(t) = t(1-t)(-∇f) + t²d_LBFGS that starts tangent to the gradient direction and curves toward the quasi-Newton direction, then performs univariate optimization along this path.
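To make the construction concrete, here is a small sketch (illustrative only, not the crate's API) that evaluates d(t) componentwise and performs a crude univariate search over t ∈ [0, 1]; the function, gradient, and the quasi-Newton direction below are made-up example values:

```rust
// Illustrative sketch, not the crate's API: evaluate the QQN path
// d(t) = t(1-t)(-g) + t^2 * d_lbfgs, then pick the best t on a coarse grid.
fn qqn_path(g: &[f64], d_lbfgs: &[f64], t: f64) -> Vec<f64> {
    g.iter()
        .zip(d_lbfgs)
        .map(|(gi, di)| t * (1.0 - t) * (-gi) + t * t * di)
        .collect()
}

fn main() {
    // Example: f(x) = x1^2 + 10*x2^2 at x = (1, 1).
    let f = |x: &[f64]| x[0] * x[0] + 10.0 * x[1] * x[1];
    let x = [1.0, 1.0];
    let g = [2.0, 20.0];   // gradient ∇f(x)
    let d = [-1.0, -1.0];  // hypothetical L-BFGS direction (exact Newton step here)

    // Coarse grid search over t; the real algorithm uses a proper line search.
    let (mut best_t, mut best_f) = (0.0, f(&x));
    for i in 0..=100 {
        let t = i as f64 / 100.0;
        let step = qqn_path(&g, &d, t);
        let trial = [x[0] + step[0], x[1] + step[1]];
        let ft = f(&trial);
        if ft < best_f {
            best_f = ft;
            best_t = t;
        }
    }
    println!("best t = {best_t}, f = {best_f}");
}
```

Because the quasi-Newton direction here is the exact Newton step, the search settles at t = 1, where the path coincides with d_LBFGS.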

Key Features

Algorithm Capabilities

  • Robust Convergence: Guaranteed descent property regardless of L-BFGS direction quality
  • No Additional Hyperparameters: Combines existing methods without introducing new tuning parameters
  • Superlinear Local Convergence: Inherits L-BFGS convergence properties near optima
  • Multiple Line Search Methods: Supports Backtracking, Strong Wolfe, Golden Section, Bisection, and more

Comprehensive Benchmarking

  • 62 Benchmark Problems: Covering convex, non-convex, multimodal, and ML problems
  • 25 Optimizer Variants: QQN, L-BFGS, Trust Region, Gradient Descent, and Adam variants
  • Statistical Rigor: Automated statistical testing with Welch's t-test and effect size analysis
  • Reproducible Results: Fixed seeds and deterministic algorithms ensure reproducibility

Reporting and Analysis

  • Multi-Format Output: Generates Markdown, LaTeX, CSV, and HTML reports
  • Convergence Visualization: Automatic generation of convergence plots and performance profiles
  • Statistical Comparison: Win/loss/tie matrices with significance testing
  • Performance Metrics: Success rates, function evaluations, and convergence analysis

Installation

Prerequisites

  • For report generation: pandoc and LaTeX distribution with pdflatex (optional)
  • For OneDNN support: Intel OneDNN library (optional, see OneDNN Installation)

From Source

```bash
git clone https://github.com/SimiaCryptus/qqn-optimizer.git
cd qqn-optimizer
cargo build --release
```

OneDNN Installation

For enhanced performance with neural network problems, you can install Intel OneDNN:

```bash
# Ubuntu/Debian systems
./install_onednn.py

# Or install from source
./install_onednn.py --source

# Then build with OneDNN support
cargo build --release --features onednn
```

Using Docker

```bash
docker build -t qqn-optimizer .
docker run -v $(pwd)/results:/app/results qqn-optimizer benchmark
```

As a Library

Add to your Cargo.toml:

```toml
[dependencies]
qqn-optimizer = { git = "https://github.com/SimiaCryptus/qqn-optimizer.git" }
```

Quick Start

Running Benchmarks

```bash
# Run full benchmark suite (may take hours)
cargo run --release -- benchmark

# Run calibration benchmarks (faster, for testing)
cargo run --release -- calibration

# Run specific problem sets
cargo run --release -- benchmark --problems analytic
cargo run --release -- benchmark --problems ml

# Generate reports from existing results
./process_results_md.sh   # Convert markdown to HTML
./process_results_tex.sh  # Convert LaTeX tables to PDF
```

Using QQN in Your Code

```rust
use qqn_optimizer::optimizers::qqn::QQNOptimizer;
use qqn_optimizer::line_search::strong_wolfe::StrongWolfeLineSearch;

// Define your objective function
fn rosenbrock(x: &[f64]) -> f64 {
    let mut sum = 0.0;
    for i in 0..x.len() - 1 {
        let a = 1.0 - x[i];
        let b = x[i + 1] - x[i] * x[i];
        sum += a * a + 100.0 * b * b;
    }
    sum
}

// Define gradient function
fn rosenbrock_grad(x: &[f64]) -> Vec<f64> {
    let mut grad = vec![0.0; x.len()];
    for i in 0..x.len() - 1 {
        grad[i] += -2.0 * (1.0 - x[i]) - 400.0 * x[i] * (x[i + 1] - x[i] * x[i]);
        if i > 0 {
            grad[i] += 200.0 * (x[i] - x[i - 1] * x[i - 1]);
        }
    }
    if x.len() > 1 {
        let last = x.len() - 1;
        grad[last] = 200.0 * (x[last] - x[last - 1] * x[last - 1]);
    }
    grad
}

// Create and run optimizer
let line_search = StrongWolfeLineSearch::new();
let mut optimizer = QQNOptimizer::new(line_search);

let initial_point = vec![-1.0, 1.0]; // Starting point
let result = optimizer.optimize(
    &rosenbrock,
    &rosenbrock_grad,
    initial_point,
    1000, // max function evaluations
    1e-8, // gradient tolerance
);

println!("Optimum found at: {:?}", result.x);
println!("Function value: {}", result.fx);
println!("Function evaluations: {}", result.num_fevals);
```

The QQN Algorithm

Mathematical Foundation

QQN addresses the fundamental question: given gradient and quasi-Newton directions, how should we combine them? The algorithm constructs a quadratic path satisfying three constraints:

  1. Initial Position: d(0) = 0 (starts at current point)
  2. Initial Tangent: d'(0) = -∇f(x) (begins with steepest descent)
  3. Terminal Position: d(1) = d_LBFGS (ends at L-BFGS direction)

This yields the canonical form: d(t) = t(1-t)(-∇f) + t²d_LBFGS
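The three constraints pin down the quadratic uniquely; a quick numerical check (illustrative, outside the crate, using a scalar path for simplicity) confirms them:

```rust
// Numerical check (illustrative, not part of the crate) that the path
// d(t) = t(1-t)(-g) + t^2 * d_lbfgs satisfies its three defining constraints.
fn path(g: f64, d_lbfgs: f64, t: f64) -> f64 {
    t * (1.0 - t) * (-g) + t * t * d_lbfgs
}

fn main() {
    let (g, d) = (3.0, -2.0); // arbitrary example values

    // 1. Initial position: d(0) = 0
    assert!(path(g, d, 0.0).abs() < 1e-12);

    // 2. Initial tangent: d'(0) = -g, checked by forward difference
    let eps = 1e-6;
    let deriv0 = (path(g, d, eps) - path(g, d, 0.0)) / eps;
    assert!((deriv0 - (-g)).abs() < 1e-4);

    // 3. Terminal position: d(1) = d_LBFGS
    assert!((path(g, d, 1.0) - d).abs() < 1e-12);

    println!("all three constraints verified");
}
```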

Key Properties

  • Guaranteed Descent: The initial tangent condition ensures descent regardless of L-BFGS quality
  • Adaptive Interpolation: Automatically balances first-order and second-order information
  • Robust to Failures: Gracefully degrades to gradient descent when L-BFGS fails
  • No Additional Parameters: Uses existing L-BFGS and line search parameters

Convergence Guarantees

  • Global Convergence: Under standard assumptions, converges to stationary points
  • Superlinear Local Convergence: Near optima with positive definite Hessian, achieves superlinear convergence matching L-BFGS

Benchmarking Framework

Problem Suite

The benchmark suite includes 62 carefully selected problems across five categories:

  • Convex Functions (6): Sphere, Matyas, Zakharov variants
  • Non-Convex Unimodal (12): Rosenbrock, Beale, Levy variants
  • Highly Multimodal (24): Rastrigin, Ackley, Michalewicz, StyblinskiTang
  • ML-Convex (8): Linear regression, logistic regression, SVM
  • ML-Non-Convex (9): Neural networks with varying architectures

Statistical Analysis

The framework employs rigorous statistical methods:

  • Multiple Runs: 50 runs per problem-optimizer pair for statistical validity
  • Welch's t-test: For comparing means with unequal variances
  • Cohen's d: For measuring effect sizes
  • Bonferroni Correction: For multiple comparison adjustment
  • Win/Loss/Tie Analysis: Comprehensive pairwise comparisons
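As a sketch of the two core statistics named above (illustrative only, not the framework's internals), Welch's t statistic and Cohen's d for two samples of final objective values can be computed as follows; the sample data in `main` is invented:

```rust
// Illustrative implementations of Welch's t statistic and Cohen's d.
fn mean(xs: &[f64]) -> f64 {
    xs.iter().sum::<f64>() / xs.len() as f64
}

// Sample variance (n - 1 denominator).
fn var(xs: &[f64]) -> f64 {
    let m = mean(xs);
    xs.iter().map(|x| (x - m).powi(2)).sum::<f64>() / (xs.len() as f64 - 1.0)
}

/// Welch's t statistic: compares means without assuming equal variances.
fn welch_t(a: &[f64], b: &[f64]) -> f64 {
    (mean(a) - mean(b)) / (var(a) / a.len() as f64 + var(b) / b.len() as f64).sqrt()
}

/// Cohen's d: standardized effect size using the pooled standard deviation.
fn cohens_d(a: &[f64], b: &[f64]) -> f64 {
    let (na, nb) = (a.len() as f64, b.len() as f64);
    let pooled = (((na - 1.0) * var(a) + (nb - 1.0) * var(b)) / (na + nb - 2.0)).sqrt();
    (mean(a) - mean(b)) / pooled
}

fn main() {
    // Hypothetical final objective values from repeated runs of two optimizers.
    let opt_a = [1e-9, 2e-9, 1.5e-9, 3e-9];
    let opt_b = [1e-6, 2e-6, 1.2e-6, 9e-7];
    println!("t = {:.3}, d = {:.3}", welch_t(&opt_a, &opt_b), cohens_d(&opt_a, &opt_b));
}
```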

Evaluation Methodology

  1. Calibration Phase: Determines problem-specific convergence thresholds
  2. Benchmarking Phase: Evaluates all optimizers with consistent criteria
  3. Statistical Analysis: Automated significance testing and effect size calculation
  4. Report Generation: Multi-format output with visualizations

Usage Examples

Custom Optimizer Implementation

```rust
use qqn_optimizer::optimizers::traits::Optimizer;
use qqn_optimizer::line_search::backtracking::BacktrackingLineSearch;

struct MyCustomOptimizer {
    line_search: BacktrackingLineSearch,
}

impl Optimizer for MyCustomOptimizer {
    fn optimize<F, G>(
        &mut self,
        f: &F,
        grad: &G,
        x0: Vec<f64>,
        max_fevals: usize,
        grad_tol: f64,
    ) -> OptimizationResult
    where
        F: Fn(&[f64]) -> f64,
        G: Fn(&[f64]) -> Vec<f64>,
    {
        // Your optimization logic here
        todo!()
    }
}
```

Running Specific Benchmarks

```rust
use qqn_optimizer::benchmarks::evaluation::run_benchmark;
use qqn_optimizer::problem_sets::analytic_problems;
use qqn_optimizer::optimizer_sets::qqn_variants;
use std::time::Duration;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let problems = analytic_problems();
    let optimizers = qqn_variants();

    run_benchmark(
        "my_benchmark_",
        1000,  // max function evaluations
        10,    // number of runs
        Duration::from_secs(60), // timeout
        problems,
        optimizers,
    ).await?;

    Ok(())
}
```

Custom Problem Definition

```rust
use qqn_optimizer::benchmarks::evaluation::ProblemSpec;

fn my_custom_problem() -> ProblemSpec {
    ProblemSpec {
        name: "MyProblem".to_string(),
        function: Box::new(|x: &[f64]| {
            // Your objective function
            x.iter().map(|xi| xi * xi).sum()
        }),
        gradient: Box::new(|x: &[f64]| {
            // Your gradient function
            x.iter().map(|xi| 2.0 * xi).collect()
        }),
        initial_point: vec![1.0, 1.0, 1.0],
        bounds: None,              // Optional bounds
        global_minimum: Some(0.0), // Known global minimum
    }
}
```

Benchmark Results

Overall Performance

Based on comprehensive evaluation across 62 problems with over 31,000 optimization runs:

  • QQN Dominance: QQN variants won 36 out of 62 problems (58%)
  • Top Performers:
    • QQN-Bisection-1: 8 wins
    • QQN-StrongWolfe: 7 wins
    • L-BFGS: 6 wins
    • QQN-GoldenSection: 6 wins

Performance by Problem Type

Convex Problems:
  • QQN-Bisection: 100% success on Sphere problems with 12-16 evaluations
  • L-BFGS: 100% success on Sphere_10D with only 15 evaluations

Non-Convex Problems:
  • QQN-StrongWolfe: 35% success on Rosenbrock5D (best among all)
  • QQN-GoldenSection: 100% success on Beale2D

Multimodal Problems:
  • QQN-StrongWolfe: 90% success on StyblinskiTang_2D
  • Adam-Fast: Best on Michalewicz functions (45-60% success)

Machine Learning Problems:
  • Adam-Fast: Best on neural networks (32.5-60% success)
  • L-BFGS variants: 100% success on SVM problems

Key Insights

  1. Robustness: QQN maintains consistent performance across problem types
  2. Efficiency: Competitive function evaluation counts with high success rates
  3. Scalability: Performance degrades gracefully with dimensionality
  4. Specialization: Some algorithms excel on specific problem classes

API Documentation

Core Traits

```rust
pub trait Optimizer {
    fn optimize<F, G>(
        &mut self,
        f: &F,
        grad: &G,
        x0: Vec<f64>,
        max_fevals: usize,
        grad_tol: f64,
    ) -> OptimizationResult
    where
        F: Fn(&[f64]) -> f64,
        G: Fn(&[f64]) -> Vec<f64>;
}

pub trait LineSearch {
    fn search<F, G>(
        &mut self,
        f: &F,
        grad: &G,
        x: &[f64],
        fx: f64,
        gx: &[f64],
        direction: &[f64],
    ) -> LineSearchResult
    where
        F: Fn(&[f64]) -> f64,
        G: Fn(&[f64]) -> Vec<f64>;
}
```
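To illustrate the role a `LineSearch` plays, here is a minimal free-standing backtracking search with the Armijo sufficient-decrease condition (a sketch with invented names, not the crate's `BacktrackingLineSearch`):

```rust
// Minimal backtracking (Armijo) line search sketch; names and constants are
// illustrative, not taken from the crate.
fn backtracking<F>(f: F, x: &[f64], fx: f64, gx: &[f64], dir: &[f64]) -> f64
where
    F: Fn(&[f64]) -> f64,
{
    let c1 = 1e-4; // Armijo sufficient-decrease constant
    let rho = 0.5; // step shrink factor
    // Directional derivative of f along dir at x.
    let slope: f64 = gx.iter().zip(dir).map(|(g, d)| g * d).sum();
    let mut t = 1.0;
    for _ in 0..50 {
        let trial: Vec<f64> = x.iter().zip(dir).map(|(xi, di)| xi + t * di).collect();
        // Accept t once f decreases at least proportionally to the slope.
        if f(&trial) <= fx + c1 * t * slope {
            break;
        }
        t *= rho;
    }
    t
}

fn main() {
    // Quadratic f(x) = x^2, starting at x = 2 with steepest-descent direction -4.
    let f = |x: &[f64]| x[0] * x[0];
    let t = backtracking(f, &[2.0], 4.0, &[4.0], &[-4.0]);
    println!("accepted step: {t}");
}
```

On this example the full step t = 1 overshoots to x = -2 with no decrease, so the search halves once and accepts t = 0.5, which lands exactly at the minimum.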

QQN Optimizer Variants

  • QQNOptimizer<BacktrackingLineSearch>: Basic backtracking line search
  • QQNOptimizer<StrongWolfeLineSearch>: Strong Wolfe conditions
  • QQNOptimizer<GoldenSectionLineSearch>: Golden section search
  • QQNOptimizer<BisectionLineSearch>: Bisection on derivative
  • QQNOptimizer<MoreThuenteLineSearch>: Moré-Thuente line search

Benchmarking API

```rust
// Run benchmark with custom configuration
pub async fn run_benchmark(
    prefix: &str,
    max_evals: usize,
    num_runs: usize,
    timeout: Duration,
    problems: Vec<ProblemSpec>,
    optimizers: Vec<Box<dyn Optimizer>>,
) -> Result<(), Box<dyn std::error::Error>>;

// Generate reports from benchmark results
pub fn generate_reports(
    results_dir: &str,
    output_formats: &[ReportFormat],
) -> Result<(), Box<dyn std::error::Error>>;
```

Contributing

We welcome contributions! Please see our Contributing Guidelines for details.

Development Setup

```bash
git clone https://github.com/SimiaCryptus/qqn-optimizer.git
cd qqn-optimizer
cargo build
cargo test
```

Benchmark Report Processing

The project includes scripts to process benchmark results into various formats:

```bash
# Process markdown reports to HTML
./process_results_md.sh

# Process LaTeX table exports to PDF
./process_results_tex.sh
```

These scripts automatically:
  • Convert .md files to .html with proper link updates
  • Compile .tex files to .pdf using pdflatex
  • Handle recursive directory processing
  • Provide detailed logging and error handling

Running Tests

```bash
# Unit tests
cargo test

# Integration tests
cargo test --test benchmark_reports

# Benchmark tests (slow)
cargo test --release calibration

# Test with OneDNN support (if installed)
cargo test --release --features onednn
```

Code Style

We use rustfmt and clippy for code formatting and linting:

```bash
cargo fmt
cargo clippy -- -D warnings
```

Academic Paper

📄 Download Full Paper (PDF)

This work is documented in our academic paper (in preparation):

"Quadratic-Quasi-Newton Optimization: Combining Gradient and Quasi-Newton Directions Through Quadratic Interpolation"

The paper provides:
  • Complete mathematical derivation of the QQN algorithm
  • Theoretical convergence analysis
  • Comprehensive experimental evaluation
  • Comparison with existing optimization methods

Paper draft and supplementary materials available in the papers/ directory. Direct link to paper PDF.

Citing This Work

If you use QQN Optimizer in your research, please cite:

```bibtex
@article{qqn2024,
  title={Quadratic-Quasi-Newton Optimization: Combining Gradient and Quasi-Newton Directions Through Quadratic Interpolation},
  author={[Author Name]},
  journal={[Journal Name]},
  year={2024},
  url={https://github.com/SimiaCryptus/qqn-optimizer/}
}
```

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

  • The QQN algorithm was originally developed in 2017
  • AI language models assisted in documentation and benchmarking framework development
  • Thanks to the Rust optimization community for inspiration and feedback

Support

  • Documentation: API Docs (when published)

Project Structure

```
qqn-optimizer/
├── src/                    # Core library source code
│   ├── optimizers/         # Optimizer implementations
│   ├── line_search/        # Line search algorithms
│   ├── benchmarks/         # Benchmarking framework
│   └── problem_sets/       # Test problem definitions
├── papers/                 # Academic paper drafts
├── results/                # Benchmark results (generated)
├── scripts/                # Utility scripts
├── process_results_*.sh    # Report processing scripts
├── install_onednn.py       # OneDNN installation script
└── Dockerfile              # Container configuration
```


Note: This is research software. While we strive for correctness and performance, please validate results for your specific use case. The benchmarking framework is designed to facilitate fair comparison and reproducible research in optimization algorithms.

Owner

  • Name: Simia Cryptus
  • Login: SimiaCryptus
  • Kind: organization
  • Email: andrew@simiacryptus.com
  • Location: Seattle, WA

Big Data Science and Artificial Intelligence

Citation (CITATION.cff)

# CITATION.cff
# This file provides citation information for the QQN Optimizer software
# See https://citation-file-format.github.io/ for format specification

cff-version: 1.0.0
message: "If you use this software, please cite it as below."
type: software
title: "QQN: A Quadratic Hybridization of Quasi-Newton Methods for Nonlinear Optimization"
abstract: >
  We present the Quadratic-Quasi-Newton (QQN) algorithm, which combines gradient and 
  quasi-Newton directions through quadratic interpolation. QQN constructs a parametric 
  path d(t) = t(1-t)(-∇f) + t²d_L-BFGS and performs univariate optimization along this 
  path, creating an adaptive interpolation that requires no additional hyperparameters 
  beyond those of its constituent methods. Our comprehensive evaluation across 62 
  benchmark problems demonstrates QQN's superior robustness and practical utility with 
  statistical dominance over L-BFGS and Adam variants.

authors:
  - family-names: "Charneski"
    given-names: "Andrew"
    affiliation: "SimiaCryptus Software"
    orcid: "https://orcid.org/0000-0000-0000-0000"  # Add if available

repository-code: "https://github.com/SimiaCryptus/qqn-optimizer"
url: "https://simiacryptus.github.io/qqn-optimizer"
license: MIT
version: "1.0.0"
date-released: "2025-08-02"

keywords:
  - optimization
  - quasi-newton
  - quadratic-interpolation
  - l-bfgs
  - gradient-descent
  - numerical-methods
  - machine-learning
  - rust
  - research-software
  - benchmarking
  - nonlinear-optimization
  - statistical-analysis

references:
  - type: article
    title: "QQN: A Quadratic Hybridization of Quasi-Newton Methods for Nonlinear Optimization"
    authors:
      - family-names: "Charneski"
        given-names: "Andrew"
        affiliation: "SimiaCryptus Software"
    journal: "arXiv preprint"
    year: 2025
    month: 8
    url: "https://github.com/SimiaCryptus/qqn-optimizer/"
    abstract: >
      We present the Quadratic-Quasi-Newton (QQN) algorithm, which combines gradient and 
      quasi-Newton directions through quadratic interpolation. QQN constructs a parametric 
      path and performs univariate optimization along this path, creating an adaptive 
      interpolation that requires no additional hyperparameters beyond those of its 
      constituent methods.

  - type: article
    title: "On the limited memory BFGS method for large scale optimization"
    authors:
      - family-names: "Liu"
        given-names: "Dong C."
      - family-names: "Nocedal"
        given-names: "Jorge"
    journal: "Mathematical Programming"
    volume: 45
    issue: "1-3"
    start: 503
    end: 528
    year: 1989
    doi: "10.1007/BF01589116"

  - type: book
    title: "Numerical Optimization"
    authors:
      - family-names: "Nocedal"
        given-names: "Jorge"
      - family-names: "Wright"
        given-names: "Stephen J."
    publisher: "Springer Science & Business Media"
    year: 2006
    edition: "2nd"
    isbn: "978-0-387-30303-1"

  - type: software
    title: "COCO: A platform for comparing continuous optimizers in a black-box setting"
    authors:
      - family-names: "Hansen"
        given-names: "Nikolaus"
      - family-names: "Auger"
        given-names: "Anne"
      - family-names: "Ros"
        given-names: "Raymond"
      - family-names: "Mersmann"
        given-names: "Olaf"
      - family-names: "Tušar"
        given-names: "Tea"
      - family-names: "Brockhoff"
        given-names: "Dimo"
    year: 2016
    doi: "10.48550/arXiv.1603.08785"

preferred-citation:
  type: article
  title: "QQN: A Quadratic Hybridization of Quasi-Newton Methods for Nonlinear Optimization"
  authors:
    - family-names: "Charneski"
      given-names: "Andrew"
      affiliation: "SimiaCryptus Software"
  journal: "arXiv preprint"
  year: 2025
  month: 8
  url: "https://github.com/SimiaCryptus/qqn-optimizer/"
  repository-code: "https://github.com/SimiaCryptus/qqn-optimizer"
  abstract: >
    We present the Quadratic-Quasi-Newton (QQN) algorithm, which combines gradient and 
    quasi-Newton directions through quadratic interpolation. QQN constructs a parametric 
    path d(t) = t(1-t)(-∇f) + t²d_L-BFGS and performs univariate optimization along this 
    path, creating an adaptive interpolation that requires no additional hyperparameters 
    beyond those of its constituent methods. Our comprehensive evaluation across 62 
    benchmark problems demonstrates QQN's superior robustness and practical utility.
  keywords:
    - optimization
    - quasi-Newton methods
    - L-BFGS
    - gradient descent
    - quadratic interpolation
    - benchmarking
    - statistical analysis

GitHub Events

Total
  • Delete event: 4
  • Push event: 140
  • Pull request review event: 1
  • Pull request event: 8
  • Create event: 6
Last Year
  • Delete event: 4
  • Push event: 140
  • Pull request review event: 1
  • Pull request event: 8
  • Create event: 6

Dependencies

.github/workflows/ci.yml actions
  • actions/cache v3 composite
  • actions/checkout v4 composite
  • dtolnay/rust-toolchain master composite
  • dtolnay/rust-toolchain stable composite
Cargo.lock cargo
  • 256 dependencies
Cargo.toml cargo
  • criterion 0.6.0 development
  • proptest 1.7.0 development
  • anyhow 1.0
  • approx 0.5
  • bincode 1.3
  • candle-core 0.9.1
  • chrono 0.4
  • env_logger 0.11
  • half 2.3
  • humantime-serde 1.1
  • log 0.4.27
  • plotters 0.3
  • proptest 1.4
  • rand 0.8
  • rand_distr 0.4
  • serde 1.0
  • serde_json 1.0
  • serde_yaml 0.9
  • tempfile 3.8
  • thiserror 2.0.12
  • tokio 1.0
  • toml 0.8
  • tracing 0.1
  • tracing-subscriber 0.3
docker/Dockerfile docker
  • debian bookworm-slim build
  • rust 1.75 build