compromise.jl

First-Order Multi-Objective Optimizer — Derivative-Free and Gradient Based

https://github.com/manuelbb-upb/compromise.jl

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (10.1%) to scientific vocabulary
Last synced: 6 months ago

Repository

First-Order Multi-Objective Optimizer — Derivative-Free and Gradient Based

Basic Info
  • Host: GitHub
  • Owner: manuelbb-upb
  • License: MIT
  • Language: Julia
  • Default Branch: main
  • Size: 1.52 MB
Statistics
  • Stars: 3
  • Watchers: 2
  • Forks: 0
  • Open Issues: 2
  • Releases: 0
Created about 3 years ago · Last pushed over 1 year ago
Metadata Files
Readme License Citation

README.md

Compromise.jl


About “CoMPrOMISE”

Constrained Multiobjective Problem Optimizer with Model Information to Save Evaluations

This package provides a flexible first-order solver for constrained and unconstrained nonlinear multi-objective problems. It uses a trust region approach and either exact derivatives or local surrogate models for a derivative-free descent. Box constraints are respected during model construction and treated as unrelaxable. Box constraints and linear constraints are supported and passed down to an inner LP solver. Nonlinear constraint functions can be modelled and are dealt with by incorporating a filter. They are relaxable, i.e., all other functions must be computable even when the constraints are violated.

Release Notes

I don't really keep a consistent versioning scheme, but the changes in this section have been significant enough to warrant some comments.

Version 0.3

I did not exactly keep track of all the changes, but some types and default settings have changed, so a new breaking version is warranted. We have set-based algorithms now, but they are drafts only and mostly undocumented. The original entry point was optimize_set, but optimize_many is the one I tested most. Try optimize_many at your own risk.

Version 0.2.0

This was an intermediate version that was quickly superseded.

Version 0.2.0

  • We import and re-export @set and @reset from Accessors.jl.
  • AlgorithmOptions is now immutable and type-stable. @set algo_opts.float_type will trigger conversion and setting of type-dependent defaults.
  • Likewise, RBFConfig is no longer mutable and has concrete types.
  • TypedMOP supports @set and @reset for objectives, nl_eq_constraints and nl_ineq_constraints.
  • The AbstractNonlinearOperator interface now requires CompromiseEvaluators.operator_dim_in and CompromiseEvaluators.operator_dim_out.
  • The ReturnObject now references the whole cache (basically a NamedTuple of internal structs).
  • SimpleMOP resets call counters by default.
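As a quick sketch of the @set usage described above (the option values are hypothetical; it only assumes that Compromise re-exports @set as stated):

```julia
using Compromise  # re-exports @set and @reset from Accessors.jl

# AlgorithmOptions is immutable; @set returns a modified copy.
algo_opts = AlgorithmOptions()

# Changing float_type triggers conversion and re-derivation of
# type-dependent defaults:
algo_opts_f32 = @set algo_opts.float_type = Float32
```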

Version 0.1.0

This release is breaking, because the RBF database is no longer thread-safe by default. Instead, ConcurrentUtils is a weak dependency and no longer mandatory. To use a thread-safe RBF database, either configure your problem functions with :rbfLocked, use an RBFConfig with database_rwlock = ConcurrentRWLock(), or pre-initialize a thread-safe database by setting the field rwlock.
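To illustrate, two of the routes to a thread-safe database might look as follows (a sketch only; it assumes the names ConcurrentRWLock, database_rwlock and :rbfLocked exactly as given above):

```julia
using Compromise
using ConcurrentUtils

# Route 1: hand a read-write lock to the RBF configuration:
cfg = RBFConfig(; database_rwlock = ConcurrentRWLock())

# Route 2: request the locked model class per function, e.g.:
# add_objectives!(mop, f, :rbfLocked; dim_out=2, func_iip=false)
```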

Version 0.0.3

Internally, there have been major changes regarding the caching of MOP and surrogate result values. Previously, separate preallocation functions were required (e.g., prealloc_fx …). Now, there is only init_value_caches, and instead of accessing the cache arrays as properties, there are dedicated getter methods.

Version 0.0.2

RBF Surrogate Models

The internals of how RBF surrogates are constructed have been redone. As before, the construction is based on “Derivative-Free Optimization Algorithms For Computationally Expensive Functions” (Wild, 2009). In the old version, I did not pay much attention to the matrix factorizations. Finding a poised set of points for fully-linear interpolation needs repeated QR factorizations of the point matrix. Afterwards, additional points are found by updating the Cholesky factorization of a symmetric matrix product involving the RBF Gram matrix.

  • To save allocations, the QR factorizations now make use of structures similar to those in FastLapackInterface. Once I manage to make a pull request to avoid even more allocations, we can also make FastLapackInterface a dependency.
  • The Cholesky factorization is now updated by using the formulas from the reference, and the factors are used to compute the surrogate coefficients.
  • In both cases, buffers are pre-allocated to support a maximum number of interpolation points, and we work with views mainly. Such temporary buffers are stored in RBFTrainingBuffers.
  • An RBFModel now only needs RBFParameters for successful training and reproducible evaluation. Most importantly, evaluation is decoupled from the RBFDatabase! In older versions, we would view into the database to query interpolation points. These are now copied instead, so that changes to the database don't invalidate a model.
  • With commit cc709fa we can thus share a database in multiple optimization runs.
  • As an aside, there is a whole new backend for RBFs, which can be used in a standalone manner, too.

For most of the RBF related changes, commit ab5cba8 is most relevant.

Other changes
  • There likely was a bug in how descent directions were computed. In old versions, I tried to avoid the computation of an initial steplength by making it part of the descent direction sub-problem, but accounting for the change in criticality measure did not work out well. Commit f1386c2 makes things look a bit more elegant.
  • At some point in time, I messed up affine scaling. It should work now, and there are tests for it.
  • Threaded parallel execution is now supported internally (but improvised).
  • Lots of tests.
  • Changes to AbstractNonlinearOperator interface. A new AbstractNonlinearOperatorWrapper <: AbstractNonlinearOperator.
  • New defaults in AlgorithmOptions. Stopping based mainly on minimum radius.
  • A new return type (ReturnObject).

Example

Constrained Two-Parabolas Problem

This first example uses Taylor polynomials to approximate the objective functions locally. For that we need a gradient function. We also show how to add functions with derivative-free surrogates -- in this case, nonlinear constraints.

First we load the optimizer package, “Compromise.jl”:

```julia
using Compromise
```

The package exports a simple problem structure, MutableMOP. As the name suggests, this is a mutable structure to define a multi-objective optimization problem. We can use it to set up a problem step by step. The only information required for initialization is the number of variables:

```julia
mop = MutableMOP(; num_vars = 2)
```

For a MutableMOP, all functions are vector-to-vector. We can define the objectives (objectives), nonlinear inequality constraints (nl_ineq_constraints), and nonlinear equality constraints (nl_eq_constraints). These fields default to nothing; to set them, we need objects of type Compromise.NonlinearFunction. There are helpers to wrap normal Julia functions. For example, consider this vector-to-vector function:

```julia
function objective_function(x)
    return [
        (x[1] - 2)^2 + (x[2] - 1)^2;
        (x[1] - 2)^2 + (x[2] + 1)^2
    ]
end
```

We can easily derive the gradients, so let's also define them manually, to use derivative-based models:

```julia
function objective_grads(x)
    # return the transposed Jacobian, i.e., gradients as columns
    df11 = df21 = 2 * (x[1] - 2)
    df12 = 2 * (x[2] - 1)
    df22 = 2 * (x[2] + 1)
    return [
        df11 df21;
        df12 df22
    ]
end
```

To add these functions to mop, we call the helper method add_objectives! and also specify the model class to be used. There are shorthand symbols, for example :exact or :taylor1, for objectives with known gradients. We also have to tell the optimizer about the function signature. func_iip=true would imply an in-place objective with signature objective_function!(fx, x). dim_out is a mandatory argument.

```julia
add_objectives!(
    mop, objective_function, objective_grads, :taylor1;
    dim_out=2, func_iip=false, grads_iip=false
)
```

!!! note
    For the above objective function, it would be sensible to additionally have a function objective_values_and_grads that returns the objectives and gradients at the same time. That is possible: MutableMOP has an interface for such optimizations.
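A combined function for the objectives above might look like this (the README does not show the registration call for it, so only the function itself is sketched):

```julia
# Hypothetical combined evaluation: compute objectives and the
# transposed Jacobian in one pass, sharing intermediate terms.
function objective_values_and_grads(x)
    a = x[1] - 2
    b1 = x[2] - 1
    b2 = x[2] + 1
    fx = [a^2 + b1^2; a^2 + b2^2]
    dfx = [2a 2a; 2b1 2b2]  # gradients as columns
    return fx, dfx
end
```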

We support non-convex, nonlinear constraints (as long as they are relaxable). For example, we can constrain the problem to ℝ² without the unit ball. For demonstration purposes, we use an in-place function:

```julia
nl_ineq_function!(y, x) = y[1] = 1 - sum(x.^2)
```

Of course, that is a fairly simple constraint function. If it were more complicated, we might be tempted to use automatic differentiation for the derivatives. Alternatively, we can use derivative-free models, such as radial basis function (RBF) models.

For now, we stick with the fixed shape parameter and finalize our problem:

```julia
add_nl_ineq_constraints!(
    mop, nl_ineq_function!, :rbf;
    func_iip=true, dim_out=1
)
```

The MutableMOP is turned into a TypedMOP during initialization. We can thus simply pass it to optimize:

```julia
ret = optimize(mop, [-2.0, 0.5])
```

ret is the return object. It can be queried with functions like opt_vars. The final argument vector:

```julia
opt_vars(ret)
```

Final value vector:

```julia
opt_objectives(ret)
```

Final constraint vector:

```julia
opt_nl_ineq_constraints(ret)
```

More RBF Options

Instead of passing :rbf, you can also pass an RBFConfig. To use the Gaussian kernel:

```julia
cfg = RBFConfig(; kernel=GaussianKernel())
```

Or the inverse multiquadric:

```julia
cfg = RBFConfig(; kernel=InverseMultiQuadricKernel())
```

Then:

```julia
mop = MutableMOP(; num_vars=2)
add_objectives!(
    mop, objective_function, cfg;
    dim_out=2, func_iip=false,
)
ret = optimize(mop, [-2.0, 0.5])
```

See the docstring for more options.

Sharing an RBFDatabase

Normally, each optimization run initializes a new database. But a database is only ever referenced, so we can pre-initialize one and use it in multiple runs:

```julia
rbf_db = Compromise.RBFModels.init_rbf_database(2, 2, nothing, nothing, Float64)
cfg = RBFConfig(; database=rbf_db)
```

Set up problem:

```julia
mop = MutableMOP(; num_vars=2)
objf_counter = Ref(0)
function counted_objf(x)
    global objf_counter[] += 1
    return objective_function(x)
end
add_objectives!(
    mop, counted_objf, cfg;
    dim_out=2, func_iip=false,
)
```

First run:

```julia
ret = optimize(mop, [-2.0, 0.5])
objf_counter[]
```

Second run:

```julia
ret = optimize(mop, [-2.0, 0.0])
objf_counter[]
```

Parallelism

The RBF update algorithm has a lock to access the database in a (hopefully) safe way when multiple optimization runs are done concurrently. There even is an “algorithm” for this:

```julia
using ConcurrentUtils
mop = MutableMOP(; num_vars=2)
add_objectives!(
    mop, counted_objf, :rbfLocked;
    dim_out=2, func_iip=false,
)
X0 = [
    -2.0 -2.0 0.0
     0.5  0.0 0.0
]
opts = Compromise.ThreadedOuterAlgorithmOptions(;
    inner_opts=AlgorithmOptions(;
        stop_delta_min = 1e-8,
    )
)
rets = Compromise.optimize_with_algo(mop, opts, X0)
```

Stopping based on Number of Function Evaluations

The restriction of the evaluation budget is a property of the evaluators. Because of this, it is not configurable with AlgorithmOptions. You can pass max_func_calls as a keyword argument to add_objectives! and similar functions. Likewise, max_grad_calls restricts the number of gradient calls and max_hess_calls limits Hessian computations.
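For instance, using the functions and keywords from the example above, a budgeted setup might be sketched as follows (the keyword names are taken from this paragraph; the budget values are arbitrary):

```julia
mop = MutableMOP(; num_vars=2)
add_objectives!(
    mop, objective_function, objective_grads, :taylor1;
    dim_out=2, func_iip=false, grads_iip=false,
    max_func_calls=100,  # at most 100 objective evaluations
    max_grad_calls=20,   # at most 20 gradient evaluations
)
```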

~~For historic reasons, the count is kept between runs.~~ The count is now reset between runs by default. To keep the count between runs (sequential or parallel), disable resetting when setting up the MOP.

```julia
mop = MutableMOP(; num_vars=2, reset_call_counters=false)  # keep counters between runs
add_objectives!(
    mop, objective_function, :rbf;
    dim_out=2, func_iip=false, max_func_calls=10
)
ret1 = optimize(mop, [-2, .5])
```

Now, there is no budget left for a second run:

```julia
ret2 = optimize(mop, [-2, -.5])
ismissing(opt_vars(ret2))
```

Here is a remedy:

```julia
mop.reset_call_counters = true
ret1 = optimize(mop, [-2, .5])
```

Now, there is budget left for a second run:

```julia
ret2 = optimize(mop, [-2, -.5])
!ismissing(opt_vars(ret2))
```

Automatic Differentiation

There is an optional ForwardDiff extension. To use a derivative-based model without specifying the gradients by hand, first load ForwardDiff:

```julia
using ForwardDiff
```

Now, ForwardDiffBackend should be available:

```julia
diff_backend = ForwardDiffBackend()
```

Set up the problem:

```julia
mop = MutableMOP(2)
add_objectives!(
    mop, objective_function, :exact;
    func_iip=false, dim_out=2, backend=diff_backend
)

optimize(mop, -5 .+ 10 .* rand(2))
```


This page was generated using Literate.jl.

Owner

  • Name: Manuel
  • Login: manuelbb-upb
  • Kind: user

PhD student of Applied Mathematics at Paderborn University • Multiobjective Optimization • Derivative-Free Opt. • Nonlinear Opt. • Surrogate Modelling • DoE

Citation (CITATION.cff)

# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!

cff-version: 1.2.0
title: CoMPrOMISE
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Manuel Bastian
    family-names: Berkemeier
    email: manuelbb@mail.uni-paderborn.de
    affiliation: Paderborn University
    orcid: 'https://orcid.org/0000-0002-3958-2277'
repository-code: 'https://github.com/manuelbb-upb/Compromise.jl'
abstract: >-
  Research repository for a trust-region optimizer and
  framework for surrogate-assisted multi-objective
  optimization.
keywords:
  - multi-objective optimization
  - nonlinear optimization
  - radial basis functions
  - surrogate models
  - derivative-free optimization
  - constrained optimization
license: MIT

GitHub Events

Total
  • Push event: 6
Last Year
  • Push event: 6

Dependencies

.github/workflows/CI.yml actions
  • actions/checkout v2 composite
  • codecov/codecov-action v2 composite
  • julia-actions/cache v1 composite
  • julia-actions/julia-buildpkg v1 composite
  • julia-actions/julia-docdeploy v1 composite
  • julia-actions/julia-processcoverage v1 composite
  • julia-actions/julia-runtest v1 composite
  • julia-actions/setup-julia v1 composite
.github/workflows/TagBot.yml actions
  • JuliaRegistries/TagBot v1 composite
.github/workflows/CompatHelper.yml actions