Recent Releases of mxlpy

mxlpy - v0.25.0

Separate local and global fitting

To avoid confusion between the approaches, the mxlpy.fit module was split into mxlpy.fit_local and mxlpy.fit_global.

Parameterizable local minimizer

The default scipy minimizer was changed from a raw function to a dataclass that can be parameterized with both a tolerance and a minimization method. That should make common code patterns easier:

```python
fit_local.time_course(
    model_fn(),
    p0={"k1": 1.038, "k2": 1.87, "k3": 1.093},
    data=res,
    minimizer=fit_local.ScipyMinimizer(
        tol=1e-6,
        method="Nelder-Mead",
    ),
)
```

Get right hand side time course

The simulation Result now implements get_right_hand_side, which maps to Model.get_right_hand_side_time_course, returning the right-hand side of the model over all simulation time steps.

```python
res = unwrap(
    Simulator(create_linear_chain_model(), y0={"S": 2.0, "P": 0.0})
    .simulate(10)
    .get_result()
)

res.get_right_hand_side()
```

Get producers and consumers

The simulation result now implements get_producers and get_consumers to return the fluxes that have a positive or negative stoichiometry with respect to a given variable. These can be scaled by those respective stoichiometries (to compare the net effect of the reactions) by setting scaled=True.

```python
res = unwrap(
    Simulator(create_linear_chain_model(), y0={"S": 2.0, "P": 0.0})
    .simulate(10)
    .get_result()
)

res.get_producers("S")
res.get_producers("S", scaled=True)
```
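Conceptually, producers and consumers partition a variable's reactions by stoichiometric sign. A minimal pure-pandas sketch of the idea (the reaction names and stoichiometries here are made up, and this is not mxlpy's implementation):

```python
import pandas as pd

# fluxes over time (rows: time points, columns: reactions)
fluxes = pd.DataFrame({"v1": [1.0, 0.8], "v2": [0.5, 0.6]})
stoichiometry = {"v1": 2, "v2": -1}  # with respect to variable "S"

# producers have positive, consumers negative stoichiometry
producers = fluxes[[k for k, v in stoichiometry.items() if v > 0]]
consumers = fluxes[[k for k, v in stoichiometry.items() if v < 0]]

# scaled=True: multiply each flux by its stoichiometry to get the net effect
scaled_producers = producers * pd.Series(
    {k: v for k, v in stoichiometry.items() if v > 0}
)
print(scaled_producers["v1"].tolist())  # [2.0, 1.6]
```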

- Python
Published by marvinvanaalst 6 months ago

mxlpy - v0.24.0

Unify protocol time course usage

Use the same names and semantics for protocols and protocol time courses in all respective modules:

  • simulator
  • scan
  • mc
  • carousel
  • fit

This renames the following functions:

| Old | New |
| --- | --- |
| scan.time_course_over_protocol | scan.protocol_time_course |
| mc.time_course_over_protocol | mc.protocol_time_course |

Export models to TypeScript

```python
from mxlpy.meta import generate_model_code_ts

generate_model_code_ts(model)
```

Overwrite symbolic expression of functions

This is especially useful in cases where the function source cannot be parsed.

```python
import sympy

from mxlpy import Model
from mxlpy.meta import generate_model_code_py

model = (
    Model()
    .add_variable("x", 1.0)
    .add_reaction("v1", fn=lambda x: x**2, args=["x"], stoichiometry={"x": -1})
)

generate_model_code_py(model, custom_fns={"v1": sympy.Symbol("x") ** 2})
```

Minor changes and improvements

  • Improves SBML export
  • Uses the Wadler-Lindig pretty printer on more types
  • Adds documentation for LLMs

Published by marvinvanaalst 7 months ago

mxlpy - v0.23.0

Initial assignment

This release extends derived initial conditions by also allowing parameters to have initial assignments, and deprecates the old syntax. To avoid confusion between derived parameters and initial assignments, the InitialAssignment type was introduced, replacing Derived in this context.

```python
from mxlpy import InitialAssignment

(
    Model()
    .add_parameters(
        {
            "k1": 0.1,
            "k2": InitialAssignment(fn=fns.twice, args=["k1"]),
        }
    )
    .add_variables(
        {
            "v1": 0.1,
            "v2": InitialAssignment(fn=fns.proportional, args=["k2", "v1"]),
        }
    )
)
```

Prior uses of Derived for derived initial conditions (introduced in release 0.19) are now deprecated:

```python
(
    Model().add_variables(
        {
            "x": 1.0,
            "y": Derived(fn=fns.twice, args=["x"]),  # this doesn't work anymore!
        }
    )
)
```

Unit checking

You can check whether the units of your parameters and variables match the ones in derived values and reactions using check_units:

```python
(
    model
    .check_units(time_unit=units.second)
    .report()
)
```

Diffrax support

Support for using the diffrax solver suite as an additional set of integrators.

Improved codegen

Massively improved parsing of Python functions.

SBML support

Massively increased support for advanced SBML constructs.

Published by marvinvanaalst 8 months ago

mxlpy - v0.22.0

Units

This release introduces units as optional meta information, which can be attached to variables, parameters, reactions and derived values.

```python
(
    Model()
    .add_parameter("k_0", 1)  # without unit
    .add_parameter("k_1", 1, unit=units.mmol_s)  # with unit
    .add_parameters(
        {
            "k_2": 1.0,  # without unit
            "k_3": Parameter(1, unit=units.mmol_s),  # with unit
        }
    )
)
```

To enable rich display by default, the property of each of these model components (e.g. model.variables) has been replaced by a table view

```python
model.parameters
```

|     | value | unit |
|:----|------:|:-----|
| k_0 | 1 | |
| k_1 | 1 | $\frac{\text{m}\text{mol}}{\text{s}}$ |
| k_2 | 1 | |
| k_3 | 1 | $\frac{\text{m}\text{mol}}{\text{s}}$ |

In order to access these components directly, there are now dedicated get_raw_* methods:

  • get_raw_derived
  • get_raw_parameters
  • get_raw_reactions
  • get_raw_readouts
  • get_raw_stoichiometries_of_variable
  • get_raw_surrogates
  • get_raw_variables

Reworked protocol and protocol time course

Simulator.simulate_over_protocol has been renamed to Simulator.simulate_protocol and Simulator.simulate_protocol_time_course has been introduced, keeping consistency with Simulator.simulate and Simulator.simulate_time_course.

```python
# Normal protocol
Simulator(model).simulate_protocol(protocol)
Simulator(model).simulate_protocol(protocol, time_points_per_step=10)
```

Protocol time courses return values at both the protocol points and all requested time points.

```python
# Protocol time course
(
    Simulator(model)
    .simulate_protocol_time_course(
        protocol,
        time_points=np.linspace(0, 6, 101, dtype=float),
    )
)
```

As protocols are interpreted relative to the current simulation time, but time points absolutely, the convenience argument time_points_as_relative was added. This is only ever required when simulating protocols after other simulations.

```python
(
    Simulator(model)
    .simulate(6)
    .simulate_protocol_time_course(
        protocol,
        time_points=np.linspace(0, 6, 101, dtype=float),
        time_points_as_relative=True,
    )
)
```

The fitting function fit.time_course_over_protocol has been renamed to fit.protocol_time_course to reflect this change as well. Here, the correct data points are chosen automatically for the fitting procedure.

```python
(
    fit.protocol_time_course(
        model,
        p0=p0,
        data=data,
        protocol=protocol,
    )
)
```

Unification of get_args and get_dependent

The model.get_args and model.get_dependent methods have been combined into model.get_args, which now takes boolean arguments with the default selection given below.

```python
model.get_args(
    include_time=True,
    include_variables=True,
    include_parameters=True,
    include_derived_parameters=True,
    include_derived_variables=True,
    include_reactions=True,
    include_surrogate_outputs=True,
    include_readouts=False,
)
```

The same is true for model.get_args_time_course and model.get_dependent_time_course.

Placeholders for unparsable functions

The report and meta.codegen_latex functions will now insert placeholder variables for functions that cannot be parsed instead of failing.

```python
tex = to_tex_export(Model().add_derived("d1", bad_fn, args=[]))
print(tex.export_derived(long_name_cutoff=10))
```

will return

```latex
\begin{align*}
\mathrm{d1} &= \textcolor{red}{d1} \\
\end{align*}
```

This allows the user to fill in those parts by hand.
This is not the case for automatic code generation as in meta.codegen_model, meta.generate_model_code_py and generate_model_code_rs, which will throw errors instead to avoid faulty pipelines.

Free parameters in codegen

To enable further analyses like parameter scans of exported models, there now exists an optional free_parameters keyword argument that causes those parameters to become function arguments instead of being hard-coded into the exported function.

```python
print(generate_model_code_py(get_linear_chain_2v(), free_parameters=["k1"]))
```

will produce

```python
from collections.abc import Iterable


def model(time: float, variables: Iterable[float], k1: float) -> Iterable[float]:
    x, y = variables
    k2 = 2.0
    k3 = 1.0
    v1 = k1
    v2 = k2 * x
    v3 = k3 * y
    dxdt = v1 - v2
    dydt = v2 - v3
    return dxdt, dydt
```
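The exported function can then be evaluated directly, for example in a hand-rolled scan over k1 (the scan values here are made up):

```python
from collections.abc import Iterable


# same shape as the generated code above
def model(time: float, variables: Iterable[float], k1: float) -> Iterable[float]:
    x, y = variables
    k2 = 2.0
    k3 = 1.0
    v1 = k1
    v2 = k2 * x
    v3 = k3 * y
    return v1 - v2, v2 - v3


for k1 in (0.5, 1.0, 2.0):
    print(k1, model(0.0, (1.0, 1.0), k1))  # e.g. k1=0.5 gives (-1.5, 1.0)
```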

Direct markdown view of reports

report.markdown now directly returns a MarkdownReport, which in IPython-compatible kernels will automatically be displayed as markdown, without an explicit call to IPython.display.Markdown.

So this code

```python
md = report.markdown(
    get_sir(),
    get_sird(),
)

# IPython display
Markdown(md)
```

simply becomes

```python
report.markdown(
    get_sir(),
    get_sird(),
)
```

Optional renaming and export of reports

The report names old and new can now be replaced by custom names using the respective keyword arguments:

```python
my_report = report.markdown(
    get_sir(),
    get_sird(),
    m1_name="SIR",  # optionally rename first argument
    m2_name="SIRD",  # optionally rename second argument
)
```

Reports can also be easily written to disk by supplying a pathlib.Path:

```python
my_report.write(Path("report.md"))
```

Published by marvinvanaalst 9 months ago

mxlpy - v0.21.0

Improvement of fit API

All the functions in the fit module now return a FitResult or None in case of failure.
This FitResult contains the best parameters as before, but also the loss / residual and a copy of the model with the parameters already set:

```python
@dataclass
class FitResult:
    model: Model
    best_pars: dict[str, float]
    loss: float
```

Reaction carousel

```python
from mxlpy.carousel import Carousel, ReactionTemplate

carousel = Carousel(
    get_sir(),
    {
        "infection": [
            ReactionTemplate(fn=fns.mass_action_2s, args=["s", "i", "beta"]),
            ReactionTemplate(
                fn=fns.michaelis_menten_2s,
                args=["s", "i", "beta", "km_bs", "km_bi"],
                additional_parameters={"km_bs": 0.1, "km_bi": 1.0},
            ),
        ],
        "recovery": [
            ReactionTemplate(fn=fns.mass_action_1s, args=["i", "gamma"]),
            ReactionTemplate(
                fn=fns.michaelis_menten_1s,
                args=["i", "gamma", "km_gi"],
                additional_parameters={"km_gi": 0.1},
            ),
        ],
    },
)
```

You can run time courses through the entire carousel (similar to the scan methods):

```python
carousel_time_course = carousel.time_course(np.linspace(0, 100, 101))
variables_by_model = carousel_time_course.get_variables_by_model()

fig, ax = plot.one_axes()
plot.line_mean_std(variables_by_model["s"].unstack().T, label="s", ax=ax)
plot.line_mean_std(variables_by_model["i"].unstack().T, label="i", ax=ax)
plot.line_mean_std(variables_by_model["r"].unstack().T, label="r", ax=ax)
ax.legend()
plot.show()
```

It is also possible to fit through the entire carousel, which allows you to select the specific set of reactions that best fits the data:

```python
res = fit.carousel_time_course(
    carousel,
    p0={...},
    data=data,
)

res.get_best_fit()
```

Published by marvinvanaalst 9 months ago

mxlpy - v0.20.0

Comparison routines

This release adds convenience routines for common comparisons between model variants.
For example, comparing the steady-states:

```python
from mxlpy import compare

ssc = compare.steady_states(get_sir(), get_sird())
fig, ax = ssc.plot_variables()
```

comparing time courses:

```python
from mxlpy import compare

tcc = compare.time_courses(
    get_sir(),
    get_sird(),
    time_points=np.linspace(0, 100, 101),
)
fig, ax = tcc.plot_variables_relative_difference()
```

and comparing protocol time courses:

```python
from mxlpy import compare, make_protocol

pc = compare.protocol_time_courses(
    get_sir(),
    get_sird(),
    protocol=make_protocol(
        [
            (10, {"beta": 0.2}),
            (10, {"beta": 1.0}),
            (80, {"beta": 0.2}),
        ]
    ),
)
fig, ax = pc.plot_variables_relative_difference()
```

Convenience functions

```python
model.get_unused_parameters()  # {"p1"}

model.get_stoichiometries_of_variable("variable_name")  # {"v1": -1, "v2": 1}

model.get_raw_stoichiometries_of_variable("variable_name")  # {"v1": -1, "v2": Derived(...)}
```

Report functions

Reports now include additional information about the number of model components (variables, parameters, etc.) and whether the model contains unused parameters.

LaTeX reports

The format of LaTeX exports is greatly improved, with automatic replacement of long variable names to avoid equations overflowing the page.

Solved issues

#46

Published by marvinvanaalst 9 months ago

mxlpy - v0.19.0

Quasi steady-state surrogates

You can now add quasi-steady state surrogates that take multiple inputs and return multiple values.
In that sense, they are extensions of derived parameters and variables.

```python
from mxlpy import Model
from mxlpy.surrogates import qss


def distribute(s: float) -> tuple[float, float]:
    return s / 3, s * 2 / 3


m = Model()
m.add_variables({"a": 1.0})
m.add_surrogate(
    "distribute",
    qss.Surrogate(
        model=distribute,
        args=["a"],
        outputs=["a1", "a2"],
    ),
)
m.get_dependent()
```

yields

```
a     1.000000
a1    0.333333
a2    0.666667
```

Derived initial conditions

```python
m = Model()
m.add_variables(
    {
        "x": 1.0,
        "y": Derived(fn=fns.twice, args=["x"]),
    }
)
m.get_initial_conditions()
```

yields

{'x': 1.0, 'y': 2.0}

Initial conditions can also be derived from derived parameters / variables

```python
m = Model()
m.add_variables(
    {
        "x": 1.0,
        "y": Derived(fn=fns.twice, args=["d1"]),
    }
)
m.add_derived(
    "d1",
    fn=fns.twice,
    args=["x"],
)
```

or from rates

```python
m = Model()
m.add_variables(
    {
        "x": 1.0,
        "y": Derived(fn=fns.twice, args=["v1"]),
    }
)
m.add_reaction(
    "v1",
    fn=fns.twice,
    args=["x"],
    stoichiometry={"x": -1, "y": 1},
)
```

Data references

You can now add data references to your model, which can be used like any other model component and passed as arguments to, for example, derived parameters / variables and reactions.

```python
import pandas as pd

from mxlpy import Model


def average(light: pd.Series) -> float:
    return light.mean()


lights = pd.Series(
    data={"400nm": 200, "500nm": 300, "600nm": 400},
    dtype=float,
)

m = Model()
m.add_data("light", lights)
m.add_derived("average_light", average, args=["light"])
m.get_dependent()
```

yields

average_light 300.0
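The derived value is plain pandas underneath, so the averaging itself can be checked in isolation, independent of the model machinery:

```python
import pandas as pd


def average(light: pd.Series) -> float:
    # mean over all wavelength entries
    return light.mean()


lights = pd.Series({"400nm": 200, "500nm": 300, "600nm": 400}, dtype=float)
print(average(lights))  # 300.0
```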

Keras support and unification of surrogate / npe / nn submodule architecture

We now directly support Keras in addition to PyTorch as libraries for creating neural posterior estimators and surrogates.
For this we unified the namespaces of the npe, surrogates and nn modules respectively:

| Old | New |
| --- | --- |
| npe.TorchSteadyState | npe.torch.SteadyState |
| npe.TorchTimeCourse | npe.torch.TimeCourse |
| npe.TorchSteadyStateTrainer | npe.torch.SteadyStateTrainer |
| npe.TorchTimeCourseTrainer | npe.torch.TimeCourseTrainer |
| npe.train_torch_steady_state | npe.torch.train_steady_state |
| npe.train_torch_time_course | npe.torch.train_time_course |
| | npe.keras.SteadyState |
| | npe.keras.TimeCourse |
| | npe.keras.SteadyStateTrainer |
| | npe.keras.TimeCourseTrainer |
| | npe.keras.train_steady_state |
| | npe.keras.train_time_course |

| Old | New |
| --- | --- |
| surrogates.TorchSurrogate | surrogates.torch.Surrogate |
| surrogates.TorchTrainer | surrogates.torch.Trainer |
| surrogates.train_torch | surrogates.torch.train |
| surrogates.PolySurrogate | surrogates.poly.Surrogate |
| surrogates.train_poly | surrogates.poly.train |
| | surrogates.qss.Surrogate |
| | surrogates.keras.Surrogate |
| | surrogates.keras.Trainer |
| | surrogates.keras.train |

| Old | New |
| --- | --- |
| nn.torch.LSTM | nn.torch.LSTM |
| nn.torch.MLP | nn.torch.MLP |
| nn.torch.train | nn.torch.train |
| | nn.keras.LSTM |
| | nn.keras.MLP |
| | nn.keras.train |

Published by marvinvanaalst 10 months ago

mxlpy - v0.18.0

Enables fitting protocol time courses

```python
fit.time_course_over_protocol(
    model_fn(),
    p0={"k2": 2.0, "k3": 1.0},
    data=data,
    protocol=protocol,
    time_points_per_step=10,
)
```

Enables customising the residual function for .fit functions

All functions in the .fit module can now use a loss function of the type:

```python
type LossFn = Callable[
    [
        pd.DataFrame | pd.Series,
        pd.DataFrame | pd.Series,
    ],
    float,
]
```

E.g. you can define a custom loss function like this:

```python
def rmse(
    y_pred: pd.DataFrame | pd.Series,
    y_true: pd.DataFrame | pd.Series,
) -> float:
    """Calculate root mean square error between model and data."""
    return np.sqrt(np.mean(np.square(y_pred - y_true)))
```

And then use it like this:

```python
fit.steady_state(..., loss_fn=rmse)
```
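Since such a loss function is plain numpy/pandas code, it can be sanity-checked in isolation; the series values below are made up:

```python
import numpy as np
import pandas as pd


def rmse(y_pred, y_true):
    """Root mean square error between model output and data."""
    return np.sqrt(np.mean(np.square(y_pred - y_true)))


y_pred = pd.Series([1.0, 2.0, 3.0])
y_true = pd.Series([1.0, 2.0, 5.0])
print(rmse(y_pred, y_true))  # sqrt(4/3) ≈ 1.1547
```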

Direct styling of plots

```python
plot.lines(..., color="C0", linestyle="dashed", linewidth=3)
```

Issues fixed

#28, #29, #40, #41

Published by marvinvanaalst 10 months ago

mxlpy - v0.17.0

Changes:

Made naming of surrogate and neural posterior estimation classes and functions more consistent with the remainder of mxlpy.

| Old | New |
| --- | --- |
| surrogates.PolySurrogate | surrogates.Polynomial |
| surrogates.TorchSurrogate | surrogates.Torch |
| | surrogates.TorchTrainer |
| surrogates.train_polynomial_surrogate | surrogates.train_polynomial |
| surrogates.train_torch_surrogate | surrogates.train_torch |

| Old | New |
| --- | --- |
| npe.TorchSSEstimator | npe.TorchSteadyState |
| | npe.TorchSteadyStateTrainer |
| npe.TorchTimeCourseEstimator | npe.TorchTimeCourse |
| | npe.TorchTimeCourseTrainer |
| npe.train_torch_ss_estimator | npe.train_torch_steady_state |
| npe.train_torch_time_course_estimator | npe.train_torch_time_course |

Extended surrogate use case for both derived values as well as reactions

Surrogates can now be defined to output both derived values as well as reactions.

Note that you therefore need to define the outputs of a surrogate explicitly.

Using a surrogate as a derived value:

```python
m = Model()
m.add_variable("x", 1.0)
m.add_surrogate(
    "surrogate",
    surrogates.Polynomial(
        model=Polynomial(coef=[2]),
        args=["x"],
        outputs=["y"],  # new: required to define outputs; no stoichiometries defined!
    ),
)
m.add_derived("z", fns.add, args=["x", "y"])  # use surrogate output as derived value
```

Using a surrogate as a reaction:

```python
m = Model()
m.add_variable("x", 1.0)
m.add_surrogate(
    "surrogate",
    surrogates.Polynomial(
        model=Polynomial(coef=[2]),
        args=["x"],
        outputs=["v1"],  # new: required to define outputs
        stoichiometries={"v1": {"x": -1}},  # use v1 as a reaction
    ),
)
```

If you have multiple outputs, it is perfectly fine to mix derived values and reactions among them.

Re-entrant training for surrogates and neural posterior estimators

```python
trainer = npe.TorchSteadyStateTrainer(
    features=npe_features,
    targets=npe_targets,
)
trainer.train(epochs=100)
trainer.get_loss().plot()  # check if loss is sufficiently small
trainer.train(epochs=200)  # continue training if not
trainer.get_estimator()  # get neural posterior estimator
```

Custom loss functions for torch training

```python
import torch


def mean_abs(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    return torch.mean(torch.abs(x - y))


trainer = surrogates.TorchTrainer(
    features=surrogate_features,
    targets=surrogate_targets,
    loss_fn=mean_abs,
)

trainer = npe.TorchSteadyStateTrainer(
    features=npe_features,
    targets=npe_targets,
    loss_fn=mean_abs,
)

trainer = npe.TorchTimeCourseTrainer(
    features=npe_features,
    targets=npe_targets,
    loss_fn=mean_abs,
)
```

Published by marvinvanaalst 10 months ago

mxlpy - v0.16.0

Allow initial values in scan and mc functions

All the scan and mc functions now allow iterating over both parameters and initial conditions in a single pd.DataFrame. To avoid confusion, the second parameter, previously called parameters, was renamed to to_scan.

Thus, functions like scan.steady_state(model, parameters) are now replaced by the following definitions:

```python
scan.steady_state(
    model: Model,
    *,
    to_scan: pd.DataFrame,
    y0: dict[str, float] | None = None,
    ...
)

scan.time_course(
    model: Model,
    *,
    to_scan: pd.DataFrame,
    time_points: Array,
    y0: dict[str, float] | None = None,
    ...
)

scan.time_course_over_protocol(
    model: Model,
    *,
    to_scan: pd.DataFrame,
    protocol: pd.DataFrame,
    time_points_per_step: int = 10,
    y0: dict[str, float] | None = None,
    ...
)

mc.steady_state(
    model: Model,
    *,
    mc_to_scan: pd.DataFrame,
    y0: dict[str, float] | None = None,
    ...
)

mc.time_course(
    model: Model,
    *,
    time_points: Array,
    mc_to_scan: pd.DataFrame,
    y0: dict[str, float] | None = None,
    ...
)

mc.time_course_over_protocol(
    model: Model,
    *,
    protocol: pd.DataFrame,
    mc_to_scan: pd.DataFrame,
    y0: dict[str, float] | None = None,
    ...
)

mc.scan_steady_state(
    model: Model,
    *,
    to_scan: pd.DataFrame,
    mc_to_scan: pd.DataFrame,
    y0: dict[str, float] | None = None,
    ...
)

mc.variable_elasticities(
    model: Model,
    *,
    mc_to_scan: pd.DataFrame,
    to_scan: list[str] | None = None,
    variables: dict[str, float] | None = None,
    ...
)

mc.parameter_elasticities(
    model: Model,
    *,
    mc_to_scan: pd.DataFrame,
    to_scan: list[str],
    variables: dict[str, float],
    ...
)

mc.response_coefficients(
    model: Model,
    *,
    mc_to_scan: pd.DataFrame,
    to_scan: list[str],
    variables: dict[str, float] | None = None,
    ...
)
```

To keep consistency, the naming was also adjusted in the mca submodule:

```python
mca.variable_elasticities(
    model: Model,
    *,
    to_scan: list[str] | None = None,
    variables: dict[str, float] | None = None,
    ...
)

mca.parameter_elasticities(
    model: Model,
    *,
    to_scan: list[str] | None = None,
    variables: dict[str, float] | None = None,
    ...
)

mca.response_coefficients(
    model: Model,
    *,
    to_scan: list[str] | None = None,
    variables: dict[str, float] | None = None,
    ...
)
```

Published by marvinvanaalst 10 months ago

mxlpy - v0.15.0

Speedup of jacobian matrix calculation

Usage of the Jacobian matrix for simulation has been sped up massively. As constructing it still takes some time, it is deactivated by default. Decide on a case-by-case basis whether to use it.

```python
Simulator(model, use_jacobian=False)  # default case
```

Removed need to explicitly name derived variables

Previously it was necessary to name derived stoichiometries explicitly; this requirement has been removed.

```python
# old code
stoichiometry={"x": Derived("x", fn=constant, args=["stoich"])}

# new code
stoichiometry={"x": Derived(fn=constant, args=["stoich"])}
```

Published by marvinvanaalst 10 months ago

mxlpy - v0.14.0

Disable using sympy jacobian for integration by default

Due to #33 we will disable using the sympy Jacobian for now, as it deteriorates integration performance instead of improving it.

Published by marvinvanaalst 10 months ago

mxlpy - v0.13.0

Increased flexibility for plot customisation using plot.context

```python
with plot.context(
    colors=["r", "g", "b"],
    line_width=2,
):
    fig, ax = plot.lines(data)
    ax.set(xlabel="time", ylabel="amplitude")
```

Several minor bugfixes

  • #31
  • #32

Published by marvinvanaalst 10 months ago

mxlpy - v0.12.0

Automatic jacobian construction for integration

mxlpy now automatically creates a Jacobian from the symbolic representation of the model (if possible) to speed up integrations.

Published by marvinvanaalst 10 months ago

mxlpy - v0.11.0

Markdown report

It is now possible to easily report differences between two models (or model versions) using mxlpy.report.markdown.

```python
report.markdown(
    get_sir(),
    get_sird(),
)
```

Variables

| Name | Old Value | New Value |
| ---- | --------- | --------- |
| d | - | 0.0 |

Parameters

| Name | Old Value | New Value |
| ---- | --------- | --------- |
| mu | - | 0.01 |

Reactions

| Name | Old Value | New Value |
| ---- | --------- | --------- |
| death | - | $i \mu$ |

Numerical differences of right hand side values

| Name | Old Value | New Value | Relative Change |
| ---- | --------- | --------- | --------------- |
| i | 0.01 | 0.01 | 12.5% |

This can easily be extended by user-defined analysis functions, which insert their figures directly into the markdown report as shown below.

```python
def analyse_concentrations(m1: Model, m2: Model, img_dir: Path) -> tuple[str, Path]:
    r_old = unwrap(Simulator(m1).simulate(100).get_result())
    r_new = unwrap(Simulator(m2).simulate(100).get_result())
    fig = plot_difference(r_old.variables, r_new.variables)
    fig.savefig((path := img_dir / "concentration.png"), dpi=300)
    plt.close(fig)
    return "## Comparison of largest changing", path


report.markdown(
    get_sir(),
    get_sird(),
    analyses=[analyse_concentrations],
)
```

Published by marvinvanaalst 10 months ago

mxlpy - v0.10.0

Numerical parameter identifiability

Adds support for numerical approximations of parameter identifiability using profile likelihood.

```python
errors_beta = profile_likelihood(
    sir(),
    data=data,
    parameter_name="beta",
    parameter_values=np.linspace(0.2 * 0.5, 0.2 * 1.5, 10),
    n_random=10,
)

fig, ax = plot.lines(errors_beta, legend=False)
ax.set(title="beta", xlabel="parameter value", ylabel="abs(error)")
plot.show()
```

Published by marvinvanaalst 10 months ago

mxlpy - v0.9.0

Increased flexibility of scan / mc / mca methods

It is now possible to directly inject a modified integrator into all scan, mc, and mca functions, e.g. to adjust tolerances more easily.

```python
from functools import partial

from mxlpy import Assimulo, mc, mca, scan

scan.steady_state(..., integrator=partial(Assimulo, atol=1e-8, rtol=1e-8))
scan.time_course(..., integrator=partial(Assimulo, atol=1e-8, rtol=1e-8))
scan.time_course_over_protocol(..., integrator=partial(Assimulo, atol=1e-8, rtol=1e-8))

mca.response_coefficients(..., integrator=partial(Assimulo, atol=1e-8, rtol=1e-8))

mc.steady_state(..., integrator=partial(Assimulo, atol=1e-8, rtol=1e-8))
mc.time_course(..., integrator=partial(Assimulo, atol=1e-8, rtol=1e-8))
mc.time_course_over_protocol(..., integrator=partial(Assimulo, atol=1e-8, rtol=1e-8))
mc.scan_steady_state(..., integrator=partial(Assimulo, atol=1e-8, rtol=1e-8))
mc.response_coefficients(..., integrator=partial(Assimulo, atol=1e-8, rtol=1e-8))
```

Published by marvinvanaalst 10 months ago

mxlpy - v0.8.0

Name change to MxlPy

  • to better reflect the mechanistic learning approach

Stabilized metaprogramming features

```python
from mxlpy.meta import (
    generate_latex_code,
    generate_model_code_py,
    generate_mxlpy_code,
)
```

Stabilized symbolic features

```python
symbolic_model = to_symbolic_model(model)
symbolic_model.jacobian()
```

$$\left[\begin{matrix}- 1.0 \beta i & - 1.0 \beta s & 0\\1.0 \beta i & 1.0 \beta s - 1.0 \gamma & 0\\0 & 1.0 \gamma & 0\end{matrix}\right]$$
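The matrix shown is the Jacobian of the standard SIR right-hand side. Independent of mxlpy's wrapper, it can be reproduced with plain sympy (the variable ordering s, i, r is assumed here):

```python
import sympy

s, i, r, beta, gamma = sympy.symbols("s i r beta gamma")

# SIR right-hand side
rhs = sympy.Matrix(
    [
        -beta * s * i,             # ds/dt
        beta * s * i - gamma * i,  # di/dt
        gamma * i,                 # dr/dt
    ]
)

# Jacobian with respect to the state variables (s, i, r)
J = rhs.jacobian([s, i, r])
print(J)
```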

```python
symbolic.check_identifiability(
    symbolic_model,
    outputs=[sympy.Symbol("i"), sympy.Symbol("r")],
)
```

Various minor bug fixes

Published by marvinvanaalst 10 months ago

mxlpy - v0.7.0

Refactored Simulator returns

The Simulator class now just returns a Result object on .get_result (or None on failure).
This means that all methods on Result are guaranteed to return a non-None value if the object was created, simplifying end-user code.
See #20 for the discussion.

```python
# Basic use case for most people, enabled by __iter__
variables, fluxes = Simulator(model).simulate(t_end).get_result()

# advanced use cases
res = Simulator(model).simulate(t_end).get_result()

# avoid additional calculations
variables = res.get_variables(
    include_derived=False,
    include_readouts=False,
)

# split by simulation commands
variables_unconcatenated = res.get_variables(concatenated=False)

# normalised to some factor
variables_normalised = res.get_variables(normalise=...)
```

Enables inserting variable values between simulations

This feature allows simulations of e.g. dose responses, where a dose is re-administered multiple times.

```python
result = unwrap(
    Simulator(m)
    .simulate_time_course(np.linspace(0, 24, 100))  # simulate
    .update_variable("X0", 0.66)  # or .update_variables({"X0": 0.66})
    .simulate_time_course(np.linspace(24, 48, 100))  # continue with updated variable
    .get_result()
)
```

Published by marvinvanaalst 10 months ago

mxlpy - v0.6.0

Enables using rates as an argument for other derived variables and reactions

Note that this uses the rate function and not the derivative (stoichiometries are not taken into account).

Example: note the usage of "v1" in the rate "v2":

```python
(
    Model()
    .add_variables({"x": 1.0, "y": 0.1})
    .add_reaction("v1", fns.constant, args=["x"], stoichiometry={"x": -1, "y": 1})
    .add_reaction("v2", fns.mass_action_1s, args=["v1", "y"], stoichiometry={"y": -1})
)
```

Example: note the usage of v1 in fraction_of_vmax:

```python
(
    Model()
    .add_variables({"s": 1.0})
    .add_parameters({"vmax": 1.0, "km": 0.1})
    .add_reaction(
        "v1",
        fns.michaelis_menten_1s,
        args=["s", "vmax", "km"],
        stoichiometry={"s": -1},
    )
    .add_derived("fraction_of_vmax", fns.div, args=["v1", "vmax"])
)
```

Published by marvinvanaalst 11 months ago

mxlpy - v0.5.0

Improved error messages for sorting derived values

Example for a missing dependency:

```python
(
    Model()
    .add_derived("d", fns.constant, args=["x"])
    .get_args()
)
```

```
MissingDependenciesError: Dependencies cannot be solved. Missing dependencies: d: ['x']
```

Example for a circular dependency:

```python
(
    Model()
    .add_derived("d1", fns.constant, args=["d2"])
    .add_derived("d2", fns.constant, args=["d1"])
    .get_args()
)
```

```
CircularDependencyError: Exceeded max iterations on sorting dependencies. Check if there are circular references. Missing dependencies: d2: {'d1'} d1: {'d2'}
```
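A minimal sketch of the kind of iterative dependency sort behind these errors (this is illustrative, not mxlpy's actual implementation; names are made up):

```python
def sort_dependencies(deps: dict[str, set[str]], available: set[str]) -> list[str]:
    """Repeatedly resolve entries whose dependencies are all available.

    Raises ValueError when no progress can be made, which covers both
    missing and circular dependencies.
    """
    order: list[str] = []
    remaining = dict(deps)
    while remaining:
        resolvable = [k for k, v in remaining.items() if v <= available]
        if not resolvable:
            raise ValueError(f"Cannot resolve dependencies: {remaining}")
        for k in resolvable:
            order.append(k)
            available = available | {k}
            del remaining[k]
    return order


print(sort_dependencies({"d1": {"x"}, "d2": {"d1"}}, available={"x"}))  # ['d1', 'd2']
```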

Published by marvinvanaalst 11 months ago

mxlpy - v0.4.0

Added symbolic models & structural identifiability analysis by re-implementing StrikePy, which is based on the STRIKE-GOLDD MATLAB package.

Published by marvinvanaalst 11 months ago

mxlpy -

Published by marvinvanaalst 11 months ago

mxlpy - 0.2.0

Published by marvinvanaalst about 1 year ago