https://github.com/jcbayley/coatopt

optimising coating layers


Science Score: 26.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (13.5%) to scientific vocabulary
Last synced: 5 months ago

Repository

optimising coating layers

Basic Info
  • Host: GitHub
  • Owner: jcbayley
  • Language: Python
  • Default Branch: main
  • Size: 1.78 MB
Statistics
  • Stars: 1
  • Watchers: 1
  • Forks: 0
  • Open Issues: 3
  • Releases: 0
Created over 1 year ago · Last pushed 6 months ago
Metadata Files
Readme

README.md

CoatOpt: Multi-Objective Coating Optimization

A reinforcement learning framework for optimizing gravitational wave detector mirror coatings using PC-HPPO (Parameter Constrained Hybrid Proximal Policy Optimization).

Overview

CoatOpt uses multi-objective reinforcement learning to design optimal coating stacks that simultaneously optimize:
- Reflectivity: Maximize mirror reflectance
- Thermal Noise: Minimize Brownian thermal noise
- Absorption: Minimize optical absorption

The algorithm maintains a Pareto front of non-dominated solutions, allowing exploration of trade-offs between competing objectives.
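The non-dominated filtering behind a Pareto front can be sketched in a few lines. This is a generic toy illustration, not CoatOpt's implementation; it assumes all objectives have been converted to minimization, e.g. (1 − reflectivity, thermal noise, absorption):

```python
def dominates(a, b):
    """True if objective vector a is at least as good as b everywhere
    (all objectives minimized) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated objective vectors."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

# Hypothetical objective vectors: (1 - reflectivity, thermal noise, absorption).
points = [(0.1, 0.5, 0.2), (0.2, 0.3, 0.1), (0.3, 0.6, 0.3)]
front = pareto_front(points)
# The third point is dominated by the second, so only the first two survive;
# neither of those dominates the other, which is the trade-off the front captures.
```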

Installation

CoatOpt uses uv as the package manager. Python 3.9+ is required.

Option 1: Using uv (Recommended)

```bash
# Install uv if you don't have it
curl -LsSf https://astral.sh/uv/install.sh | sh

# Clone and set up the project
git clone https://github.com/jcbayley/coatopt.git
cd coatopt
uv sync
```

Option 2: Traditional Setup

```bash
# Create a virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install the package
pip install -e .
```

Quick Start

Training a Model (CLI)

```bash
# Using uv (recommended)
uv run coatopt-train -c src/coatopt/config/default.ini --save-plots

# Or with an activated environment
source .venv/bin/activate
coatopt-train -c src/coatopt/config/default.ini --save-plots
```

Interactive GUI

```bash
# Launch the GUI interface
uv run coatopt-ui
```

Evaluation Only

```bash
# Run evaluation on a trained model
uv run coatopt-train -c src/coatopt/config/default.ini --evaluate -n 1000 --save-plots
```

Start Fresh Training

```bash
# Retrain from scratch (ignore existing checkpoints)
uv run coatopt-train -c src/coatopt/config/default.ini --retrain
```

Configuration

Key parameters in `default.ini`:

Data Section:
- `n_layers`: Number of coating layers (default: 8)
- `optimise_parameters`: Objectives to optimize (`reflectivity`, `thermal_noise`, `absorption`)
- `min_thickness` / `max_thickness`: Layer thickness bounds

Network Section:
- `pre_network_type`: Feature extraction network (`lstm`, `linear`, `attn`)
- `hidden_size`: Network hidden dimensions
- `hyper_networks`: Enable hypernetwork architecture

Training Section:
- `n_iterations`: Training iterations (default: 8000)
- `lr_*_policy`: Learning rates for the discrete and continuous policies
- `clip_ratio`: PPO clipping parameter
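Putting the parameters above together, a `default.ini` might look roughly like this. This is a hypothetical fragment for orientation only: the section names, key spellings beyond those listed, and all values are assumptions, so check the shipped `src/coatopt/config/default.ini` for the real layout:

```ini
[Data]
n_layers = 8
optimise_parameters = reflectivity, thermal_noise, absorption
min_thickness = 1e-9
max_thickness = 1e-6

[Network]
pre_network_type = lstm
hidden_size = 128
hyper_networks = false

[Training]
n_iterations = 8000
lr_discrete_policy = 1e-4
lr_continuous_policy = 1e-4
clip_ratio = 0.2
```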

Documentation

For detailed usage instructions and examples:
- Basic Usage Guide: installation and getting started
- Configuration Reference: all configuration parameters
- Available Options: advanced features and options

Outputs

Training generates:
- Model checkpoints: neural network weights
- Training metrics: HDF5 files with training statistics
- Evaluation results: complete analysis data
- Plots: training progress and Pareto front visualizations

Algorithm Details

PC-HPPO-OML uses:
- Hierarchical action space: discrete material selection plus continuous thickness
- Multi-objective rewards: dynamic weight cycling and randomisation to explore the Pareto front
- Pareto front maintenance: non-dominated sorting to keep the set of non-dominated solutions
- LSTM or attention pre-networks: sequential processing of coating-layer information
- PPO updates: stable policy-gradient optimization with clipping
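The hierarchical action space can be illustrated without any RL machinery: each step picks a material from a discrete set, then a thickness from a bounded continuous range. This is a toy sketch, not CoatOpt's policy network; the material list and thickness bounds are made-up placeholders:

```python
import random

MATERIALS = ["SiO2", "Ta2O5"]                  # hypothetical discrete choices
MIN_THICKNESS, MAX_THICKNESS = 10e-9, 500e-9   # hypothetical bounds (metres)

def sample_action(rng):
    """One hierarchical action: a discrete material index followed by a
    continuous thickness drawn from the allowed range. A trained policy
    would replace both uniform draws with learned distributions."""
    material = rng.randrange(len(MATERIALS))
    thickness = rng.uniform(MIN_THICKNESS, MAX_THICKNESS)
    return material, thickness

def sample_stack(n_layers, seed=0):
    """Build a coating stack by sampling one action per layer."""
    rng = random.Random(seed)
    return [sample_action(rng) for _ in range(n_layers)]

stack = sample_stack(8)
```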

Development

Code Quality

This project includes pre-commit hooks for code formatting and linting:

```bash
# Install development dependencies
uv sync --extra dev

# Set up pre-commit hooks
uv run pre-commit install
```

Running Tests

```bash
# Run all tests
uv run pytest

# Run tests with coverage
uv run pytest --cov=src --cov-report=html
```

Requirements

Key dependencies, installed automatically:
- `torch>=2.0.0`: Neural network training
- `numpy`, `scipy`: Numerical computation
- `tmm`, `tmm_fast`: Transfer Matrix Method for optics
- `matplotlib`: Visualization
- `pandas`: Data handling
- `pymoo`: Multi-objective optimization utilities
- `mlflow`: Experiment tracking
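The `tmm` dependency implements the Transfer Matrix Method used to evaluate coating reflectivity. As a rough illustration of the underlying physics, here is a minimal normal-incidence TMM reflectance calculation in pure Python (stdlib only). The refractive indices are nominal SiO2/Ta2O5 values near 1064 nm and the whole routine is a sketch, not CoatOpt's or `tmm`'s code:

```python
import cmath

def reflectance(n_stack, d_stack, n_in, n_sub, lam):
    """Normal-incidence reflectance of a thin-film stack via the
    characteristic-matrix form of the Transfer Matrix Method.
    n_stack/d_stack: per-layer index and thickness (m), ordered from the
    incident side; n_in/n_sub: incident-medium and substrate indices."""
    m11, m12, m21, m22 = 1.0, 0.0, 0.0, 1.0  # start from the identity matrix
    for n, d in zip(n_stack, d_stack):
        delta = 2 * cmath.pi * n * d / lam   # phase thickness of the layer
        c, s = cmath.cos(delta), cmath.sin(delta)
        a11, a12, a21, a22 = c, 1j * s / n, 1j * n * s, c
        m11, m12, m21, m22 = (m11 * a11 + m12 * a21, m11 * a12 + m12 * a22,
                              m21 * a11 + m22 * a21, m21 * a12 + m22 * a22)
    b = m11 + m12 * n_sub
    c_tot = m21 + m22 * n_sub
    r = (n_in * b - c_tot) / (n_in * b + c_tot)
    return abs(r) ** 2

# Ten quarter-wave high/low pairs: reflectance approaches 1, which is why
# quarter-wave stacks are the classical baseline the optimizer competes with.
lam, n_lo, n_hi = 1064e-9, 1.45, 2.07
n_stack = [n_hi, n_lo] * 10
d_stack = [lam / (4 * n) for n in n_stack]
R = reflectance(n_stack, d_stack, n_in=1.0, n_sub=1.45, lam=lam)
```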

Owner

  • Login: jcbayley
  • Kind: user

GitHub Events

Total
  • Issues event: 6
  • Push event: 17
  • Public event: 1
  • Pull request event: 8
  • Create event: 4
Last Year
  • Issues event: 6
  • Push event: 17
  • Public event: 1
  • Pull request event: 8
  • Create event: 4

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 3
  • Total pull requests: 6
  • Average time to close issues: N/A
  • Average time to close pull requests: 1 minute
  • Total issue authors: 1
  • Total pull request authors: 1
  • Average comments per issue: 0.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 3
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 3
  • Pull requests: 6
  • Average time to close issues: N/A
  • Average time to close pull requests: 1 minute
  • Issue authors: 1
  • Pull request authors: 1
  • Average comments per issue: 0.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 3
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • jcbayley (3)
Pull Request Authors
  • jcbayley (6)
Top Labels
Issue Labels
bug (2)
Pull Request Labels