Organizing numerical model experiment output

https://github.com/darothen/experiment

Science Score: 10.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
    1 of 2 committers (50.0%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.5%) to scientific vocabulary

Keywords

climate, data, model, science
Last synced: 5 months ago

Repository

Organizing numerical model experiment output

Basic Info
  • Host: GitHub
  • Owner: darothen
  • License: MIT
  • Language: Python
  • Default Branch: master
  • Size: 142 KB
Statistics
  • Stars: 8
  • Watchers: 6
  • Forks: 3
  • Open Issues: 14
  • Releases: 0
Topics
climate, data, model, science
Created about 9 years ago · Last pushed over 8 years ago
Metadata Files
Readme License

README.md

experiment: Managing modeling experiment output

experiment is designed to help you manage your modeling/data analysis workflows using xarray.

Example Scenario

Suppose you've performed a set of climate model simulations with one particular model. In those simulations, you've looked at two emissions scenarios (a "high" and a "low" emissions case) and you've used three different values for some tuned parameter in the model (let's call them "x", "y", and "z"). Each simulation produces the same set of output tapes on disk, which you've conveniently arranged in the following hierarchical folder layout:

```
high_emis/
    /param_x
    /param_y
    /param_z
low_emis/
    /param_x
    /param_y
    /param_z
```

Each output file has a simple naming scheme which reflects the parameter choices; for instance, surface temperature output for one simulation is in a file named low.x.TS.nc under low_emis/param_x/.

The idea underpinning experiment is that it should be really easy to analyze this data, and you shouldn't have to spend time writing lots of boilerplate code to load and process your simulations. You went through the hassle of organizing your data in a logical manner (which you're doing for reproducibility anyway, right?) - why not leverage that organization to help you out?

Example Usage

experiment lets you describe how your data is organized on disk by defining Cases and an Experiment. In the example above, we have two Cases: an emissions scenario, and a set of values for a given tuning parameter. We record each one by calling Case with a short name (an alias), a long name, and a set of values:

```python
from experiment import Case

emis = Case("emis", "Emissions Scenario", ['low', 'high'])
param = Case("param", "Tuning Parameter", ['x', 'y', 'z'])
```

The values should correspond to how the output files (or folders) are labeled on disk, using any set of alphanumeric strings. For instance, if the parameter values were 1.5, 2.0, and 4.0, you could encode them as string versions of those numbers, or as something like "1p5", "2p0", and "4p0" if that's more convenient.
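
For instance, a hypothetical variant of the parameter Case above (not taken from the package's docs), with those numeric values encoded as strings:

```python
from experiment import Case

# Hypothetical variant of the Case above: the numeric values
# 1.5, 2.0, and 4.0 encoded as the strings used on disk.
param = Case("param", "Tuning Parameter", ['1p5', '2p0', '4p0'])
```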

A collection of Cases constitutes an Experiment. An Experiment defines where the data exists on disk, and uses simple Python format strings to define the naming schema for the directory structure and files. In our example, the case_path is a tree structure, "{emis}_emis/param_{param}", where the curly-braced parameters correspond to the short names of the Cases we previously defined. In each of these directories, we have files which look like "{emis}.{param}.___.nc". The "___" is a placeholder for some identifying label (usually a variable name, if you've saved your data in timeseries format, or a timestamp if in timeslice format), and the surrounding bits (including the ".") are an output_prefix and output_suffix, respectively.

Using this information, we can create an Experiment to access our data:

```python
from experiment import Experiment

my_experiment = Experiment(
    name='my_climate_experiment',
    cases=[emis, param],
    data_dir='/path/to/my/data',
    case_path="{emis}_emis/param_{param}",
    output_prefix="{emis}.{param}.",
    output_suffix=".nc",
)
```

my_experiment has useful helper methods which let you quickly construct paths to individual pieces of your dataset, or iterate over its different components.
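
The exact helper-method names aren't listed in this README, but here is a rough, standard-library-only sketch of the boilerplate they replace, using the naming schema defined above:

```python
import itertools
import os

# Hand-rolled equivalent of what the Experiment helpers automate:
# enumerate every (emis, param) combination and build the path to
# its surface-temperature ("TS") output file.
data_dir = '/path/to/my/data'
for e, p in itertools.product(['low', 'high'], ['x', 'y', 'z']):
    case_dir = "{emis}_emis/param_{param}".format(emis=e, param=p)
    filename = "{emis}.{param}.TS.nc".format(emis=e, param=p)
    print(os.path.join(data_dir, case_dir, filename))
```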

The real advantage that experiment provides is its flexibility in defining the paths to your data. You can use almost any naming/organizational scheme, such as:

  • a single folder with all the metadata contained in the filenames
  • hierarchical folders but incomplete (or missing) metadata in filenames
  • data stored in different places on disk or a cluster

In the last case, you could point an Experiment at an arbitrary case_path and build a symlinked hierarchy to your data.
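
A minimal sketch of what that might look like, assuming the case_path schema above; the scattered run directories here are hypothetical:

```python
import os

# Hypothetical locations of output scattered across a filesystem,
# keyed by (emis, param) case values.
scattered_runs = {
    ('low', 'x'): '/scratch/runs/run_0001',
    ('high', 'z'): '/archive/2016/run_0042',
}

data_dir = '/path/to/my/data'
for (e, p), src in scattered_runs.items():
    # Mirror the case_path schema, "{emis}_emis/param_{param}"
    dst = os.path.join(data_dir, "{}_emis".format(e), "param_{}".format(p))
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    os.symlink(src, dst)
```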

Loading data

The point behind having an Experiment object is to be able to quickly load your data. We can do that with the Experiment.load() function, which will return a dictionary of Datasets, each one indexed by a tuple of the case values corresponding to it.

```python
data = my_experiment.load("TS")
```
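
Each entry of the returned dictionary can then be pulled out by its case values; assuming the keys are ordered the same way as cases=[emis, param], that might look like:

```python
# Assumption: keys follow the order the Cases were passed in,
# i.e. (emis_value, param_value).
ts_low_x = data[('low', 'x')]
```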

This is useful for organizing your data for further analysis. You can pass a function via the preprocess kwarg, and it will be applied to each Dataset before it is loaded into memory. Optionally, you can also pass master=True to load(), which will concatenate the data along new dimensions into a "master" dataset containing all of your output. Preprocessing is applied before the datasets are concatenated, to reduce the memory overhead.
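
A sketch combining both options; take_global_mean here is a hypothetical user-supplied function, not part of experiment:

```python
# Hypothetical preprocessing: reduce each Dataset as it is read,
# before concatenation into the master dataset.
def take_global_mean(ds):
    return ds.mean(dim=['lat', 'lon'])

ts_master = my_experiment.load("TS", preprocess=take_global_mean, master=True)
```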

Saving Experiments

An Experiment can also be written to and read back from disk in .yml format. The example here would serialize to:

```yaml
# Sample Experiment configuration
name: my_climate_experiment
cases:
  emis:
    long_name: Emissions Scenario
    vals:
      - high
      - low
  param:
    long_name: Tuning Parameter
    vals: [x, y, z]
data_dir: /path/to/my/data
# Be sure to use single-quotes here so you don't have to
# escape the braces
case_path: '{emis}_emis/param_{param}'
output_prefix: '{emis}.{param}.'
output_suffix: '.nc'
...
```

which can be directly loaded into an Experiment via

```python
my_experiment = Experiment.load("my_experiment.yml")
```

Owner

  • Name: Daniel Rothenberg
  • Login: darothen
  • Kind: user
  • Location: Frederick, CO
  • Company: Waymo

Tech Lead @ Waymo | Weather/Climate Scientist | Pythonista | ex-Chief Scientist @ ClimaCell/Tomorrow.io | Formerly Postdoc Associate @ MIT EAPS/IDSS/CGCS

GitHub Events

Total
  • Watch event: 1
Last Year
  • Watch event: 1

Committers

Last synced: about 2 years ago

All Time
  • Total Commits: 22
  • Total Committers: 2
  • Avg Commits per committer: 11.0
  • Development Distribution Score (DDS): 0.091
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
  • darothen (d****n@m****u): 20 commits
  • Francesco Bartoli (f****i@g****t): 2 commits

Issues and Pull Requests

Last synced: over 1 year ago

All Time
  • Total issues: 14
  • Total pull requests: 5
  • Average time to close issues: 3 days
  • Average time to close pull requests: about 17 hours
  • Total issue authors: 3
  • Total pull request authors: 1
  • Average comments per issue: 0.79
  • Average comments per pull request: 2.0
  • Merged pull requests: 1
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • darothen (12)
  • TomNicholas (1)
  • grandey (1)
Pull Request Authors
  • francbartoli (5)

Dependencies

  • setup.py (pypi)