https://github.com/brsynth/icfree-ml

Design of experiments (DoE) and machine learning packages for the iCFree project

Science Score: 13.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.7%) to scientific vocabulary

Keywords

cell-free design-of-experiments latin-hypercube-sampling machine-learning
Last synced: 6 months ago

Repository

Design of experiments (DoE) and machine learning packages for the iCFree project

Basic Info
  • Host: GitHub
  • Owner: brsynth
  • License: MIT
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 5.48 MB
Statistics
  • Stars: 5
  • Watchers: 0
  • Forks: 2
  • Open Issues: 0
  • Releases: 24
Topics
cell-free design-of-experiments latin-hypercube-sampling machine-learning
Created about 4 years ago · Last pushed about 1 year ago
Metadata Files
Readme Changelog License

README.md

iCFree

iCFree is a Python-based program designed to automate the generation and execution of a Snakemake workflow for sampling and preparing instructions for laboratory experiments. The program includes components for generating samples, designing plates, and producing instructions for handling these plates.

Installation

  1. Install Conda:

    • Download the installer for your operating system from the Conda Installation page.
    • Follow the instructions on the page to install Conda. For example, on Windows, you would download the installer and run it. On macOS and Linux, you might use a command like: bash ~/Downloads/Miniconda3-latest-Linux-x86_64.sh
    • Follow the prompts on the installer to complete the installation.
  2. Install iCFree from conda-forge: conda install -c conda-forge icfree
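
To verify the installation, you can check that the package appears in the active Conda environment and that the module imports cleanly (a suggested sanity check, not part of the official instructions):

# List the installed icfree package and confirm the Python module imports
conda list icfree
python -c "import icfree"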

Usage

The main entry point of the program is the __main__.py file. You can run the program from the command line by providing the necessary arguments for each step of the workflow.

Basic Command

python -m icfree \
  --sampler_input_filename <input_file> \
  --sampler_nb_samples <number_of_samples> \
  --sampler_seed <seed> \
  --sampler_output_filename <output_file> \
  --plate_designer_input_filename <input_file> \
  --plate_designer_sample_volume <volume> \
  --plate_designer_default_dead_volume <dead_volume> \
  --plate_designer_num_replicates <replicates> \
  --plate_designer_well_capacity <capacity> \
  --plate_designer_start_well_src_plt <start_well_src> \
  --plate_designer_start_well_dst_plt <start_well_dst> \
  --plate_generat...

Components

Sampler

The sampler.py script generates Latin Hypercube Samples (LHS) for given components.

Usage

python icfree/sampler.py <input_file> <output_file> <num_samples> [--step <step_size>] [--seed <seed>]

Arguments
  • input_file: Input file path with components and their max values.
  • output_file: Output CSV file path for the samples.
  • num_samples: Number of samples to generate.
  • --step: Step size for creating discrete ranges (default: 2.5).
  • --seed: Seed for random number generation for reproducibility (optional).
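
For example, to reproduce the sampling step of the end-to-end example below (100 samples, seed 42, default step size). The file names are placeholders, and the two-column layout sketched in the comment is an assumption about the input format, not a documented specification:

# data/components.csv is assumed to list one component per row with its maximum value,
# e.g. a header "Component,maxValue" followed by rows such as "Mg-glutamate,4"
python icfree/sampler.py data/components.csv results/samples.csv 100 --step 2.5 --seed 42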

Plate Designer

The plate_designer.py script generates plates based on the sampled data.

Usage

python icfree/plate_designer.py <input_file> <sample_volume> [options]

Options
  • --default_dead_volume: Default dead volume.
  • --dead_volumes: Dead volumes for specific wells.
  • --num_replicates: Number of replicates.
  • --well_capacity: Well capacity.
  • --start_well_src_plt: Starting well for the source plate.
  • --start_well_dst_plt: Starting well for the destination plate.
  • --extra_wells: Extra wells to add to the plate.
  • --output_folder: Folder to save the output files.
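
For example, the following call mirrors the plate-designer values used in the end-to-end example below; mapping the wrapper's --plate_designer_* options onto the standalone script's options is an assumption for illustration:

python icfree/plate_designer.py results/samples.csv 10 \
  --default_dead_volume 2 \
  --num_replicates 3 \
  --well_capacity 200 \
  --start_well_src_plt A1 \
  --start_well_dst_plt B1 \
  --output_folder results/plates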

Instructor

The instructor.py script generates instructions for handling the generated plates.

Usage

python icfree/instructor.py <source_plate> <destination_plate> <output_instructions> [options]

Options
  • --max_transfer_volume: Maximum transfer volume.
  • --split_threshold: Threshold for splitting components.
  • --source_plate_type: Type of the source plate.
  • --split_components: Components to split.
  • --dispense_order: Comma-separated list of component names specifying the dispensing order.
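
For example (the plate file names and option values below are assumptions for illustration; adjust them to the files actually produced by the Plate Designer step):

python icfree/instructor.py results/plates/source_plate.csv results/plates/destination_plate.csv results/instructions.csv \
  --max_transfer_volume 10 \
  --split_threshold 5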

Learner

The Learner module carries out an active learning process to both train the model and explore the space of possible cell-free combinations.

Usage

python -m icfree.learner <data_folder> <parameter_file> <output_folder> [options]

Options
  • --name_list: a comma-separated list of column names identifying the label (y) columns; these are separated from the remaining feature (X) columns. (Default: Yield1,Yield2,Yield3,Yield4,Yield5)
  • --test: a flag that runs model validation; it is not required inside the active learning loop. If not set, the validation step is skipped.
  • --nb_rep NB_REP: the number of test repetitions used to validate model behavior; for each repetition, 80% of the data is randomly selected for training and 20% for testing. (Default: 100)
  • --flatten: a flag indicating whether to flatten the y data. If set, each repetition of the same experiment is treated independently, so identical X values with different y outputs are modeled separately. Otherwise, the average of y across repetitions is used as the modeled value.
  • --seed SEED: the random seed value used for reproducibility in random operations. (Default: 85)
  • --nb_new_data_predict: the number of new data points sampled from all possible cases. (Default: 1000)
  • --nb_new_data: the number of new data points selected from those generated; these are the points to be labeled after the active learning loop. nb_new_data_predict must be greater than nb_new_data to be meaningful. (Default: 50)
  • --parameter_step: the step size used to decrement the maximum predefined concentration sequentially. For example, if the maximum concentration is max, the candidate concentrations are max - 1 * parameter_step, max - 2 * parameter_step, max - 3 * parameter_step, and so on. Each concentration is a candidate for experimental testing; smaller steps yield more possible combinations to sample. (Default: 10)
  • --n_group: parameter for the cluster margin algorithm, specifying the number of groups into which generated data will be clustered. (Default: 15)
  • --km: parameter for the cluster margin algorithm, specifying the number of data points kept in the first selection. Ensure nb_new_data_predict > km > ks. (Default: 50)
  • --ks: parameter for the cluster margin algorithm, specifying the number of data points kept in the second selection; this plays the same role as nb_new_data. (Default: 20)
  • --plot: a flag to indicate whether to generate all plots for analysis visualization.
  • --save_plot: a flag to indicate whether to save all generated plots.
  • --verbose: flag to indicate whether to print all messages to the console.
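
For example, a run that keeps the documented defaults explicit (the data folder, parameter file, and output folder paths are placeholders):

python -m icfree.learner data/ data/parameters.csv results/learner \
  --name_list Yield1,Yield2,Yield3,Yield4,Yield5 \
  --nb_rep 100 \
  --nb_new_data_predict 1000 \
  --nb_new_data 50 \
  --seed 85 \
  --plot --save_plot --verbose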

Example

Here is an example of how to run the program with sample data:

python -m icfree \
  --sampler_input_filename data/components.csv \
  --sampler_nb_samples 100 \
  --sampler_seed 42 \
  --sampler_output_filename results/samples.csv \
  --plate_designer_input_filename results/samples.csv \
  --plate_designer_sample_volume 10 \
  --plate_designer_default_dead_volume 2 \
  --plate_designer_num_replicates 3 \
  --plate_designer_well_capacity 200 \
  --plate_designer_start_well_src_plt A1 \
  --plate_designer_start_well_dst_plt B1 \
  --plate_designer_output_folder results/plates \
  --instructor_max_transfer_volume...

License

This project is licensed under the MIT License. See the LICENSE file for details.

Authors

ChatGPT, OpenAI

Owner

  • Name: BioRetroSynth
  • Login: brsynth
  • Kind: organization

Our group is interested in synthetic biology and systems metabolic engineering in whole-cell and cell-free systems.

GitHub Events

Total
  • Create event: 7
  • Issues event: 4
  • Release event: 7
  • Watch event: 1
  • Issue comment event: 4
  • Push event: 44
  • Pull request event: 2
Last Year
  • Create event: 7
  • Issues event: 4
  • Release event: 7
  • Watch event: 1
  • Issue comment event: 4
  • Push event: 44
  • Pull request event: 2

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 2
  • Total pull requests: 1
  • Average time to close issues: 5 days
  • Average time to close pull requests: N/A
  • Total issue authors: 1
  • Total pull request authors: 1
  • Average comments per issue: 2.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 2
  • Pull requests: 1
  • Average time to close issues: 5 days
  • Average time to close pull requests: N/A
  • Issue authors: 1
  • Pull request authors: 1
  • Average comments per issue: 2.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • Mostafa-Mahdy (2)
  • guillaume-gricourt (1)
Pull Request Authors
  • tduigou (1)

Dependencies

.github/workflows/check.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v3 composite
.github/workflows/tag.yml actions
  • actions/checkout v2 composite
  • actions/create-release v1 composite
  • actions/download-artifact v2 composite
  • actions/upload-artifact v2 composite
  • ad-m/github-push-action master composite
  • conda-incubator/setup-miniconda v2 composite
  • mathieudutour/github-tag-action v5.6 composite
  • ruby/setup-ruby v1 composite
.github/workflows/test.yml actions
  • actions/checkout v3 composite
  • conda-incubator/setup-miniconda v2 composite
setup.py pypi
environment.yaml pypi