simglucose

A Type-1 Diabetes simulator implemented in Python for Reinforcement Learning purpose

https://github.com/jxx123/simglucose

Science Score: 36.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: ncbi.nlm.nih.gov
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (10.8%) to scientific vocabulary

Keywords

artificial-pancreas diabetes glucose-monitoring openai-gym python reinforcement-learning rllab simulation simulator simulator-controls

Keywords from Contributors

interactive serializer cycles packaging network-simulation shellcodes hacking autograding observability genomics
Last synced: 6 months ago

Repository

A Type-1 Diabetes simulator implemented in Python for Reinforcement Learning purpose

Basic Info
  • Host: GitHub
  • Owner: jxx123
  • License: mit
  • Language: Python
  • Default Branch: master
  • Size: 2.29 MB
Statistics
  • Stars: 267
  • Watchers: 18
  • Forks: 125
  • Open Issues: 23
  • Releases: 0
Topics
artificial-pancreas diabetes glucose-monitoring openai-gym python reinforcement-learning rllab simulation simulator simulator-controls
Created about 8 years ago · Last pushed 10 months ago
Metadata Files
Readme License

README.md

simglucose


A Type-1 Diabetes simulator implemented in Python for Reinforcement Learning purposes

This simulator is a Python implementation of the FDA-approved UVa/Padova Simulator (2008 version) for research purposes only. The simulator includes 30 virtual patients: 10 adolescents, 10 adults, and 10 children. There is documentation of the virtual patients' parameters.

HOW TO CITE: Jinyu Xie. Simglucose v0.2.1 (2018) [Online]. Available: https://github.com/jxx123/simglucose. Accessed on: Month-Date-Year.

Notice: simglucose no longer supports Python 3.7 and 3.8; please upgrade to Python >= 3.9. Thanks!

Announcement (08/20/2023): simglucose now supports gymnasium! Check examples/run_gymnasium.py for usage.

| Animation | CVGA Plot | BG Trace Plot | Risk Index Stats |
| :--- | :--- | :--- | :--- |
| animation screenshot | CVGA | BG Trace Plot | Risk Index Stats |


Main Features

  • Simulation environment follows OpenAI gym and rllab APIs. It returns observation, reward, done, info at each step, which means the simulator is "reinforcement-learning-ready".
  • Supports customized reward function. The reward function is a function of blood glucose measurements in the last hour. By default, the reward at each step is risk[t-1] - risk[t]. risk[t] is the risk index at time t defined in this paper.
  • Supports parallel computing. The simulator simulates multiple patients in parallel using pathos multiprocessing package (you are free to turn parallel off by setting parallel=False).
  • The simulator provides a random scenario generator (from simglucose.simulation.scenario_gen import RandomScenario) and a customized scenario generator (from simglucose.simulation.scenario import CustomScenario). The command-line user interface will guide you through the scenario settings.
  • The simulator provides the most basic basal-bolus controller for now. It provides very simple syntax to implement your own controller, like Model Predictive Control, PID control, reinforcement learning control, etc.
  • You can specify random seed in case you want to repeat your experiments.
  • The simulator will generate several plots for performance analysis after simulation. The plots include blood glucose trace plot, Control Variability Grid Analysis (CVGA) plot, statistics plot of blood glucose in different zones, risk indices statistics plot.
  • NOTE: animate and parallel cannot both be set to True on macOS. Most matplotlib backends on macOS are not thread-safe. Windows has not been tested; let me know the results if you try it out.
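As a pure-Python illustration of the default reward described above, the sketch below pairs a Kovatchev-style blood glucose risk function with the risk[t-1] - risk[t] difference. The constants are the published Kovatchev values; simglucose's exact implementation may differ, and the helper names are hypothetical.

```python
import math

def risk(bg):
    """Kovatchev-style blood glucose risk index (illustrative constants).

    bg is a blood glucose reading in mg/dL. The risk is near zero around
    ~112.5 mg/dL and grows as glucose moves into hypo- or hyperglycemia.
    """
    f = 1.509 * (math.log(bg) ** 1.084 - 5.381)
    return 10.0 * f ** 2

def default_reward(bg_last_hour):
    """Reward = risk[t-1] - risk[t]: positive when risk decreased this step."""
    return risk(bg_last_hour[-2]) - risk(bg_last_hour[-1])

# Moving from 180 mg/dL toward 140 mg/dL lowers risk, so the reward is positive.
print(default_reward([180, 140]) > 0)
```

Note how this reward shape pushes an agent toward the low-risk glucose region from either direction, rather than rewarding a fixed target value.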

Installation

It is highly recommended to use pip to install simglucose; follow this link to install pip.

Auto installation:

```bash
pip install simglucose
```

Manual installation:

```bash
git clone https://github.com/jxx123/simglucose.git
cd simglucose
```

If you have pip installed, then

```bash
pip install -e .
```

If you do not have pip, then

```bash
python setup.py install
```

If rllab (optional) is installed, the package will utilize some functionalities in rllab.

Note: there might be some minor differences between the auto-installed and manually installed versions. Use git clone and manual installation to get the latest version.

Quick Start

Use simglucose as a simulator and test controllers

Run the simulator user interface

```python
from simglucose.simulation.user_interface import simulate

simulate()
```

You are free to implement your own controller, and test it in the simulator. For example,

```python
from simglucose.simulation.user_interface import simulate
from simglucose.controller.base import Controller, Action


class MyController(Controller):
    def __init__(self, init_state):
        self.init_state = init_state
        self.state = init_state

    def policy(self, observation, reward, done, **info):
        '''
        Every controller must have this implementation!
        ----
        Inputs:
        observation - a namedtuple defined in simglucose.simulation.env. For
                      now, it only has one entry: blood glucose level measured
                      by CGM sensor.
        reward      - current reward returned by environment
        done        - True, game over. False, game continues
        info        - additional information as key word arguments,
                      simglucose.simulation.env.T1DSimEnv returns patient_name
                      and sample_time
        ----
        Output:
        action - a namedtuple defined at the beginning of this file. The
                 controller action contains two entries: basal, bolus
        '''
        self.state = observation
        action = Action(basal=0, bolus=0)
        return action

    def reset(self):
        '''
        Reset the controller state to initial state, must be implemented
        '''
        self.state = self.init_state


ctrller = MyController(0)
simulate(controller=ctrller)
```

These two examples can also be found in the examples/ folder.

In fact, you can specify a lot more simulation parameters through simulate:

```python
simulate(sim_time=my_sim_time,
         scenario=my_scenario,
         controller=my_controller,
         start_time=my_start_time,
         save_path=my_save_path,
         animate=False,
         parallel=True)
```
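For concreteness, here is a minimal sketch of plausible values for those keyword arguments. The variable names mirror the call above and are hypothetical; the meal list would be wrapped in a CustomScenario (see the Advanced Usage section) before being passed as the scenario.

```python
from datetime import datetime, timedelta

# Hypothetical values mirroring the keyword arguments above.
my_start_time = datetime.combine(datetime.now().date(),
                                 datetime.min.time())  # midnight today
my_sim_time = timedelta(days=1)   # simulate one day
my_save_path = './results'        # where result CSVs and plots are written
# A custom scenario is a list of (hour_of_day, meal_size_in_grams) tuples.
my_meals = [(7, 45), (12, 70), (18, 80)]
```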

OpenAI Gym usage

  • Using default reward

```python
import gym
from gym.envs.registration import register
from simglucose.simulation.scenario import CustomScenario
from datetime import datetime

# Register gym environment. By specifying kwargs,
# you are able to choose which patient or patients to simulate.
# patient_name must be 'adolescent#001' to 'adolescent#010',
# or 'adult#001' to 'adult#010', or 'child#001' to 'child#010'.
# It can also be a list of patient names.
# You can also specify a custom scenario or a list of custom scenarios.
# If you chose a list of patient names or a list of custom scenarios,
# every time the environment is reset, a random patient and scenario will be
# chosen from the list.
start_time = datetime(2018, 1, 1, 0, 0, 0)
meal_scenario = CustomScenario(start_time=start_time, scenario=[(1, 20)])

register(
    id='simglucose-adolescent2-v0',
    entry_point='simglucose.envs:T1DSimEnv',
    kwargs={'patient_name': 'adolescent#002',
            'custom_scenario': meal_scenario}
)

env = gym.make('simglucose-adolescent2-v0')

observation = env.reset()
for t in range(100):
    env.render(mode='human')
    print(observation)
    # Action in the gym environment is a scalar
    # representing the basal insulin, which differs from
    # the regular controller action outside the gym
    # environment (a tuple (basal, bolus)).
    # In the perfect situation, the agent should be able
    # to control the glucose only through basal instead
    # of asking patient to take bolus
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    if done:
        print("Episode finished after {} timesteps".format(t + 1))
        break
```

  • Customized reward function

```python
import gym
from gym.envs.registration import register


def custom_reward(BG_last_hour):
    if BG_last_hour[-1] > 180:
        return -1
    elif BG_last_hour[-1] < 70:
        return -2
    else:
        return 1


register(
    id='simglucose-adolescent2-v0',
    entry_point='simglucose.envs:T1DSimEnv',
    kwargs={'patient_name': 'adolescent#002',
            'reward_fun': custom_reward}
)

env = gym.make('simglucose-adolescent2-v0')

reward = 1
done = False

observation = env.reset()
for t in range(200):
    env.render(mode='human')
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    print(observation)
    print("Reward = {}".format(reward))
    if done:
        print("Episode finished after {} timesteps".format(t + 1))
        break
```

rllab usage

```python
from rllab.algos.ddpg import DDPG
from rllab.envs.normalized_env import normalize
from rllab.exploration_strategies.ou_strategy import OUStrategy
from rllab.policies.deterministic_mlp_policy import DeterministicMLPPolicy
from rllab.q_functions.continuous_mlp_q_function import ContinuousMLPQFunction
from rllab.envs.gym_env import GymEnv
from gym.envs.registration import register

register(
    id='simglucose-adolescent2-v0',
    entry_point='simglucose.envs:T1DSimEnv',
    kwargs={'patient_name': 'adolescent#002'}
)

env = GymEnv('simglucose-adolescent2-v0')
env = normalize(env)

policy = DeterministicMLPPolicy(
    env_spec=env.spec,
    # The neural network policy should have two hidden layers,
    # each with 32 hidden units.
    hidden_sizes=(32, 32)
)

es = OUStrategy(env_spec=env.spec)

qf = ContinuousMLPQFunction(env_spec=env.spec)

algo = DDPG(
    env=env,
    policy=policy,
    es=es,
    qf=qf,
    batch_size=32,
    max_path_length=100,
    epoch_length=1000,
    min_pool_size=10000,
    n_epochs=1000,
    discount=0.99,
    scale_reward=0.01,
    qf_learning_rate=1e-3,
    policy_learning_rate=1e-4
)
algo.train()
```

Advanced Usage

You can create the simulation objects, and run batch simulation. For example,

```python
from simglucose.simulation.env import T1DSimEnv
from simglucose.controller.basal_bolus_ctrller import BBController
from simglucose.sensor.cgm import CGMSensor
from simglucose.actuator.pump import InsulinPump
from simglucose.patient.t1dpatient import T1DPatient
from simglucose.simulation.scenario_gen import RandomScenario
from simglucose.simulation.scenario import CustomScenario
from simglucose.simulation.sim_engine import SimObj, sim, batch_sim
from datetime import timedelta
from datetime import datetime

# specify start_time as the beginning of today
now = datetime.now()
start_time = datetime.combine(now.date(), datetime.min.time())

# --------- Create Random Scenario --------------
# Specify results saving path
path = './results'

# Create a simulation environment
patient = T1DPatient.withName('adolescent#001')
sensor = CGMSensor.withName('Dexcom', seed=1)
pump = InsulinPump.withName('Insulet')
scenario = RandomScenario(start_time=start_time, seed=1)
env = T1DSimEnv(patient, sensor, pump, scenario)

# Create a controller
controller = BBController()

# Put them together to create a simulation object
s1 = SimObj(env, controller, timedelta(days=1), animate=False, path=path)
results1 = sim(s1)
print(results1)

# --------- Create Custom Scenario --------------
# Create a simulation environment
patient = T1DPatient.withName('adolescent#001')
sensor = CGMSensor.withName('Dexcom', seed=1)
pump = InsulinPump.withName('Insulet')

# custom scenario is a list of tuples (time, meal_size)
scen = [(7, 45), (12, 70), (16, 15), (18, 80), (23, 10)]
scenario = CustomScenario(start_time=start_time, scenario=scen)
env = T1DSimEnv(patient, sensor, pump, scenario)

# Create a controller
controller = BBController()

# Put them together to create a simulation object
s2 = SimObj(env, controller, timedelta(days=1), animate=False, path=path)
results2 = sim(s2)
print(results2)

# --------- batch simulation --------------
# Re-initialize simulation objects
s1.reset()
s2.reset()

# create a list of SimObj, and call batch_sim
s = [s1, s2]
results = batch_sim(s, parallel=True)
print(results)
```

Run analysis offline (examples/offline_analysis.py):

```python
from simglucose.analysis.report import report
import pandas as pd
from pathlib import Path

# get the path to the example folder
example_pth = Path(__file__).parent

# find all csv with pattern *#*.csv, e.g. adolescent#001.csv
result_filenames = list(example_pth.glob('results/2017-12-31_17-46-32/*#*.csv'))
patient_names = [f.stem for f in result_filenames]
df = pd.concat(
    [pd.read_csv(str(f), index_col=0) for f in result_filenames],
    keys=patient_names)
report(df)
```
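The glob-and-concat pattern above can be exercised on synthetic CSVs; the file names and columns below are made up for illustration, but the keys argument to pd.concat produces the same per-patient MultiIndex that report would receive.

```python
import tempfile
import pandas as pd
from pathlib import Path

# Create two fake per-patient result files matching the *#*.csv pattern.
tmp = Path(tempfile.mkdtemp())
for name in ['adolescent#001', 'adult#001']:
    pd.DataFrame({'Time': [0, 3], 'CGM': [120.0, 125.0]}).to_csv(
        tmp / f'{name}.csv', index=False)

files = sorted(tmp.glob('*#*.csv'))
patient_names = [f.stem for f in files]
# keys= stacks the frames under a patient-name index level.
df = pd.concat([pd.read_csv(f, index_col=0) for f in files],
               keys=patient_names)
print(df.index.get_level_values(0).unique().tolist())
```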

Release Notes

08/20/2023

  • Fixed numpy compatibility issues for risk index computation (thanks to @yihuicai).
  • Added Gymnasium support.
    • NOTE: the observation in the Gymnasium version is no longer a namedtuple with a CGM field. It is a numpy array instead (to be consistent with its space definition).
  • NOTE: Python 3.7 and 3.8 are no longer supported. Please upgrade to Python >= 3.9.
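A small shim illustrates the observation difference between the two API generations. The Observation namedtuple with a single CGM field matches the classic-gym description above; the helper name is hypothetical.

```python
import numpy as np
from collections import namedtuple

# Classic gym versions return a namedtuple with a single CGM field;
# the Gymnasium version returns a plain numpy array.
Observation = namedtuple('Observation', ['CGM'])

def cgm_value(obs):
    """Extract the CGM reading from either observation style."""
    if isinstance(obs, np.ndarray):
        return float(obs[0])   # Gymnasium: plain array
    return float(obs.CGM)      # classic gym: namedtuple with a CGM field

print(cgm_value(np.array([120.0])), cgm_value(Observation(CGM=98.0)))
```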

03/10/2021

  • Fixed some random seed issues.

5/27/2020

  • Added PIDController at simglucose/controller/pid_ctrller. There is an example at examples/run_pid_controller.py showing how to use it.

9/10/2018

  • Controller policy method gets access to all the current patient state through info['patient_state'].

2/26/2018

  • Support customized reward function.

1/10/2018

  • Added a workaround to select a patient when making a gym environment: register the gym environment by passing a patient_name kwarg.

1/7/2018

  • Added OpenAI gym support, use gym.make('simglucose-v0') to make the environment.
  • Known issue: patient name selection is not available in gym.make for now. The patient name has to be hard-coded in the constructor of simglucose.envs.T1DSimEnv.

Reporting issues

Shoot me any bugs, enhancement requests, or even discussion topics by creating issues.

How to contribute

The following instructions are adapted from the contribution guidelines of scikit-learn.

The preferred workflow for contributing to simglucose is to fork the main repository on GitHub, clone, and develop on a branch. Steps:

  1. Fork the project repository by clicking on the 'Fork' button near the top right of the page. This creates a copy of the code under your GitHub user account. For more details on how to fork a repository see this guide.

  2. Clone your fork of the simglucose repo from your GitHub account to your local disk:

```bash
$ git clone git@github.com:YourLogin/simglucose.git
$ cd simglucose
```

  3. Create a feature branch to hold your development changes:

```bash
$ git checkout -b my-feature
```

Always use a feature branch. It's good practice to never work on the master branch!

  4. Develop the feature on your feature branch. Add changed files using git add and then git commit files:

```bash
$ git add modified_files
$ git commit
```

to record your changes in Git, then push the changes to your GitHub account with:

```bash
$ git push -u origin my-feature
```

  5. Follow these instructions to create a pull request from your fork. This will email the committers.

(If any of the above seems like magic to you, please look up the Git documentation on the web, or ask a friend or another contributor for help.)

Owner

  • Name: Jinyu Xie
  • Login: jxx123
  • Kind: user
  • Location: Mountain View, CA

From control systems to machine learning to Artificial Intelligence.

GitHub Events

Total
  • Issues event: 3
  • Watch event: 43
  • Issue comment event: 10
  • Push event: 1
  • Pull request event: 5
  • Pull request review comment event: 1
  • Pull request review event: 1
  • Fork event: 15
Last Year
  • Issues event: 3
  • Watch event: 43
  • Issue comment event: 10
  • Push event: 1
  • Pull request event: 5
  • Pull request review comment event: 1
  • Pull request review event: 1
  • Fork event: 15

Committers

Last synced: about 2 years ago

All Time
  • Total Commits: 97
  • Total Committers: 5
  • Avg Commits per committer: 19.4
  • Development Distribution Score (DDS): 0.186
Past Year
  • Commits: 25
  • Committers: 3
  • Avg Commits per committer: 8.333
  • Development Distribution Score (DDS): 0.44
Top Committers
Name Email Commits
Jinyu Xie x****8@g****m 79
Yihui Cai c****n@g****m 6
Hannes Voß h****5@w****e 5
Chris c****n@g****t 5
dependabot[bot] 4****] 2
Committer Domains (Top 20 + Academic)
gmx.net: 1

Issues and Pull Requests

Last synced: over 1 year ago

All Time
  • Total issues: 42
  • Total pull requests: 42
  • Average time to close issues: 3 months
  • Average time to close pull requests: about 1 month
  • Total issue authors: 31
  • Total pull request authors: 10
  • Average comments per issue: 2.26
  • Average comments per pull request: 0.5
  • Merged pull requests: 33
  • Bot issues: 0
  • Bot pull requests: 6
Past Year
  • Issues: 8
  • Pull requests: 8
  • Average time to close issues: 8 days
  • Average time to close pull requests: 2 days
  • Issue authors: 6
  • Pull request authors: 2
  • Average comments per issue: 1.25
  • Average comments per pull request: 0.0
  • Merged pull requests: 8
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • sumanabasu (3)
  • jxx123 (3)
  • drozzy (2)
  • eseglo (2)
  • OneBirding (2)
  • Wenzhou-Lyu (2)
  • edger-asiimwe (2)
  • aacelik (2)
  • celikalp (1)
  • harispoljo (1)
  • yihuicai (1)
  • mariuszmatusiak (1)
  • AdeLouis (1)
  • mathmath-cyber (1)
  • PorkShoulderHolder (1)
Pull Request Authors
  • jxx123 (24)
  • dependabot[bot] (6)
  • hannesvoss (4)
  • Shurikal (3)
  • drozzy (3)
  • BillVanAntwerp (2)
  • edger-asiimwe (2)
  • anbraten (1)
  • mia-jingyi (1)
  • RobotPsychologist (1)
  • jparr721 (1)
Top Labels
Issue Labels
enhancement (4) bug (2)
Pull Request Labels
dependencies (6) bug (1)

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 1,090 last-month
  • Total dependent packages: 1
  • Total dependent repositories: 3
  • Total versions: 22
  • Total maintainers: 1
pypi.org: simglucose

A Type-1 Diabetes Simulator as a Reinforcement Learning Environment in OpenAI gym or rllab (python implementation of UVa/Padova Simulator)

  • Versions: 22
  • Dependent Packages: 1
  • Dependent Repositories: 3
  • Downloads: 1,090 Last month
Rankings
Forks count: 4.6%
Stargazers count: 5.1%
Average: 7.1%
Dependent packages count: 7.3%
Dependent repos count: 9.1%
Downloads: 9.3%
Maintainers (1)
Last synced: 6 months ago

Dependencies

.github/workflows/python-package.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
.github/workflows/python-publish.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite