pymdp

pymdp: A Python library for active inference in discrete state spaces - Published in JOSS (2022)

https://github.com/infer-actively/pymdp

Science Score: 95.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 9 DOI reference(s) in README and JOSS metadata
  • Academic publication links
    Links to: arxiv.org, joss.theoj.org
  • Committers with academic emails
    2 of 18 committers (11.1%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
    Published in Journal of Open Source Software

Keywords from Contributors

mesh parallel ode simulations distribution energy-system pde

Scientific Fields

Mathematics, Computer Science - 63% confidence
Last synced: 6 months ago

Repository

A Python implementation of active inference for Markov Decision Processes

Basic Info
  • Host: GitHub
  • Owner: infer-actively
  • License: MIT
  • Language: Python
  • Default Branch: master
  • Homepage:
  • Size: 14.8 MB
Statistics
  • Stars: 559
  • Watchers: 30
  • Forks: 113
  • Open Issues: 59
  • Releases: 6
Created about 6 years ago · Last pushed 6 months ago
Metadata Files
Readme Contributing License

README.md

A Python package for simulating Active Inference agents in Markov Decision Process environments. Please see our companion paper, published in the Journal of Open Source Software: "pymdp: A Python library for active inference in discrete state spaces", for an overview of the package and its motivation. For a more in-depth, tutorial-style introduction to the package and a mathematical overview of active inference in Markov Decision Processes, see the longer arXiv version of the paper.

This package is hosted on the infer-actively GitHub organization, which was built with the intention of hosting open-source active inference and free-energy-principle related software.

Most of the low-level mathematical operations are NumPy ports of their equivalent functions from the SPM implementation in MATLAB. We have benchmarked and validated most of these functions against their SPM counterparts.
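
To give a flavor of what these low-level operations look like, here is a plain-NumPy sketch (deliberately not using pymdp's own helper names) of one such operation: contracting a likelihood array with factorized state beliefs to obtain a predicted observation distribution.

```python
# Illustrative sketch only (plain NumPy, not pymdp's own helper functions):
# contract a likelihood array p(o | s1, s2) with factorized beliefs over
# two hidden-state factors to get a predicted observation distribution.
import numpy as np

A = np.random.rand(3, 4, 2)            # likelihood array: 3 observation levels, factors of size 4 and 2
A = A / A.sum(axis=0, keepdims=True)   # normalize so each p(o | s1, s2) column sums to 1
qs = [np.ones(4) / 4, np.ones(2) / 2]  # uniform beliefs over each hidden-state factor

# weighted sum over both hidden-state dimensions
expected_obs = np.einsum('ijk,j,k->i', A, qs[0], qs[1])
print(expected_obs)  # a distribution over the 3 observation levels (sums to 1)
```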

Status

(Badges: build status, PyPI version, Documentation Status, DOI)

pymdp in action

Here's a visualization of pymdp agents in action. One of the defining features of active inference agents is the drive to maximize "epistemic value" (i.e. curiosity). Equipped with such a drive in environments with uncertain yet disclosable hidden structure, active inference ultimately allows agents to simultaneously learn about the environment and maximize reward.
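
In the active inference literature this drive is usually formalized via the expected free energy of a policy; one common decomposition (a sketch of the standard form, not quoted from the pymdp paper) separates epistemic from pragmatic value:

```latex
% Expected free energy G of a policy \pi, decomposed into epistemic and
% pragmatic terms (standard form; minimizing G maximizes both kinds of value)
G(\pi) =
  -\underbrace{\mathbb{E}_{Q(o, s \mid \pi)}\big[\ln Q(s \mid o, \pi) - \ln Q(s \mid \pi)\big]}_{\text{epistemic value (expected information gain)}}
  -\underbrace{\mathbb{E}_{Q(o \mid \pi)}\big[\ln P(o)\big]}_{\text{pragmatic value (expected log preferences)}}
```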

The simulation below (see associated notebook here) demonstrates what might be called "epistemic chaining," where an agent (here, analogized to a mouse seeking food) forages for a chain of cues, each of which discloses the location of the subsequent cue in the chain. The final cue (here, "Cue 2") reveals the location of a hidden reward. This is similar in spirit to the "behavior chaining" used in operant conditioning, except that here, each successive action in the behavioral sequence doesn't need to be learned through instrumental conditioning. Rather, active inference agents will naturally forage the sequence of cues based on an intrinsic desire to disclose information. This ultimately leads the agent to the hidden reward source in as few moves as possible.

You can run the code behind simulating tasks like this one and others in the Examples section of the official documentation.


(Animations: "Cue 2 in Location 1, Reward on Top" and "Cue 2 in Location 3, Reward on Bottom")

Quick-start: Installation and Usage

In order to use pymdp to build and develop active inference agents, we recommend installing it with the package installer pip, which will install pymdp locally along with its dependencies. This can also be done in a virtual environment (e.g. with venv).

When pip installing pymdp, use the package name inferactively-pymdp:

```bash
pip install inferactively-pymdp
```

Once in Python, you can then directly import pymdp, its sub-packages, and functions.

```python
import pymdp
from pymdp import utils
from pymdp.agent import Agent

num_obs = [3, 5]        # observation modality dimensions
num_states = [3, 2, 2]  # hidden state factor dimensions
num_controls = [3, 1, 1]  # control state factor dimensions

A_matrix = utils.random_A_matrix(num_obs, num_states)       # create sensory likelihood (A matrix)
B_matrix = utils.random_B_matrix(num_states, num_controls)  # create transition likelihood (B matrix)

C_vector = utils.obj_array_uniform(num_obs)  # uniform preferences

# instantiate a quick agent using your A, B and C arrays
my_agent = Agent(A=A_matrix, B=B_matrix, C=C_vector)

# give the agent a random observation and get the optimized posterior beliefs
observation = [1, 4]  # a list specifying the indices of the observation, for each observation modality

qs = my_agent.infer_states(observation)  # get posterior over hidden states (a multi-factor belief)

# Do active inference
q_pi, neg_efe = my_agent.infer_policies()  # get the policy posterior and the (negative) expected free energy of each policy

action = my_agent.sample_action()  # sample an action

# ... and so on ...
```
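
To go beyond a single update, a typical usage pattern is an action-perception loop in which the agent repeatedly observes, infers, and acts. The sketch below is not part of the official quick-start: it rebuilds the same kind of randomly generated A/B/C model as above and stands in a dummy "environment" that just returns random observation indices, so it only illustrates the calling pattern.

```python
# A minimal action-perception loop (a sketch, not from the README): the
# "environment" here is a stand-in that samples random observation indices.
import numpy as np
from pymdp import utils
from pymdp.agent import Agent

num_obs = [3, 5]
num_states = [3, 2, 2]
num_controls = [3, 1, 1]

agent = Agent(
    A=utils.random_A_matrix(num_obs, num_states),
    B=utils.random_B_matrix(num_states, num_controls),
    C=utils.obj_array_uniform(num_obs),
)

for t in range(5):
    # stand-in environment: one random observation index per modality
    observation = [np.random.randint(dim) for dim in num_obs]

    qs = agent.infer_states(observation)    # posterior beliefs over hidden state factors
    q_pi, neg_efe = agent.infer_policies()  # policy posterior and (negative) expected free energies
    action = agent.sample_action()          # one action index per control factor

    print(f"t={t}: obs={observation}, action={action}")
```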

Getting started / introductory material

We recommend starting with the Installation/Usage section of the official documentation for the repository, which provides a series of useful pedagogical notebooks for introducing you to active inference and how to build agents in pymdp.

For new users of pymdp, we specifically recommend stepping through the following three Jupyter notebooks (which can also be run on Google Colab):

Special thanks to Beren Millidge and Daphne Demekas for their help in prototyping earlier versions of the Active Inference from Scratch tutorial, which were originally based on a grid world POMDP environment created by Alec Tschantz.

We also have (and are continuing to build) a series of notebooks that walk through active inference agents performing different types of tasks, such as the classic T-Maze environment and the newer Epistemic Chaining demo.

Contributing

This package is under active development. If you would like to contribute, please refer to the contributing guidelines.

If you would like to contribute to this repo, we recommend using venv and pip:

```bash
cd <path_to_repo_fork>
python3 -m venv env
source env/bin/activate
pip install -r requirements.txt
pip install -e ./  # This will install pymdp as a local dev package
```

You should then be able to run tests locally with pytest:

```bash
pytest test
```

Citing pymdp

If you use pymdp in your work or research, please consider citing our paper (open-access), published in the Journal of Open Source Software:

```
@article{Heins2022,
  doi = {10.21105/joss.04098},
  url = {https://doi.org/10.21105/joss.04098},
  year = {2022},
  publisher = {The Open Journal},
  volume = {7},
  number = {73},
  pages = {4098},
  author = {Conor Heins and Beren Millidge and Daphne Demekas and Brennan Klein and Karl Friston and Iain D. Couzin and Alexander Tschantz},
  title = {pymdp: A Python library for active inference in discrete state spaces},
  journal = {Journal of Open Source Software}
}
```

For a more in-depth, tutorial-style introduction to the package and a mathematical overview of active inference in Markov Decision Processes, you can also consult the longer arXiv version of the paper.

Authors

Owner

  • Name: infer-actively
  • Login: infer-actively
  • Kind: organization

JOSS Publication

pymdp: A Python library for active inference in discrete state spaces
Published
May 04, 2022
Volume 7, Issue 73, Page 4098
Authors
Conor Heins
Department of Collective Behaviour, Max Planck Institute of Animal Behavior, 78457 Konstanz, Germany, Centre for the Advanced Study of Collective Behaviour, 78457 Konstanz, Germany, Department of Biology, University of Konstanz, 78457 Konstanz, Germany, VERSES Research Lab, Los Angeles, California, USA
Beren Millidge
VERSES Research Lab, Los Angeles, California, USA, MRC Brain Networks Dynamics Unit, University of Oxford, Oxford, UK
Daphne Demekas
Department of Computing, Imperial College London, London, UK
Brennan Klein ORCID
VERSES Research Lab, Los Angeles, California, USA, Network Science Institute, Northeastern University, Boston, MA, USA, Laboratory for the Modeling of Biological and Socio-Technical Systems, Northeastern University, Boston, USA
Karl Friston
Wellcome Centre for Human Neuroimaging, Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
Iain D. Couzin
Department of Collective Behaviour, Max Planck Institute of Animal Behavior, 78457 Konstanz, Germany, Centre for the Advanced Study of Collective Behaviour, 78457 Konstanz, Germany, Department of Biology, University of Konstanz, 78457 Konstanz, Germany
Alexander Tschantz
VERSES Research Lab, Los Angeles, California, USA, Sussex AI Group, Department of Informatics, University of Sussex, Brighton, UK, Sackler Centre for Consciousness Science, University of Sussex, Brighton, UK
Editor
Elizabeth DuPre ORCID
Tags
active inference Markov Decision Process POMDP MDP Reinforcement Learning Artificial Intelligence Bayesian inference free energy principle

GitHub Events

Total
  • Create event: 11
  • Issues event: 41
  • Watch event: 94
  • Delete event: 9
  • Member event: 3
  • Issue comment event: 54
  • Push event: 43
  • Pull request review comment event: 13
  • Pull request review event: 28
  • Pull request event: 44
  • Fork event: 30
Last Year
  • Create event: 11
  • Issues event: 41
  • Watch event: 94
  • Delete event: 9
  • Member event: 3
  • Issue comment event: 56
  • Push event: 43
  • Pull request review comment event: 13
  • Pull request review event: 28
  • Pull request event: 44
  • Fork event: 30

Committers

Last synced: 7 months ago

All Time
  • Total Commits: 861
  • Total Committers: 18
  • Avg Commits per committer: 47.833
  • Development Distribution Score (DDS): 0.258 (see the check after this list)
Past Year
  • Commits: 4
  • Committers: 3
  • Avg Commits per committer: 1.333
  • Development Distribution Score (DDS): 0.5
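
The DDS figures above are consistent with defining the score as one minus the top committer's share of commits; a quick check under that assumed definition (the page itself does not state the formula):

```python
# Assumed definition (not stated on this page): DDS = 1 - top_committer_commits / total_commits
def dds(top_committer_commits: int, total_commits: int) -> float:
    return 1 - top_committer_commits / total_commits

print(round(dds(639, 861), 3))  # all-time: 639 of 861 commits by the top committer -> ~0.258
print(round(dds(2, 4), 3))      # past year: 2 of 4 commits by the top committer -> 0.5
```
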
Top Committers
Name Email Commits
conorheins c****s@g****m 639
dimarkov 5****v 95
alec-tschantz t****c@g****m 69
Conor Heins c****r@u****l 14
Tim Verbelen t****n@v****o 11
arun a****2@u****k 10
Beren b****k@g****m 5
Alessandro Muzzi c****i@v****o 3
Ran Wei r****i@v****i 2
dependabot[bot] 4****] 2
Alexander Kiefer a****r@A****e 2
Brennan Klein b****n@B****l 2
Daphne Demekas z****e@u****k 2
Leon Bovett l****t@m****m 1
Pietro Monticone 3****e 1
SWauthier 4****r 1
dimarkov o****e@g****m 1
mahault 4****t 1
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 80
  • Total pull requests: 152
  • Average time to close issues: about 2 months
  • Average time to close pull requests: 14 days
  • Total issue authors: 31
  • Total pull request authors: 26
  • Average comments per issue: 1.01
  • Average comments per pull request: 0.44
  • Merged pull requests: 121
  • Bot issues: 0
  • Bot pull requests: 3
Past Year
  • Issues: 42
  • Pull requests: 44
  • Average time to close issues: 24 days
  • Average time to close pull requests: 6 days
  • Issue authors: 15
  • Pull request authors: 13
  • Average comments per issue: 0.48
  • Average comments per pull request: 0.39
  • Merged pull requests: 28
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • conorheins (32)
  • Arun-Niranjan (10)
  • JohnBoik (4)
  • scott-carroll-verses-ai (3)
  • riddhipits (3)
  • seankmartin (2)
  • patrickmineault (2)
  • JmacCl (1)
  • xugami (1)
  • arthurgreef (1)
  • nnishio106 (1)
  • SamGijsen (1)
  • sunghwan87 (1)
  • mklingebiel (1)
  • vala1958 (1)
Pull Request Authors
  • conorheins (73)
  • tverbele (11)
  • Arun-Niranjan (8)
  • OzanCatalVerses (8)
  • toonvdm (7)
  • dimarkov (7)
  • ran-wei-verses (5)
  • BerenMillidge (3)
  • dependabot[bot] (3)
  • riddhipits (3)
  • aswinpaul (2)
  • leonbovett (2)
  • nikolamilovic-ft (2)
  • spetey (2)
  • LearnableLoopAI (2)
Top Labels
Issue Labels
enhancement (13) question (6) bug (6) cleanup (3) better_errors (3) documentation (3) help wanted (2) open (2) good first issue (2) tutorial_suggestion (2)
Pull Request Labels
dependencies (3)

Packages

  • Total packages: 3
  • Total downloads:
    • pypi 1,143 last-month
  • Total docker downloads: 16
  • Total dependent packages: 0
    (may contain duplicates)
  • Total dependent repositories: 2
    (may contain duplicates)
  • Total versions: 16
  • Total maintainers: 1
proxy.golang.org: github.com/infer-actively/pymdp
  • Versions: 5
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 5.4%
Average: 5.6%
Dependent repos count: 5.7%
Last synced: 6 months ago
pypi.org: inferactively-pymdp

A Python package for solving Markov Decision Processes with Active Inference

  • Versions: 8
  • Dependent Packages: 0
  • Dependent Repositories: 1
  • Downloads: 1,123 Last month
  • Docker Downloads: 16
Rankings
Stargazers count: 3.5%
Forks count: 5.6%
Dependent packages count: 10.1%
Average: 11.9%
Downloads: 18.9%
Dependent repos count: 21.6%
Maintainers (1)
Last synced: 6 months ago
pypi.org: test-inferactively-pymdp

A Python package for solving Markov Decision Processes with Active Inference

  • Versions: 3
  • Dependent Packages: 0
  • Dependent Repositories: 1
  • Downloads: 20 Last month
Rankings
Stargazers count: 3.5%
Forks count: 5.6%
Dependent packages count: 10.1%
Average: 16.4%
Dependent repos count: 21.6%
Downloads: 41.4%
Maintainers (1)
Last synced: 6 months ago

Dependencies

docs/requirements.txt pypi
  • jinja2 ==3.0.0
  • jupyter-sphinx >=0.3.2
  • matplotlib *
  • myst-nb *
  • numpy *
  • seaborn *
  • sphinx ==4.2.0
  • sphinx-autodoc-typehints ==1.11.1
  • sphinx_rtd_theme *
requirements.txt pypi
  • Pillow >=8.2.0
  • attrs >=20.3.0
  • autograd >=1.3
  • cycler >=0.10.0
  • iniconfig >=1.1.1
  • kiwisolver >=1.3.1
  • matplotlib >=3.1.3
  • myst-nb >=0.13.1
  • nose >=1.3.7
  • numpy >=1.19.5
  • openpyxl >=3.0.7
  • packaging >=20.8
  • pandas >=1.2.4
  • pluggy >=0.13.1
  • py >=1.10.0
  • pyparsing >=2.4.7
  • pytest >=6.2.1
  • python-dateutil >=2.8.1
  • pytz >=2020.5
  • scipy >=1.6.0
  • seaborn >=0.11.1
  • six >=1.15.0
  • sphinx-rtd-theme >=0.4
  • toml >=0.10.2
  • typing-extensions >=3.7.4.3
  • xlsxwriter >=1.4.3
setup.py pypi
  • Pillow >=8.2.0
  • attrs >=20.3.0
  • autograd >=1.3
  • cycler >=0.10.0
  • iniconfig >=1.1.1
  • kiwisolver >=1.3.1
  • matplotlib >=3.1.3
  • myst-nb >=0.13.1
  • nose >=1.3.7
  • numpy >=1.19.5
  • openpyxl >=3.0.7
  • packaging >=20.8
  • pandas >=1.2.4
  • pluggy >=0.13.1
  • py >=1.10.0
  • pyparsing >=2.4.7
  • pytest >=6.2.1
  • python-dateutil >=2.8.1
  • pytz >=2020.5
  • scipy >=1.6.0
  • seaborn >=0.11.1
  • six >=1.15.0
  • sphinx-rtd-theme >=0.4
  • toml >=0.10.2
  • typing-extensions >=3.7.4.3
  • xlsxwriter >=1.4.3
.github/workflows/python-package.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite