pygmmis

Gaussian mixture model for incomplete (missing or truncated) and noisy data

https://github.com/pmelchior/pygmmis

Science Score: 67.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 2 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.9%) to scientific vocabulary

Keywords

data-analysis gmm
Last synced: 6 months ago

Repository

Gaussian mixture model for incomplete (missing or truncated) and noisy data

Basic Info
  • Host: GitHub
  • Owner: pmelchior
  • License: mit
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 1000 KB
Statistics
  • Stars: 102
  • Watchers: 6
  • Forks: 24
  • Open Issues: 4
  • Releases: 0
Topics
data-analysis gmm
Created about 10 years ago · Last pushed over 3 years ago
Metadata Files
Readme License Citation

README.md


pyGMMis

Need a simple and powerful Gaussian-mixture code in pure python? It can be as easy as this:

```python
import pygmmis

gmm = pygmmis.GMM(K=K, D=D)       # K components, D dimensions
logL, U = pygmmis.fit(gmm, data)  # logL = log-likelihood, U = association of data to components
```

However, pyGMMis has a few extra tricks up its sleeve.

  • It can account for independent multivariate normal measurement errors for each of the observed samples, and then recovers an estimate of the error-free distribution. This technique is known as "Extreme Deconvolution" by Bovy, Hogg & Roweis (2011).
  • It works with missing data (features) by setting the respective elements of the covariance matrix to a very large value, thus effectively setting the weight of the missing feature to 0.
  • It can deal with gaps (aka "truncated data") and variable sample completeness as long as
    • you know the incompleteness over the entire feature space,
    • and the incompleteness does not depend on the sample density (missing at random).
  • It can incorporate a "background" distribution (implemented is a uniform one) and separate signal from background, with the former being fit by the GMM.
  • It keeps track of which components need to be evaluated in which regions of the feature space, thereby substantially increasing the performance for fragmented data.

If you want more context and details on those capabilities, have a look at this blog post.

Under the hood, pyGMMis uses the Expectation-Maximization procedure. When dealing with sample incompleteness it generates its best guess of the unobserved samples on the fly given the current model fit to the observed samples.
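For the noise-free, complete-data case, the E-step of that procedure can be sketched in plain numpy (a generic GMM E-step for illustration; pyGMMis's internal version additionally handles noise and incompleteness):

```python
import numpy as np
from scipy.stats import multivariate_normal

def e_step(data, amp, mean, covar):
    """Generic GMM E-step: responsibility q[i,k] of component k for sample i."""
    K = len(amp)
    # unnormalized responsibilities: amp_k * N(x_i | mean_k, covar_k)
    q = np.stack([amp[k] * multivariate_normal.pdf(data, mean[k], covar[k])
                  for k in range(K)], axis=1)
    return q / q.sum(axis=1, keepdims=True)

# toy check: two well-separated 2D components
rng = np.random.RandomState(0)
data = np.concatenate([rng.normal(0, 1, (50, 2)), rng.normal(10, 1, (50, 2))])
amp = np.array([0.5, 0.5])
mean = np.array([[0., 0.], [10., 10.]])
covar = np.array([np.eye(2), np.eye(2)])
q = e_step(data, amp, mean, covar)
# samples from the first cluster get responsibility ~1 for component 0, and vice versa
```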

Example of pyGMMis

In the example above, the true distribution is shown as contours in the left panel. We then draw 400 samples from it (red), add Gaussian noise to them (1,2,3 sigma contours shown in blue), and select only samples within the box but outside of the circle (blue).

The code is written in pure python (developed and tested in 2.7), parallelized with multiprocessing, and is capable of performing density estimation with millions of samples and thousands of model components on machines with sufficient memory.

More details are in the paper listed in the file CITATION.cff.

Installation and Prerequisites

You can either clone the repo and install by python setup.py install or get the latest release with

pip install pygmmis

Dependencies:

  • numpy
  • scipy
  • multiprocessing
  • parmap

How to run the code

  1. Create a GMM object with the desired component number K and data dimensionality D: gmm = pygmmis.GMM(K=K, D=D)

  2. Define a callback for the completeness function. When called with data of shape (N,D), it must return the probability of each sample being observed. Two simple examples:

```python
def cutAtSix(coords):
    """Select all samples whose first coordinate is < 6"""
    return (coords[:,0] < 6)

def selSlope(coords, rng=np.random):
    """Select probabilistically according to first coordinate x:
    Omega = 1      for x < 0
          = 1 - x  for x = 0 .. 1
          = 0      for x > 1
    """
    return np.maximum(0, np.minimum(1, 1 - coords[:,0]))
```

  3. If the samples are noisy (i.e. they have positional uncertainties), you need to provide the covariance matrix of each data sample, or a single one for all samples in the case of i.i.d. noise.
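For instance, per-sample covariances are collected in an array of shape (N, D, D); the construction below is purely illustrative, and the resulting array is then handed to pygmmis.fit alongside the data:

```python
import numpy as np

N, D = 500, 2
rng = np.random.RandomState(1)

# heteroscedastic noise: one dispersion per sample
sigma = 0.1 + 0.4 * rng.uniform(size=N)

# per-sample covariance matrices, shape (N, D, D)
covar = np.einsum('n,ij->nij', sigma**2, np.eye(D))

# i.i.d. alternative: a single (D, D) matrix shared by all samples
covar_iid = np.eye(D) * 0.25**2
```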

  4. If the samples are noisy and the completeness function isn't constant, you need to provide a callback function that returns an estimate of the covariance at arbitrary locations:

```python
# example 1: simply use the same covariance for all samples
dispersion = 1
default_covar = np.eye(D) * dispersion**2
covar_cb = lambda coords: default_covar

# example 2: use the covariance of the nearest neighbor
def covar_tree_cb(coords, tree, covar):
    """Return the covariance of the nearest neighbor of coords in data."""
    dist, ind = tree.query(coords, k=1)
    return covar[ind.flatten()]

from sklearn.neighbors import KDTree
tree = KDTree(data, leaf_size=100)

from functools import partial
covar_cb = partial(covar_tree_cb, tree=tree, covar=covar)
```

  5. If there is a uniform background signal, you need to define it. Because a uniform distribution is normalizable only if its support is finite, you need to decide on the footprint over which the background model is present, e.g.:

```python
footprint = data.min(axis=0), data.max(axis=0)
amp = 0.3
bg = pygmmis.Background(footprint, amp=amp)

# fine tuning, if desired
bg.amp_min = 0.1
bg.amp_max = 0.5
bg.adjust_amp = False  # freezes bg.amp at current value
```

  6. Select an initialization method. This tells the GMM what initial parameters it should assume. The options are 'minmax', 'random', 'kmeans', 'none'. See the respective functions for details:
  • pygmmis.initFromDataMinMax()
  • pygmmis.initFromDataAtRandom()
  • pygmmis.initFromKMeans()

For difficult situations, or if you are not happy with the convergence, you may want to experiment with your own initialization. All you have to do is set gmm.amp, gmm.mean, and gmm.covar to desired values and use init_method='none'.
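A hand-rolled initialization might look like this sketch; the arrays would then be assigned to gmm.amp, gmm.mean, and gmm.covar before fitting with init_method='none':

```python
import numpy as np

K, D = 3, 2
rng = np.random.RandomState(42)
data = rng.normal(size=(1000, D))

amp = np.ones(K) / K                                        # equal weights; must sum to 1
mean = data[rng.choice(len(data), size=K, replace=False)]   # K random samples as centers
covar = np.tile(np.cov(data, rowvar=False), (K, 1, 1))      # overall data covariance for each component

# gmm.amp, gmm.mean, gmm.covar = amp, mean, covar  # then fit with init_method='none'
```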

  7. Decide whether to freeze any components. This makes sense if you already know some of the parameters of the components. You can freeze the amplitude, mean, or covariance of any component by listing them in a dictionary, e.g.:

```python
frozen = {"amp": [1, 2], "mean": [], "covar": [1]}
```

This freezes the amplitudes of components 1 and 2 (NOTE: counting starts at 0) and the covariance of component 1.

  8. Run the fitter:

```python
w = 0.1     # minimum covariance regularization, same units as data
cutoff = 5  # segment the data set into neighborhoods within 5 sigma around components
tol = 1e-3  # tolerance on logL to terminate EM

# define RNG for deterministic behavior
from numpy.random import RandomState
seed = 42
rng = RandomState(seed)

# run EM
logL, U = pygmmis.fit(gmm, data, init_method='random',
                      sel_callback=cb, covar_callback=covar_cb, w=w, cutoff=cutoff,
                      background=bg, tol=tol, frozen=frozen, rng=rng)
```

This runs the EM procedure until tolerance is reached and returns the final mean log-likelihood of all samples, and the neighborhood of each component (indices of data samples that are within cutoff of a GMM component).
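The neighborhoods in U can be used directly to inspect which samples a given component claims; a sketch with a mocked-up U (in practice it comes from pygmmis.fit):

```python
import numpy as np

data = np.arange(20).reshape(10, 2)           # 10 samples, D=2
U = [np.array([0, 1, 4]), np.array([2, 3])]   # hypothetical neighborhoods for K=2

near_0 = data[U[0]]                  # samples within cutoff of component 0
shared = np.intersect1d(U[0], U[1])  # samples claimed by both components
```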

  9. Evaluate the model:

```python
# p(x) at test_coords; set as_log=True for the log-probability
p = gmm(test_coords, as_log=False)

# draw samples from the GMM
Ns = 1000
samples = gmm.draw(Ns)

# draw samples from the model with noise, background, and selection:
# if you want to get the missing samples, set invert_sel=True.
# N_orig is the estimated number of samples prior to selection
obs_size = len(data)
samples, covar_samples, N_orig = pygmmis.draw(gmm, obs_size, sel_callback=cb,
                                              invert_sel=False, orig_size=None,
                                              covar_callback=covar_cb, background=bg)
```

For a complete example, have a look at the test script. For requests and bug reports, please open an issue.

Owner

  • Name: Peter Melchior
  • Login: pmelchior
  • Kind: user
  • Location: Princeton, NJ, USA
  • Company: Princeton University

Asst. Prof. of Statistical Astronomy

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- family-names: "Melchior"
  given-names: "Peter"
  orcid: "https://orcid.org/0000-0002-8873-5065"
title: "pyGMMis"
url: "https://github.com/pmelchior/pygmmis"
preferred-citation:
  type: article
  authors:
  - family-names: "Melchior"
    given-names: "Peter"
    orcid: "https://orcid.org/0000-0002-8873-5065"
  - family-names: "Goulding"
    given-names: "Andy"
    orcid: "https://orcid.org/0000-0003-4700-663X"
  doi: "10.1016/j.ascom.2018.09.013"
  journal: "Astronomy and Computing"
  start: 183 # First page number
  end: 194 # Last page number
  title: "Filling the gaps: Gaussian mixture models from noisy, truncated or incomplete samples"
  volume: 25
  year: 2018
  month: 10

GitHub Events

Total
  • Watch event: 5
  • Fork event: 3
Last Year
  • Watch event: 5
  • Fork event: 3

Committers

Last synced: almost 3 years ago

All Time
  • Total Commits: 333
  • Total Committers: 3
  • Avg Commits per committer: 111.0
  • Development Distribution Score (DDS): 0.006
Top Committers
Name Email Commits
Peter Melchior p****r@g****m 331
Adrian Price-Whelan a****w@g****m 1
Sergio Oller s****r@g****m 1

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 18
  • Total pull requests: 3
  • Average time to close issues: 4 months
  • Average time to close pull requests: 12 months
  • Total issue authors: 12
  • Total pull request authors: 3
  • Average comments per issue: 3.94
  • Average comments per pull request: 1.0
  • Merged pull requests: 2
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • philastrophist (3)
  • Gabriel-p (2)
  • ravi0912 (2)
  • pmelchior (2)
  • zecevicp (2)
  • Jieyu-Wang (1)
  • keatonb (1)
  • wgandler (1)
  • timstaley (1)
  • jhmarcus (1)
  • Omarito2412 (1)
  • jpfeuffer (1)
Pull Request Authors
  • zeehio (1)
  • philastrophist (1)
  • adrn (1)
Top Labels
Issue Labels
enhancement (3)
Pull Request Labels

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 65 last-month
  • Total dependent packages: 1
  • Total dependent repositories: 1
  • Total versions: 10
  • Total maintainers: 1
pypi.org: pygmmis

Gaussian mixture model for incomplete, truncated, and noisy data

  • Versions: 10
  • Dependent Packages: 1
  • Dependent Repositories: 1
  • Downloads: 65 Last month
Rankings
Dependent packages count: 4.8%
Stargazers count: 7.4%
Forks count: 8.1%
Average: 12.5%
Downloads: 20.9%
Dependent repos count: 21.5%
Maintainers (1)
Last synced: 6 months ago

Dependencies

setup.py pypi
  • numpy *