hdbscan
hdbscan: Hierarchical density based clustering - Published in JOSS (2017)
Science Score: 95.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ✓ DOI references: found 9 DOI reference(s) in README and JOSS metadata
- ✓ Academic publication links: links to ieee.org, joss.theoj.org
- ✓ Committers with academic emails: 5 of 95 committers (5.3%) from academic institutions
- ○ Institutional organization owner
- ✓ JOSS paper metadata: published in Journal of Open Source Software
Keywords
Keywords from Contributors
Scientific Fields
Repository
A high performance implementation of HDBSCAN clustering.
Basic Info
- Host: GitHub
- Owner: scikit-learn-contrib
- License: bsd-3-clause
- Language: Jupyter Notebook
- Default Branch: master
- Homepage: http://hdbscan.readthedocs.io/en/latest/
- Size: 27.8 MB
Statistics
- Stars: 2,984
- Watchers: 56
- Forks: 517
- Open Issues: 373
- Releases: 56
Topics
Metadata Files
README.rst
.. image:: https://img.shields.io/pypi/v/hdbscan.svg
:target: https://pypi.python.org/pypi/hdbscan/
:alt: PyPI Version
.. image:: https://anaconda.org/conda-forge/hdbscan/badges/version.svg
:target: https://anaconda.org/conda-forge/hdbscan
:alt: Conda-forge Version
.. image:: https://anaconda.org/conda-forge/hdbscan/badges/downloads.svg
:target: https://anaconda.org/conda-forge/hdbscan
:alt: Conda-forge downloads
.. image:: https://img.shields.io/pypi/l/hdbscan.svg
:target: https://github.com/scikit-learn-contrib/hdbscan/blob/master/LICENSE
:alt: License
.. image:: https://travis-ci.org/scikit-learn-contrib/hdbscan.svg
:target: https://travis-ci.org/scikit-learn-contrib/hdbscan
:alt: Travis Build Status
.. image:: https://codecov.io/gh/scikit-learn-contrib/hdbscan/branch/master/graph/badge.svg
:target: https://codecov.io/gh/scikit-learn-contrib/hdbscan
:alt: Test Coverage
.. image:: https://readthedocs.org/projects/hdbscan/badge/?version=latest
:target: https://hdbscan.readthedocs.org
:alt: Docs
.. image:: http://joss.theoj.org/papers/10.21105/joss.00205/status.svg
:target: http://joss.theoj.org/papers/10.21105/joss.00205
:alt: JOSS article
.. image:: https://mybinder.org/badge.svg
:target: https://mybinder.org/v2/gh/scikit-learn-contrib/hdbscan
:alt: Launch example notebooks in Binder
=======
HDBSCAN
=======
HDBSCAN - Hierarchical Density-Based Spatial Clustering of Applications
with Noise. Performs DBSCAN over varying epsilon values and integrates
the result to find a clustering that gives the best stability over epsilon.
This allows HDBSCAN to find clusters of varying densities (unlike DBSCAN),
and be more robust to parameter selection.
In practice this means that HDBSCAN returns a good clustering straight
away with little or no parameter tuning -- and the primary parameter,
minimum cluster size, is intuitive and easy to select.
HDBSCAN is ideal for exploratory data analysis; it's a fast and robust
algorithm that you can trust to return meaningful clusters (if there
are any).
Based on the papers:
McInnes L, Healy J. *Accelerated Hierarchical Density Based Clustering*
In: 2017 IEEE International Conference on Data Mining Workshops (ICDMW), IEEE, pp 33-42.
2017.
R. Campello, D. Moulavi, and J. Sander, *Density-Based Clustering Based on
Hierarchical Density Estimates*
In: Advances in Knowledge Discovery and Data Mining, Springer, pp 160-172.
2013
Documentation, including tutorials, is available on ReadTheDocs at http://hdbscan.readthedocs.io/en/latest/.
Notebooks comparing HDBSCAN to other clustering algorithms, explaining how HDBSCAN works, and comparing performance with other Python clustering implementations are also available.
------------------
How to use HDBSCAN
------------------
The hdbscan package inherits from sklearn classes, and thus drops in neatly
next to other sklearn clusterers with an identical calling API. Similarly, it
supports input in a variety of formats: an array (or pandas dataframe, or
sparse matrix) of shape ``(num_samples x num_features)``, or an array (or sparse matrix)
giving a distance matrix between samples.
.. code:: python
import hdbscan
from sklearn.datasets import make_blobs
data, _ = make_blobs(1000)
# min_cluster_size is the primary (and usually only) parameter to tune
clusterer = hdbscan.HDBSCAN(min_cluster_size=10)
cluster_labels = clusterer.fit_predict(data)
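For distance-matrix input, a minimal sketch (assuming the sklearn-style ``metric='precomputed'`` convention; ``pairwise_distances`` is used purely for illustration) might look like:
.. code:: python
import hdbscan
from sklearn.datasets import make_blobs
from sklearn.metrics import pairwise_distances
data, _ = make_blobs(1000)
# a (num_samples x num_samples) matrix of pairwise distances
distance_matrix = pairwise_distances(data, metric='euclidean')
clusterer = hdbscan.HDBSCAN(min_cluster_size=10, metric='precomputed')
cluster_labels = clusterer.fit_predict(distance_matrix)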
-----------
Performance
-----------
Significant effort has been put into making the hdbscan implementation as fast as
possible. It is orders of magnitude faster than the reference implementation in Java,
and is currently faster than highly optimized single linkage implementations in C and C++.
Benchmarks of version 0.7 performance are available in an accompanying notebook.
In particular, performance on low-dimensional data is better than sklearn's DBSCAN,
and via support for caching with joblib, re-clustering with different parameters
can be almost free.
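A sketch of the joblib caching mentioned above, assuming the ``memory`` parameter (which, as in sklearn, accepts a joblib ``Memory`` object or a cache directory path):
.. code:: python
import hdbscan
from joblib import Memory
from sklearn.datasets import make_blobs
data, _ = make_blobs(1000)
# cache the expensive core computation on disk
memory = Memory(location='./hdbscan_cache', verbose=0)
clusterer = hdbscan.HDBSCAN(min_cluster_size=10, memory=memory).fit(data)
# a second fit with a different min_cluster_size can reuse the cached work
clusterer2 = hdbscan.HDBSCAN(min_cluster_size=25, memory=memory).fit(data)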
------------------------
Additional functionality
------------------------
The hdbscan package comes equipped with visualization tools to help you
understand your clustering results. After fitting data the clusterer
object has attributes for:
* The condensed cluster hierarchy
* The robust single linkage cluster hierarchy
* The reachability distance minimal spanning tree
All of these come equipped with methods for plotting and for converting
to pandas or NetworkX objects for further analysis. See the notebook on
how HDBSCAN works for examples and further details.
The clusterer objects also have an attribute providing cluster membership
strengths, resulting in optional soft clustering (at no further computational
expense). Finally, each cluster also receives a persistence score giving
the stability of the cluster over the range of distance scales present
in the data, providing a measure of the relative strength of each cluster.
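A sketch of accessing these results, assuming the attribute names exposed by recent releases (``condensed_tree_``, ``single_linkage_tree_``, ``minimum_spanning_tree_``, ``probabilities_``, and ``cluster_persistence_``; the spanning tree requires ``gen_min_span_tree=True``):
.. code:: python
import hdbscan
from sklearn.datasets import make_blobs
data, _ = make_blobs(1000)
clusterer = hdbscan.HDBSCAN(min_cluster_size=10, gen_min_span_tree=True).fit(data)
# hierarchy objects with plotting and export helpers
clusterer.condensed_tree_.plot()
tree_df = clusterer.condensed_tree_.to_pandas()
clusterer.single_linkage_tree_.plot()
clusterer.minimum_spanning_tree_.plot()
# per-point membership strengths and per-cluster persistence scores
memberships = clusterer.probabilities_
persistence = clusterer.cluster_persistence_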
-----------------
Outlier Detection
-----------------
The HDBSCAN clusterer objects also support the GLOSH outlier detection algorithm.
After fitting the clusterer to data the outlier scores can be accessed via the
``outlier_scores_`` attribute. The result is a vector of score values, one for
each data point that was fit. Higher scores represent more outlier-like objects.
Selecting outliers via upper quantiles is often a good approach.
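A minimal sketch of that quantile-based selection using the ``outlier_scores_`` attribute (the 90th percentile is an arbitrary illustrative threshold):
.. code:: python
import numpy as np
import hdbscan
from sklearn.datasets import make_blobs
data, _ = make_blobs(1000)
clusterer = hdbscan.HDBSCAN(min_cluster_size=10).fit(data)
# one GLOSH score per fitted point; higher means more outlier-like
scores = clusterer.outlier_scores_
threshold = np.quantile(scores, 0.90)
outlier_indices = np.where(scores > threshold)[0]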
Based on the paper:
R.J.G.B. Campello, D. Moulavi, A. Zimek and J. Sander
*Hierarchical Density Estimates for Data Clustering, Visualization, and Outlier Detection*,
ACM Trans. on Knowledge Discovery from Data, Vol 10, 1 (July 2015), 1-51.
---------------------
Robust single linkage
---------------------
The hdbscan package also provides support for the *robust single linkage*
clustering algorithm of Chaudhuri and Dasgupta. As with the HDBSCAN
implementation, this is a high-performance version of the algorithm,
outperforming scipy's standard single linkage implementation. The
robust single linkage hierarchy is available as an attribute of
the robust single linkage clusterer, again with the ability to plot
or export the hierarchy, and to extract flat clusterings at a given
cut level and gamma value.
Example usage:
.. code:: python
import hdbscan
from sklearn.datasets import make_blobs
data, _ = make_blobs(1000)
# cut is the distance scale for the flat clustering; k is the number of
# neighbors used in the robust density estimate
clusterer = hdbscan.RobustSingleLinkage(cut=0.125, k=7)
cluster_labels = clusterer.fit_predict(data)
hierarchy = clusterer.cluster_hierarchy_
# extract an alternative flat clustering at a different cut level and
# minimum cluster size, then plot the hierarchy
alt_labels = hierarchy.get_clusters(0.100, 5)
hierarchy.plot()
Based on the paper:
K. Chaudhuri and S. Dasgupta.
*"Rates of convergence for the cluster tree."*
In Advances in Neural Information Processing Systems, 2010.
----------------
Branch detection
----------------
The hdbscan package supports a branch-detection post-processing step
by Bot et al. Cluster shapes,
such as branching structures, can reveal interesting patterns
that are not expressed in density-based cluster hierarchies. The
BranchDetector class mimics the HDBSCAN API and can be used to
detect branching hierarchies in clusters. It provides condensed
branch hierarchies, branch persistences, and branch memberships and
supports joblib's caching functionality. A notebook demonstrating the
BranchDetector is also available.
Example usage:
.. code:: python
import hdbscan
from sklearn.datasets import make_blobs
data, _ = make_blobs(1000)
# branch_detection_data=True keeps the information the BranchDetector needs
clusterer = hdbscan.HDBSCAN(branch_detection_data=True).fit(data)
branch_detector = hdbscan.BranchDetector().fit(clusterer)
branch_detector.cluster_approximation_graph_.plot(edge_width=0.1)
Based on the paper:
D.M. Bot, J. Peeters, J. Liesenborgs and J. Aerts
*FLASC: a flare-sensitive clustering algorithm.*
PeerJ Computer Science, Vol 11, April 2025, e2792.
https://doi.org/10.7717/peerj-cs.2792/.
----------
Installing
----------
Easiest install, if you have Anaconda (thanks to conda-forge which is awesome!):
.. code:: bash
conda install -c conda-forge hdbscan
PyPI install, presuming you have an up-to-date pip:
.. code:: bash
pip install hdbscan
Binary wheels for a number of platforms are available thanks to the work of
Ryan Helinski.
If pip is having difficulty pulling the dependencies, we suggest first upgrading
pip to at least version 10 and trying again:
.. code:: bash
pip install --upgrade pip
pip install hdbscan
Otherwise, install the dependencies manually using conda and then install hdbscan from pip:
.. code:: bash
conda install cython
conda install numpy scipy
conda install scikit-learn
pip install hdbscan
For a manual install of the latest code directly from GitHub:
.. code:: bash
pip install --upgrade git+https://github.com/scikit-learn-contrib/hdbscan.git#egg=hdbscan
Alternatively, download the package, install the requirements, and manually run the installer:
.. code:: bash
wget https://github.com/scikit-learn-contrib/hdbscan/archive/master.zip
unzip master.zip
rm master.zip
cd hdbscan-master
pip install -r requirements.txt
python setup.py install
-----------------
Running the Tests
-----------------
The package tests can be run after installation using the command:
.. code:: bash
nosetests -s hdbscan
or, if ``nose`` is installed but ``nosetests`` is not in your ``PATH`` variable:
.. code:: bash
python -m nose -s hdbscan
If one or more of the tests fail, please report a bug at https://github.com/scikit-learn-contrib/hdbscan/issues/new
--------------
Python Version
--------------
The hdbscan library supports both Python 2 and Python 3. However, we recommend Python 3 as the better option if it is available to you.
----------------
Help and Support
----------------
For simple issues you can consult the FAQ in the documentation.
If your issue is not suitably resolved there, please check the existing issues on GitHub. Finally, if no solution is available there, feel free to open an issue; the authors will attempt to respond in a reasonably timely fashion.
------------
Contributing
------------
We welcome contributions in any form! Assistance with documentation, particularly expanding tutorials,
is always welcome. To contribute, please fork the project, make your changes, and submit a pull request. We will do our best to work through any issues with
you and get your code merged into the main branch.
------
Citing
------
If you have used this codebase in a scientific publication and wish to cite it, please use the Journal of Open Source Software article.
L. McInnes, J. Healy, S. Astels, *hdbscan: Hierarchical density based clustering*
In: Journal of Open Source Software, The Open Journal, volume 2, number 11.
2017
.. code:: bibtex
@article{mcinnes2017hdbscan,
title={hdbscan: Hierarchical density based clustering},
author={McInnes, Leland and Healy, John and Astels, Steve},
journal={The Journal of Open Source Software},
volume={2},
number={11},
pages={205},
year={2017}
}
To reference the high performance algorithm developed in this library please cite our paper in ICDMW 2017 proceedings.
McInnes L, Healy J. *Accelerated Hierarchical Density Based Clustering*
In: 2017 IEEE International Conference on Data Mining Workshops (ICDMW), IEEE, pp 33-42.
2017
.. code:: bibtex
@inproceedings{mcinnes2017accelerated,
title={Accelerated Hierarchical Density Based Clustering},
author={McInnes, Leland and Healy, John},
booktitle={Data Mining Workshops (ICDMW), 2017 IEEE International Conference on},
pages={33--42},
year={2017},
organization={IEEE}
}
If you used the branch-detection functionality in this library, please cite our PeerJ paper:
Bot DM, Peeters J, Liesenborgs J, Aerts J.
*FLASC: a flare-sensitive clustering algorithm.*
In: PeerJ Computer Science, Volume 11, e2792, 2025.
https://doi.org/10.7717/peerj-cs.2792
.. code:: bibtex
@article{bot2025flasc,
title = {{FLASC: a flare-sensitive clustering algorithm}},
author = {Bot, Dani{\"{e}}l M. and Peeters, Jannes and Liesenborgs, Jori and Aerts, Jan},
year = {2025},
month = {apr},
journal = {PeerJ Comput. Sci.},
volume = {11},
pages = {e2792},
issn = {2376-5992},
doi = {10.7717/peerj-cs.2792},
url = {https://peerj.com/articles/cs-2792},
}
---------
Licensing
---------
The hdbscan package is 3-clause BSD licensed. Enjoy.
Owner
- Name: scikit-learn-contrib
- Login: scikit-learn-contrib
- Kind: organization
- Website: http://contrib.scikit-learn.org
- Repositories: 27
- Profile: https://github.com/scikit-learn-contrib
- Description: scikit-learn compatible projects
JOSS Publication
hdbscan: Hierarchical density based clustering
Authors
Tutte Institute for Mathematics and Computing
Shopify
Tags
clustering, unsupervised learning, machine learning
GitHub Events
Total
- Create event: 1
- Issues event: 15
- Release event: 1
- Watch event: 160
- Member event: 1
- Issue comment event: 43
- Push event: 6
- Pull request review event: 6
- Pull request review comment event: 3
- Pull request event: 22
- Fork event: 21
Last Year
- Create event: 1
- Issues event: 15
- Release event: 1
- Watch event: 160
- Member event: 1
- Issue comment event: 43
- Push event: 6
- Pull request review event: 6
- Pull request review comment event: 3
- Pull request event: 22
- Fork event: 21
Committers
Last synced: 5 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Leland McInnes | l****s@g****m | 633 |
| Jelmer Bot | j****t@g****m | 21 |
| neontty | n****y@g****m | 11 |
| Guillaume Lemaitre | g****e@v****u | 10 |
| John Healy | j****l@g****m | 10 |
| Steve Astels | s****s@g****m | 10 |
| Greg Demand | 6****d | 9 |
| luis261 | l****r@g****e | 8 |
| jc-healy | j****y@g****m | 8 |
| gclen | g****g@u****t | 7 |
| João Matias | j****s@t****m | 7 |
| Dicksonchin93 | c****n@d****m | 7 |
| Guillaume Ansanay-Alex | g****y@g****m | 7 |
| Bruno Alano | b****o@n****r | 7 |
| gclendenning | 6****g | 6 |
| Matthew Carrigan | r****1@g****m | 6 |
| Sebastian Berg | s****b@n****m | 5 |
| cmalzer | c****r@g****e | 5 |
| Rhaedonius | l****o@a****n | 5 |
| Nathaniel Saul | n****t@s****m | 4 |
| m-dz | m****c@g****m | 4 |
| Ryan Helinski | r****i@g****m | 4 |
| the null | 4****e | 4 |
| Lukas Großberger | c****e@g****z | 4 |
| gr | g****y@g****m | 3 |
| cmalzer | c****r@g****e | 3 |
| areeh | a****t@i****m | 3 |
| Adam Lugowski | a****i@g****m | 3 |
| K.-Michael Aye | m****e | 2 |
| Mark Dimitsas | d****s@g****m | 2 |
| and 65 more... | | |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 4 months ago
All Time
- Total issues: 156
- Total pull requests: 59
- Average time to close issues: 6 months
- Average time to close pull requests: 3 months
- Total issue authors: 149
- Total pull request authors: 35
- Average comments per issue: 4.07
- Average comments per pull request: 1.61
- Merged pull requests: 38
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 15
- Pull requests: 23
- Average time to close issues: 2 months
- Average time to close pull requests: 21 days
- Issue authors: 14
- Pull request authors: 10
- Average comments per issue: 1.0
- Average comments per pull request: 1.22
- Merged pull requests: 15
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- notluquis (3)
- lucetka (2)
- katzurik (2)
- divya-agrawal3103 (2)
- KukumavMozolo (2)
- jakirkham (2)
- steffencruz (1)
- mdagost (1)
- xavgit (1)
- OlgaGKononova (1)
- SergioG-M (1)
- PezAmaury (1)
- jgonggrijp (1)
- mczerny (1)
- huijiawu0 (1)
Pull Request Authors
- JelmerBot (13)
- lmcinnes (5)
- gclendenning (4)
- chenxinye (4)
- divyegala (2)
- axiak (2)
- seberg (2)
- Antobiotics (2)
- joaquindas (2)
- prodrigues-tdx (2)
- smartIU (2)
- cearlefraym (2)
- NicoSantamaria (2)
- meshari343 (2)
- Rhaedonius (1)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 3
- Total downloads: pypi 901,612 last-month
- Total docker downloads: 934,700,138
- Total dependent packages: 133 (may contain duplicates)
- Total dependent repositories: 522 (may contain duplicates)
- Total versions: 87
- Total maintainers: 4
pypi.org: hdbscan
Clustering based on density with variable density clusters
- Homepage: http://github.com/scikit-learn-contrib/hdbscan
- Documentation: https://hdbscan.readthedocs.io/
- License: BSD
- Latest release: 0.8.40 (published about 1 year ago)
Rankings
Maintainers (3)
conda-forge.org: hdbscan
- Homepage: http://github.com/scikit-learn-contrib/hdbscan
- License: BSD-3-Clause
- Latest release: 0.8.29 (published about 3 years ago)
Rankings
spack.io: py-hdbscan
HDBSCAN - Hierarchical Density-Based Spatial Clustering of Applications with Noise. Performs DBSCAN over varying epsilon values and integrates the result to find a clustering that gives the best stability over epsilon. This allows HDBSCAN to find clusters of varying densities (unlike DBSCAN), and be more robust to parameter selection. In practice this means that HDBSCAN returns a good clustering straight away with little or no parameter tuning -- and the primary parameter, minimum cluster size, is intuitive and easy to select. HDBSCAN is ideal for exploratory data analysis; it's a fast and robust algorithm that you can trust to return meaningful clusters (if there are any).
- Homepage: https://github.com/scikit-learn-contrib/hdbscan
- License: []
- Latest release: 0.8.29 (published over 2 years ago)
Rankings
Maintainers (1)
Dependencies
- actions/checkout v1 composite
- actions/setup-python v1 composite
- actions/checkout v1 composite
- actions/setup-python v1 composite
- cython >=0.27
- joblib >=1.0
- numpy >=1.20
- scikit-learn >=0.20
- scipy >=1.0
- actions/checkout v1 composite
- actions/setup-python v1 composite
- hdbscan >=0.8.11
- matplotlib >=2.0
- python >=3.5
- scikit-learn >=0.19
- seaborn >=0.8
- sphinx_rtd_theme *
