https://github.com/epistasislab/tpot2

A Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming.

Science Score: 59.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 9 DOI reference(s) in README
  • Academic publication links
    Links to: springer.com, acm.org
  • Committers with academic emails
    1 of 9 committers (11.1%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (15.7%) to scientific vocabulary

Keywords

adsp ag066833 aiml alzheimer alzheimers automated-machine-learning automation automl data-science feature-engineering gradient-boosting hyperparameter-optimization lm010098 machine-learning model-selection nia parameter-tuning python random-forest scikit-learn

Keywords from Contributors

u01ag066833
Last synced: 5 months ago

Repository

A Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming.

Basic Info
Statistics
  • Stars: 235
  • Watchers: 9
  • Forks: 31
  • Open Issues: 54
  • Releases: 11
Topics
adsp ag066833 aiml alzheimer alzheimers automated-machine-learning automation automl data-science feature-engineering gradient-boosting hyperparameter-optimization lm010098 machine-learning model-selection nia parameter-tuning python random-forest scikit-learn
Created almost 3 years ago · Last pushed about 1 year ago
Metadata Files
Readme License Support

README.md

TPOT



TPOT stands for Tree-based Pipeline Optimization Tool. TPOT is a Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming. Consider TPOT your Data Science Assistant.

Contributors

TPOT recently went through a major refactoring. The package was rewritten from scratch to improve efficiency and performance, support new features, and fix numerous bugs. New features include genetic feature selection, a significantly expanded and more flexible method of defining search spaces, multi-objective optimization, a more modular framework allowing for easier customization of the evolutionary algorithm, and more. While in development, this new version was referred to as "TPOT2" but we have now merged what was once TPOT2 into the main TPOT package. You can learn more about this new version of TPOT in our GPTP paper titled "TPOT2: A New Graph-Based Implementation of the Tree-Based Pipeline Optimization Tool for Automated Machine Learning."

Ribeiro, P. et al. (2024). TPOT2: A New Graph-Based Implementation of the Tree-Based Pipeline Optimization Tool for Automated Machine Learning. In: Winkler, S., Trujillo, L., Ofria, C., Hu, T. (eds) Genetic Programming Theory and Practice XX. Genetic and Evolutionary Computation. Springer, Singapore. https://doi.org/10.1007/978-981-99-8413-8_1

The current version of TPOT was developed at Cedars-Sinai by:
- Pedro Henrique Ribeiro (Lead developer - https://github.com/perib, https://www.linkedin.com/in/pedro-ribeiro/)
- Anil Saini (anil.saini@cshs.org)
- Jose Hernandez (jgh9094@gmail.com)
- Jay Moran (jay.moran@cshs.org)
- Nicholas Matsumoto (nicholas.matsumoto@cshs.org)
- Hyunjun Choi (hyunjun.choi@cshs.org)
- Miguel E. Hernandez (miguel.e.hernandez@cshs.org)
- Jason Moore (moorejh28@gmail.com)

The original version of TPOT was primarily developed at the University of Pennsylvania by:
- Randal S. Olson (rso@randalolson.com)
- Weixuan Fu (weixuanf@upenn.edu)
- Daniel Angell (dpa34@drexel.edu)
- Jason Moore (moorejh28@gmail.com)
- and many more generous open-source contributors

License

Please see the repository license for the licensing and usage information for TPOT. Generally, we have licensed TPOT to make it as widely usable as possible.

TPOT is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

TPOT is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.

You should have received a copy of the GNU Lesser General Public License along with TPOT. If not, see http://www.gnu.org/licenses/.

Documentation

The documentation webpage can be found here.

We also recommend looking at the Tutorials folder for jupyter notebooks with examples and guides.

Installation

TPOT requires a working installation of Python.

Creating a conda environment (optional)

We recommend using a conda environment for installing TPOT, though TPOT works equally well when installed without one.

More information on making anaconda environments found here.

conda create --name tpotenv python=3.10
conda activate tpotenv

Packages Used

Python version <3.12, numpy, scipy, scikit-learn, update_checker, tqdm, stopit, pandas, joblib, xgboost, matplotlib, traitlets, lightgbm, optuna, jupyter, networkx, dask, distributed, dask-ml, dask-jobqueue, func_timeout, configspace

Many of the hyperparameter ranges used in our configspaces were adapted from either the original TPOT package or the AutoSklearn package.

Note for M1 Mac or other Arm-based CPU users

Before installing TPOT, install the lightgbm package directly from conda using the following command. This ensures you get a build that is compatible with your system.

conda install --yes -c conda-forge 'lightgbm>=3.3.3'

Installing Extra Features with pip

If you want to utilize the additional features provided by TPOT along with scikit-learn extensions, you can install them using pip. The command to install TPOT with these extra features is as follows:

pip install tpot[sklearnex]

Please note that while these extensions can speed up scikit-learn packages, there are some important considerations:

These extensions may not be fully developed and tested on Arm-based CPUs, such as M1 Macs. You might encounter compatibility issues or reduced performance on such systems.

We recommend using Python 3.9 when installing these extra features, as it provides better compatibility and stability.

Developer/Latest Branch Installation

pip install -e /path/to/tpotrepo

If you cloned the repository with git, the folder will be named TPOT. (Note: this is the folder that contains setup.py, not the folder of the same name inside it.) If you downloaded it as a zip, the folder may be called tpot-main.

Usage

See the Tutorials Folder for more instructions and examples.

Best Practices

1

TPOT uses Dask for parallel processing. When Python code is parallelized, each module is re-imported within each worker process. It is therefore important to protect all top-level code with an `if __name__ == "__main__":` block when running TPOT from a script. This is not required when running TPOT from a notebook.

For example:

```
# my_analysis.py

import tpot

if __name__ == "__main__":
    X, y = load_my_data()
    est = tpot.TPOTClassifier()
    est.fit(X, y)
    # rest of analysis
```
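To see why the guard matters, here is a small self-contained sketch that simulates both cases with the standard library's `runpy` (the module name `my_analysis` and the `ran_guarded` flag are invented for illustration):

```python
import os
import runpy
import tempfile

# A tiny stand-in for my_analysis.py: the guarded code runs only when the
# file is executed directly, not when a worker process imports it.
src = (
    "ran_guarded = False\n"
    "if __name__ == '__main__':\n"
    "    ran_guarded = True  # e.g. est.fit(X, y) would go here\n"
)

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(src)
    path = f.name

# Simulates what happens inside each Dask worker process (an import).
as_import = runpy.run_path(path, run_name="my_analysis")
# Simulates running `python my_analysis.py` directly.
as_script = runpy.run_path(path, run_name="__main__")
os.unlink(path)

print(as_import["ran_guarded"], as_script["ran_guarded"])  # False True
```

Without the guard, the `fit` call would execute again in every worker that imports the module, spawning workers recursively.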

2

When designing custom objective functions, avoid the use of global variables.

Don't do:

```
global_X = [[1, 2], [4, 5]]
global_y = [0, 1]

def foo(est):
    return my_scorer(est, X=global_X, y=global_y)
```

Instead use a partial

```
from functools import partial

def foo_scorer(est, X, y):
    return my_scorer(est, X, y)

if __name__ == '__main__':
    X = [[1, 2], [4, 5]]
    y = [0, 1]
    final_scorer = partial(foo_scorer, X=X, y=y)
```

The same caution applies to lambda functions.

Don't do:

```
def new_objective(est, a, b):
    ...  # definition

a = 100
b = 20
bad_function = lambda est: new_objective(est=est, a=a, b=b)
```

Do:

```
def new_objective(est, a, b):
    ...  # definition

a = 100
b = 20
good_function = lambda est, a=a, b=b: new_objective(est=est, a=a, b=b)
```
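The difference comes from Python's late binding of free variables: a plain lambda looks up `a` and `b` when it is eventually called, while default arguments are evaluated once at definition time. A quick self-contained illustration (the objective body is a stand-in):

```python
def new_objective(est, a, b):
    # Stand-in body; a real objective would use the estimator.
    return a + b

a = 100
b = 20
bad_function = lambda est: new_objective(est=est, a=a, b=b)
good_function = lambda est, a=a, b=b: new_objective(est=est, a=a, b=b)

# Reassigning the globals later, as easily happens in a long script or notebook:
a = 0
b = 0

print(bad_function("est"))   # 0   -- picked up the new values at call time
print(good_function("est"))  # 120 -- kept the values from definition time
```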

Tips

TPOT will not check whether your data is correctly formatted. It assumes that the operators you pass in can handle the type of data provided. For instance, if you pass in a pandas DataFrame with categorical features and missing data, your configuration should also include operators that can handle those features. Alternatively, if you pass in preprocessing=True, TPOT will impute missing values, one-hot encode categorical features, then standardize the data. (Note that this is currently fitted and transformed on the entire training set before splitting for CV. An option to apply it per fold, with learnable parameters, is planned.)

Setting verbose to 5 can be helpful during debugging, as it prints the errors generated by failing pipelines.

Contributing to TPOT

We welcome you to check the existing issues for bugs or enhancements to work on. If you have an idea for an extension to TPOT, please file a new issue so we can discuss it.

Citing TPOT

If you use TPOT in a scientific publication, please consider citing at least one of the following papers:

Trang T. Le, Weixuan Fu and Jason H. Moore (2020). Scaling tree-based automated machine learning to biomedical big data with a feature set selector. Bioinformatics.36(1): 250-256.

BibTeX entry:

```bibtex
@article{le2020scaling,
  title={Scaling tree-based automated machine learning to biomedical big data with a feature set selector},
  author={Le, Trang T and Fu, Weixuan and Moore, Jason H},
  journal={Bioinformatics},
  volume={36},
  number={1},
  pages={250--256},
  year={2020},
  publisher={Oxford University Press}
}
```

Randal S. Olson, Ryan J. Urbanowicz, Peter C. Andrews, Nicole A. Lavender, La Creis Kidd, and Jason H. Moore (2016). Automating biomedical data science through tree-based pipeline optimization. Applications of Evolutionary Computation, pages 123-137.

BibTeX entry:

```bibtex
@inbook{Olson2016EvoBio,
  author={Olson, Randal S. and Urbanowicz, Ryan J. and Andrews, Peter C. and Lavender, Nicole A. and Kidd, La Creis and Moore, Jason H.},
  editor={Squillero, Giovanni and Burelli, Paolo},
  chapter={Automating Biomedical Data Science Through Tree-Based Pipeline Optimization},
  title={Applications of Evolutionary Computation: 19th European Conference, EvoApplications 2016, Porto, Portugal, March 30 -- April 1, 2016, Proceedings, Part I},
  year={2016},
  publisher={Springer International Publishing},
  pages={123--137},
  isbn={978-3-319-31204-0},
  doi={10.1007/978-3-319-31204-0_9},
  url={http://dx.doi.org/10.1007/978-3-319-31204-0_9}
}
```

Randal S. Olson, Nathan Bartley, Ryan J. Urbanowicz, and Jason H. Moore (2016). Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science. Proceedings of GECCO 2016, pages 485-492.

BibTeX entry:

```bibtex
@inproceedings{OlsonGECCO2016,
  author = {Olson, Randal S. and Bartley, Nathan and Urbanowicz, Ryan J. and Moore, Jason H.},
  title = {Evaluation of a Tree-based Pipeline Optimization Tool for Automating Data Science},
  booktitle = {Proceedings of the Genetic and Evolutionary Computation Conference 2016},
  series = {GECCO '16},
  year = {2016},
  isbn = {978-1-4503-4206-3},
  location = {Denver, Colorado, USA},
  pages = {485--492},
  numpages = {8},
  url = {http://doi.acm.org/10.1145/2908812.2908918},
  doi = {10.1145/2908812.2908918},
  acmid = {2908918},
  publisher = {ACM},
  address = {New York, NY, USA}
}
```

Support for TPOT

TPOT was developed in the Artificial Intelligence Innovation (A2I) Lab at Cedars-Sinai with funding from the NIH under grants U01 AG066833 and R01 LM010098. We are incredibly grateful for the support of the NIH and the Cedars-Sinai during the development of this project.

The TPOT logo was designed by Todd Newmuis, who generously donated his time to the project.

Owner

  • Name: Epistasis Lab at Cedars Sinai
  • Login: EpistasisLab
  • Kind: organization
  • Email: jason.moore@csmc.edu
  • Location: United States of America

Prof. Jason H. Moore's research lab at Cedars Sinai

GitHub Events

Total
  • Issues event: 7
  • Watch event: 45
  • Delete event: 2
  • Issue comment event: 27
  • Push event: 24
  • Pull request review event: 1
  • Pull request review comment event: 1
  • Pull request event: 13
  • Fork event: 7
  • Create event: 1
Last Year
  • Issues event: 7
  • Watch event: 45
  • Delete event: 2
  • Issue comment event: 27
  • Push event: 24
  • Pull request review event: 1
  • Pull request review comment event: 1
  • Pull request event: 13
  • Fork event: 7
  • Create event: 1

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 401
  • Total Committers: 9
  • Avg Commits per committer: 44.556
  • Development Distribution Score (DDS): 0.287
Past Year
  • Commits: 148
  • Committers: 6
  • Avg Commits per committer: 24.667
  • Development Distribution Score (DDS): 0.311
Top Committers
Name Email Commits
perib p****h@g****m 286
Jay Moran j****n@o****m 38
Jose j****4@g****m 35
nickotto i****s@g****m 13
nickotto n****o@g****m 11
gketron g****n@u****u 11
Ethan Glaser e****r@i****m 5
Anil Kumar Saini a****r@g****m 1
gketronDS g****n@c****g 1
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 63
  • Total pull requests: 103
  • Average time to close issues: 3 months
  • Average time to close pull requests: 5 days
  • Total issue authors: 10
  • Total pull request authors: 9
  • Average comments per issue: 0.59
  • Average comments per pull request: 0.61
  • Merged pull requests: 94
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 10
  • Pull requests: 19
  • Average time to close issues: 1 day
  • Average time to close pull requests: 19 days
  • Issue authors: 4
  • Pull request authors: 5
  • Average comments per issue: 1.7
  • Average comments per pull request: 1.42
  • Merged pull requests: 15
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • perib (42)
  • jgh9094 (4)
  • chimaerase (4)
  • WirelessAether (3)
  • jay-m-dev (1)
  • JinaKim2705 (1)
  • RADj375 (1)
  • yzy945 (1)
  • miguelehernandez (1)
  • Et9797 (1)
Pull Request Authors
  • perib (93)
  • jay-m-dev (16)
  • nickotto (7)
  • jgh9094 (4)
  • gketronDS (4)
  • chimaerase (2)
  • peiyanpan (1)
  • ethanglaser (1)
  • theaksaini (1)
Top Labels
Issue Labels
enhancement (34) question (7) API (7) bug (3) documentation (2) to test (2)
Pull Request Labels
bug (1)

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 88 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 10
  • Total maintainers: 1
pypi.org: tpot2

Tree-based Pipeline Optimization Tool

  • Versions: 10
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 88 Last month
Rankings
Dependent packages count: 7.2%
Stargazers count: 14.9%
Forks count: 17.2%
Average: 20.8%
Downloads: 31.4%
Dependent repos count: 33.4%
Maintainers (1)
Last synced: 5 months ago

Dependencies

pyproject.toml pypi
.github/workflows/docs.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
.github/workflows/publish_package.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
.github/workflows/tests.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
docs/requirements_docs.txt pypi
  • mkdocs ==1.4.2
  • mkdocs-include-markdown-plugin ==4.0.4
  • mkdocs-jupyter ==0.24.1
  • mkdocs-material ==9.1.6
  • mkdocstrings ==0.21.2
  • mkdocstrings-python ==0.10.1
  • nbconvert ==7.4.0
requirements_dev.txt pypi
  • flake8 ==6.0.0 development
  • mypy ==1.2.0 development
  • pytest ==7.3.0 development
  • pytest-cov ==4.0.0 development
  • tox ==4.4.12 development
setup.py pypi
  • baikal >=0.4.2
  • dask >=2023.3.1
  • dask-jobqueue >=0.8.1
  • dask-ml >=2022.5.27
  • distributed >=2023.3.1
  • func_timeout >=4.3.5
  • joblib >=1.1.1
  • jupyter >=1.0.0
  • lightgbm >=3.3.3
  • matplotlib >=3.6.2
  • networkx >=3.0
  • numpy >=1.16.3
  • optuna >=3.0.5
  • pandas >=1.5.3,<2.0.0
  • scikit-learn >=1.2.0
  • scipy >=1.3.1
  • stopit >=1.1.1
  • tqdm >=4.36.1
  • traitlets >=5.8.0
  • update_checker >=0.16
  • xgboost >=1.7.0