gama

An automated machine learning tool aimed to facilitate AutoML research.

https://github.com/amore-labs/gama

Science Score: 67.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 5 DOI reference(s) in README
  • Academic publication links
    Links to: springer.com, joss.theoj.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (17.4%) to scientific vocabulary

Keywords

automl hyperparameter-optimization research-tool
Last synced: 4 months ago

Repository

An automated machine learning tool aimed to facilitate AutoML research.

Basic Info
Statistics
  • Stars: 99
  • Watchers: 6
  • Forks: 28
  • Open Issues: 41
  • Releases: 12
Topics
automl hyperparameter-optimization research-tool
Created about 8 years ago · Last pushed over 1 year ago
Metadata Files
Readme · License · Code of conduct · Citation

README.md

GAMA logo

General Automated Machine learning Assistant
An automated machine learning tool based on genetic programming.
Make sure to check out the documentation.



GAMA is an AutoML package for end-users and AutoML researchers. It generates optimized machine learning pipelines given specific input data and resource constraints. A machine learning pipeline contains data preprocessing (e.g. PCA, normalization) as well as a machine learning algorithm (e.g. Logistic Regression, Random Forests), with fine-tuned hyperparameter settings (e.g. number of trees in a Random Forest).
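Conceptually, such a pipeline maps onto a scikit-learn Pipeline. The sketch below is purely illustrative (the preprocessing steps and hyperparameter values are examples, not GAMA output):

```python
# Illustrative only: the kind of scikit-learn pipeline GAMA searches for,
# combining preprocessing steps with a tuned estimator.
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

pipeline = Pipeline([
    ("scale", StandardScaler()),        # normalization
    ("pca", PCA(n_components=10)),      # data preprocessing
    ("forest", RandomForestClassifier(n_estimators=200)),  # tuned hyperparameter
])
```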

To find these pipelines, multiple search procedures have been implemented. GAMA can also combine multiple tuned machine learning pipelines together into an ensemble, which on average should help model performance. At the moment, GAMA is restricted to classification and regression problems on tabular data.
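Assuming the search methods and post-processing options described in GAMA's documentation (the class and keyword names below are taken from there and may differ between versions), choosing a search procedure and enabling ensembling might look roughly like this:

```python
# Hedged sketch: the `search`/`post_processing` keyword names and the AsyncEA /
# EnsemblePostProcessing classes are assumptions based on GAMA's documentation.
from gama import GamaClassifier
from gama.search_methods import AsyncEA
from gama.postprocessing import EnsemblePostProcessing

automl = GamaClassifier(
    max_total_time=300,                        # overall budget in seconds
    search=AsyncEA(),                          # evolutionary search procedure
    post_processing=EnsemblePostProcessing(),  # combine pipelines into an ensemble
)
```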

In addition to its general-purpose AutoML functionality, GAMA aims to serve AutoML researchers as well. During the optimization process, GAMA keeps an extensive log of its progress. This log gives insight into the behaviour of the search procedure; for example, it can be used to produce a graph of pipeline fitness over time.
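As a rough sketch of how such a graph could be produced from the log (the file name and column names here are hypothetical; the actual format depends on the GAMA version and the `store` setting):

```python
# Hypothetical example: 'gama_output/evaluations.log' and its columns
# ('duration', 'score') are placeholders, not a documented format.
import pandas as pd
import matplotlib.pyplot as plt

evaluations = pd.read_csv("gama_output/evaluations.log", sep=";")
evaluations["best_so_far"] = evaluations["score"].cummax()  # best score found so far
evaluations.plot(x="duration", y="best_so_far")
plt.xlabel("time (s)")
plt.ylabel("pipeline fitness")
plt.show()
```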

Note: support for the GAMA Dashboard is temporarily disabled; out-of-the-box visualization will be added again later this year.

Installing GAMA

You can install GAMA with pip: pip install gama

Minimal Example

The following example uses AutoML to find a machine learning pipeline that classifies breast cancer as malignant or benign. See the documentation for examples of classification, regression, and using ARFF as input.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss, accuracy_score
from gama import GamaClassifier

if __name__ == "__main__":
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

    automl = GamaClassifier(max_total_time=180, store="nothing")
    print("Starting `fit` which will take roughly 3 minutes.")
    automl.fit(X_train, y_train)

    label_predictions = automl.predict(X_test)
    probability_predictions = automl.predict_proba(X_test)

    print("accuracy:", accuracy_score(y_test, label_predictions))
    print("log loss:", log_loss(y_test, probability_predictions))
    # the `score` function outputs the score on the metric optimized towards (by default, `log_loss`)
    print("log_loss:", automl.score(X_test, y_test))
```

Note: by default, GamaClassifier optimizes towards log_loss.
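To optimize towards a different metric, the scoring parameter can be set at construction time (assuming, as in GAMA's documentation, that it accepts scikit-learn style metric names):

```python
# Sketch: optimize towards accuracy instead of the default log_loss.
# The string value is assumed to follow scikit-learn scorer naming.
from gama import GamaClassifier

automl = GamaClassifier(max_total_time=180, scoring="accuracy", store="nothing")
```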

Citing

If you want to cite GAMA, please use our ECML-PKDD 2020 Demo Track publication.

```latex
@InProceedings{10.1007/978-3-030-67670-4_39,
  author    = "Gijsbers, Pieter and Vanschoren, Joaquin",
  editor    = "Dong, Yuxiao and Ifrim, Georgiana and Mladeni{\'{c}}, Dunja and Saunders, Craig and Van Hoecke, Sofie",
  title     = "GAMA: A General Automated Machine Learning Assistant",
  booktitle = "Machine Learning and Knowledge Discovery in Databases. Applied Data Science and Demo Track",
  year      = "2021",
  publisher = "Springer International Publishing",
  address   = "Cham",
  pages     = "560--564",
  abstract  = "The General Automated Machine learning Assistant (GAMA) is a modular AutoML system developed to empower users to track and control how AutoML algorithms search for optimal machine learning pipelines, and facilitate AutoML research itself. In contrast to current, often black-box systems, GAMA allows users to plug in different AutoML and post-processing techniques, logs and visualizes the search process, and supports easy benchmarking. It currently features three AutoML search algorithms, two model post-processing steps, and is designed to allow for more components to be added.",
  isbn      = "978-3-030-67670-4"
}
```

License

The contents of this repository are licensed under the Apache-2.0 License.

Owner

  • Name: Artificial Minds @ TU/e
  • Login: amore-labs
  • Kind: organization
  • Email: openml-labs@tue.nl

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software in a publication, please cite the metadata from preferred-citation."
preferred-citation:
  type: article
  authors:
  - family-names: "Gijsbers"
    given-names: "Pieter"
    orcid: "https://orcid.org/0000-0001-7346-8075"
  - family-names: "Vanschoren"
    given-names: "Joaquin"
    orcid: "https://orcid.org/0000-0001-7044-9805"
  journal: "CoRR"
  title: "GAMA: a General Automated Machine learning Assistant"
  abstract: "The General Automated Machine learning Assistant (GAMA) is a modular AutoML system developed to empower users to track and control how AutoML algorithms search for optimal machine learning pipelines, and facilitate AutoML research itself. In contrast to current, often black-box systems, GAMA allows users to plug in different AutoML and post-processing techniques, logs and visualizes the search process, and supports easy benchmarking. It currently features three AutoML search algorithms, two model post-processing steps, and is designed to allow for more components to be added."
  volume: abs/2007.04911
  year: 2020
  start: 560
  end: 564
  pages: 5
  doi: 10.1007/978-3-030-67670-4_39
  url: https://arxiv.org/abs/2007.04911


Dependencies

.github/actions/pytest/action.yaml actions
  • actions/cache v3 composite
  • actions/setup-python v4 composite
  • codecov/codecov-action v3 composite
.github/workflows/build-docs.yaml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
.github/workflows/changelog.yaml actions
  • actions/checkout v3 composite
  • thollander/actions-comment-pull-request v1 composite
.github/workflows/precommit.yaml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
  • pre-commit/action v3.0.0 composite
.github/workflows/publish.yaml actions
  • actions/checkout v3 composite
  • actions/download-artifact v3 composite
  • actions/setup-python v4 composite
  • actions/upload-artifact v3 composite
.github/workflows/pytest.yaml actions
  • ./.github/actions/pytest * composite
  • actions/checkout v3 composite
.github/workflows/test-with-pre.yaml actions
  • ./.github/actions/pytest * composite
  • actions/checkout v3 composite
pyproject.toml pypi
  • black ==19.10b0
  • category-encoders >=1.2.8
  • liac-arff >=2.2.2
  • numpy >=1.20.0
  • pandas >=1.0
  • psutil *
  • scikit-learn >=1.1.0
  • scipy >=1.0.0
  • stopit >=1.1.1