gama-f1

Fork of the GAMA AutoML framework with modifications to use F1 score as the primary evaluation metric.

https://github.com/lucas-mendonca-andrade/gama-f1

Science Score: 67.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 5 DOI reference(s) in README
  • Academic publication links
    Links to: springer.com, joss.theoj.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (17.2%) to scientific vocabulary
Last synced: 6 months ago

Repository

Fork of the GAMA AutoML framework with modifications to use F1 score as the primary evaluation metric.

Basic Info
  • Host: GitHub
  • Owner: lucas-mendonca-andrade
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Size: 3.55 MB
Statistics
  • Stars: 0
  • Watchers: 0
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created 7 months ago · Last pushed 7 months ago
Metadata Files
  • Readme
  • License
  • Code of conduct
  • Citation

README.md

GAMA logo

This project is a fork of GAMA AutoML that adds binary F1 score as an evaluation metric.

General Automated Machine learning Assistant
An automated machine learning tool based on genetic programming.
Make sure to check out the documentation.



GAMA is an AutoML package for end-users and AutoML researchers. It generates optimized machine learning pipelines given specific input data and resource constraints. A machine learning pipeline contains data preprocessing (e.g. PCA, normalization) as well as a machine learning algorithm (e.g. Logistic Regression, Random Forests), with fine-tuned hyperparameter settings (e.g. number of trees in a Random Forest).

To find these pipelines, multiple search procedures have been implemented. GAMA can also combine multiple tuned machine learning pipelines together into an ensemble, which on average should help model performance. At the moment, GAMA is restricted to classification and regression problems on tabular data.
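To illustrate the kind of object GAMA searches for, here is one such pipeline written directly in scikit-learn. The specific steps and hyperparameters (scaling, 10-component PCA, a regularized logistic regression) are arbitrary examples chosen for this sketch, not components GAMA is guaranteed to select:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# One candidate pipeline of the kind GAMA searches over:
# preprocessing steps followed by an estimator with tuned hyperparameters.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=10)),
    ("clf", LogisticRegression(C=1.0, max_iter=500)),
])

X, y = load_breast_cancer(return_X_y=True)
pipeline.fit(X, y)
print("training accuracy:", pipeline.score(X, y))
```

GAMA's job is to choose both the structure of such a pipeline (which steps to include) and the hyperparameter values within each step.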

In addition to its general use AutoML functionality, GAMA aims to serve AutoML researchers as well. During the optimization process, GAMA keeps an extensive log of progress made. Using this log, insight can be obtained on the behaviour of the search procedure. For example, it can produce a graph that shows pipeline fitness over time: graph of fitness over time

For more examples and information on the visualization, see the technical guide.

Installing GAMA

You can install GAMA with pip: pip install gama

Minimal Example

The following example uses AutoML to find a machine learning pipeline that classifies breast cancer as malignant or benign. See the documentation for examples of classification, regression, and using ARFF as input.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss, accuracy_score
from gama import GamaClassifier

if __name__ == '__main__':
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

    automl = GamaClassifier(max_total_time=180, store="nothing")
    print("Starting `fit` which will take roughly 3 minutes.")
    automl.fit(X_train, y_train)

    label_predictions = automl.predict(X_test)
    probability_predictions = automl.predict_proba(X_test)

    print('accuracy:', accuracy_score(y_test, label_predictions))
    print('log loss:', log_loss(y_test, probability_predictions))
    # the `score` function outputs the score on the metric optimized towards (by default, `log_loss`)
    print('log_loss:', automl.score(X_test, y_test))
```

Note: By default, `GamaClassifier` optimizes towards `log_loss`.
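Since the point of this fork is to optimize binary F1 instead, here is a quick reminder of how that metric behaves, using scikit-learn's `f1_score`. This snippet is purely illustrative and independent of GAMA itself:

```python
from sklearn.metrics import f1_score

# Binary F1 is the harmonic mean of precision and recall.
y_true = [0, 1, 1, 1, 0, 1]
y_pred = [0, 1, 0, 1, 0, 1]

# TP = 3, FP = 0, FN = 1
# precision = 3/3 = 1.0, recall = 3/4 = 0.75
# F1 = 2 * (1.0 * 0.75) / (1.0 + 0.75) = 6/7 ≈ 0.857
print("binary F1:", f1_score(y_true, y_pred))
```

Upstream GAMA also exposes a `scoring` constructor argument for selecting the optimization metric; check this fork's source to see exactly how the binary F1 default is wired in.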

Citing

If you want to cite GAMA, please use our ECML-PKDD 2020 Demo Track publication.

```latex
@InProceedings{10.1007/978-3-030-67670-4_39,
  author    = "Gijsbers, Pieter and Vanschoren, Joaquin",
  editor    = "Dong, Yuxiao and Ifrim, Georgiana and Mladeni{\'{c}}, Dunja and Saunders, Craig and Van Hoecke, Sofie",
  title     = "GAMA: A General Automated Machine Learning Assistant",
  booktitle = "Machine Learning and Knowledge Discovery in Databases. Applied Data Science and Demo Track",
  year      = "2021",
  publisher = "Springer International Publishing",
  address   = "Cham",
  pages     = "560--564",
  abstract  = "The General Automated Machine learning Assistant (GAMA) is a modular AutoML system developed to empower users to track and control how AutoML algorithms search for optimal machine learning pipelines, and facilitate AutoML research itself. In contrast to current, often black-box systems, GAMA allows users to plug in different AutoML and post-processing techniques, logs and visualizes the search process, and supports easy benchmarking. It currently features three AutoML search algorithms, two model post-processing steps, and is designed to allow for more components to be added.",
  isbn      = "978-3-030-67670-4"
}
```

Owner

  • Login: lucas-mendonca-andrade
  • Kind: user

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software in a publication, please cite the metadata from preferred-citation."
preferred-citation:
  type: article
  authors:
  - family-names: "Gijsbers"
    given-names: "Pieter"
    orcid: "https://orcid.org/0000-0001-7346-8075"
  - family-names: "Vanschoren"
    given-names: "Joaquin"
    orcid: "https://orcid.org/0000-0001-7044-9805"
  journal: "CoRR"
  title: "GAMA: a General Automated Machine learning Assistant"
  abstract: "The General Automated Machine learning Assistant (GAMA) is a modular AutoML system developed to empower users to track and control how AutoML algorithms search for optimal machine learning pipelines, and facilitate AutoML research itself. In contrast to current, often black-box systems, GAMA allows users to plug in different AutoML and post-processing techniques, logs and visualizes the search process, and supports easy benchmarking. It currently features three AutoML search algorithms, two model post-processing steps, and is designed to allow for more components to be added."
  volume: abs/2007.04911
  year: 2020
  start: 560
  end: 564
  pages: 5
  doi: 10.1007/978-3-030-67670-4_39
  url: https://arxiv.org/abs/2007.04911

GitHub Events

Total
  • Push event: 2
Last Year
  • Push event: 2

Dependencies

pyproject.toml pypi
  • black ==19.10b0
  • category-encoders >=1.2.8
  • liac-arff >=2.2.2
  • numpy >=1.20.0
  • pandas >=1.0
  • psutil *
  • scikit-learn >=1.1.0
  • scipy >=1.0.0
  • stopit >=1.1.1
setup.py pypi