deforce

deforce: Derivative-Free Algorithms for Optimizing Cascade Forward Neural Networks

https://github.com/thieu1995/deforce

Science Score: 67.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 9 DOI reference(s) in README
  • Academic publication links
    Links to: zenodo.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.6%) to scientific vocabulary

Keywords

cascade-forward-networks cfnn classification derivative-free-based-cfnn derivative-free-optimization genetic-algorithm gradient-free-based-cascade-forward nature-inspired-optimization neural-network particle-swarm-optimization regression sgd-optimizer
Last synced: 4 months ago

Repository

deforce: Derivative-Free Algorithms for Optimizing Cascade Forward Neural Networks

Basic Info
Statistics
  • Stars: 1
  • Watchers: 1
  • Forks: 2
  • Open Issues: 0
  • Releases: 2
Topics
cascade-forward-networks cfnn classification derivative-free-based-cfnn derivative-free-optimization genetic-algorithm gradient-free-based-cascade-forward nature-inspired-optimization neural-network particle-swarm-optimization regression sgd-optimizer
Created almost 2 years ago · Last pushed about 1 year ago
Metadata Files
Readme Changelog License Code of conduct Citation

README.md

deforce: Derivative-Free Algorithms for Optimizing Cascade Forward Neural Networks


deforce (DErivative Free Optimization foR Cascade forward nEural networks) is a Python library that implements the traditional Cascade Forward Neural Network (CFN) and a range of variants. These include derivative-free-optimized CFN models (trained with, for example, the genetic algorithm, particle swarm optimization, the whale optimization algorithm, teaching-learning-based optimization, differential evolution, ...) and gradient-descent-optimized CFN models (trained with, for example, stochastic gradient descent, the Adam optimizer, the Adadelta optimizer, ...). It provides a comprehensive list of optimizers for training CFN models and is compatible with the Scikit-Learn library, so you can run searches and hyperparameter tuning for traditional CFN networks using Scikit-Learn's tooling (see the sketch after the list below).

  • Free software: GNU General Public License (GPL) v3
  • Provided estimators: CfnRegressor, CfnClassifier, DfoCfnRegressor, DfoCfnClassifier, DfoTuneCfn
  • Total DFO-based CFN models: > 200 regressors, > 200 classifiers.
  • Total GD-based CFN models: 12 regressors, 12 classifiers.
  • Supported performance metrics: >= 67 (47 for regression and 20 for classification)
  • Supported objective functions: >= 67 (47 for regression and 20 for classification)
  • Documentation: https://deforce.readthedocs.io
  • Python versions: >= 3.8.x
  • Dependencies: numpy, scipy, scikit-learn, pandas, mealpy, permetrics, torch, skorch
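
Because the estimators are Scikit-Learn compatible, a standard Scikit-Learn hyperparameter search should work on them directly. The following is a minimal sketch, not an official example: the CfnRegressor parameter names are taken from the Examples section further down, and it assumes the estimator supports cloning via get_params/set_params like any Scikit-Learn estimator.

```python
# Minimal sketch: tuning a traditional CFN with Scikit-Learn's GridSearchCV.
# Parameter names (hidden_size, act1_name, ...) follow the Examples section
# below; this is an illustrative assumption, not an official deforce example.
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV, train_test_split
from deforce import CfnRegressor

X, y = make_regression(n_samples=300, n_features=8, noise=0.1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = CfnRegressor(obj_name="MSE", max_epochs=200, batch_size=32,
                     optimizer="SGD", verbose=False, seed=42)
param_grid = {"hidden_size": [20, 50], "act1_name": ["tanh", "relu"]}

search = GridSearchCV(model, param_grid, cv=3, scoring="neg_mean_squared_error")
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```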

Citation Request

If you want to understand how to use the Derivative Free-optimized Cascade Forward Neural Network, please read the paper titled "Optimization of neural-network model using a meta-heuristic algorithm for the estimation of dynamic Poisson’s ratio of selected rock types". The paper can be accessed at the following link.

Please include these citations if you plan to use this library:

```bibtex
@article{van2024deforce,
  title={deforce: Derivative-free algorithms for optimizing Cascade Forward Neural Networks},
  author={Van Thieu, Nguyen and Nguyen, Hoang and Garg, Harish and Sirbiladze, Gia},
  journal={Software Impacts},
  volume={21},
  pages={100675},
  year={2024},
  publisher={Elsevier},
  doi={10.1016/j.simpa.2024.100675},
  url={https://doi.org/10.1016/j.simpa.2024.100675}
}

@software{thieudeforce2024,
  author = {Van Thieu, Nguyen},
  title = {{deforce: Derivative-Free Algorithms for Optimizing Cascade Forward Neural Networks}},
  url = {https://github.com/thieu1995/deforce},
  doi = {10.5281/zenodo.10935437},
  year = {2024}
}

@article{van2023mealpy,
  title={MEALPY: An open-source library for latest meta-heuristic algorithms in Python},
  author={Van Thieu, Nguyen and Mirjalili, Seyedali},
  journal={Journal of Systems Architecture},
  year={2023},
  publisher={Elsevier},
  doi={10.1016/j.sysarc.2023.102871}
}
```

Installation
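
The package is published on PyPI under the name deforce (see the Packages section below), so it can presumably be installed with pip:

```sh
$ pip install deforce
```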

After installation, check the installed version by:

```sh
$ python
>>> import deforce
>>> deforce.__version__
```

Examples

Please check the documentation website and the examples folder.

1) deforce provides these useful classes

```python
from deforce import DataTransformer, Data
from deforce import CfnRegressor, CfnClassifier
from deforce import DfoCfnRegressor, DfoCfnClassifier
```
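
The examples in section 3 below assume a prepared Data object, but its construction is not shown in this README excerpt. The following is a purely hypothetical sketch: the Data(X, y) constructor and the split_train_test method are assumptions, so check the documentation for the actual interface.

```python
# Hypothetical sketch of preparing the Data object used in section 3 below.
# Data(X, y) and split_train_test() are assumed here and may differ from the
# real deforce API; consult https://deforce.readthedocs.io for the exact calls.
from sklearn.datasets import load_diabetes
from deforce import Data

X, y = load_diabetes(return_X_y=True)
data = Data(X, y)
data.split_train_test(test_size=0.2, random_state=42)
print(data.X_train.shape, data.X_test.shape)
```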

2) What you can do with all model classes

```python
from deforce import CfnRegressor, CfnClassifier, DfoCfnRegressor, DfoCfnClassifier

# Use the standard CFN model for a regression problem
regressor = CfnRegressor(hidden_size=50, act1_name="tanh", act2_name="sigmoid", obj_name="MSE",
                         max_epochs=1000, batch_size=32, optimizer="SGD", optimizer_paras=None,
                         verbose=False, seed=42)

# Use the standard CFN model for a classification problem
classifier = CfnClassifier(hidden_size=50, act1_name="tanh", act2_name="sigmoid", obj_name="NLLL",
                           max_epochs=1000, batch_size=32, optimizer="SGD", optimizer_paras=None,
                           verbose=False, seed=42)

# Use a metaheuristic-optimized CFN model for a regression problem
print(DfoCfnClassifier.SUPPORTED_OPTIMIZERS)
print(DfoCfnClassifier.SUPPORTED_REG_OBJECTIVES)

opt_paras = {"name": "WOA", "epoch": 100, "pop_size": 30}
regressor = DfoCfnRegressor(hidden_size=50, act1_name="tanh", act2_name="sigmoid", obj_name="MSE",
                            optimizer="OriginalWOA", optimizer_paras=opt_paras, verbose=True, seed=42)

# Use a metaheuristic-optimized CFN model for a classification problem
print(DfoCfnClassifier.SUPPORTED_OPTIMIZERS)
print(DfoCfnClassifier.SUPPORTED_CLS_OBJECTIVES)

opt_paras = {"name": "WOA", "epoch": 100, "pop_size": 30}
classifier = DfoCfnClassifier(hidden_size=50, act1_name="tanh", act2_name="softmax", obj_name="CEL",
                              optimizer="OriginalWOA", optimizer_paras=opt_paras, verbose=True, seed=42)
```
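
For a quick end-to-end run of the metaheuristic-optimized classifier above, something like the following should work. It is only a sketch: the constructor parameters mirror the block above with a smaller search budget, and it relies on the Scikit-Learn-style fit/predict interface described earlier.

```python
# Minimal sketch: train and evaluate a DFO-optimized CFN classifier end to end.
# Constructor parameters mirror the block above (with a reduced search budget);
# the Scikit-Learn-style fit/predict interface is assumed as described earlier.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from deforce import DfoCfnClassifier

X, y = make_classification(n_samples=300, n_features=10, n_classes=2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

opt_paras = {"name": "WOA", "epoch": 20, "pop_size": 20}
clf = DfoCfnClassifier(hidden_size=30, act1_name="tanh", act2_name="softmax", obj_name="CEL",
                       optimizer="OriginalWOA", optimizer_paras=opt_paras, verbose=False, seed=42)
clf.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```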

3) After you define the model, use the provided functions to train, predict, and evaluate it

```python
from deforce import CfnRegressor, Data

data = Data()  # Assume you have prepared this object as shown above

model = CfnRegressor(hidden_size=50, act1_name="tanh", act2_name="sigmoid", obj_name="MSE",
                     max_epochs=1000, batch_size=32, optimizer="SGD", optimizer_paras=None, verbose=False)

# Train the model
model.fit(data.X_train, data.y_train)

# Predict on new data
y_pred = model.predict(data.X_test)

# Calculate metrics using the score or scores functions
print(model.score(data.X_test, data.y_test, method="MAE"))
print(model.scores(data.X_test, data.y_test, list_methods=["MAPE", "NNSE", "KGE", "MASE", "R2", "R", "R2S"]))

# Calculate metrics using the evaluate function
print(model.evaluate(data.y_test, y_pred, list_metrics=("MSE", "RMSE", "MAPE", "NSE")))

# Save performance metrics to a CSV file
model.save_evaluation_metrics(data.y_test, y_pred, list_metrics=("RMSE", "MAE"), save_path="history", filename="metrics.csv")

# Save training loss to a CSV file
model.save_training_loss(save_path="history", filename="loss.csv")

# Save the predicted labels
model.save_y_predicted(X=data.X_test, y_true=data.y_test, save_path="history", filename="y_predicted.csv")

# Save the model
model.save_model(save_path="history", filename="traditional_CFN.pkl")

# Load the model
trained_model = CfnRegressor.load_model(load_path="history", filename="traditional_CFN.pkl")
```

Owner

  • Name: Nguyen Van Thieu
  • Login: thieu1995
  • Kind: user
  • Location: Earth
  • Company: AIIR Group

Knowledge is power, sharing it is the premise of progress in life. It seems like a burden to someone, but it is the only way to achieve immortality.

Citation (CITATION.cff)

cff-version: 0.1.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: "Van Thieu"
    given-names: "Nguyen"
    orcid: "https://orcid.org/0000-0001-9994-8747"
title: "deforce: Derivative-Free Algorithms for Optimizing Cascade Forward Neural Networks"
version: v1.0.0
doi: 10.5281/zenodo.10935437
date-released: 2024-04-06
url: "https://github.com/thieu1995/deforce"

GitHub Events

Total
  • Push event: 2
Last Year
  • Push event: 2

Issues and Pull Requests

Last synced: about 1 year ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 16 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 2
  • Total maintainers: 1
pypi.org: deforce

deforce: Derivative-Free Algorithms for Optimizing Cascade Forward Neural Networks

  • Versions: 2
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 16 Last month
Rankings
Dependent packages count: 9.6%
Average: 36.4%
Dependent repos count: 63.3%
Maintainers (1)
Last synced: 5 months ago

Dependencies

requirements.txt pypi
  • flake8 >=4.0.1
  • mealpy >=3.0.1
  • numpy >=1.17.1
  • pandas >=1.3.5
  • permetrics >=1.5.0
  • pytest ==7.1.2
  • pytest-cov ==4.0.0
  • scikit-learn >=1.0.2
  • scipy >=1.7.1
  • skorch >=0.13.0
  • torch >=2.0.0
setup.py pypi
  • numpy >=1.17.1
  • pandas >=1.3.5
  • torch >=2.0.0
.github/workflows/publish-package.yaml actions
  • actions/cache v1 composite
  • actions/checkout v1 composite
  • actions/download-artifact v2 composite
  • actions/setup-python v1 composite
  • actions/upload-artifact master composite
  • actions/upload-artifact v2 composite
  • pypa/gh-action-pypi-publish master composite
docs/requirements.txt pypi
  • mealpy >=3.0.1
  • numpy >=1.17.1
  • pandas >=1.3.5
  • permetrics >=2.0.0
  • readthedocs-sphinx-search ==0.1.1
  • scikit-learn >=1.2.0
  • scipy >=1.8.1
  • skorch >=0.13.0
  • sphinx ==4.4.0
  • sphinx_rtd_theme ==1.0.0
  • torch >=2.0.0