@stdlib/ml-incr-sgd-regression

Online regression via stochastic gradient descent (SGD).

https://github.com/stdlib-js/ml-incr-sgd-regression

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (15.3%) to scientific vocabulary

Keywords

algorithm gradient-descent incremental javascript machine-learning math mathematics ml node node-js nodejs online prediction regression statistics stats stdlib
Last synced: 6 months ago

Repository

Online regression via stochastic gradient descent (SGD).

Basic Info
Statistics
  • Stars: 6
  • Watchers: 3
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Topics
algorithm gradient-descent incremental javascript machine-learning math mathematics ml node node-js nodejs online prediction regression statistics stats stdlib
Created over 4 years ago · Last pushed 8 months ago
Metadata Files
Readme Changelog Contributing License Code of conduct Citation Security

README.md

About stdlib...

We believe in a future in which the web is a preferred environment for numerical computation. To help realize this future, we've built stdlib. stdlib is a standard library, with an emphasis on numerical and scientific computation, written in JavaScript (and C) for execution in browsers and in Node.js.

The library is fully decomposable, being architected in such a way that you can swap out and mix and match APIs and functionality to cater to your exact preferences and use cases.

When you use stdlib, you can be absolutely certain that you are using the most thorough, rigorous, well-written, studied, documented, tested, measured, and high-quality code out there.

To join us in bringing numerical computing to the web, get started by checking us out on GitHub, and please consider financially supporting stdlib. We greatly appreciate your continued support!

Online Regression


Online regression via Stochastic Gradient Descent.

## Installation

```bash
npm install @stdlib/ml-incr-sgd-regression
```

Alternatively,

- To load the package in a website via a `script` tag without installation and bundlers, use the [ES Module][es-module] available on the [`esm`][esm-url] branch (see [README][esm-readme]).
- If you are using Deno, visit the [`deno`][deno-url] branch (see [README][deno-readme] for usage instructions).
- For use in Observable, or in browser/node environments, use the [Universal Module Definition (UMD)][umd] build available on the [`umd`][umd-url] branch (see [README][umd-readme]).

The [branches.md][branches-url] file summarizes the available branches and displays a diagram illustrating their relationships.

To view installation and usage instructions specific to each branch build, be sure to explicitly navigate to the respective README files on each branch, as linked to above.
## Usage

```javascript
var incrSGDRegression = require( '@stdlib/ml-incr-sgd-regression' );
```

#### incrSGDRegression( \[options] )

Creates an online linear regression model fitted via [stochastic gradient descent][stochastic-gradient-descent]. The module performs [L2 regularization][l2-regularization] of the model coefficients, shrinking them towards zero by penalizing the squared [Euclidean norm][euclidean-norm] of the coefficients.

```javascript
var randu = require( '@stdlib/random-base-randu' );
var normal = require( '@stdlib/random-base-normal' );

var accumulator = incrSGDRegression();

var x1;
var x2;
var i;
var y;

// Update model as data comes in...
for ( i = 0; i < 100000; i++ ) {
    x1 = randu();
    x2 = randu();
    y = (3.0 * x1) + (-3.0 * x2) + 2.0 + normal( 0.0, 1.0 );
    accumulator( [ x1, x2 ], y );
}
```

The function accepts the following `options`:

- **learningRate**: `string` denoting the learning rate to use. Can be `constant`, `pegasos`, or `basic`. Default: `basic`.
- **loss**: `string` denoting the loss function to use. Can be `squaredError`, `epsilonInsensitive`, or `huber`. Default: `squaredError`.
- **epsilon**: insensitivity parameter. Default: `0.1`.
- **lambda**: regularization parameter. Default: `1e-3`.
- **eta0**: constant learning rate. Default: `0.02`.
- **intercept**: `boolean` indicating whether to include an intercept. Default: `true`.

```javascript
var accumulator = incrSGDRegression({
    'loss': 'squaredError',
    'lambda': 1e-4
});
```

The `learningRate` determines how fast or slow the weights are updated towards the optimal weights. Let `i` denote the current iteration of the algorithm (i.e., the number of data points having arrived). The possible learning rates are:

| Option          | Definition              |
| :-------------: | :---------------------: |
| basic (default) | 1000.0 / ( i + 1000.0 ) |
| constant        | eta0                    |
| pegasos         | 1.0 / ( lambda \* i )   |

The loss function is specified via the `loss` option.
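As an aside, the learning-rate schedules tabulated above can be sketched in plain JavaScript. This is only an illustration of the formulas, not the package's internal implementation; the helper name `learningRate` is hypothetical.

```javascript
// Illustrative sketch of the three learning-rate schedules (hypothetical
// helper, not part of the package's API). `i` is the iteration count.
function learningRate( schedule, i, lambda, eta0 ) {
    if ( schedule === 'constant' ) {
        // Fixed step size:
        return eta0;
    }
    if ( schedule === 'pegasos' ) {
        // Decays as 1/i, scaled by the regularization parameter:
        return 1.0 / ( lambda * i );
    }
    // 'basic' (default): starts near 1 and decays slowly toward 0:
    return 1000.0 / ( i + 1000.0 );
}

console.log( learningRate( 'basic', 1000, 1e-3, 0.02 ) );
// => 0.5
```

Note how `pegasos` couples the step size to `lambda`, while `basic` and `constant` ignore it.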
The available loss functions are:

- **epsilonInsensitive**: penalty is the absolute value of the error whenever the absolute error exceeds `epsilon` and zero otherwise.
- **huber**: squared-error loss for observations with error smaller than `epsilon` in magnitude, linear loss otherwise. Should be used to decrease the influence of outliers on the model fit.
- **squaredError**: squared-error loss, i.e., the squared difference between the observed and fitted values.

The `lambda` parameter determines the amount of shrinkage inflicted on the model coefficients:

```javascript
var createRandom = require( '@stdlib/random-base-randu' ).factory;

var accumulator;
var coefs;
var opts;
var rand;
var x1;
var x2;
var i;
var y;

opts = {
    'seed': 23
};
rand = createRandom( opts );
accumulator = incrSGDRegression({
    'lambda': 1e-5
});

for ( i = 0; i < 100; i++ ) {
    x1 = rand();
    x2 = rand();
    y = (3.0 * x1) + (-3.0 * x2) + 2.0;
    accumulator( [ x1, x2 ], y );
}
coefs = accumulator.coefs;
// returns [ ~3.007, ~-3.002, ~2 ]

rand = createRandom( opts );
accumulator = incrSGDRegression({
    'lambda': 1e-2
});

for ( i = 0; i < 100; i++ ) {
    x1 = rand();
    x2 = rand();
    y = (3.0 * x1) + (-3.0 * x2) + 2.0;
    accumulator( [ x1, x2 ], y );
}
coefs = accumulator.coefs;
// returns [ ~2.893, ~-2.409, ~1.871 ]
```

Higher values of `lambda` reduce the variance of the model coefficient estimates at the expense of introducing bias.

By default, the model contains an `intercept` term. To omit the `intercept`, set the corresponding option to `false`:

```javascript
var accumulator = incrSGDRegression({
    'intercept': false
});
accumulator( [ 1.4, 0.5 ], 2.0 );

var dim = accumulator.coefs.length;
// returns 2

accumulator = incrSGDRegression();
accumulator( [ 1.4, 0.5 ], 2.0 );

dim = accumulator.coefs.length;
// returns 3
```

If `intercept` is `true`, an element equal to one is implicitly added to each `x` vector. Hence, this module performs regularization of the intercept term.
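For intuition, the three loss descriptions above can be sketched in plain JavaScript. This is a hypothetical helper for illustration only (one common parameterization of the Huber loss), not the package's internal code:

```javascript
// Illustrative sketch of the three losses (hypothetical helper, not the
// package's API). `err` is the difference of observed and fitted values.
function loss( name, err, epsilon ) {
    var a = Math.abs( err );
    if ( name === 'epsilonInsensitive' ) {
        // Zero inside the epsilon tube; absolute error outside:
        return ( a > epsilon ) ? a : 0.0;
    }
    if ( name === 'huber' ) {
        // Quadratic near zero, linear in the tails (bounded gradient,
        // hence less influence of outliers):
        return ( a < epsilon ) ? 0.5 * err * err : epsilon * ( a - (0.5 * epsilon) );
    }
    // 'squaredError':
    return err * err;
}

console.log( loss( 'huber', 3.0, 1.0 ) );
// => 2.5
```

The key difference is in the tails: `squaredError` grows quadratically with the error, while the other two grow only linearly, which is why they are more robust to outliers.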
#### accumulator( x, y )

Updates the model coefficients in light of incoming data. `y` must be a numeric response value and `x` a `numeric array` of predictors. The number of predictors is decided upon the first invocation of this method. All subsequent calls must supply `x` vectors of the same dimensionality.

```javascript
accumulator( [ 1.0, 0.0 ], 5.0 );
```

#### accumulator.predict( x )

Predicts the response for a new feature vector `x`, where `x` must be a `numeric array` of predictors. Given a feature vector `x = [x_0, x_1, ...]` and model coefficients `c = [c_0, c_1, ...]`, the prediction is equal to `x_0*c_0 + x_1*c_1 + ... + c_intercept`.

```javascript
var yhat = accumulator.predict( [ 0.5, 2.0 ] );
// returns
```

#### accumulator.coefs

Getter for the model coefficients / feature weights stored in an `array`. The coefficients are ordered as `[c_0, c_1, ..., c_intercept]`, where `c_0` corresponds to the first feature in `x` and so on.

```javascript
var coefs = accumulator.coefs;
// returns
```
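The prediction formula above is a dot product of the features and the weights plus the intercept, which sits at the end of the coefficient array. A minimal plain-JavaScript sketch (for illustration; `predict` here is a free-standing hypothetical helper, not the accumulator method):

```javascript
// Illustrative sketch of x_0*c_0 + x_1*c_1 + ... + c_intercept, where
// the intercept is the last element of the coefficient array:
function predict( x, coefs ) {
    var out = coefs[ coefs.length - 1 ]; // c_intercept
    var i;
    for ( i = 0; i < x.length; i++ ) {
        out += x[ i ] * coefs[ i ];
    }
    return out;
}

console.log( predict( [ 0.5, 0.25 ], [ 2.0, 4.0, 1.0 ] ) );
// => 3
```

This also makes the ordering convention of `accumulator.coefs` concrete: feature weights first, intercept last.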
## Notes

- Stochastic gradient descent is sensitive to the scaling of the features. One is best advised to either scale each attribute to `[0,1]` or `[-1,1]` or to transform them into z-scores with zero mean and unit variance. One should keep in mind that the same scaling has to be applied to test vectors in order to obtain accurate predictions.
- Since this module performs regularization of the intercept term, scaling the response variable to an appropriate scale is also highly recommended.
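One way to z-score features incrementally, so the scaling fits an online setting, is Welford's algorithm for running mean and variance. The sketch below is a hypothetical helper (not part of this package), and in practice one would keep one scaler per feature and freeze the learned mean and standard deviation before scaling test vectors:

```javascript
// Illustrative online z-score scaler using Welford's algorithm
// (hypothetical helper, not part of this package):
function createScaler() {
    var count = 0;
    var mean = 0.0;
    var m2 = 0.0; // running sum of squared deviations from the mean
    return function scale( x ) {
        var delta;
        var sd;
        count += 1;
        delta = x - mean;
        mean += delta / count;
        m2 += delta * ( x - mean );
        // Sample standard deviation (fall back to 1 for the first point):
        sd = ( count > 1 ) ? Math.sqrt( m2 / ( count - 1 ) ) : 1.0;
        return ( sd > 0.0 ) ? ( x - mean ) / sd : 0.0;
    };
}

var scale = createScaler();
console.log( scale( 5.0 ) );
// => 0
```

The standardized values can then be passed to the accumulator in place of the raw features.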
## Examples

```javascript
var randu = require( '@stdlib/random-base-randu' );
var normal = require( '@stdlib/random-base-normal' );
var incrSGDRegression = require( '@stdlib/ml-incr-sgd-regression' );

var accumulator;
var rnorm;
var x1;
var x2;
var y;
var i;

rnorm = normal.factory( 0.0, 1.0 );

// Create model:
accumulator = incrSGDRegression({
    'lambda': 1e-7,
    'loss': 'squaredError',
    'intercept': true
});

// Update model as data comes in...
for ( i = 0; i < 10000; i++ ) {
    x1 = randu();
    x2 = randu();
    y = (3.0 * x1) + (-3.0 * x2) + 2.0 + rnorm();
    accumulator( [ x1, x2 ], y );
}

// Extract model coefficients:
console.log( accumulator.coefs );

// Predict new observations:
console.log( 'y_hat = %d; x1 = %d; x2 = %d', accumulator.predict( [ 0.9, 0.1 ] ), 0.9, 0.1 );
console.log( 'y_hat = %d; x1 = %d; x2 = %d', accumulator.predict( [ 0.1, 0.9 ] ), 0.1, 0.9 );
console.log( 'y_hat = %d; x1 = %d; x2 = %d', accumulator.predict( [ 0.9, 0.9 ] ), 0.9, 0.9 );
```
* * *

## Notice

This package is part of [stdlib][stdlib], a standard library for JavaScript and Node.js, with an emphasis on numerical and scientific computing. The library provides a collection of robust, high performance libraries for mathematics, statistics, streams, utilities, and more. For more information on the project, filing bug reports and feature requests, and guidance on how to develop [stdlib][stdlib], see the main project [repository][stdlib].

#### Community

[![Chat][chat-image]][chat-url]

---

## License

See [LICENSE][stdlib-license].

## Copyright

Copyright © 2016-2025. The Stdlib [Authors][stdlib-authors].

Owner

  • Name: stdlib
  • Login: stdlib-js
  • Kind: organization

Standard library for JavaScript.

Citation (CITATION.cff)

cff-version: 1.2.0
title: stdlib
message: >-
  If you use this software, please cite it using the
  metadata from this file.

type: software

authors:
  - name: The Stdlib Authors
    url: https://github.com/stdlib-js/stdlib/graphs/contributors

repository-code: https://github.com/stdlib-js/stdlib
url: https://stdlib.io

abstract: |
  Standard library for JavaScript and Node.js.

keywords:
  - JavaScript
  - Node.js
  - TypeScript
  - standard library
  - scientific computing
  - numerical computing
  - statistical computing

license: Apache-2.0 AND BSL-1.0

date-released: 2016

GitHub Events

Total
  • Push event: 31
Last Year
  • Push event: 31

Committers

Last synced: about 2 years ago

All Time
  • Total Commits: 55
  • Total Committers: 1
  • Avg Commits per committer: 55.0
  • Development Distribution Score (DDS): 0.0
Past Year
  • Commits: 14
  • Committers: 1
  • Avg Commits per committer: 14.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
stdlib-bot n****y@s****o 55
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 8 months ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels

Packages

  • Total packages: 1
  • Total downloads:
    • npm 31 last-month
  • Total dependent packages: 3
  • Total dependent repositories: 2
  • Total versions: 11
  • Total maintainers: 4
npmjs.org: @stdlib/ml-incr-sgd-regression

Online regression via stochastic gradient descent (SGD).

  • Homepage: https://stdlib.io
  • License: Apache-2.0
  • Latest release: 0.2.2
    published over 1 year ago
  • Versions: 11
  • Dependent Packages: 3
  • Dependent Repositories: 2
  • Downloads: 31 Last month
Rankings
Dependent packages count: 5.8%
Downloads: 6.6%
Dependent repos count: 8.0%
Average: 9.5%
Stargazers count: 11.2%
Forks count: 15.9%
Funding
  • type: opencollective
  • url: https://opencollective.com/stdlib
Last synced: 6 months ago

Dependencies

package.json npm
  • @stdlib/assert-is-function ^0.0.x development
  • @stdlib/random-base-normal ^0.0.x development
  • @stdlib/random-base-randu ^0.0.x development
  • istanbul ^0.4.1 development
  • tap-spec 5.x.x development
  • tape git+https://github.com/kgryte/tape.git#fix/globby development
  • @stdlib/assert-has-own-property ^0.0.x
  • @stdlib/assert-is-array ^0.0.x
  • @stdlib/assert-is-boolean ^0.0.x
  • @stdlib/assert-is-nonnegative-number ^0.0.x
  • @stdlib/assert-is-plain-object ^0.0.x
  • @stdlib/assert-is-positive-integer ^0.0.x
  • @stdlib/assert-is-positive-number ^0.0.x
  • @stdlib/assert-is-string ^0.0.x
  • @stdlib/math-base-special-max ^0.0.x
  • @stdlib/math-base-special-pow ^0.0.x
  • @stdlib/string-format ^0.0.x
  • @stdlib/types ^0.0.x
  • @stdlib/utils-copy ^0.0.x
  • @stdlib/utils-define-nonenumerable-read-only-accessor ^0.0.x
  • @stdlib/utils-define-nonenumerable-read-only-property ^0.0.x
.github/workflows/cancel.yml actions
  • styfle/cancel-workflow-action 0.11.0 composite
.github/workflows/close_pull_requests.yml actions
  • superbrothers/close-pull-request v3 composite
.github/workflows/examples.yml actions
  • actions/checkout v3 composite
  • actions/setup-node v3 composite
.github/workflows/npm_downloads.yml actions
  • actions/checkout v3 composite
  • actions/setup-node v3 composite
  • actions/upload-artifact v3 composite
  • distributhor/workflow-webhook v3 composite
.github/workflows/productionize.yml actions
  • act10ns/slack v1 composite
  • actions/checkout v3 composite
  • actions/setup-node v3 composite
  • stdlib-js/bundle-action main composite
  • stdlib-js/transform-errors-action main composite
.github/workflows/publish.yml actions
  • JS-DevTools/npm-publish v1 composite
  • act10ns/slack v1 composite
  • actions/checkout v3 composite
  • actions/setup-node v3 composite
  • styfle/cancel-workflow-action 0.11.0 composite
.github/workflows/test.yml actions
  • act10ns/slack v1 composite
  • actions/checkout v3 composite
  • actions/setup-node v3 composite
.github/workflows/test_bundles.yml actions
  • act10ns/slack v1 composite
  • actions/checkout v3 composite
  • actions/setup-node v3 composite
  • denoland/setup-deno v1 composite
.github/workflows/test_coverage.yml actions
  • act10ns/slack v1 composite
  • actions/checkout v3 composite
  • actions/setup-node v3 composite
  • codecov/codecov-action v3 composite
  • distributhor/workflow-webhook v3 composite
.github/workflows/test_install.yml actions
  • act10ns/slack v1 composite
  • actions/checkout v3 composite
  • actions/setup-node v3 composite