MatDL

MatDL: A Lightweight Deep Learning Library in MATLAB - Published in JOSS (2017)

https://github.com/haythamfayek/matdl

Science Score: 95.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 5 DOI reference(s) in README and JOSS metadata
  • Academic publication links
    Links to: joss.theoj.org, zenodo.org
  • Committers with academic emails
    1 of 1 committers (100.0%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
    Published in Journal of Open Source Software

Keywords

deep-learning machine-learning neural-networks
Last synced: 4 months ago

Repository

A Lightweight Deep Learning Library in MATLAB

Basic Info
  • Host: GitHub
  • Owner: haythamfayek
  • License: MIT
  • Language: MATLAB
  • Default Branch: master
  • Size: 49.8 KB
Statistics
  • Stars: 11
  • Watchers: 3
  • Forks: 0
  • Open Issues: 0
  • Releases: 2
Topics
deep-learning machine-learning neural-networks
Created over 8 years ago · Last pushed over 6 years ago
Metadata Files
Readme License

README.md

MatDL


MatDL is an open-source, lightweight deep learning library written natively in MATLAB that implements some of the most commonly used deep learning algorithms. The library comprises functions that implement: (1) basic building blocks of modern neural networks, such as affine transformations, convolutions, nonlinear operations, dropout, and batch normalization; (2) popular architectures, such as deep neural networks (DNNs), convolutional neural networks (ConvNets), and recurrent neural networks (RNNs), including their long short-term memory (LSTM) variant; (3) optimizers, such as stochastic gradient descent (SGD), RMSProp, and Adam; as well as (4) boilerplate functions for training, gradient checking, and more.

Installation

  • Add MatDL to your path:

```matlab
addpath(genpath('MatDL'));
```

  • Compile the C MEX files using the Makefile, or from within MATLAB:

```matlab
cd MatDL/convnet;
mex col2im_mex.c;
mex im2col_mex.c;
cd ..; cd ..;
```
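After compiling, the MEX functions should resolve on the MATLAB path. A quick sanity check (this snippet is illustrative, not part of MatDL; `exist` returns 3 when a name resolves to a MEX file):

```matlab
% Confirm the compiled MEX files are visible on the path.
assert(exist('im2col_mex', 'file') == 3, 'im2col_mex was not built or is not on the path');
assert(exist('col2im_mex', 'file') == 3, 'col2im_mex was not built or is not on the path');
```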

Project Layout

common/: Basic building blocks of neural networks such as nonlinear functions, etc.

convnet/: ConvNet specific functions such as convolutional layers, max pooling layers, etc.

nn/: NN specific functions.

optim/: Optimization algorithms such as SGD, RMSProp, ADAM, etc.

rnn/: RNN and LSTM functions.

train/: Functions for gradient checking, training, and prediction.

zoo/: Sample model definitions and initializations.
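As an illustration of what lives in optim/: RMSProp scales each gradient component by a running average of its squared magnitude. The sketch below shows the generic update rule only; it is not MatDL's exact function signature:

```matlab
% Generic RMSProp update (illustrative; MatDL's optim/ implementation may differ).
% w: parameters, dw: gradient, cache: running average of squared gradients.
function [w, cache] = rmsprop_step(w, dw, cache, learningRate, decay)
    if isempty(cache), cache = zeros(size(w)); end
    cache = decay * cache + (1 - decay) * dw.^2;        % running avg of squared grads
    w = w - learningRate * dw ./ (sqrt(cache) + 1e-8);  % per-component scaled step
end
```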

Usage

Below is a complete minimal working example for a DNN; the repository also includes ConvNet and RNN examples.

```matlab
% A complete minimum working example.
%% Init
clear all
addpath(genpath('../MatDL'));

%% Load data
load('../Data/mnist_uint8.mat'); % Replace with your data file
X = double(train_x)/255; Y = double(train_y);
XVal = double(test_x)/255; YVal = double(test_y);

rng(0);

%% Initialize model
opt = struct;
[model, opt] = init_six_nn_bn(784, 10, [100, 100, 100, 100, 100], opt);

%% Hyper-parameters
opt.batchSize = 100;

opt.optim = @rmsprop;
% opt.beta1 = 0.9; opt.beta2 = 0.999; opt.t = 0; opt.mgrads = opt.vgrads;
opt.rmspropDecay = 0.99;
% opt.initialMomentum = 0.5; opt.switchEpochMomentum = 1; opt.finalMomentum = 0.9;
opt.learningRate = 0.01;
opt.learningDecaySchedule = 'stepsave'; % 'no_decay', 't/T', 'step'
opt.learningDecayRate = 0.5;
opt.learningDecayRateStep = 5;

opt.dropout = 0.5;
opt.weightDecay = false;
opt.maxNorm = false;

opt.maxEpochs = 100;
opt.earlyStoppingPatience = 20;
opt.valFreq = 100;

opt.plotProgress = true;
opt.extractFeature = false;
opt.computeDX = false;

opt.useGPU = false;
if (opt.useGPU) % Copy data, dropout, model, vgrads, BNParams
    X = gpuArray(X); Y = gpuArray(Y); XVal = gpuArray(XVal); YVal = gpuArray(YVal);
    opt.dropout = gpuArray(opt.dropout);
    p = fieldnames(model);
    for i = 1:numel(p), model.(p{i}) = gpuArray(model.(p{i})); opt.vgrads.(p{i}) = gpuArray(opt.vgrads.(p{i})); end
    p = fieldnames(opt);
    for i = 1:numel(p), if (strfind(p{i},'bnParam')), opt.(p{i}).runningMean = gpuArray(opt.(p{i}).runningMean); opt.(p{i}).runningVar = gpuArray(opt.(p{i}).runningVar); end; end
end

%% Gradient check
x = X(1:100,:); y = Y(1:100,:);
maxRelError = gradcheck(@six_nn_bn, x, model, y, opt, 10);

%% Train
[model, trainLoss, trainAccuracy, valLoss, valAccuracy, opt] = train(X, Y, XVal, YVal, model, @six_nn_bn, opt);

%% Predict
[yplabel, confidence, classes, classConfidences, yp] = predict(XVal, @six_nn_bn, model, opt);
```
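The gradcheck call in the example compares analytic gradients against numerical estimates. Conceptually (a self-contained toy sketch, not MatDL's implementation), a central-difference check on a function with a known gradient looks like this:

```matlab
% Toy central-difference gradient check on f(x) = sum(x.^2), whose analytic
% gradient is 2*x. The maximum relative error should be tiny (around 1e-9).
f = @(x) sum(x.^2);
x = randn(5, 1);
analytic = 2 * x;
h = 1e-5;
numeric = zeros(size(x));
for i = 1:numel(x)
    xp = x; xp(i) = xp(i) + h;   % perturb one coordinate up
    xm = x; xm(i) = xm(i) - h;   % and down
    numeric(i) = (f(xp) - f(xm)) / (2 * h);
end
maxRelError = max(abs(analytic - numeric) ./ max(abs(analytic) + abs(numeric), eps));
```

A large maxRelError (say, above 1e-4) usually points to a bug in the analytic gradient rather than numerical noise.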

Contributing

Contributions are highly welcome!

If you wish to contribute, follow these steps:
  • Create a personal fork of the MatDL GitHub repository.
  • Make your changes in a branch named other than master.
  • Follow the same coding and documentation style for consistency.
  • Make sure that your changes do not break any of the existing functionality.
  • Submit a pull request.

Please use GitHub issues to report bugs or request features.

Citation

If you use this library in your research, please cite:

Fayek, H. M. (2017). MatDL: A Lightweight Deep Learning Library in MATLAB. Journal of Open Source Software, 2(19), 413. doi:10.21105/joss.00413

@article{Fayek2017,
  author    = {Haytham M. Fayek},
  title     = {{MatDL}: A Lightweight Deep Learning Library in {MATLAB}},
  journal   = {Journal of Open Source Software},
  year      = {2017},
  month     = {nov},
  volume    = {2},
  number    = {19},
  doi       = {10.21105/joss.00413},
  url       = {https://doi.org/10.21105/joss.00413},
  publisher = {The Open Journal},
}

References

MatDL was inspired by Stanford's CS231n and Torch, and is conceptually similar to Keras and Lasagne. Torch, Keras, and Lasagne are better suited for large-scale experiments.

License

MIT

Owner

  • Name: Haytham Fayek
  • Login: haythamfayek
  • Kind: user

JOSS Publication

MatDL: A Lightweight Deep Learning Library in MATLAB
Published
November 07, 2017
Volume 2, Issue 19, Page 413
Authors
Haytham M. Fayek ORCID
RMIT University
Editor
Christopher R. Madan ORCID
Tags
Machine Learning Deep Learning Neural Networks

Committers

Last synced: 5 months ago

All Time
  • Total Commits: 10
  • Total Committers: 1
  • Avg Commits per committer: 10.0
  • Development Distribution Score (DDS): 0.0
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Haytham Fayek h****k@i****g 10

Issues and Pull Requests

Last synced: 4 months ago

All Time
  • Total issues: 3
  • Total pull requests: 0
  • Average time to close issues: 1 day
  • Average time to close pull requests: N/A
  • Total issue authors: 2
  • Total pull request authors: 0
  • Average comments per issue: 1.33
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • nirum (2)
  • digantamisra98 (1)