Recent Releases of neurons
neurons - v2.6.2
Fixed-size vectors and reduced redundant cloning.
- Use fixed-size vectors where possible to improve performance.
- Modify tensor operations to both utilise parallelisation and reduce redundant cloning (sketched below). #6
- Reduce remaining redundant cloning throughout the network. #6
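As an illustration of the parallelisation change, here is a minimal sketch (assuming a rayon-style approach; not the library's actual code) of an in-place, parallelised element-wise operation that avoids cloning intermediate buffers:

```rs
use rayon::prelude::*;

/// Element-wise addition over flat tensor storage, done in place so no
/// intermediate buffer needs to be cloned. `rayon` splits the slices
/// into chunks and processes them across threads.
fn add_assign(data: &mut [f32], other: &[f32]) {
    assert_eq!(data.len(), other.len());
    data.par_iter_mut()
        .zip(other.par_iter())
        .for_each(|(a, b)| *a += *b);
}
```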
Benchmark results for `benches/benchmark.rs` (MNIST version):
- v2.6.2: 19.826974433s (1.92x speedup from v2.6.1)
- v2.6.1: 38.140101795s (2.31x slowdown from v2.0.1)
- v2.0.1: 16.504570304s
Note: v2.0.1 is massively outdated with respect to modularity, which is reflected in its benchmark time.
Published by hallvardnmbu about 1 year ago
neurons - v2.5.3
Architecture comparison.
Added examples comparing the performance of different architectures.
The examples also probe the final network by disabling skip connections, feedback blocks, and so on.
* examples/compare/*
Corresponding plotting functionality.
* documentation/comparison.py
Published by hallvardnmbu over 1 year ago
neurons - v2.4.0
Feedback blocks.
Thorough expansion of the feedback module. Feedback blocks automatically handle weight coupling and skip connections.
When defining a feedback block in the network's layers, the following syntax is used:
```rs
network.feedback(
    vec![feedback::Layer::Convolution(
        1,
        activation::Activation::ReLU,
        (3, 3),
        (1, 1),
        (1, 1),
        None,
    )],
    2,
    true,
);
```
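For context: the `vec![...]` holds the layers that make up the block, while the trailing arguments plausibly set the number of feedback passes and whether weights are coupled across them. This reading is an inference from the description above, not something the release notes spell out.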
Published by hallvardnmbu over 1 year ago
neurons - v2.1.0
Maxpool tensor consistency.
- Update maxpool logic to ensure consistency with the other layers.
- Maxpool layers now return a `tensor::Tensor` (of shape `tensor::Shape::Quintuple`) instead of nested `Vec`s.
- This will lead to consistency when implementing maxpool for `feedback` blocks.
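For intuition, here is a minimal sketch (assumed, simplified stand-ins; the real `tensor::Shape` and `tensor::Tensor` will differ) of what a flat tensor with an explicit five-dimensional shape looks like compared with nested `Vec`s:

```rs
// Assumed stand-in for tensor::Shape: five explicit dimensions
// instead of deeply nested Vec<Vec<Vec<Vec<Vec<f32>>>>>.
enum Shape {
    Quintuple(usize, usize, usize, usize, usize),
}

// Assumed stand-in for tensor::Tensor: one contiguous allocation,
// indexed via `shape`, giving a uniform interface across layers.
struct Tensor {
    shape: Shape,
    data: Vec<f32>,
}

impl Tensor {
    fn zeros(a: usize, b: usize, c: usize, d: usize, e: usize) -> Self {
        Tensor {
            shape: Shape::Quintuple(a, b, c, d, e),
            data: vec![0.0; a * b * c * d * e],
        }
    }
}
```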
Published by hallvardnmbu over 1 year ago
neurons - v2.0.3
Improved optimizer creation.
Before:
```rs
network.set_optimizer(
    optimizer::Optimizer::AdamW(
        optimizer::AdamW {
            learning_rate: 0.001,
            beta1: 0.9,
            beta2: 0.999,
            epsilon: 1e-8,
            decay: 0.01,

            // To be filled by the network:
            momentum: vec![],
            velocity: vec![],
        }
    )
);
```
Now:
```rs
network.set_optimizer(optimizer::RMSprop::create(
    0.001,      // Learning rate
    0.0,        // Alpha
    1e-8,       // Epsilon
    Some(0.01), // Decay
    Some(0.01), // Momentum
    true,       // Centered
));
```
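With `create`, the internal state (the momentum and velocity buffers that previously had to be passed in as empty vectors) is filled in by the network itself, so only the hyperparameters are supplied.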
Published by hallvardnmbu over 1 year ago