Recent Releases of Lux

Lux - LuxTestUtils-v2.0.1


Diff since LuxTestUtils-v1.7.2

Merged pull requests:

- perf: benchmarking our models against Jax (Flax) (#1000) (@avik-pal)
- fix: remove Optimisers.jl patch (#1247) (@avik-pal)
- test: fix tests (#1278) (@avik-pal)
- fix: try fixing broken tests (#1279) (@avik-pal)
- show debug instead of warning if can't cuBLAS mult (#1280) (@ExpandingMan)
- chore: bump crate-ci/typos from 1.30.2 to 1.31.0 (#1281) (@dependabot[bot])
- chore: remove debug functionalities of reactant (#1285) (@avik-pal)
- fix: add a ReactantOptimisers wrapper (#1288) (@avik-pal)
- chore: bump crate-ci/typos from 1.31.0 to 1.31.1 (#1297) (@dependabot[bot])
- ci: use JuliaFormatter v1 (#1299) (@avik-pal)
- ci: multiple ci fixes (#1301) (@avik-pal)
- CompatHelper: bump compat for JET to 0.10 for package LuxTestUtils, (keep existing compat) (#1302) (@github-actions[bot])
- fix: new reactant version (#1303) (@avik-pal)
- fix: update Reactant training (#1304) (@avik-pal)
- fix: chain rules for recurrence tuple inputs (#1306) (@avik-pal)
- feat: fix return (#1307) (@avik-pal)
- fix: try increasing the samples in CI (#1309) (@avik-pal)
- fix: restrict dispatch types for cublaslt (#1311) (@avik-pal)
- chore: bump julia-actions/julia-format from 3 to 4 (#1313) (@dependabot[bot])
- feat: use 3rd order derivatives using Reactant (#1315) (@avik-pal)
- allow SelectDim to take arbitrary views (#1318) (@ExpandingMan)
- chore: bump crate-ci/typos from 1.31.1 to 1.32.0 (#1320) (@dependabot[bot])
- docs: fix wrong function names in RNG admonition in interface.md (#1325) (@KristianHolme)
- CompatHelper: bump compat for DocumenterVitepress to 0.2 for package docs, (keep existing compat) (#1328) (@github-actions[bot])
- CompatHelper: bump compat for Interpolations to 0.16 for package CIFAR10, (keep existing compat) (#1329) (@github-actions[bot])
- docs: lstm encoder decoder using Reactant (#1331) (@avik-pal)
- feat: lower embedding to direct indexing (#1332) (@avik-pal)
- fix: indexing (#1333) (@avik-pal)
- fix: reactant gradients + precision config (#1334) (@avik-pal)
- feat: emit batchnorm ops (#1336) (@avik-pal)
- fix: run more under withconfig (#1340) (@avik-pal)
- fix: update to use the new RNG from Reactant (#1341) (@avik-pal)
- fix: use ignore derivatives for Reactant (#1342) (@avik-pal)
- ci: taming down ci timings (#1343) (@avik-pal)
- fix: remove onehotarrays patch (#1344) (@avik-pal)
- fix: bump reactant min version (#1345) (@avik-pal)
- CompatHelper: bump compat for MKL in [weakdeps] to 0.9 for package LuxLib, (keep existing compat) (#1346) (@github-actions[bot])
- CompatHelper: bump compat for MKL to 0.9 for package test, (keep existing compat) (#1347) (@github-actions[bot])
- chore: bump crate-ci/typos from 1.32.0 to 1.33.1 (#1349) (@dependabot[bot])
- chore: use uv for python (#1350) (@avik-pal)
- CompatHelper: bump compat for CairoMakie to 0.14 for package DDIM, (keep existing compat) (#1351) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.14 for package GravitationalWaveForm, (keep existing compat) (#1352) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.14 for package LSTMEncoderDecoder, (keep existing compat) (#1353) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.14 for package OptimizationIntegration, (keep existing compat) (#1354) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.14 for package PINN2DPDE, (keep existing compat) (#1355) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.14 for package PolynomialFitting, (keep existing compat) (#1356) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.14 for package RealNVP, (keep existing compat) (#1357) (@github-actions[bot])
- fix: remove type piracy (#1360) (@avik-pal)
- CompatHelper: bump compat for CairoMakie to 0.15 for package DDIM, (keep existing compat) (#1362) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.15 for package GravitationalWaveForm, (keep existing compat) (#1363) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.15 for package LSTMEncoderDecoder, (keep existing compat) (#1364) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.15 for package OptimizationIntegration, (keep existing compat) (#1365) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.15 for package PINN2DPDE, (keep existing compat) (#1366) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.15 for package PolynomialFitting, (keep existing compat) (#1367) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.15 for package RealNVP, (keep existing compat) (#1368) (@github-actions[bot])
- fix: use Int32 for GCN cora example (#1369) (@avik-pal)
- Fix backticks in examples/Basics (#1370) (@abhro)
- Fix up minor docs/docstrings formatting (#1371) (@abhro)
- feat: forwarddiff support for gather/scatter (#1373) (@avik-pal)
- fix: handle multi-device reactant (#1374) (@avik-pal)
- feat: serialization to tensorflow saved model (#1375) (@avik-pal)
- chore: update version for release (#1376) (@avik-pal)
- CompatHelper: add new compat entry for PythonCall at version 0.9 for package test, (keep existing compat) (#1377) (@github-actions[bot])
- fix: missing variable (#1379) (@avik-pal)
- chore: bump crate-ci/typos from 1.33.1 to 1.34.0 (#1382) (@dependabot[bot])
- fix: State returned by MultiHeadAttention is incompatible with the initialized state (#1384) (@yeruoforever)
- feat: annotate important parts of training loop (#1385) (@avik-pal)
- fix: bypass fused kernels for mooncake (for now) (#1387) (@avik-pal)
- feat: AutoMooncake for training lux models (#1388) (@avik-pal)
- CompatHelper: add new compat entry for Mooncake at version 0.4 for package test, (keep existing compat) (#1389) (@github-actions[bot])
- docs: cleanup CIFAR 10 example dependencies (#1391) (@avik-pal)
- CompatHelper: bump compat for AMDGPU in [weakdeps] to 2 for package LuxLib, (keep existing compat) (#1394) (@github-actions[bot])
- CompatHelper: bump compat for AMDGPU in [weakdeps] to 2 for package MLDataDevices, (keep existing compat) (#1395) (@github-actions[bot])
- CompatHelper: bump compat for AMDGPU in [weakdeps] to 2 for package WeightInitializers, (keep existing compat) (#1396) (@github-actions[bot])
- De-fragment markdown list in distributed_utils.md (#1397) (@abhro)
- Precompile environment before running tutorials (#1398) (@abhro)
- Add meaning of tuple elements in docs/tutorials.jl (#1399) (@abhro)
- Split function into two for dataset/dataloader concept separation (#1400) (@abhro)
- feat: add preserves_state_type to the interface (#1401) (@avik-pal)
- refactor: move stateful layer into LuxCore (#1402) (@avik-pal)
- Add and update links to external packages/resources in docs (#1403) (@abhro)
- Change doc title in docs/src/introduction/index.md (#1404) (@abhro)
- chore: bump julia-actions/julia-downgrade-compat from 1 to 2 (#1405) (@dependabot[bot])
- fix a typo in index.md (#1407) (@rzyu45)
- Suppress output in docs and examples (#1408) (@abhro)
- Add more explanatory text in tutorials' data generation (#1409) (@abhro)
- Fix typos and fix up minor docs formatting (#1410) (@abhro)
- Use write(filename, obj) for file I/O in docs (#1411) (@abhro)
- Split up steps in PINN tutorial (#1412) (@abhro)
- Use parenthesized version of @printf for better code formatting (#1413) (@abhro)
- ci: streamline ci testing (#1415) (@avik-pal)
- Improve logic for printing updates on an epoch (#1417) (@abhro)
- chore: bump crate-ci/typos from 1.34.0 to 1.35.3 (#1420) (@dependabot[bot])
- chore: fix mooncake circular dep (#1421) (@avik-pal)
- chore: bump actions/checkout from 4 to 5 (#1423) (@dependabot[bot])
- chore: bump crate-ci/typos from 1.35.3 to 1.35.4 (#1424) (@dependabot[bot])
- Allow connection in Parallel and fusion in BranchLayer to be layers (#1425) (@Copilot)
- Add comprehensive GitHub Copilot instructions with JuliaFormatter v1 and temporary environments (#1427) (@Copilot)
- test: Enzyme now works for upsample in 1.10 (#1428) (@avik-pal)
- docs: Fix wrong function name (#1429) (@agdestein)
- ci: enable gh actions telemetry (#1430) (@avik-pal)
- feat: better precision control for Reactant training API (#1431) (@avik-pal)
- fix: support CompactLayer in freeze (#1432) (@avik-pal)
- docs: improvements to tutorials (#1433) (@avik-pal)
- docs: add comprehensive documentation for supporting both Flux and Lux frameworks (#1434) (@Copilot)
- docs: fix tutorial links (#1436) (@avik-pal)
- feat: compact printing (#1437) (@avik-pal)
- feat: Qwen3 model with weight loading from huggingface (#1438) (@avik-pal)
- CompatHelper: bump compat for JLD2 to 0.6 for package DDIM, (keep existing compat) (#1439) (@github-actions[bot])
- CompatHelper: bump compat for JLD2 to 0.6 for package ImageNet, (keep existing compat) (#1440) (@github-actions[bot])
- CompatHelper: bump compat for JLD2 to 0.6 for package SimpleRNN, (keep existing compat) (#1441) (@github-actions[bot])
- chore: bump crate-ci/typos from 1.35.4 to 1.35.5 (#1442) (@dependabot[bot])
- feat: add RMSNorm Layer (#1443) (@avik-pal)
- feat: expose direct functions for computing RoPE (#1444) (@avik-pal)
- refactor: cleanup WeightInitializers to reduce extensions (#1447) (@avik-pal)
- fix: propagate runtime activity from AutoEnzyme (#1448) (@avik-pal)
- fix: more streamlined testing (#1455) (@avik-pal)
- ci: run more CUDA tests in parallel (#1459) (@avik-pal)
- chore: bump crate-ci/typos from 1.35.5 to 1.35.7 (#1460) (@dependabot[bot])
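Several of the pull requests above touch Lux's container-layer API, notably #1425, which lets the `connection` of `Parallel` (and the fusion of `BranchLayer`) be a layer rather than a plain function. As a rough illustration of the `Parallel` container these changes extend (the layer sizes and activation choices here are arbitrary, and the sketch uses only the long-standing public API, not the new connection-as-layer feature):

```julia
using Lux, Random

# Parallel applies each branch to the same input and combines the
# branch outputs with `connection` (here the plain function `+`).
# After #1425 this connection may itself be a Lux layer.
model = Parallel(+, Dense(2 => 4, relu), Dense(2 => 4, tanh))

rng = Random.default_rng()
ps, st = Lux.setup(rng, model)              # explicit parameters and state

x = rand(Float32, 2, 8)                     # 2 features, batch of 8
y, st_new = model(x, ps, st)                # y is a 4×8 matrix
```

Lux's explicit-parameter style means the model itself is immutable; `ps`/`st` are threaded through every call, which is what makes features like freezing (#1432) and the Reactant training API (#1431) composable.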

Closed issues:

- Let connection in Parallel be a layer (#377)
- Export trained model for Tensorflow/PyTorch/C++? (#453)
- Externalize gradient computations to DifferentiationInterface.jl? (#544)
- Per-Layer Profiling (#864)
- Integration of oneDNN for CPU operations (#1013)
- Optimisers.jl patch for Reactant Support (#1146)
- Emit Batchnorm Op for Training (#1208)
- Add an AutoMooncake dispatch for training Lux models (#1238)
- How to maintain a package that supports both Flux & Lux? (#1243)
- Convolutional VAE for MNIST using Reactant failed to produce right results (#1274)
- GPU Inefficiency in Gradient Computation with Custom Recurrence (#1284)
- PINN2DPDE broke in the latest Optimisers Patch removal (#1286)
- Duplicating Scalars for Optimisers prevents CSE (#1289)
- Reactant 0.2.61 produces incorrect gradients (#1292)
- missing or incorrect ProjectTo method breaks Recurrence with Zygote (#1305)
- dense operations fail on views on nvidia due to missing method (#1308)
- Gradient of while loop with Reactant seems broken (#1316)
- getting concatenating and splitting working with Reactant/Enzyme (#1317)
- Allow freeze for @compact defined model layers (#1319)
- LuxLib doing LinearAlgebra.mul! on non-arrays can cause fallback to errors (#1322)
- Don't materialize OneHotArrays with ReactantDevice (#1326)
- gpu_device(device_id) attaches to first/zeroth device regardless of device_id (#1330)
- Preference to control precision config (#1335)
- LSTMEncoderDecoder example broken (#1337)
- could not load library Reactant.TracedLinearAlgebra on Windows (#1339)
- Error when freezing part of a model + Reactant (#1348)
- TrainState with multiple Reactant Devices (#1358)
- ComponentArrays.jl type piracy? (#1359)
- Reactant.jl pass pipeline broke GCN Cora (#1361)
- jacobian vector product for Embedding (#1372)
- UndefVarError (:fname, LuxReactantExt) (#1378)
- Unable to evaluate a Lux model at a specific parameter set. (#1380)
- State Tuple returned by MultiHeadAttention is incompatible with the initialized tuple (#1383)
- MLIR Operand #1 doesn't dominate usage: Reactant + Cifar10 Example (#1386)
- Conversion of Arrays of StaticArray to Device (#1406)
- Move LuxLib Mooncake Ext to Mooncake (#1416)
- Question: If I have a Lux model with a single input, how do I create one with two inputs (#1419)
- ✨ Set up Copilot instructions (#1426)
- Complex kaiming_uniform initializations, only positive imaginary weights (#1445)
- Constant memory is stored (or returned) to a differentiable variable. when broadcasting vector (#1446)
- Bump LuxCUDA compat for CUDA 13 (#1449)
- Reactant testing rework (#1454)

Published by github-actions[bot] 6 months ago

Lux - v1.21.0


Diff since v1.17.0

Merged pull requests:

- refactor: move stateful layer into LuxCore (#1402) (@avik-pal)
- chore: bump actions/checkout from 4 to 5 (#1423) (@dependabot[bot])
- chore: bump crate-ci/typos from 1.35.3 to 1.35.4 (#1424) (@dependabot[bot])
- Allow connection in Parallel and fusion in BranchLayer to be layers (#1425) (@Copilot)
- Add comprehensive GitHub Copilot instructions with JuliaFormatter v1 and temporary environments (#1427) (@Copilot)
- test: Enzyme now works for upsample in 1.10 (#1428) (@avik-pal)
- docs: Fix wrong function name (#1429) (@agdestein)
- ci: enable gh actions telemetry (#1430) (@avik-pal)
- feat: better precision control for Reactant training API (#1431) (@avik-pal)
- fix: support CompactLayer in freeze (#1432) (@avik-pal)
- docs: improvements to tutorials (#1433) (@avik-pal)
- docs: add comprehensive documentation for supporting both Flux and Lux frameworks (#1434) (@Copilot)
- docs: fix tutorial links (#1436) (@avik-pal)
- feat: compact printing (#1437) (@avik-pal)
- feat: Qwen3 model with weight loading from huggingface (#1438) (@avik-pal)
- CompatHelper: bump compat for JLD2 to 0.6 for package DDIM, (keep existing compat) (#1439) (@github-actions[bot])
- CompatHelper: bump compat for JLD2 to 0.6 for package ImageNet, (keep existing compat) (#1440) (@github-actions[bot])
- CompatHelper: bump compat for JLD2 to 0.6 for package SimpleRNN, (keep existing compat) (#1441) (@github-actions[bot])
- chore: bump crate-ci/typos from 1.35.4 to 1.35.5 (#1442) (@dependabot[bot])
- feat: add RMSNorm Layer (#1443) (@avik-pal)
- feat: expose direct functions for computing RoPE (#1444) (@avik-pal)
- refactor: cleanup WeightInitializers to reduce extensions (#1447) (@avik-pal)
- fix: propagate runtime activity from AutoEnzyme (#1448) (@avik-pal)
- feat: faster scaled_dot_product_attention for reactant (#1452) (@avik-pal)
- fix: more streamlined testing (#1455) (@avik-pal)
- ci: run more CUDA tests in parallel (#1459) (@avik-pal)
- chore: bump crate-ci/typos from 1.35.5 to 1.35.7 (#1460) (@dependabot[bot])
- feat(LuxLib): faster scaled_dot_product_attention for reactant (#1461) (@avik-pal)
- Update overview page to highlight Reactant-first performance and deployment approach (#1462) (@Copilot)
- fix(MLDataDevices): mapreduce over empty structures (#1463) (@avik-pal)

Closed issues:

- Let connection in Parallel be a layer (#377)
- How to maintain a package that supports both Flux & Lux? (#1243)
- Allow freeze for @compact defined model layers (#1319)
- Preference to control precision config (#1335)
- ✨ Set up Copilot instructions (#1426)
- Complex kaiming_uniform initializations, only positive imaginary weights (#1445)
- Constant memory is stored (or returned) to a differentiable variable. when broadcasting vector (#1446)
- Bump LuxCUDA compat for CUDA 13 (#1449)
- Optimized LuxLib operations for Reactant (#1450)
- Reactant testing rework (#1454)
- Rework old documentation (#1456)

Published by github-actions[bot] 6 months ago

Lux - MLDataDevices-v1.11.2


Diff since MLDataDevices-v1.11.1

Merged pull requests:

- Split function into two for dataset/dataloader concept separation (#1400) (@abhro)
- feat: add preserves_state_type to the interface (#1401) (@avik-pal)
- refactor: move stateful layer into LuxCore (#1402) (@avik-pal)
- Add and update links to external packages/resources in docs (#1403) (@abhro)
- Change doc title in docs/src/introduction/index.md (#1404) (@abhro)
- chore: bump julia-actions/julia-downgrade-compat from 1 to 2 (#1405) (@dependabot[bot])
- fix a typo in index.md (#1407) (@rzyu45)
- Suppress output in docs and examples (#1408) (@abhro)
- Add more explanatory text in tutorials' data generation (#1409) (@abhro)
- Fix typos and fix up minor docs formatting (#1410) (@abhro)
- Use write(filename, obj) for file I/O in docs (#1411) (@abhro)
- Split up steps in PINN tutorial (#1412) (@abhro)
- Use parenthesized version of @printf for better code formatting (#1413) (@abhro)
- ci: streamline ci testing (#1415) (@avik-pal)
- Improve logic for printing updates on an epoch (#1417) (@abhro)
- chore: bump crate-ci/typos from 1.34.0 to 1.35.3 (#1420) (@dependabot[bot])
- chore: fix mooncake circular dep (#1421) (@avik-pal)
- chore: bump actions/checkout from 4 to 5 (#1423) (@dependabot[bot])
- chore: bump crate-ci/typos from 1.35.3 to 1.35.4 (#1424) (@dependabot[bot])
- Allow connection in Parallel and fusion in BranchLayer to be layers (#1425) (@Copilot)
- Add comprehensive GitHub Copilot instructions with JuliaFormatter v1 and temporary environments (#1427) (@Copilot)
- test: Enzyme now works for upsample in 1.10 (#1428) (@avik-pal)
- docs: Fix wrong function name (#1429) (@agdestein)
- ci: enable gh actions telemetry (#1430) (@avik-pal)
- feat: better precision control for Reactant training API (#1431) (@avik-pal)
- fix: support CompactLayer in freeze (#1432) (@avik-pal)
- docs: improvements to tutorials (#1433) (@avik-pal)
- docs: add comprehensive documentation for supporting both Flux and Lux frameworks (#1434) (@Copilot)
- docs: fix tutorial links (#1436) (@avik-pal)
- feat: compact printing (#1437) (@avik-pal)
- feat: Qwen3 model with weight loading from huggingface (#1438) (@avik-pal)
- CompatHelper: bump compat for JLD2 to 0.6 for package DDIM, (keep existing compat) (#1439) (@github-actions[bot])
- CompatHelper: bump compat for JLD2 to 0.6 for package ImageNet, (keep existing compat) (#1440) (@github-actions[bot])
- CompatHelper: bump compat for JLD2 to 0.6 for package SimpleRNN, (keep existing compat) (#1441) (@github-actions[bot])
- chore: bump crate-ci/typos from 1.35.4 to 1.35.5 (#1442) (@dependabot[bot])
- feat: add RMSNorm Layer (#1443) (@avik-pal)
- feat: expose direct functions for computing RoPE (#1444) (@avik-pal)
- refactor: cleanup WeightInitializers to reduce extensions (#1447) (@avik-pal)
- fix: propagate runtime activity from AutoEnzyme (#1448) (@avik-pal)
- fix: more streamlined testing (#1455) (@avik-pal)
- ci: run more CUDA tests in parallel (#1459) (@avik-pal)
- chore: bump crate-ci/typos from 1.35.5 to 1.35.7 (#1460) (@dependabot[bot])
- Update overview page to highlight Reactant-first performance and deployment approach (#1462) (@Copilot)
- fix(MLDataDevices): mapreduce over empty structures (#1463) (@avik-pal)

Closed issues:

- Let connection in Parallel be a layer (#377)
- How to maintain a package that supports both Flux & Lux? (#1243)
- Allow freeze for @compact defined model layers (#1319)
- Preference to control precision config (#1335)
- could not load library Reactant.TracedLinearAlgebra on Windows (#1339)
- Conversion of Arrays of StaticArray to Device (#1406)
- Move LuxLib Mooncake Ext to Mooncake (#1416)
- Question: If I have a Lux model with a single input, how do I create one with two inputs (#1419)
- ✨ Set up Copilot instructions (#1426)
- Complex kaiming_uniform initializations, only positive imaginary weights (#1445)
- Constant memory is stored (or returned) to a differentiable variable. when broadcasting vector (#1446)
- Bump LuxCUDA compat for CUDA 13 (#1449)
- Reactant testing rework (#1454)
- Rework old documentation (#1456)

Published by github-actions[bot] 6 months ago

Lux - LuxLib-v1.11.0


Diff since LuxLib-v1.10.2

Merged pull requests:

- refactor: move stateful layer into LuxCore (#1402) (@avik-pal)
- chore: bump actions/checkout from 4 to 5 (#1423) (@dependabot[bot])
- chore: bump crate-ci/typos from 1.35.3 to 1.35.4 (#1424) (@dependabot[bot])
- Allow connection in Parallel and fusion in BranchLayer to be layers (#1425) (@Copilot)
- Add comprehensive GitHub Copilot instructions with JuliaFormatter v1 and temporary environments (#1427) (@Copilot)
- test: Enzyme now works for upsample in 1.10 (#1428) (@avik-pal)
- docs: Fix wrong function name (#1429) (@agdestein)
- ci: enable gh actions telemetry (#1430) (@avik-pal)
- feat: better precision control for Reactant training API (#1431) (@avik-pal)
- fix: support CompactLayer in freeze (#1432) (@avik-pal)
- docs: improvements to tutorials (#1433) (@avik-pal)
- docs: add comprehensive documentation for supporting both Flux and Lux frameworks (#1434) (@Copilot)
- docs: fix tutorial links (#1436) (@avik-pal)
- feat: compact printing (#1437) (@avik-pal)
- feat: Qwen3 model with weight loading from huggingface (#1438) (@avik-pal)
- CompatHelper: bump compat for JLD2 to 0.6 for package DDIM, (keep existing compat) (#1439) (@github-actions[bot])
- CompatHelper: bump compat for JLD2 to 0.6 for package ImageNet, (keep existing compat) (#1440) (@github-actions[bot])
- CompatHelper: bump compat for JLD2 to 0.6 for package SimpleRNN, (keep existing compat) (#1441) (@github-actions[bot])
- chore: bump crate-ci/typos from 1.35.4 to 1.35.5 (#1442) (@dependabot[bot])
- feat: add RMSNorm Layer (#1443) (@avik-pal)
- feat: expose direct functions for computing RoPE (#1444) (@avik-pal)
- refactor: cleanup WeightInitializers to reduce extensions (#1447) (@avik-pal)
- fix: propagate runtime activity from AutoEnzyme (#1448) (@avik-pal)
- fix: more streamlined testing (#1455) (@avik-pal)
- ci: run more CUDA tests in parallel (#1459) (@avik-pal)
- chore: bump crate-ci/typos from 1.35.5 to 1.35.7 (#1460) (@dependabot[bot])
- feat(LuxLib): faster scaled_dot_product_attention for reactant (#1461) (@avik-pal)
- Update overview page to highlight Reactant-first performance and deployment approach (#1462) (@Copilot)
- fix(MLDataDevices): mapreduce over empty structures (#1463) (@avik-pal)

Closed issues:

- Let connection in Parallel be a layer (#377)
- How to maintain a package that supports both Flux & Lux? (#1243)
- Allow freeze for @compact defined model layers (#1319)
- Preference to control precision config (#1335)
- ✨ Set up Copilot instructions (#1426)
- Complex kaiming_uniform initializations, only positive imaginary weights (#1445)
- Constant memory is stored (or returned) to a differentiable variable. when broadcasting vector (#1446)
- Bump LuxCUDA compat for CUDA 13 (#1449)
- Optimized LuxLib operations for Reactant (#1450)
- Reactant testing rework (#1454)
- Rework old documentation (#1456)

Published by github-actions[bot] 6 months ago

Lux - v1.17.0


Diff since v1.16.0

Merged pull requests:

- CompatHelper: add new compat entry for Mooncake at version 0.4 for package test, (keep existing compat) (#1389) (@github-actions[bot])
- docs: cleanup CIFAR 10 example dependencies (#1391) (@avik-pal)
- CompatHelper: bump compat for AMDGPU in [weakdeps] to 2 for package LuxLib, (keep existing compat) (#1394) (@github-actions[bot])
- CompatHelper: bump compat for AMDGPU in [weakdeps] to 2 for package MLDataDevices, (keep existing compat) (#1395) (@github-actions[bot])
- CompatHelper: bump compat for AMDGPU in [weakdeps] to 2 for package WeightInitializers, (keep existing compat) (#1396) (@github-actions[bot])
- De-fragment markdown list in distributed_utils.md (#1397) (@abhro)
- Precompile environment before running tutorials (#1398) (@abhro)
- Add meaning of tuple elements in docs/tutorials.jl (#1399) (@abhro)
- Split function into two for dataset/dataloader concept separation (#1400) (@abhro)
- feat: add preserves_state_type to the interface (#1401) (@avik-pal)
- Add and update links to external packages/resources in docs (#1403) (@abhro)
- Change doc title in docs/src/introduction/index.md (#1404) (@abhro)
- chore: bump julia-actions/julia-downgrade-compat from 1 to 2 (#1405) (@dependabot[bot])
- fix a typo in index.md (#1407) (@rzyu45)
- Suppress output in docs and examples (#1408) (@abhro)
- Add more explanatory text in tutorials' data generation (#1409) (@abhro)
- Fix typos and fix up minor docs formatting (#1410) (@abhro)
- Use write(filename, obj) for file I/O in docs (#1411) (@abhro)
- Split up steps in PINN tutorial (#1412) (@abhro)
- Use parenthesized version of @printf for better code formatting (#1413) (@abhro)
- ci: streamline ci testing (#1415) (@avik-pal)
- Improve logic for printing updates on an epoch (#1417) (@abhro)
- chore: bump crate-ci/typos from 1.34.0 to 1.35.3 (#1420) (@dependabot[bot])
- chore: fix mooncake circular dep (#1421) (@avik-pal)
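Two of the documentation PRs in this release (#1411 and #1413) standardize on plain-Julia idioms in the docs. For readers unfamiliar with them, a minimal sketch of both idioms (the filename and format string are illustrative, not taken from the PRs):

```julia
using Printf

# Idiom from #1411: for simple one-shot output, `write(filename, obj)`
# replaces the open(...) do io ... end block. It returns the number of
# bytes written.
write("results.txt", "loss = 0.25\n")

# Idiom from #1413: the parenthesized (function-call-like) form of
# @printf, which formatters handle more predictably than the
# space-separated macro form `@printf "..." args...`.
@printf("epoch %3d  loss %.4f\n", 10, 0.2512)
```

Both forms are plain Julia Base/Printf; the PRs only change which style the Lux docs use, not any Lux API.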

Closed issues:

- could not load library Reactant.TracedLinearAlgebra on Windows (#1339)
- MLIR Operand #1 doesn't dominate usage: Reactant + Cifar10 Example (#1386)
- Conversion of Arrays of StaticArray to Device (#1406)
- Move LuxLib Mooncake Ext to Mooncake (#1416)
- Question: If I have a Lux model with a single input, how do I create one with two inputs (#1419)

Published by github-actions[bot] 6 months ago

Lux - LuxLib-v1.10.2


Diff since LuxLib-v1.10.1

Merged pull requests:

- Split function into two for dataset/dataloader concept separation (#1400) (@abhro)
- feat: add preserves_state_type to the interface (#1401) (@avik-pal)
- Add and update links to external packages/resources in docs (#1403) (@abhro)
- Change doc title in docs/src/introduction/index.md (#1404) (@abhro)
- chore: bump julia-actions/julia-downgrade-compat from 1 to 2 (#1405) (@dependabot[bot])
- fix a typo in index.md (#1407) (@rzyu45)
- Suppress output in docs and examples (#1408) (@abhro)
- Add more explanatory text in tutorials' data generation (#1409) (@abhro)
- Fix typos and fix up minor docs formatting (#1410) (@abhro)
- Use write(filename, obj) for file I/O in docs (#1411) (@abhro)
- Split up steps in PINN tutorial (#1412) (@abhro)
- Use parenthesized version of @printf for better code formatting (#1413) (@abhro)
- ci: streamline ci testing (#1415) (@avik-pal)
- Improve logic for printing updates on an epoch (#1417) (@abhro)
- chore: bump crate-ci/typos from 1.34.0 to 1.35.3 (#1420) (@dependabot[bot])
- chore: fix mooncake circular dep (#1421) (@avik-pal)

Closed issues:

- Conversion of Arrays of StaticArray to Device (#1406)
- Move LuxLib Mooncake Ext to Mooncake (#1416)
- Question: If I have a Lux model with a single input, how do I create one with two inputs (#1419)

Published by github-actions[bot] 6 months ago

Lux - LuxCore-v1.3.0


Diff since LuxCore-v1.2.6

Merged pull requests:

- perf: benchmarking our models against Jax (Flax) (#1000) (@avik-pal)
- fix: use ignore derivatives for Reactant (#1342) (@avik-pal)
- ci: taming down ci timings (#1343) (@avik-pal)
- fix: remove onehotarrays patch (#1344) (@avik-pal)
- fix: bump reactant min version (#1345) (@avik-pal)
- CompatHelper: bump compat for MKL in [weakdeps] to 0.9 for package LuxLib, (keep existing compat) (#1346) (@github-actions[bot])
- CompatHelper: bump compat for MKL to 0.9 for package test, (keep existing compat) (#1347) (@github-actions[bot])
- chore: bump crate-ci/typos from 1.32.0 to 1.33.1 (#1349) (@dependabot[bot])
- chore: use uv for python (#1350) (@avik-pal)
- CompatHelper: bump compat for CairoMakie to 0.14 for package DDIM, (keep existing compat) (#1351) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.14 for package GravitationalWaveForm, (keep existing compat) (#1352) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.14 for package LSTMEncoderDecoder, (keep existing compat) (#1353) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.14 for package OptimizationIntegration, (keep existing compat) (#1354) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.14 for package PINN2DPDE, (keep existing compat) (#1355) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.14 for package PolynomialFitting, (keep existing compat) (#1356) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.14 for package RealNVP, (keep existing compat) (#1357) (@github-actions[bot])
- fix: remove type piracy (#1360) (@avik-pal)
- CompatHelper: bump compat for CairoMakie to 0.15 for package DDIM, (keep existing compat) (#1362) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.15 for package GravitationalWaveForm, (keep existing compat) (#1363) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.15 for package LSTMEncoderDecoder, (keep existing compat) (#1364) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.15 for package OptimizationIntegration, (keep existing compat) (#1365) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.15 for package PINN2DPDE, (keep existing compat) (#1366) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.15 for package PolynomialFitting, (keep existing compat) (#1367) (@github-actions[bot])
- CompatHelper: bump compat for CairoMakie to 0.15 for package RealNVP, (keep existing compat) (#1368) (@github-actions[bot])
- fix: use Int32 for GCN cora example (#1369) (@avik-pal)
- Fix backticks in examples/Basics (#1370) (@abhro)
- Fix up minor docs/docstrings formatting (#1371) (@abhro)
- feat: forwarddiff support for gather/scatter (#1373) (@avik-pal)
- fix: handle multi-device reactant (#1374) (@avik-pal)
- feat: serialization to tensorflow saved model (#1375) (@avik-pal)
- chore: update version for release (#1376) (@avik-pal)
- CompatHelper: add new compat entry for PythonCall at version 0.9 for package test, (keep existing compat) (#1377) (@github-actions[bot])
- fix: missing variable (#1379) (@avik-pal)
- chore: bump crate-ci/typos from 1.33.1 to 1.34.0 (#1382) (@dependabot[bot])
- fix: State returned by MultiHeadAttention is incompatible with the initialized state (#1384) (@yeruoforever)
- feat: annotate important parts of training loop (#1385) (@avik-pal)
- fix: bypass fused kernels for mooncake (for now) (#1387) (@avik-pal)
- feat: AutoMooncake for training lux models (#1388) (@avik-pal)
- CompatHelper: add new compat entry for Mooncake at version 0.4 for package test, (keep existing compat) (#1389) (@github-actions[bot])
- docs: cleanup CIFAR 10 example dependencies (#1391) (@avik-pal)
- CompatHelper: bump compat for AMDGPU in [weakdeps] to 2 for package LuxLib, (keep existing compat) (#1394) (@github-actions[bot])
- CompatHelper: bump compat for AMDGPU in [weakdeps] to 2 for package MLDataDevices, (keep existing compat) (#1395) (@github-actions[bot])
- CompatHelper: bump compat for AMDGPU in [weakdeps] to 2 for package WeightInitializers, (keep existing compat) (#1396) (@github-actions[bot])
- De-fragment markdown list in distributed_utils.md (#1397) (@abhro)
- Precompile environment before running tutorials (#1398) (@abhro)
- Add meaning of tuple elements in docs/tutorials.jl (#1399) (@abhro)
- Split function into two for dataset/dataloader concept separation (#1400) (@abhro)
- feat: add preserves_state_type to the interface (#1401) (@avik-pal)
- Add and update links to external packages/resources in docs (#1403) (@abhro)
- Change doc title in docs/src/introduction/index.md (#1404) (@abhro)
- chore: bump julia-actions/julia-downgrade-compat from 1 to 2 (#1405) (@dependabot[bot])
- fix a typo in index.md (#1407) (@rzyu45)
- Suppress output in docs and examples (#1408) (@abhro)
- Add more explanatory text in tutorials' data generation (#1409) (@abhro)
- Fix typos and fix up minor docs formatting (#1410) (@abhro)
- Use write(filename, obj) for file I/O in docs (#1411) (@abhro)
- Split up steps in PINN tutorial (#1412) (@abhro)
- Use parenthesized version of @printf for better code formatting (#1413) (@abhro)
- ci: streamline ci testing (#1415) (@avik-pal)

Closed issues:

- Export trained model for Tensorflow/PyTorch/C++? (#453)
- Externalize gradient computations to DifferentiationInterface.jl? (#544)
- Add an AutoMooncake dispatch for training Lux models (#1238)
- Don't materialize OneHotArrays with ReactantDevice (#1326)
- LSTMEncoderDecoder example broken (#1337)
- Error when freezing part of a model + Reactant (#1348)
- TrainState with multiple Reactant Devices (#1358)
- ComponentArrays.jl type piracy? (#1359)
- Reactant.jl pass pipeline broke GCN Cora (#1361)
- jacobian vector product for Embedding (#1372)
- UndefVarError (:fname, LuxReactantExt) (#1378)
- Unable to evaluate a Lux model at a specific parameter set. (#1380)
- State Tuple returned by MultiHeadAttention is incompatible with the initialized tuple (#1383)
- MLIR Operand #1 doesn't dominate usage: Reactant + Cifar10 Example (#1386)

- Julia
Published by github-actions[bot] 7 months ago

Lux - MLDataDevices-v1.11.1

MLDataDevices MLDataDevices-v1.11.1

Diff since MLDataDevices-v1.11.0

Merged pull requests: - fix: bypass fused kernels for mooncake (for now) (#1387) (@avik-pal) - feat: AutoMooncake for training lux models (#1388) (@avik-pal) - CompatHelper: add new compat entry for Mooncake at version 0.4 for package test, (keep existing compat) (#1389) (@github-actions[bot]) - docs: cleanup CIFAR 10 example dependencies (#1391) (@avik-pal) - CompatHelper: bump compat for AMDGPU in [weakdeps] to 2 for package LuxLib, (keep existing compat) (#1394) (@github-actions[bot]) - CompatHelper: bump compat for AMDGPU in [weakdeps] to 2 for package MLDataDevices, (keep existing compat) (#1395) (@github-actions[bot]) - CompatHelper: bump compat for AMDGPU in [weakdeps] to 2 for package WeightInitializers, (keep existing compat) (#1396) (@github-actions[bot]) - De-fragment markdown list in distributed_utils.md (#1397) (@abhro) - Precompile environment before running tutorials (#1398) (@abhro) - Add meaning of tuple elements in docs/tutorials.jl (#1399) (@abhro)

Closed issues: - Add an AutoMooncake dispatch for training Lux models (#1238) - MLIR Operand #1 doesn't dominate usage: Reactant + Cifar10 Example (#1386)

Published by github-actions[bot] 7 months ago

Lux - LuxLib-v1.10.1

LuxLib LuxLib-v1.10.1

Diff since LuxLib-v1.10.0

Merged pull requests: - feat: AutoMooncake for training lux models (#1388) (@avik-pal) - CompatHelper: add new compat entry for Mooncake at version 0.4 for package test, (keep existing compat) (#1389) (@github-actions[bot]) - docs: cleanup CIFAR 10 example dependencies (#1391) (@avik-pal) - CompatHelper: bump compat for AMDGPU in [weakdeps] to 2 for package LuxLib, (keep existing compat) (#1394) (@github-actions[bot]) - CompatHelper: bump compat for AMDGPU in [weakdeps] to 2 for package MLDataDevices, (keep existing compat) (#1395) (@github-actions[bot]) - CompatHelper: bump compat for AMDGPU in [weakdeps] to 2 for package WeightInitializers, (keep existing compat) (#1396) (@github-actions[bot]) - De-fragment markdown list in distributed_utils.md (#1397) (@abhro) - Precompile environment before running tutorials (#1398) (@abhro) - Add meaning of tuple elements in docs/tutorials.jl (#1399) (@abhro)

Closed issues: - Add a AutoMooncake dispatch for training Lux models (#1238) - MLIR Operand #1 doesn't dominate usage: Reactant + Cifar10 Example (#1386)

Published by github-actions[bot] 7 months ago

Lux - WeightInitializers-v1.1.4

WeightInitializers WeightInitializers-v1.1.4

Diff since WeightInitializers-v1.1.3

Merged pull requests: - perf: benchmarking our models against Jax (Flax) (#1000) (@avik-pal) - fix: use ignore derivatives for Reactant (#1342) (@avik-pal) - ci: taming down ci timings (#1343) (@avik-pal) - fix: remove onehotarrays patch (#1344) (@avik-pal) - fix: bump reactant min version (#1345) (@avik-pal) - CompatHelper: bump compat for MKL in [weakdeps] to 0.9 for package LuxLib, (keep existing compat) (#1346) (@github-actions[bot]) - CompatHelper: bump compat for MKL to 0.9 for package test, (keep existing compat) (#1347) (@github-actions[bot]) - chore: bump crate-ci/typos from 1.32.0 to 1.33.1 (#1349) (@dependabot[bot]) - chore: use uv for python (#1350) (@avik-pal) - CompatHelper: bump compat for CairoMakie to 0.14 for package DDIM, (keep existing compat) (#1351) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package GravitationalWaveForm, (keep existing compat) (#1352) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package LSTMEncoderDecoder, (keep existing compat) (#1353) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package OptimizationIntegration, (keep existing compat) (#1354) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package PINN2DPDE, (keep existing compat) (#1355) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package PolynomialFitting, (keep existing compat) (#1356) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package RealNVP, (keep existing compat) (#1357) (@github-actions[bot]) - fix: remove type piracy (#1360) (@avik-pal) - CompatHelper: bump compat for CairoMakie to 0.15 for package DDIM, (keep existing compat) (#1362) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.15 for package GravitationalWaveForm, (keep existing compat) (#1363) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.15 for package 
LSTMEncoderDecoder, (keep existing compat) (#1364) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.15 for package OptimizationIntegration, (keep existing compat) (#1365) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.15 for package PINN2DPDE, (keep existing compat) (#1366) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.15 for package PolynomialFitting, (keep existing compat) (#1367) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.15 for package RealNVP, (keep existing compat) (#1368) (@github-actions[bot]) - fix: use Int32 for GCN cora example (#1369) (@avik-pal) - Fix backticks in examples/Basics (#1370) (@abhro) - Fix up minor docs/docstrings formatting (#1371) (@abhro) - feat: forwarddiff support for gather/scatter (#1373) (@avik-pal) - fix: handle multi-device reactant (#1374) (@avik-pal) - feat: serialization to tensorflow saved model (#1375) (@avik-pal) - chore: update version for release (#1376) (@avik-pal) - CompatHelper: add new compat entry for PythonCall at version 0.9 for package test, (keep existing compat) (#1377) (@github-actions[bot]) - fix: missing variable (#1379) (@avik-pal) - chore: bump crate-ci/typos from 1.33.1 to 1.34.0 (#1382) (@dependabot[bot]) - fix: State returned by MultiHeadAttention is incompatible with the initialized state (#1384) (@yeruoforever) - feat: annotate important parts of training loop (#1385) (@avik-pal) - fix: bypass fused kernels for mooncake (for now) (#1387) (@avik-pal) - feat: AutoMooncake for training lux models (#1388) (@avik-pal) - CompatHelper: add new compat entry for Mooncake at version 0.4 for package test, (keep existing compat) (#1389) (@github-actions[bot]) - docs: cleanup CIFAR 10 example dependencies (#1391) (@avik-pal) - CompatHelper: bump compat for AMDGPU in [weakdeps] to 2 for package LuxLib, (keep existing compat) (#1394) (@github-actions[bot]) - CompatHelper: bump compat for AMDGPU in [weakdeps] to 2 for 
package MLDataDevices, (keep existing compat) (#1395) (@github-actions[bot]) - CompatHelper: bump compat for AMDGPU in [weakdeps] to 2 for package WeightInitializers, (keep existing compat) (#1396) (@github-actions[bot]) - De-fragment markdown list in distributed_utils.md (#1397) (@abhro) - Precompile environment before running tutorials (#1398) (@abhro) - Add meaning of tuple elements in docs/tutorials.jl (#1399) (@abhro)

Closed issues: - Export trained model for Tensorflow/PyTorch/C++? (#453) - Externalize gradient computations to DifferentiationInterface.jl? (#544) - Add a AutoMooncake dispatch for training Lux models (#1238) - Don't materialize OneHotArrays with ReactantDevice (#1326) - LSTMEncoderDecoder example broken (#1337) - Error when freezing part of a model + Reactant (#1348) - TrainState with multiple Reactant Devices (#1358) - ComponentArrays.jl type piracy? (#1359) - Reactant.jl pass pipeline broke GCN Cora (#1361) - jacobian_vector_product for Embedding (#1372) - UndefVarError (:fname, LuxReactantExt) (#1378) - Unable to evaluate a Lux model at a specific parameter set. (#1380) - State Tuple returned by MultiHeadAttention is incompatible with the initialized tuple (#1383) - MLIR Operand #1 doesn't dominate usage: Reactant + Cifar10 Example (#1386)

Published by github-actions[bot] 7 months ago

Lux - v1.16.0

Lux v1.16.0

Diff since v1.15.0

Merged pull requests: - fix: bypass fused kernels for mooncake (for now) (#1387) (@avik-pal) - feat: AutoMooncake for training lux models (#1388) (@avik-pal)

Closed issues: - Add a AutoMooncake dispatch for training Lux models (#1238)
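The headline change in v1.16.0 is the `AutoMooncake` training backend (#1388). The following is a minimal sketch of how it plugs into Lux's Training API, assuming Lux v1.16+, Mooncake, and Optimisers are installed; the model, shapes, and data here are purely illustrative.

```julia
using ADTypes, Lux, Mooncake, Optimisers, Random

# A hypothetical two-layer model; any Lux model works the same way.
model = Chain(Dense(4 => 16, relu), Dense(16 => 2))
ps, st = Lux.setup(Random.default_rng(), model)

train_state = Lux.Training.TrainState(model, ps, st, Adam(0.001f0))

# Mooncake takes an optional config; `nothing` uses the defaults.
backend = AutoMooncake(; config=nothing)

x = rand(Float32, 4, 32)
y = rand(Float32, 2, 32)

# The objective returns (loss, state, stats), as the Training API expects.
function loss_fn(model, ps, st, (x, y))
    ŷ, st = model(x, ps, st)
    return MSELoss()(ŷ, y), st, (;)
end

# One gradient + optimizer step with Mooncake computing the gradients.
_, loss, _, train_state = Lux.Training.single_train_step!(
    backend, loss_fn, (x, y), train_state
)
```

Swapping `backend` for `AutoZygote()` or `AutoEnzyme()` leaves the rest of the loop unchanged, which is the point of dispatching on the ADTypes backend.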

Published by github-actions[bot] 8 months ago

Lux - LuxLib-v1.10.0

LuxLib LuxLib-v1.10.0

Diff since LuxLib-v1.9.0

Merged pull requests: - fix: handle multi-device reactant (#1374) (@avik-pal) - feat: serialization to tensorflow saved model (#1375) (@avik-pal) - chore: update version for release (#1376) (@avik-pal) - CompatHelper: add new compat entry for PythonCall at version 0.9 for package test, (keep existing compat) (#1377) (@github-actions[bot]) - fix: missing variable (#1379) (@avik-pal) - chore: bump crate-ci/typos from 1.33.1 to 1.34.0 (#1382) (@dependabot[bot]) - fix: State returned by MultiHeadAttention is incompatible with the initialized state (#1384) (@yeruoforever) - feat: annotate important parts of training loop (#1385) (@avik-pal) - fix: bypass fused kernels for mooncake (for now) (#1387) (@avik-pal)

Closed issues: - Export trained model for Tensorflow/PyTorch/C++? (#453) - TrainState with multiple Reactant Devices (#1358) - jacobian_vector_product for Embedding (#1372) - UndefVarError (:fname, LuxReactantExt) (#1378) - Unable to evaluate a Lux model at a specific parameter set. (#1380) - State Tuple returned by MultiHeadAttention is incompatible with the initialized tuple (#1383)

Published by github-actions[bot] 8 months ago

Lux - MLDataDevices-v1.11.0

MLDataDevices MLDataDevices-v1.11.0

Diff since MLDataDevices-v1.10.1

Merged pull requests: - feat: serialization to tensorflow saved model (#1375) (@avik-pal) - chore: update version for release (#1376) (@avik-pal) - CompatHelper: add new compat entry for PythonCall at version 0.9 for package test, (keep existing compat) (#1377) (@github-actions[bot]) - fix: missing variable (#1379) (@avik-pal) - chore: bump crate-ci/typos from 1.33.1 to 1.34.0 (#1382) (@dependabot[bot]) - fix: State returned by MultiHeadAttention is incompatible with the initialized state (#1384) (@yeruoforever) - feat: annotate important parts of training loop (#1385) (@avik-pal)

Closed issues: - Export trained model for Tensorflow/PyTorch/C++? (#453) - jacobian_vector_product for Embedding (#1372) - UndefVarError (:fname, LuxReactantExt) (#1378) - Unable to evaluate a Lux model at a specific parameter set. (#1380) - State Tuple returned by MultiHeadAttention is incompatible with the initialized tuple (#1383)

Published by github-actions[bot] 8 months ago

Lux - v1.15.0

Lux v1.15.0

Diff since v1.14.2

Merged pull requests: - feat: annotate important parts of training loop (#1385) (@avik-pal)

Published by github-actions[bot] 8 months ago

Lux - v1.14.2

Lux v1.14.2

Diff since v1.14.1

Merged pull requests: - chore: bump crate-ci/typos from 1.33.1 to 1.34.0 (#1382) (@dependabot[bot]) - fix: State returned by MultiHeadAttention is incompatible with the initialized state (#1384) (@yeruoforever)

Closed issues: - Unable to evaluate a Lux model at a specific parameter set. (#1380) - State Tuple returned by MultiHeadAttention is incompatible with the initialized tuple (#1383)

Published by github-actions[bot] 8 months ago

Lux - v1.14.1

Lux v1.14.1

Diff since v1.14.0

Merged pull requests: - CompatHelper: add new compat entry for PythonCall at version 0.9 for package test, (keep existing compat) (#1377) (@github-actions[bot]) - fix: missing variable (#1379) (@avik-pal)

Closed issues: - jacobian_vector_product for Embedding (#1372) - UndefVarError (:fname, LuxReactantExt) (#1378)

Published by github-actions[bot] 8 months ago

Lux - v1.14.0

Lux v1.14.0

Diff since v1.13.6

Merged pull requests: - feat: serialization to tensorflow saved model (#1375) (@avik-pal) - chore: update version for release (#1376) (@avik-pal)

Closed issues: - Export trained model for Tensorflow/PyTorch/C++? (#453)

Published by github-actions[bot] 8 months ago

Lux - MLDataDevices-v1.10.1

MLDataDevices MLDataDevices-v1.10.1

Diff since MLDataDevices-v1.10.0

Merged pull requests: - perf: benchmarking our models against Jax (Flax) (#1000) (@avik-pal) - fix: use ignore derivatives for Reactant (#1342) (@avik-pal) - fix: bump reactant min version (#1345) (@avik-pal) - CompatHelper: bump compat for MKL in [weakdeps] to 0.9 for package LuxLib, (keep existing compat) (#1346) (@github-actions[bot]) - CompatHelper: bump compat for MKL to 0.9 for package test, (keep existing compat) (#1347) (@github-actions[bot]) - chore: bump crate-ci/typos from 1.32.0 to 1.33.1 (#1349) (@dependabot[bot]) - chore: use uv for python (#1350) (@avik-pal) - CompatHelper: bump compat for CairoMakie to 0.14 for package DDIM, (keep existing compat) (#1351) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package GravitationalWaveForm, (keep existing compat) (#1352) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package LSTMEncoderDecoder, (keep existing compat) (#1353) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package OptimizationIntegration, (keep existing compat) (#1354) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package PINN2DPDE, (keep existing compat) (#1355) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package PolynomialFitting, (keep existing compat) (#1356) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package RealNVP, (keep existing compat) (#1357) (@github-actions[bot]) - fix: remove type piracy (#1360) (@avik-pal) - CompatHelper: bump compat for CairoMakie to 0.15 for package DDIM, (keep existing compat) (#1362) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.15 for package GravitationalWaveForm, (keep existing compat) (#1363) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.15 for package LSTMEncoderDecoder, (keep existing compat) (#1364) (@github-actions[bot]) - CompatHelper: bump compat for 
CairoMakie to 0.15 for package OptimizationIntegration, (keep existing compat) (#1365) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.15 for package PINN2DPDE, (keep existing compat) (#1366) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.15 for package PolynomialFitting, (keep existing compat) (#1367) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.15 for package RealNVP, (keep existing compat) (#1368) (@github-actions[bot]) - fix: use Int32 for GCN cora example (#1369) (@avik-pal) - Fix backticks in examples/Basics (#1370) (@abhro) - Fix up minor docs/docstrings formatting (#1371) (@abhro) - feat: forwarddiff support for gather/scatter (#1373) (@avik-pal) - fix: handle multi-device reactant (#1374) (@avik-pal)

Closed issues: - Externalize gradient computations to DifferentiationInterface.jl? (#544) - LSTMEncoderDecoder example broken (#1337) - Error when freezing part of a model + Reactant (#1348) - TrainState with multiple Reactant Devices (#1358) - ComponentArrays.jl type piracy? (#1359) - Reactant.jl pass pipeline broke GCN Cora (#1361)

Published by github-actions[bot] 8 months ago

Lux - v1.13.6

Lux v1.13.6

Diff since v1.13.5

Merged pull requests: - CompatHelper: bump compat for CairoMakie to 0.15 for package DDIM, (keep existing compat) (#1362) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.15 for package GravitationalWaveForm, (keep existing compat) (#1363) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.15 for package LSTMEncoderDecoder, (keep existing compat) (#1364) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.15 for package OptimizationIntegration, (keep existing compat) (#1365) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.15 for package PINN2DPDE, (keep existing compat) (#1366) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.15 for package PolynomialFitting, (keep existing compat) (#1367) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.15 for package RealNVP, (keep existing compat) (#1368) (@github-actions[bot]) - fix: use Int32 for GCN cora example (#1369) (@avik-pal) - Fix backticks in examples/Basics (#1370) (@abhro) - Fix up minor docs/docstrings formatting (#1371) (@abhro) - feat: forwarddiff support for gather/scatter (#1373) (@avik-pal) - fix: handle multi-device reactant (#1374) (@avik-pal)

Closed issues: - TrainState with multiple Reactant Devices (#1358) - Reactant.jl pass pipeline broke GCN Cora (#1361)

Published by github-actions[bot] 8 months ago

Lux - LuxLib-v1.9.0

LuxLib LuxLib-v1.9.0

Diff since LuxLib-v1.8.0

Merged pull requests: - perf: benchmarking our models against Jax (Flax) (#1000) (@avik-pal) - fix: run more under with_config (#1340) (@avik-pal) - fix: update to use the new RNG from Reactant (#1341) (@avik-pal) - fix: use ignore derivatives for Reactant (#1342) (@avik-pal) - ci: taming down ci timings (#1343) (@avik-pal) - fix: remove onehotarrays patch (#1344) (@avik-pal) - fix: bump reactant min version (#1345) (@avik-pal) - CompatHelper: bump compat for MKL in [weakdeps] to 0.9 for package LuxLib, (keep existing compat) (#1346) (@github-actions[bot]) - CompatHelper: bump compat for MKL to 0.9 for package test, (keep existing compat) (#1347) (@github-actions[bot]) - chore: bump crate-ci/typos from 1.32.0 to 1.33.1 (#1349) (@dependabot[bot]) - chore: use uv for python (#1350) (@avik-pal) - CompatHelper: bump compat for CairoMakie to 0.14 for package DDIM, (keep existing compat) (#1351) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package GravitationalWaveForm, (keep existing compat) (#1352) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package LSTMEncoderDecoder, (keep existing compat) (#1353) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package OptimizationIntegration, (keep existing compat) (#1354) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package PINN2DPDE, (keep existing compat) (#1355) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package PolynomialFitting, (keep existing compat) (#1356) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package RealNVP, (keep existing compat) (#1357) (@github-actions[bot]) - fix: remove type piracy (#1360) (@avik-pal) - CompatHelper: bump compat for CairoMakie to 0.15 for package DDIM, (keep existing compat) (#1362) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.15 for package GravitationalWaveForm, (keep 
existing compat) (#1363) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.15 for package LSTMEncoderDecoder, (keep existing compat) (#1364) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.15 for package OptimizationIntegration, (keep existing compat) (#1365) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.15 for package PINN2DPDE, (keep existing compat) (#1366) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.15 for package PolynomialFitting, (keep existing compat) (#1367) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.15 for package RealNVP, (keep existing compat) (#1368) (@github-actions[bot]) - fix: use Int32 for GCN cora example (#1369) (@avik-pal) - Fix backticks in examples/Basics (#1370) (@abhro) - Fix up minor docs/docstrings formatting (#1371) (@abhro) - feat: forwarddiff support for gather/scatter (#1373) (@avik-pal)

Closed issues: - Externalize gradient computations to DifferentiationInterface.jl? (#544) - Don't materialize OneHotArrays with ReactantDevice (#1326) - LSTMEncoderDecoder example broken (#1337) - Error when freezing part of a model + Reactant (#1348) - ComponentArrays.jl type piracy? (#1359) - Reactant.jl pass pipeline broke GCN Cora (#1361)

Published by github-actions[bot] 8 months ago

Lux - v1.13.5

Lux v1.13.5

Diff since v1.13.4

Merged pull requests: - chore: bump crate-ci/typos from 1.32.0 to 1.33.1 (#1349) (@dependabot[bot]) - chore: use uv for python (#1350) (@avik-pal) - CompatHelper: bump compat for CairoMakie to 0.14 for package DDIM, (keep existing compat) (#1351) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package GravitationalWaveForm, (keep existing compat) (#1352) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package LSTMEncoderDecoder, (keep existing compat) (#1353) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package OptimizationIntegration, (keep existing compat) (#1354) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package PINN2DPDE, (keep existing compat) (#1355) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package PolynomialFitting, (keep existing compat) (#1356) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.14 for package RealNVP, (keep existing compat) (#1357) (@github-actions[bot]) - fix: remove type piracy (#1360) (@avik-pal)

Closed issues: - ComponentArrays.jl type piracy? (#1359)

Published by github-actions[bot] 8 months ago

Lux - v1.13.4

Lux v1.13.4

Diff since v1.13.3

Merged pull requests: - perf: benchmarking our models against Jax (Flax) (#1000) (@avik-pal) - fix: use ignore derivatives for Reactant (#1342) (@avik-pal) - ci: taming down ci timings (#1343) (@avik-pal) - fix: remove onehotarrays patch (#1344) (@avik-pal) - fix: bump reactant min version (#1345) (@avik-pal) - CompatHelper: bump compat for MKL in [weakdeps] to 0.9 for package LuxLib, (keep existing compat) (#1346) (@github-actions[bot]) - CompatHelper: bump compat for MKL to 0.9 for package test, (keep existing compat) (#1347) (@github-actions[bot])

Closed issues: - Externalize gradient computations to DifferentiationInterface.jl? (#544) - Don't materialize OneHotArrays with ReactantDevice (#1326) - LSTMEncoderDecoder example broken (#1337) - Error when freezing part of a model + Reactant (#1348)

Published by github-actions[bot] 9 months ago

Lux - MLDataDevices-v1.10.0

MLDataDevices MLDataDevices-v1.10.0

Diff since MLDataDevices-v1.9.3

Merged pull requests: - ci: taming down ci timings (#1343) (@avik-pal) - fix: remove onehotarrays patch (#1344) (@avik-pal)

Closed issues: - Don't materialize OneHotArrays with ReactantDevice (#1326)

Published by github-actions[bot] 9 months ago

Lux - WeightInitializers-v1.1.3

WeightInitializers WeightInitializers-v1.1.3

Diff since WeightInitializers-v1.1.2

Merged pull requests: - fix: run more under with_config (#1340) (@avik-pal) - fix: update to use the new RNG from Reactant (#1341) (@avik-pal)

Published by github-actions[bot] 9 months ago

Lux - LuxCore-v1.2.6

LuxCore LuxCore-v1.2.6

Diff since LuxCore-v1.2.5

Merged pull requests: - fix: run more under with_config (#1340) (@avik-pal) - fix: update to use the new RNG from Reactant (#1341) (@avik-pal)

Published by github-actions[bot] 9 months ago

Lux - MLDataDevices-v1.9.3

MLDataDevices MLDataDevices-v1.9.3

Diff since MLDataDevices-v1.9.2

Merged pull requests: - fix: run more under with_config (#1340) (@avik-pal) - fix: update to use the new RNG from Reactant (#1341) (@avik-pal)

Published by github-actions[bot] 9 months ago

Lux - v1.13.3

Lux v1.13.3

Diff since v1.13.2

Merged pull requests: - fix: run more under with_config (#1340) (@avik-pal) - fix: update to use the new RNG from Reactant (#1341) (@avik-pal)

Published by github-actions[bot] 9 months ago

Lux - LuxCore-v1.2.5

LuxCore LuxCore-v1.2.5

Diff since LuxCore-v1.2.4

Merged pull requests: - fix: remove Optimisers.jl patch (#1247) (@avik-pal) - docs: refer to the Turing docs for BayesianNN (#1272) (@github-actions[bot]) - feat: support for ForwardDiff training (#1273) (@hstrey) - chore: bump forwarddiff to 1.0 (#1277) (@avik-pal) - test: fix tests (#1278) (@avik-pal) - fix: try fixing broken tests (#1279) (@avik-pal) - show debug instead of warning if cant cuBLAS mult (#1280) (@ExpandingMan) - chore: bump crate-ci/typos from 1.30.2 to 1.31.0 (#1281) (@dependabot[bot]) - chore: remove debug functionalities of reactant (#1285) (@avik-pal) - fix: add a ReactantOptimisers wrapper (#1288) (@avik-pal) - chore: bump crate-ci/typos from 1.31.0 to 1.31.1 (#1297) (@dependabot[bot]) - ci: use JuliaFormatter v1 (#1299) (@avik-pal) - ci: multiple ci fixes (#1301) (@avik-pal) - CompatHelper: bump compat for JET to 0.10 for package LuxTestUtils, (keep existing compat) (#1302) (@github-actions[bot]) - fix: new reactant version (#1303) (@avik-pal) - fix: update Reactant training (#1304) (@avik-pal) - fix: chain rules for recurrence tuple inputs (#1306) (@avik-pal) - feat: fix return (#1307) (@avik-pal) - fix: try increasing the samples in CI (#1309) (@avik-pal) - fix: restrict dispatch types for cublaslt (#1311) (@avik-pal) - chore: bump julia-actions/julia-format from 3 to 4 (#1313) (@dependabot[bot]) - feat: use 3rd order derivatives using Reactant (#1315) (@avik-pal) - allow SelectDim to take arbitrary views (#1318) (@ExpandingMan) - chore: bump crate-ci/typos from 1.31.1 to 1.32.0 (#1320) (@dependabot[bot]) - docs: fix wrong function names in RNG admonition in interface.md (#1325) (@KristianHolme) - CompatHelper: bump compat for DocumenterVitepress to 0.2 for package docs, (keep existing compat) (#1328) (@github-actions[bot]) - CompatHelper: bump compat for Interpolations to 0.16 for package CIFAR10, (keep existing compat) (#1329) (@github-actions[bot]) - docs: lstm encoder decoder using Reactant (#1331) (@avik-pal) - feat: lower embedding 
to direct indexing (#1332) (@avik-pal) - fix: indexing (#1333) (@avik-pal) - fix: reactant gradients + precision config (#1334) (@avik-pal) - feat: emit batchnorm ops (#1336) (@avik-pal)

Closed issues: - Per-Layer Profiling (#864) - Integration of oneDNN for CPU operations (#1013) - Optimisers.jl patch for Reactant Support (#1146) - Emit Batchnorm Op for Training (#1208) - Convolutional VAE for MNIST using Reactant failed to produce right results (#1274) - GPU Inefficiency in Gradient Computation with Custom Recurrence (#1284) - PINN2DPDE broke in the latest Optimisers Patch removal (#1286) - Duplicating Scalars for Optimisers prevents CSE (#1289) - Reactant 0.2.61 produces incorrect gradients (#1292) - missing or incorrect ProjectTo method breaks Recurrence with Zygote (#1305) - dense operations fail on views on nvidia due to missing method (#1308) - Gradient of while loop with Reactant seems broken (#1316) - getting concatenating and splitting working with Reactant/Enzyme (#1317) - LuxLib doing LinearAlgebra.mul! on non-arrays can cause fallback to errors (#1322) - gpu_device(device_id) attaches to first/zeroth device regardless of device_id (#1330)

Published by github-actions[bot] 9 months ago

Lux - WeightInitializers-v1.1.2

WeightInitializers WeightInitializers-v1.1.2

Diff since WeightInitializers-v1.1.1

Merged pull requests: - feat: emit batchnorm ops from stablehlo (#1142) (@avik-pal) - chore: bump Zygote version (#1182) (@avik-pal) - chore: bump CairoMakie to 0.13 (#1206) (@avik-pal) - CompatHelper: bump compat for Turing to 0.36 for package BayesianNN, (keep existing compat) (#1207) (@github-actions[bot]) - feat: don't unroll Recurrence (#1209) (@avik-pal) - docs: add GCN Cora example (#1210) (@avik-pal) - CompatHelper: add new compat entry for GNNGraphs at version 1 for package GCNCora, (keep existing compat) (#1211) (@github-actions[bot]) - CompatHelper: add new compat entry for OneHotArrays at version 0.2 for package GCN_Cora, (keep existing compat) (#1212) (@github-actions[bot]) - docs: Normalizing Flow (RealNVP) example (#1215) (@avik-pal) - CompatHelper: add new compat entry for Lux at version 1 for package RealNVP, (keep existing compat) (#1221) (@github-actions[bot]) - fix: relax input types (#1226) (@avik-pal) - feat: compile hypernet with reactant (#1227) (@avik-pal) - feat: show how to use model explorer (#1228) (@avik-pal) - Fix small typo in docs: "to compiler models" => "to compile models" (#1229) (@Dale-Black) - CompatHelper: bump compat for MKL in [weakdeps] to 0.8 for package LuxLib, (keep existing compat) (#1231) (@github-actions[bot]) - chore: bump crate-ci/typos from 1.29.4 to 1.29.5 (#1232) (@dependabot[bot]) - feat: high-level implementations for Positional Embeddings (#1237) (@avik-pal) - chore: bump crate-ci/typos from 1.29.5 to 1.29.7 (#1239) (@dependabot[bot]) - Update LUX.jl/examples/SimpleRNN/main.jl (#1240) (@aligurbu) - feat: distinct recurrent initializer option for recurrent layers (#1242) (@MartinuzziFrancesco) - fix: adapt to upcoming reactant changes (#1246) (@avik-pal) - fix: remove Optimisers.jl patch (#1247) (@avik-pal) - chore: bump crate-ci/typos from 1.29.7 to 1.29.9 (#1248) (@dependabot[bot]) - docs: update to new DocumenterVitepress Improvements (#1250) (@avik-pal) - feat: relax conditions (#1252) (@avik-pal) - chore: 
bump crate-ci/typos from 1.29.9 to 1.30.0 (#1253) (@dependabot[bot]) - feat: allow specifying sharding in ReactantDevice (#1254) (@avik-pal) - refactor: use runic for formatting (#1255) (@avik-pal) - Add way to silence gpu warning (#1256) (@IanButterworth) - feat: tracing stateful layers (#1257) (@avik-pal) - chore: bump crate-ci/typos from 1.30.0 to 1.30.1 (#1258) (@dependabot[bot]) - Docs: Update autodiff.md to correct typos (#1259) (@sinhtrung) - Update OptimizationIntegration/main.jl comment to match link title (#1261) (@sinhtrung) - test: mark a test as broken (#1263) (@avik-pal) - refactor: cleanup dependencies (Hwloc + CpuId) (#1264) (@avik-pal) - docs: update readme (#1265) (@avik-pal) - feat: add MultiHeadAttention (#1266) (@avik-pal) - docs: use documenter citations (#1267) (@avik-pal) - refactor: revert runic formatting (#1268) (@avik-pal) - chore: bump crate-ci/typos from 1.30.1 to 1.30.2 (#1270) (@dependabot[bot]) - chore: remove unused imports (#1271) (@avik-pal) - docs: refer to the Turing docs for BayesianNN (#1272) (@github-actions[bot]) - feat: support for ForwardDiff training (#1273) (@hstrey) - fix: reactant typo (#1275) (@avik-pal) - fix: outputsize of Dense (#1276) (@avik-pal) - chore: bump forwarddiff to 1.0 (#1277) (@avik-pal) - test: fix tests (#1278) (@avik-pal) - fix: try fixing broken tests (#1279) (@avik-pal) - show debug instead of warning if cant cuBLAS mult (#1280) (@ExpandingMan) - chore: bump crate-ci/typos from 1.30.2 to 1.31.0 (#1281) (@dependabot[bot]) - chore: remove debug functionalities of reactant (#1285) (@avik-pal) - fix: add a ReactantOptimisers wrapper (#1288) (@avik-pal) - chore: bump crate-ci/typos from 1.31.0 to 1.31.1 (#1297) (@dependabot[bot]) - ci: use JuliaFormatter v1 (#1299) (@avik-pal) - ci: multiple ci fixes (#1301) (@avik-pal) - CompatHelper: bump compat for JET to 0.10 for package LuxTestUtils, (keep existing compat) (#1302) (@github-actions[bot]) - fix: new reactant version (#1303) (@avik-pal) - fix: update 
Reactant training (#1304) (@avik-pal) - fix: chain rules for recurrence tuple inputs (#1306) (@avik-pal) - feat: fix return (#1307) (@avik-pal) - fix: try increasing the samples in CI (#1309) (@avik-pal) - fix: restrict dispatch types for cublaslt (#1311) (@avik-pal) - chore: bump julia-actions/julia-format from 3 to 4 (#1313) (@dependabot[bot]) - feat: use 3rd order derivatives using Reactant (#1315) (@avik-pal) - allow SelectDim to take arbitrary views (#1318) (@ExpandingMan) - chore: bump crate-ci/typos from 1.31.1 to 1.32.0 (#1320) (@dependabot[bot]) - docs: fix wrong function names in RNG admonition in interface.md (#1325) (@KristianHolme) - CompatHelper: bump compat for DocumenterVitepress to 0.2 for package docs, (keep existing compat) (#1328) (@github-actions[bot]) - CompatHelper: bump compat for Interpolations to 0.16 for package CIFAR10, (keep existing compat) (#1329) (@github-actions[bot]) - docs: lstm encoder decoder using Reactant (#1331) (@avik-pal) - feat: lower embedding to direct indexing (#1332) (@avik-pal) - fix: indexing (#1333) (@avik-pal) - fix: reactant gradients + precision config (#1334) (@avik-pal) - feat: emit batchnorm ops (#1336) (@avik-pal)

Closed issues: - Per-Layer Profiling (#864) - Integration of oneDNN for CPU operations (#1013) - Use TestExtras.jl for inference testing (#1098) - Optimisers.jl patch for Reactant Support (#1146) - Parallel is incompatible with Zygote nested gradient (#1199) - Emit Batchnorm Op for Training (#1208) - Reactant grabbing wrong CUDA version (#1225) - Allow different initializers for input and hidden in recurrent layers (#1241) - Relax MLDataDevices.combine_devices for ReactantDevice and CPUDevice (#1244) - AutoEnzyme() errors on training dense layer (#1249) - New Reactant Release breaks ConcreteRNG (#1251) - Example from Training Lux Models using Optimization.jl not working in CPU (#1260) - outputsize(::Dense) (#1262) - Convolutional VAE for MNIST using Reactant failed to produce right results (#1274) - GPU Inefficiency in Gradient Computation with Custom Recurrence (#1284) - PINN2DPDE broke in the latest Optimisers Patch removal (#1286) - Duplicating Scalars for Optimisers prevents CSE (#1289) - Reactant 0.2.61 produces incorrect gradients (#1292) - missing or incorrect ProjectTo method breaks Recurrence with Zygote (#1305) - dense operations fail on views on nvidia due to missing method (#1308) - Gradient of while loop with Reactant seems broken (#1316) - getting concatenating and splitting working with Reactant/Enzyme (#1317) - LuxLib doing LinearAlgebra.mul! on non-arrays can cause fallback to errors (#1322) - gpu_device(device_id) attaches to first/zeroth device regardless of device_id (#1330)

- Julia
Published by github-actions[bot] 9 months ago

Lux - v1.13.2

Lux v1.13.2

Diff since v1.13.1

Merged pull requests: - feat: emit batchnorm ops (#1336) (@avik-pal)

Closed issues: - Emit Batchnorm Op for Training (#1208)

Published by github-actions[bot] 9 months ago

Lux - LuxLib-v1.8.0

LuxLib LuxLib-v1.8.0

Diff since LuxLib-v1.7.3

Merged pull requests: - chore: bump julia-actions/julia-format from 3 to 4 (#1313) (@dependabot[bot]) - feat: use 3rd order derivatives using Reactant (#1315) (@avik-pal) - allow SelectDim to take arbitrary views (#1318) (@ExpandingMan) - chore: bump crate-ci/typos from 1.31.1 to 1.32.0 (#1320) (@dependabot[bot]) - docs: fix wrong function names in RNG admonition in interface.md (#1325) (@KristianHolme) - CompatHelper: bump compat for DocumenterVitepress to 0.2 for package docs, (keep existing compat) (#1328) (@github-actions[bot]) - CompatHelper: bump compat for Interpolations to 0.16 for package CIFAR10, (keep existing compat) (#1329) (@github-actions[bot]) - docs: lstm encoder decoder using Reactant (#1331) (@avik-pal) - feat: lower embedding to direct indexing (#1332) (@avik-pal) - fix: indexing (#1333) (@avik-pal) - fix: reactant gradients + precision config (#1334) (@avik-pal) - feat: emit batchnorm ops (#1336) (@avik-pal)

Closed issues: - Per-Layer Profiling (#864) - Integration of oneDNN for CPU operations (#1013) - Emit Batchnorm Op for Training (#1208) - Gradient of while loop with Reactant seems broken (#1316) - getting concatenating and splitting working with Reactant/Enzyme (#1317) - LuxLib doing LinearAlgebra.mul! on non-arrays can cause fallback to errors (#1322) - gpu_device(device_id) attaches to first/zeroth device regardless of device_id (#1330)

Published by github-actions[bot] 9 months ago

Lux - MLDataDevices-v1.9.2

MLDataDevices MLDataDevices-v1.9.2

Diff since MLDataDevices-v1.9.1

Merged pull requests: - fix: remove Optimisers.jl patch (#1247) (@avik-pal) - chore: bump crate-ci/typos from 1.30.0 to 1.30.1 (#1258) (@dependabot[bot]) - Docs: Update autodiff.md to correct typos (#1259) (@sinhtrung) - Update OptimizationIntegration/main.jl comment to match link title (#1261) (@sinhtrung) - test: mark a test as broken (#1263) (@avik-pal) - refactor: cleanup dependencies (Hwloc + CpuId) (#1264) (@avik-pal) - docs: update readme (#1265) (@avik-pal) - feat: add MultiHeadAttention (#1266) (@avik-pal) - docs: use documenter citations (#1267) (@avik-pal) - refactor: revert runic formatting (#1268) (@avik-pal) - chore: bump crate-ci/typos from 1.30.1 to 1.30.2 (#1270) (@dependabot[bot]) - chore: remove unused imports (#1271) (@avik-pal) - docs: refer to the Turing docs for BayesianNN (#1272) (@github-actions[bot]) - feat: support for ForwardDiff training (#1273) (@hstrey) - fix: reactant typo (#1275) (@avik-pal) - fix: outputsize of Dense (#1276) (@avik-pal) - chore: bump forwarddiff to 1.0 (#1277) (@avik-pal) - test: fix tests (#1278) (@avik-pal) - fix: try fixing broken tests (#1279) (@avik-pal) - show debug instead of warning if cant cuBLAS mult (#1280) (@ExpandingMan) - chore: bump crate-ci/typos from 1.30.2 to 1.31.0 (#1281) (@dependabot[bot]) - chore: remove debug functionalities of reactant (#1285) (@avik-pal) - fix: add a ReactantOptimisers wrapper (#1288) (@avik-pal) - chore: bump crate-ci/typos from 1.31.0 to 1.31.1 (#1297) (@dependabot[bot]) - ci: use JuliaFormatter v1 (#1299) (@avik-pal) - ci: multiple ci fixes (#1301) (@avik-pal) - CompatHelper: bump compat for JET to 0.10 for package LuxTestUtils, (keep existing compat) (#1302) (@github-actions[bot]) - fix: new reactant version (#1303) (@avik-pal) - fix: update Reactant training (#1304) (@avik-pal) - fix: chain rules for recurrence tuple inputs (#1306) (@avik-pal) - feat: fix return (#1307) (@avik-pal) - fix: try increasing the samples in CI (#1309) (@avik-pal) - fix: restrict dispatch 
types for cublaslt (#1311) (@avik-pal) - chore: bump julia-actions/julia-format from 3 to 4 (#1313) (@dependabot[bot]) - feat: use 3rd order derivatives using Reactant (#1315) (@avik-pal) - allow SelectDim to take arbitrary views (#1318) (@ExpandingMan) - chore: bump crate-ci/typos from 1.31.1 to 1.32.0 (#1320) (@dependabot[bot]) - docs: fix wrong function names in RNG admonition in interface.md (#1325) (@KristianHolme) - CompatHelper: bump compat for DocumenterVitepress to 0.2 for package docs, (keep existing compat) (#1328) (@github-actions[bot]) - CompatHelper: bump compat for Interpolations to 0.16 for package CIFAR10, (keep existing compat) (#1329) (@github-actions[bot]) - docs: lstm encoder decoder using Reactant (#1331) (@avik-pal) - feat: lower embedding to direct indexing (#1332) (@avik-pal) - fix: indexing (#1333) (@avik-pal) - fix: reactant gradients + precision config (#1334) (@avik-pal) - feat: emit batchnorm ops (#1336) (@avik-pal)

Closed issues: - Per-Layer Profiling (#864) - Integration of oneDNN for CPU operations (#1013) - Optimisers.jl patch for Reactant Support (#1146) - Emit Batchnorm Op for Training (#1208) - Example from Training Lux Models using Optimization.jl not working in CPU (#1260) - outputsize(::Dense) (#1262) - Convolutional VAE for MNIST using Reactant failed to produce right results (#1274) - GPU Inefficiency in Gradient Computation with Custom Recurrence (#1284) - PINN2DPDE broke in the latest Optimisers Patch removal (#1286) - Duplicating Scalars for Optimisers prevents CSE (#1289) - Reactant 0.2.61 produces incorrect gradients (#1292) - missing or incorrect ProjectTo method breaks Recurrence with Zygote (#1305) - dense operations fail on views on nvidia due to missing method (#1308) - Gradient of while loop with Reactant seems broken (#1316) - getting concatenating and splitting working with Reactant/Enzyme (#1317) - LuxLib doing LinearAlgebra.mul! on non-arrays can cause fallback to errors (#1322) - gpu_device(device_id) attaches to first/zeroth device regardless of device_id (#1330)

Published by github-actions[bot] 9 months ago

Lux - v1.13.1

Lux v1.13.1

Diff since v1.13.0

Merged pull requests: - fix: reactant gradients + precision config (#1334) (@avik-pal)

Published by github-actions[bot] 9 months ago

Lux - v1.13.0

Lux v1.13.0

Diff since v1.12.4

Merged pull requests: - fix: try increasing the samples in CI (#1309) (@avik-pal) - fix: restrict dispatch types for cublaslt (#1311) (@avik-pal) - chore: bump julia-actions/julia-format from 3 to 4 (#1313) (@dependabot[bot]) - feat: use 3rd order derivatives using Reactant (#1315) (@avik-pal) - allow SelectDim to take arbitrary views (#1318) (@ExpandingMan) - chore: bump crate-ci/typos from 1.31.1 to 1.32.0 (#1320) (@dependabot[bot]) - docs: fix wrong function names in RNG admonition in interface.md (#1325) (@KristianHolme) - CompatHelper: bump compat for DocumenterVitepress to 0.2 for package docs, (keep existing compat) (#1328) (@github-actions[bot]) - CompatHelper: bump compat for Interpolations to 0.16 for package CIFAR10, (keep existing compat) (#1329) (@github-actions[bot]) - docs: lstm encoder decoder using Reactant (#1331) (@avik-pal) - feat: lower embedding to direct indexing (#1332) (@avik-pal) - fix: indexing (#1333) (@avik-pal)

Closed issues: - Per-Layer Profiling (#864) - Integration of oneDNN for CPU operations (#1013) - Convolutional VAE for MNIST using Reactant failed to produce right results (#1274) - GPU Inefficiency in Gradient Computation with Custom Recurrence (#1284) - dense operations fail on views on nvidia due to missing method (#1308) - Gradient of while loop with Reactant seems broken (#1316) - getting concatenating and splitting working with Reactant/Enzyme (#1317) - LuxLib doing LinearAlgebra.mul! on non-arrays can cause fallback to errors (#1322) - gpu_device(device_id) attaches to first/zeroth device regardless of device_id (#1330)
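Many of the PR and issue titles above (outputsize of Dense, Reactant compilation, gradient fixes) assume familiarity with Lux's explicit-parameter API. As background, a minimal sketch of building and calling a Lux model; the layer sizes and data are illustrative only and not taken from any release above (assumes Lux v1.x):

```julia
using Lux, Random

# Hypothetical two-layer MLP; sizes are illustrative.
model = Chain(Dense(2 => 8, tanh), Dense(8 => 1))

rng = Random.default_rng()
ps, st = Lux.setup(rng, model)   # explicit parameters and state, separate from the model

x = randn(rng, Float32, 2, 4)    # 4 samples with 2 features each (features × batch)
y, st_new = model(x, ps, st)     # pure call: returns the output and the updated state
```

Because parameters and state are passed explicitly, the same `model` object can be reused with different parameter sets, which is what makes features such as Reactant compilation and nested AD tractable.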

Published by github-actions[bot] 9 months ago

Lux - LuxLib-v1.7.3

LuxLib LuxLib-v1.7.3

Diff since LuxLib-v1.7.2

Merged pull requests: - fix: remove Optimisers.jl patch (#1247) (@avik-pal) - show debug instead of warning if cant cuBLAS mult (#1280) (@ExpandingMan) - chore: bump crate-ci/typos from 1.30.2 to 1.31.0 (#1281) (@dependabot[bot]) - chore: remove debug functionalities of reactant (#1285) (@avik-pal) - fix: add a ReactantOptimisers wrapper (#1288) (@avik-pal) - chore: bump crate-ci/typos from 1.31.0 to 1.31.1 (#1297) (@dependabot[bot]) - ci: use JuliaFormatter v1 (#1299) (@avik-pal) - ci: multiple ci fixes (#1301) (@avik-pal) - CompatHelper: bump compat for JET to 0.10 for package LuxTestUtils, (keep existing compat) (#1302) (@github-actions[bot]) - fix: new reactant version (#1303) (@avik-pal) - fix: update Reactant training (#1304) (@avik-pal) - fix: chain rules for recurrence tuple inputs (#1306) (@avik-pal) - feat: fix return (#1307) (@avik-pal) - fix: try increasing the samples in CI (#1309) (@avik-pal) - fix: restrict dispatch types for cublaslt (#1311) (@avik-pal)

Closed issues: - Optimisers.jl patch for Reactant Support (#1146) - Convolutional VAE for MNIST using Reactant failed to produce right results (#1274) - GPU Inefficiency in Gradient Computation with Custom Recurrence (#1284) - PINN2DPDE broke in the latest Optimisers Patch removal (#1286) - Duplicating Scalars for Optimisers prevents CSE (#1289) - Reactant 0.2.61 produces incorrect gradients (#1292) - missing or incorrect ProjectTo method breaks Recurrence with Zygote (#1305) - dense operations fail on views on nvidia due to missing method (#1308)

Published by github-actions[bot] 10 months ago

Lux - v1.12.4

Lux v1.12.4

Diff since v1.12.3

Merged pull requests: - feat: fix return (#1307) (@avik-pal)

Published by github-actions[bot] 10 months ago

Lux - v1.12.3

Lux v1.12.3

Diff since v1.12.2

Merged pull requests: - fix: update Reactant training (#1304) (@avik-pal) - fix: chain rules for recurrence tuple inputs (#1306) (@avik-pal)

Closed issues: - Duplicating Scalars for Optimisers prevents CSE (#1289) - missing or incorrect ProjectTo method breaks Recurrence with Zygote (#1305)
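Entries such as "fix: update Reactant training (#1304)" refer to Lux's Training module. For context, a hedged sketch of a single optimisation step with that API; the model, data, and hyperparameters are illustrative (assumes Lux v1.x with Optimisers.jl and Zygote.jl installed):

```julia
using Lux, Optimisers, Random

model = Chain(Dense(2 => 8, tanh), Dense(8 => 1))
rng = Random.default_rng()
ps, st = Lux.setup(rng, model)

# Bundle model, parameters, state, and optimiser into a TrainState.
train_state = Training.TrainState(model, ps, st, Adam(0.001f0))

x = randn(rng, Float32, 2, 16)
y = randn(rng, Float32, 1, 16)

# AutoZygote() selects Zygote as the AD backend; single_train_step! returns
# (gradients, loss, stats, updated train state).
_, loss, _, train_state = Training.single_train_step!(
    AutoZygote(), MSELoss(), (x, y), train_state
)
```

Swapping the backend (e.g. `AutoEnzyme()`) is how the Reactant-related training fixes above are exercised.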

Published by github-actions[bot] 10 months ago

Lux - v1.12.2

Lux v1.12.2

Diff since v1.12.1

Merged pull requests: - chore: bump crate-ci/typos from 1.31.0 to 1.31.1 (#1297) (@dependabot[bot]) - ci: use JuliaFormatter v1 (#1299) (@avik-pal) - ci: multiple ci fixes (#1301) (@avik-pal) - CompatHelper: bump compat for JET to 0.10 for package LuxTestUtils, (keep existing compat) (#1302) (@github-actions[bot]) - fix: new reactant version (#1303) (@avik-pal)

Closed issues: - Reactant 0.2.61 produces incorrect gradients (#1292)

Published by github-actions[bot] 10 months ago

Lux - v1.12.1

Lux v1.12.1

Diff since v1.12.0

Merged pull requests: - fix: add a ReactantOptimisers wrapper (#1288) (@avik-pal)

Closed issues: - PINN2DPDE broke in the latest Optimisers Patch removal (#1286)

Published by github-actions[bot] 11 months ago

Lux - v1.12.0

Lux v1.12.0

Diff since v1.11.2

Merged pull requests: - fix: remove Optimisers.jl patch (#1247) (@avik-pal) - show debug instead of warning if cant cuBLAS mult (#1280) (@ExpandingMan) - chore: bump crate-ci/typos from 1.30.2 to 1.31.0 (#1281) (@dependabot[bot]) - chore: remove debug functionalities of reactant (#1285) (@avik-pal)

Closed issues: - Optimisers.jl patch for Reactant Support (#1146)

Published by github-actions[bot] 11 months ago

Lux - v1.11.2

Lux v1.11.2

Diff since v1.11.1

Merged pull requests: - test: fix tests (#1278) (@avik-pal) - fix: try fixing broken tests (#1279) (@avik-pal)

Published by github-actions[bot] 11 months ago

Lux - LuxLib-v1.7.2

LuxLib LuxLib-v1.7.2

Diff since LuxLib-v1.7.1

Merged pull requests: - test: fix tests (#1278) (@avik-pal) - fix: try fixing broken tests (#1279) (@avik-pal)

Published by github-actions[bot] 11 months ago

Lux - LuxLib-v1.7.1

LuxLib LuxLib-v1.7.1

Diff since LuxLib-v1.7.0

Merged pull requests: - docs: refer to the Turing docs for BayesianNN (#1272) (@github-actions[bot]) - feat: support for ForwardDiff training (#1273) (@hstrey) - chore: bump forwarddiff to 1.0 (#1277) (@avik-pal)

Published by github-actions[bot] 11 months ago

Lux - v1.11.1

Lux v1.11.1

Diff since v1.11.0

Merged pull requests: - chore: bump forwarddiff to 1.0 (#1277) (@avik-pal)

Published by github-actions[bot] 11 months ago

Lux - LuxTestUtils-v1.7.2

LuxTestUtils LuxTestUtils-v1.7.2

Diff since LuxTestUtils-v1.7.1

Merged pull requests: - feat: don't unroll Recurrence (#1209) (@avik-pal) - fix: relax input types (#1226) (@avik-pal) - feat: compile hypernet with reactant (#1227) (@avik-pal) - feat: show how to use model explorer (#1228) (@avik-pal) - Fix small typo in docs: "to compiler models" => "to compile models" (#1229) (@Dale-Black) - CompatHelper: bump compat for MKL in [weakdeps] to 0.8 for package LuxLib, (keep existing compat) (#1231) (@github-actions[bot]) - chore: bump crate-ci/typos from 1.29.4 to 1.29.5 (#1232) (@dependabot[bot]) - feat: high-level implementations for Positional Embeddings (#1237) (@avik-pal) - chore: bump crate-ci/typos from 1.29.5 to 1.29.7 (#1239) (@dependabot[bot]) - Update LUX.jl/examples/SimpleRNN/main.jl (#1240) (@aligurbu) - feat: distinct recurrent initializer option for recurrent layers (#1242) (@MartinuzziFrancesco) - fix: adapt to upcoming reactant changes (#1246) (@avik-pal) - chore: bump crate-ci/typos from 1.29.7 to 1.29.9 (#1248) (@dependabot[bot]) - docs: update to new DocumenterVitepress Improvements (#1250) (@avik-pal) - feat: relax conditions (#1252) (@avik-pal) - chore: bump crate-ci/typos from 1.29.9 to 1.30.0 (#1253) (@dependabot[bot]) - feat: allow specifying sharding in ReactantDevice (#1254) (@avik-pal) - refactor: use runic for formatting (#1255) (@avik-pal) - Add way to silence gpu warning (#1256) (@IanButterworth) - feat: tracing stateful layers (#1257) (@avik-pal) - chore: bump crate-ci/typos from 1.30.0 to 1.30.1 (#1258) (@dependabot[bot]) - Docs: Update autodiff.md to correct typos (#1259) (@sinhtrung) - Update OptimizationIntegration/main.jl comment to match link title (#1261) (@sinhtrung) - test: mark a test as broken (#1263) (@avik-pal) - refactor: cleanup dependencies (Hwloc + CpuId) (#1264) (@avik-pal) - docs: update readme (#1265) (@avik-pal) - feat: add MultiHeadAttention (#1266) (@avik-pal) - docs: use documenter citations (#1267) (@avik-pal) - refactor: revert runic formatting (#1268) (@avik-pal) - chore: 
bump crate-ci/typos from 1.30.1 to 1.30.2 (#1270) (@dependabot[bot]) - chore: remove unused imports (#1271) (@avik-pal) - docs: refer to the Turing docs for BayesianNN (#1272) (@github-actions[bot]) - feat: support for ForwardDiff training (#1273) (@hstrey) - fix: reactant typo (#1275) (@avik-pal) - fix: outputsize of Dense (#1276) (@avik-pal) - chore: bump forwarddiff to 1.0 (#1277) (@avik-pal)

Closed issues: - Emit Batchnorm Op for Training (#1208) - Lux.jl model visualizer (#1214) - Allow different initializers for input and hidden in recurrent layers (#1241) - Relax MLDataDevices.combine_devices for ReactantDevice and CPUDevice (#1244) - AutoEnzyme() errors on training dense layer (#1249) - New Reactant Release breaks ConcreteRNG (#1251) - Example from Training Lux Models using Optimization.jl not working in CPU (#1260) - outputsize(::Dense) (#1262)

Published by github-actions[bot] 11 months ago

Lux - v1.11.0

Lux v1.11.0

Diff since v1.10.1

Merged pull requests: - docs: refer to the Turing docs for BayesianNN (#1272) (@github-actions[bot]) - feat: support for ForwardDiff training (#1273) (@hstrey)

Published by github-actions[bot] 11 months ago

Lux - LuxCore-v1.2.4

LuxCore LuxCore-v1.2.4

Diff since LuxCore-v1.2.3

Merged pull requests: - chore: bump crate-ci/typos from 1.30.0 to 1.30.1 (#1258) (@dependabot[bot]) - Docs: Update autodiff.md to correct typos (#1259) (@sinhtrung) - Update OptimizationIntegration/main.jl comment to match link title (#1261) (@sinhtrung) - test: mark a test as broken (#1263) (@avik-pal) - refactor: cleanup dependencies (Hwloc + CpuId) (#1264) (@avik-pal) - docs: update readme (#1265) (@avik-pal) - feat: add MultiHeadAttention (#1266) (@avik-pal) - docs: use documenter citations (#1267) (@avik-pal) - refactor: revert runic formatting (#1268) (@avik-pal) - chore: bump crate-ci/typos from 1.30.1 to 1.30.2 (#1270) (@dependabot[bot]) - chore: remove unused imports (#1271) (@avik-pal) - fix: reactant typo (#1275) (@avik-pal) - fix: outputsize of Dense (#1276) (@avik-pal)

Closed issues: - Emit Batchnorm Op for Training (#1208) - Example from Training Lux Models using Optimization.jl not working in CPU (#1260) - outputsize(::Dense) (#1262)

Published by github-actions[bot] 11 months ago

Lux - v1.10.1

Lux v1.10.1

Diff since v1.10.0

Merged pull requests: - fix: reactant typo (#1275) (@avik-pal) - fix: outputsize of Dense (#1276) (@avik-pal)

Closed issues: - Emit Batchnorm Op for Training (#1208) - outputsize(::Dense) (#1262)

Published by github-actions[bot] 11 months ago

Lux - LuxLib-v1.7.0

LuxLib LuxLib-v1.7.0

Diff since LuxLib-v1.6.2

Merged pull requests: - docs: update readme (#1265) (@avik-pal) - feat: add MultiHeadAttention (#1266) (@avik-pal) - docs: use documenter citations (#1267) (@avik-pal) - refactor: revert runic formatting (#1268) (@avik-pal) - chore: bump crate-ci/typos from 1.30.1 to 1.30.2 (#1270) (@dependabot[bot]) - chore: remove unused imports (#1271) (@avik-pal) - fix: reactant typo (#1275) (@avik-pal) - fix: outputsize of Dense (#1276) (@avik-pal)

Closed issues: - Emit Batchnorm Op for Training (#1208) - outputsize(::Dense) (#1262)

Published by github-actions[bot] 11 months ago

Lux - v1.10.0

Lux v1.10.0

Diff since v1.9.0

Merged pull requests: - chore: bump crate-ci/typos from 1.30.0 to 1.30.1 (#1258) (@dependabot[bot]) - Docs: Update autodiff.md to correct typos (#1259) (@sinhtrung) - Update OptimizationIntegration/main.jl comment to match link title (#1261) (@sinhtrung) - test: mark a test as broken (#1263) (@avik-pal) - refactor: cleanup dependencies (Hwloc + CpuId) (#1264) (@avik-pal) - docs: update readme (#1265) (@avik-pal) - feat: add MultiHeadAttention (#1266) (@avik-pal) - docs: use documenter citations (#1267) (@avik-pal) - refactor: revert runic formatting (#1268) (@avik-pal) - chore: bump crate-ci/typos from 1.30.1 to 1.30.2 (#1270) (@dependabot[bot]) - chore: remove unused imports (#1271) (@avik-pal)

Closed issues: - Example from Training Lux Models using Optimization.jl not working in CPU (#1260)

Published by github-actions[bot] 11 months ago

Lux - LuxLib-v1.6.2

LuxLib LuxLib-v1.6.2

Diff since LuxLib-v1.6.1

Merged pull requests: - feat: don't unroll Recurrence (#1209) (@avik-pal) - feat: compile hypernet with reactant (#1227) (@avik-pal) - feat: show how to use model explorer (#1228) (@avik-pal) - Fix small typo in docs: "to compiler models" => "to compile models" (#1229) (@Dale-Black) - CompatHelper: bump compat for MKL in [weakdeps] to 0.8 for package LuxLib, (keep existing compat) (#1231) (@github-actions[bot]) - chore: bump crate-ci/typos from 1.29.4 to 1.29.5 (#1232) (@dependabot[bot]) - feat: high-level implementations for Positional Embeddings (#1237) (@avik-pal) - chore: bump crate-ci/typos from 1.29.5 to 1.29.7 (#1239) (@dependabot[bot]) - Update LUX.jl/examples/SimpleRNN/main.jl (#1240) (@aligurbu) - feat: distinct recurrent initializer option for recurrent layers (#1242) (@MartinuzziFrancesco) - fix: adapt to upcoming reactant changes (#1246) (@avik-pal) - chore: bump crate-ci/typos from 1.29.7 to 1.29.9 (#1248) (@dependabot[bot]) - docs: update to new DocumenterVitepress Improvements (#1250) (@avik-pal) - feat: relax conditions (#1252) (@avik-pal) - chore: bump crate-ci/typos from 1.29.9 to 1.30.0 (#1253) (@dependabot[bot]) - feat: allow specifying sharding in ReactantDevice (#1254) (@avik-pal) - refactor: use runic for formatting (#1255) (@avik-pal) - Add way to silence gpu warning (#1256) (@IanButterworth) - feat: tracing stateful layers (#1257) (@avik-pal) - chore: bump crate-ci/typos from 1.30.0 to 1.30.1 (#1258) (@dependabot[bot]) - Docs: Update autodiff.md to correct typos (#1259) (@sinhtrung) - Update OptimizationIntegration/main.jl comment to match link title (#1261) (@sinhtrung) - test: mark a test as broken (#1263) (@avik-pal) - refactor: cleanup dependencies (Hwloc + CpuId) (#1264) (@avik-pal)

Closed issues: - Lux.jl model visualizer (#1214) - Allow different initializers for input and hidden in recurrent layers (#1241) - Relax MLDataDevices.combine_devices for ReactantDevice and CPUDevice (#1244) - AutoEnzyme() errors on training dense layer (#1249) - New Reactant Release breaks ConcreteRNG (#1251) - Example from Training Lux Models using Optimization.jl not working in CPU (#1260)

Published by github-actions[bot] 12 months ago

Lux - MLDataDevices-v1.9.1

MLDataDevices MLDataDevices-v1.9.1

Diff since MLDataDevices-v1.9.0

Merged pull requests: - feat: high-level implementations for Positional Embeddings (#1237) (@avik-pal) - feat: tracing stateful layers (#1257) (@avik-pal)

Published by github-actions[bot] 12 months ago

Lux - LuxCore-v1.2.3

LuxCore LuxCore-v1.2.3

Diff since LuxCore-v1.2.2

Merged pull requests: - CompatHelper: bump compat for GPUArraysCore to 0.2, (keep existing compat) (#1127) (@github-actions[bot]) - feat: emit batchnorm ops from stablehlo (#1142) (@avik-pal) - chore: bump Zygote version (#1182) (@avik-pal) - feat: allow no grad option for reactant (#1190) (@avik-pal) - CompatHelper: add new compat entry for Enzyme at version 0.13 for package PINN2DPDE, (keep existing compat) (#1191) (@github-actions[bot]) - CompatHelper: add new compat entry for Reactant at version 0.2 for package PINN2DPDE, (keep existing compat) (#1192) (@github-actions[bot]) - CompatHelper: add new compat entry for Reactant at version 0.2 for package SimpleChains, (keep existing compat) (#1193) (@github-actions[bot]) - CompatHelper: add new compat entry for Reactant at version 0.2 for package SimpleRNN, (keep existing compat) (#1194) (@github-actions[bot]) - CompatHelper: bump compat for oneAPI in [weakdeps] to 2 for package MLDataDevices, (keep existing compat) (#1195) (@github-actions[bot]) - CompatHelper: bump compat for oneAPI in [weakdeps] to 2 for package WeightInitializers, (keep existing compat) (#1196) (@github-actions[bot]) - chore: bump CairoMakie to 0.13 (#1206) (@avik-pal) - CompatHelper: bump compat for Turing to 0.36 for package BayesianNN, (keep existing compat) (#1207) (@github-actions[bot]) - feat: don't unroll Recurrence (#1209) (@avik-pal) - docs: add GCN Cora example (#1210) (@avik-pal) - CompatHelper: add new compat entry for GNNGraphs at version 1 for package GCNCora, (keep existing compat) (#1211) (@github-actions[bot]) - CompatHelper: add new compat entry for OneHotArrays at version 0.2 for package GCN_Cora, (keep existing compat) (#1212) (@github-actions[bot]) - docs: Normalizing Flow (RealNVP) example (#1215) (@avik-pal) - CompatHelper: add new compat entry for Lux at version 1 for package RealNVP, (keep existing compat) (#1221) (@github-actions[bot]) - fix: relax input types (#1226) (@avik-pal) - feat: compile hypernet with reactant 
(#1227) (@avik-pal) - feat: show how to use model explorer (#1228) (@avik-pal) - Fix small typo in docs: "to compiler models" => "to compile models" (#1229) (@Dale-Black) - CompatHelper: bump compat for MKL in [weakdeps] to 0.8 for package LuxLib, (keep existing compat) (#1231) (@github-actions[bot]) - chore: bump crate-ci/typos from 1.29.4 to 1.29.5 (#1232) (@dependabot[bot]) - feat: high-level implementations for Positional Embeddings (#1237) (@avik-pal) - chore: bump crate-ci/typos from 1.29.5 to 1.29.7 (#1239) (@dependabot[bot]) - Update LUX.jl/examples/SimpleRNN/main.jl (#1240) (@aligurbu) - feat: distinct recurrent initializer option for recurrent layers (#1242) (@MartinuzziFrancesco) - fix: adapt to upcoming reactant changes (#1246) (@avik-pal) - chore: bump crate-ci/typos from 1.29.7 to 1.29.9 (#1248) (@dependabot[bot]) - docs: update to new DocumenterVitepress Improvements (#1250) (@avik-pal) - feat: relax conditions (#1252) (@avik-pal) - chore: bump crate-ci/typos from 1.29.9 to 1.30.0 (#1253) (@dependabot[bot]) - feat: allow specifying sharding in ReactantDevice (#1254) (@avik-pal) - refactor: use runic for formatting (#1255) (@avik-pal) - Add way to silence gpu warning (#1256) (@IanButterworth) - feat: tracing stateful layers (#1257) (@avik-pal)

Closed issues: - Use TestExtras.jl for inference testing (#1098) - No Grad option for TrainState single_train_step(!) (#1181) - Parallel is incompatible with Zygote nested gradient (#1199) - Lux.jl model visualizer (#1214) - Reactant grabbing wrong CUDA version (#1225) - Allow different initializers for input and hidden in recurrent layers (#1241) - Relax MLDataDevices.combine_devices for ReactantDevice and CPUDevice (#1244) - AutoEnzyme() errors on training dense layer (#1249) - New Reactant Release breaks ConcreteRNG (#1251)

Published by github-actions[bot] 12 months ago

Lux - v1.9.0

Lux v1.9.0

Diff since v1.8.0

Merged pull requests: - feat: high-level implementations for Positional Embeddings (#1237) (@avik-pal) - Update LUX.jl/examples/SimpleRNN/main.jl (#1240) (@aligurbu) - fix: adapt to upcoming reactant changes (#1246) (@avik-pal) - chore: bump crate-ci/typos from 1.29.7 to 1.29.9 (#1248) (@dependabot[bot]) - docs: update to new DocumenterVitepress Improvements (#1250) (@avik-pal) - feat: relax conditions (#1252) (@avik-pal) - chore: bump crate-ci/typos from 1.29.9 to 1.30.0 (#1253) (@dependabot[bot]) - feat: allow specifying sharding in ReactantDevice (#1254) (@avik-pal) - refactor: use runic for formatting (#1255) (@avik-pal) - Add way to silence gpu warning (#1256) (@IanButterworth) - feat: tracing stateful layers (#1257) (@avik-pal)

Closed issues: - Relax MLDataDevices.combine_devices for ReactantDevice and CPUDevice (#1244) - AutoEnzyme() errors on training dense layer (#1249) - New Reactant Release breaks ConcreteRNG (#1251)

Published by github-actions[bot] 12 months ago

Lux - MLDataDevices-v1.9.0

MLDataDevices MLDataDevices-v1.9.0

Diff since MLDataDevices-v1.8.0

Merged pull requests: - refactor: use runic for formatting (#1255) (@avik-pal) - Add way to silence gpu warning (#1256) (@IanButterworth)

Published by github-actions[bot] 12 months ago

Lux - MLDataDevices-v1.8.0

MLDataDevices MLDataDevices-v1.8.0

Diff since MLDataDevices-v1.7.0

Merged pull requests: - chore: bump crate-ci/typos from 1.29.9 to 1.30.0 (#1253) (@dependabot[bot]) - feat: allow specifying sharding in ReactantDevice (#1254) (@avik-pal)

Published by github-actions[bot] 12 months ago

Lux - MLDataDevices-v1.7.0

MLDataDevices MLDataDevices-v1.7.0

Diff since MLDataDevices-v1.6.11

Merged pull requests: - chore: bump crate-ci/typos from 1.29.7 to 1.29.9 (#1248) (@dependabot[bot]) - docs: update to new DocumenterVitepress Improvements (#1250) (@avik-pal) - feat: relax conditions (#1252) (@avik-pal)

Closed issues: - Relax MLDataDevices.combine_devices for ReactantDevice and CPUDevice (#1244) - AutoEnzyme() errors on training dense layer (#1249) - New Reactant Release breaks ConcreteRNG (#1251)

Published by github-actions[bot] 12 months ago

Lux - MLDataDevices-v1.6.11

MLDataDevices MLDataDevices-v1.6.11

Diff since MLDataDevices-v1.6.10

Merged pull requests: - feat: compile hypernet with reactant (#1227) (@avik-pal) - chore: bump crate-ci/typos from 1.29.5 to 1.29.7 (#1239) (@dependabot[bot]) - Update LUX.jl/examples/SimpleRNN/main.jl (#1240) (@aligurbu) - feat: distinct recurrent initializer option for recurrent layers (#1242) (@MartinuzziFrancesco) - fix: adapt to upcoming reactant changes (#1246) (@avik-pal)

Closed issues: - Allow different initializers for input and hidden in recurrent layers (#1241)

Published by github-actions[bot] about 1 year ago

Lux - v1.8.0

Lux v1.8.0

Diff since v1.7.0

Merged pull requests: - feat: compile hypernet with reactant (#1227) (@avik-pal) - chore: bump crate-ci/typos from 1.29.5 to 1.29.7 (#1239) (@dependabot[bot]) - feat: distinct recurrent initializer option for recurrent layers (#1242) (@MartinuzziFrancesco)

Closed issues: - Allow different initializers for input and hidden in recurrent layers (#1241)

Published by github-actions[bot] about 1 year ago

Lux - v1.7.0

Lux v1.7.0

Diff since v1.6.0

Merged pull requests: - feat: don't unroll Recurrence (#1209) (@avik-pal) - fix: relax input types (#1226) (@avik-pal) - feat: show how to use model explorer (#1228) (@avik-pal) - Fix small typo in docs: "to compiler models" => "to compile models" (#1229) (@Dale-Black) - CompatHelper: bump compat for MKL in [weakdeps] to 0.8 for package LuxLib, (keep existing compat) (#1231) (@github-actions[bot]) - chore: bump crate-ci/typos from 1.29.4 to 1.29.5 (#1232) (@dependabot[bot])

Closed issues: - Lux.jl model visualizer (#1214)

Published by github-actions[bot] about 1 year ago

Lux - MLDataDevices-v1.6.10

MLDataDevices MLDataDevices-v1.6.10

Diff since MLDataDevices-v1.6.9

Merged pull requests: - feat: don't unroll Recurrence (#1209) (@avik-pal) - fix: relax input types (#1226) (@avik-pal) - feat: show how to use model explorer (#1228) (@avik-pal) - Fix small typo in docs: "to compiler models" => "to compile models" (#1229) (@Dale-Black) - CompatHelper: bump compat for MKL in [weakdeps] to 0.8 for package LuxLib, (keep existing compat) (#1231) (@github-actions[bot]) - chore: bump crate-ci/typos from 1.29.4 to 1.29.5 (#1232) (@dependabot[bot])

Closed issues: - Lux.jl model visualizer (#1214)

Published by github-actions[bot] about 1 year ago

Lux - LuxLib-v1.6.1

LuxLib LuxLib-v1.6.1

Diff since LuxLib-v1.6.0

Merged pull requests: - fix: relax input types (#1226) (@avik-pal)

Published by github-actions[bot] about 1 year ago

Lux - LuxTestUtils-v1.7.1

LuxTestUtils LuxTestUtils-v1.7.1

Diff since LuxTestUtils-v1.7.0

Merged pull requests: - feat: update ConvMixer to support reactant (#1063) (@avik-pal) - ci: use sources for docs (#1100) (@avik-pal) - Add Reactant and TPU to autodiff.md (#1101) (@wsmoses) - refactor: cleanup some old pre-1.0 hacks (#1102) (@avik-pal) - feat: add bf16 function (#1104) (@avik-pal) - docs: add CUDA.CURAND.default_rng() to docs (#1105) (@avik-pal) - fix: use generic broadcasting for complex numbers (#1106) (@avik-pal) - Update exporting_to_jax.md (#1107) (@wsmoses) - CompatHelper: bump compat for LossFunctions in [weakdeps] to 1, (keep existing compat) (#1108) (@github-actions[bot]) - Add TrainState docstring with Optimisers API (#1110) (@abhro) - Fix markdown list in docstring (#1111) (@abhro) - chore: bump crate-ci/typos from 1.27.3 to 1.28.1 (#1113) (@dependabot[bot]) - fix: handle debug leafs with dispatch (#1115) (@avik-pal) - test: allow the latest AMDGPU to be installed (#1116) (@avik-pal) - test: add unsafe_free to skip list (#1117) (@avik-pal) - fix: use the correct dispatches for device overloads (#1118) (@avik-pal) - test: try fixing enzyme test (#1119) (@avik-pal) - ci(github-actions): use julia-actions/cache (#1122) (@avik-pal) - test: re-enable flux testing (#1123) (@avik-pal) - chore: bump minimum Reactant version (#1125) (@avik-pal) - fix: try fixing cuda install in tests (#1126) (@avik-pal) - CompatHelper: bump compat for GPUArraysCore to 0.2, (keep existing compat) (#1127) (@github-actions[bot]) - docs: run partial dataset only on CI (#1128) (@avik-pal) - chore: bump crate-ci/typos from 1.28.1 to 1.28.2 (#1130) (@dependabot[bot]) - fix: preserve object when device is same (#1133) (@avik-pal) - fix: use functors for testing wrapped arrays (#1134) (@avik-pal) - fix: remove old patches around reactant bug (#1135) (@avik-pal) - CompatHelper: bump compat for Flux in [weakdeps] to 0.16, (keep existing compat) (#1136) (@github-actions[bot]) - chore: bump crate-ci/typos from 1.28.2 to 1.28.3 (#1137) (@dependabot[bot]) - fix: update to new 
reactant changes (#1140) (@avik-pal) - feat: emit batch_norm ops from stablehlo (#1142) (@avik-pal) - chore: bump crate-ci/typos from 1.28.3 to 1.28.4 (#1144) (@dependabot[bot]) - don't declare implicitly exported functions public (#1147) (@simeonschaub) - use `return_type` instead of `_return_type` (#1148) (@simeonschaub) - feat: more nested AD rules (#1151) (@avik-pal) - fix: update default rng for reactant (#1152) (@avik-pal) - Change update step to thousands in PINN 2D PDE (#1153) (@abhro) - CompatHelper: add new compat entry for ProgressTables at version 0.1 for package CIFAR10, (keep existing compat) (#1154) (@github-actions[bot]) - `device(NN)` should only give warning once (#1156) (@vpuri3) - feat: conditional VAE (#1157) (@avik-pal) - fix: ConditionalVAE on CI (#1159) (@avik-pal) - CompatHelper: add new compat entry for Optimisers at version 0.4 for package ConditionalVAE, (keep existing compat) (#1161) (@github-actions[bot]) - CompatHelper: add new compat entry for StableRNGs at version 1 for package ConditionalVAE, (keep existing compat) (#1162) (@github-actions[bot]) - CompatHelper: add new compat entry for Comonicon at version 1 for package ConditionalVAE, (keep existing compat) (#1163) (@github-actions[bot]) - [MLDataDevices] Bump deps (#1164) (@pxl-th) - docs: migrate most examples to Reactant (#1180) (@avik-pal) - chore: bump Zygote version (#1182) (@avik-pal) - chore: bump crate-ci/typos from 1.28.4 to 1.29.4 (#1183) (@dependabot[bot]) - fix: pass in RNG to shuffle (#1188) (@avik-pal) - feat: allow no grad option for reactant (#1190) (@avik-pal) - CompatHelper: add new compat entry for Enzyme at version 0.13 for package PINN2DPDE, (keep existing compat) (#1191) (@github-actions[bot]) - CompatHelper: add new compat entry for Reactant at version 0.2 for package PINN2DPDE, (keep existing compat) (#1192) (@github-actions[bot]) - CompatHelper: add new compat entry for Reactant at version 0.2 for package SimpleChains, (keep existing compat) (#1193)
(@github-actions[bot]) - CompatHelper: add new compat entry for Reactant at version 0.2 for package SimpleRNN, (keep existing compat) (#1194) (@github-actions[bot]) - CompatHelper: bump compat for oneAPI in [weakdeps] to 2 for package MLDataDevices, (keep existing compat) (#1195) (@github-actions[bot]) - CompatHelper: bump compat for oneAPI in [weakdeps] to 2 for package WeightInitializers, (keep existing compat) (#1196) (@github-actions[bot]) - chore: bump CairoMakie to 0.13 (#1206) (@avik-pal) - CompatHelper: bump compat for Turing to 0.36 for package BayesianNN, (keep existing compat) (#1207) (@github-actions[bot]) - docs: add GCN Cora example (#1210) (@avik-pal) - CompatHelper: add new compat entry for GNNGraphs at version 1 for package GCN_Cora, (keep existing compat) (#1211) (@github-actions[bot]) - CompatHelper: add new compat entry for OneHotArrays at version 0.2 for package GCN_Cora, (keep existing compat) (#1212) (@github-actions[bot]) - docs: Normalizing Flow (RealNVP) example (#1215) (@avik-pal) - CompatHelper: add new compat entry for Lux at version 1 for package RealNVP, (keep existing compat) (#1221) (@github-actions[bot])

Closed issues: - Immutable Arrays (#8) - Downstream Compat Updates (#880) - Zygote + ForwardDiff support for complex differentiation (#977) - Add CUDA.CURAND.default_rng() to the table (#1003) - Enzyme 0.13 fails with batched matrix multiply (#1024) - getkeypath and layer_map not fully working with model with Parallel layers (#1068) - Re-enable Flux compatibility testing (#1070) - [AMDGPU CI] Circular dependencies disabling precompilation (#1095) - Use TestExtras.jl for inference testing (#1098) - LuxTestUtils.Constant clashes with DifferentiationInterface.Constant (#1103) - Error in trying to use Optimization.jl for LSTM training based on Lux.jl (#1114) - Documentation Build Stalls (#1120) - CUDA Test CI is broken (#1121) - [MLDataDevices] devices don't preserve identity (#1129) - Random Numbers & Reactant (#1131) - Problem with Lux & SymbolicsLuxExt (#1132) - How to implement a detach operation similar to Pytorch? (#1138) - Unexpected handling of LR Schedulers in TrainState (#1143) - Directly construct Optimiser state on Reactant buffers (#1145) - (AbstractDevice)(x) should respect Adapt.adapt_structure (#1149) - CUDA 2nd order AD with MaxPool and logsoftmax (#1150) - Recurrent cells cannot be chained with other layers (#1155) - No Grad option for TrainState single_train_step(!) (#1181) - sparse_init doesn't use provided rng fully (#1185) - Incorrect IR generated for some neural networks (#1186) - WeightInitializers.DeviceAgnostic doesn't respect Reactant (#1187) - Export utilities in partial.jl (#1189) - Parallel is incompatible with Zygote nested gradient (#1199)
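Several of the closed issues above (#1114, #1143, #1181) revolve around the `TrainState` training loop. As a rough sketch of how that API is typically used — assuming Lux 1.x together with Optimisers.jl and Zygote.jl, and using the exported `Training`, `MSELoss`, and `AutoZygote` names — a single optimisation step looks like:

```julia
using Lux, Optimisers, Random, Zygote

# A small toy model.
model = Chain(Dense(2 => 8, tanh), Dense(8 => 1))
ps, st = Lux.setup(Random.default_rng(), model)

# TrainState bundles the model, parameters, states, and optimiser state.
train_state = Training.TrainState(model, ps, st, Adam(0.01f0))

x = rand(Float32, 2, 32)
y = rand(Float32, 1, 32)

# single_train_step! computes the gradient, applies the optimiser update,
# and returns the new TrainState in one call.
grads, loss, stats, train_state =
    Training.single_train_step!(AutoZygote(), MSELoss(), (x, y), train_state)
```

This is a minimal sketch of the documented interface, not a definitive reproduction of any PR in the list; the no-grad variant added in #1190 extends this same entry point.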

- Julia
Published by github-actions[bot] about 1 year ago

Lux - MLDataDevices-v1.6.9

MLDataDevices MLDataDevices-v1.6.9

Diff since MLDataDevices-v1.6.8

Merged pull requests: - feat: emit batch_norm ops from stablehlo (#1142) (@avik-pal) - chore: bump Zygote version (#1182) (@avik-pal) - chore: bump CairoMakie to 0.13 (#1206) (@avik-pal) - CompatHelper: bump compat for Turing to 0.36 for package BayesianNN, (keep existing compat) (#1207) (@github-actions[bot]) - docs: add GCN Cora example (#1210) (@avik-pal) - CompatHelper: add new compat entry for GNNGraphs at version 1 for package GCN_Cora, (keep existing compat) (#1211) (@github-actions[bot]) - CompatHelper: add new compat entry for OneHotArrays at version 0.2 for package GCN_Cora, (keep existing compat) (#1212) (@github-actions[bot]) - docs: Normalizing Flow (RealNVP) example (#1215) (@avik-pal) - CompatHelper: add new compat entry for Lux at version 1 for package RealNVP, (keep existing compat) (#1221) (@github-actions[bot])

Closed issues: - Use TestExtras.jl for inference testing (#1098) - Parallel is incompatible with Zygote nested gradient (#1199)

Published by github-actions[bot] about 1 year ago

Lux - LuxLib-v1.6.0

LuxLib LuxLib-v1.6.0

Diff since LuxLib-v1.5.0

Merged pull requests: - chore: bump Zygote version (#1182) (@avik-pal) - docs: add GCN Cora example (#1210) (@avik-pal) - CompatHelper: add new compat entry for GNNGraphs at version 1 for package GCN_Cora, (keep existing compat) (#1211) (@github-actions[bot]) - CompatHelper: add new compat entry for OneHotArrays at version 0.2 for package GCN_Cora, (keep existing compat) (#1212) (@github-actions[bot]) - docs: Normalizing Flow (RealNVP) example (#1215) (@avik-pal) - CompatHelper: add new compat entry for Lux at version 1 for package RealNVP, (keep existing compat) (#1221) (@github-actions[bot])

Closed issues: - Use TestExtras.jl for inference testing (#1098)

Published by github-actions[bot] about 1 year ago

Lux - v1.6.0

Lux v1.6.0

Diff since v1.5.2

Merged pull requests: - chore: bump Zygote version (#1182) (@avik-pal) - CompatHelper: add new compat entry for Lux at version 1 for package RealNVP, (keep existing compat) (#1221) (@github-actions[bot])

Closed issues: - Use TestExtras.jl for inference testing (#1098)

Published by github-actions[bot] about 1 year ago

Lux - v1.5.2

Lux v1.5.2

Diff since v1.5.1

Merged pull requests: - feat: emit batch_norm ops from stablehlo (#1142) (@avik-pal) - chore: bump CairoMakie to 0.13 (#1206) (@avik-pal) - CompatHelper: bump compat for Turing to 0.36 for package BayesianNN, (keep existing compat) (#1207) (@github-actions[bot]) - docs: add GCN Cora example (#1210) (@avik-pal) - CompatHelper: add new compat entry for GNNGraphs at version 1 for package GCN_Cora, (keep existing compat) (#1211) (@github-actions[bot]) - CompatHelper: add new compat entry for OneHotArrays at version 0.2 for package GCN_Cora, (keep existing compat) (#1212) (@github-actions[bot]) - docs: Normalizing Flow (RealNVP) example (#1215) (@avik-pal)

Closed issues: - Parallel is incompatible with Zygote nested gradient (#1199)

Published by github-actions[bot] about 1 year ago

Lux - LuxLib-v1.5.0

LuxLib LuxLib-v1.5.0

Diff since LuxLib-v1.4.1

Merged pull requests: - CompatHelper: bump compat for GPUArraysCore to 0.2, (keep existing compat) (#1127) (@github-actions[bot]) - feat: emit batch_norm ops from stablehlo (#1142) (@avik-pal) - feat: allow no grad option for reactant (#1190) (@avik-pal) - CompatHelper: add new compat entry for Enzyme at version 0.13 for package PINN2DPDE, (keep existing compat) (#1191) (@github-actions[bot]) - CompatHelper: add new compat entry for Reactant at version 0.2 for package PINN2DPDE, (keep existing compat) (#1192) (@github-actions[bot]) - CompatHelper: add new compat entry for Reactant at version 0.2 for package SimpleChains, (keep existing compat) (#1193) (@github-actions[bot]) - CompatHelper: add new compat entry for Reactant at version 0.2 for package SimpleRNN, (keep existing compat) (#1194) (@github-actions[bot]) - CompatHelper: bump compat for oneAPI in [weakdeps] to 2 for package MLDataDevices, (keep existing compat) (#1195) (@github-actions[bot]) - CompatHelper: bump compat for oneAPI in [weakdeps] to 2 for package WeightInitializers, (keep existing compat) (#1196) (@github-actions[bot]) - chore: bump CairoMakie to 0.13 (#1206) (@avik-pal) - CompatHelper: bump compat for Turing to 0.36 for package BayesianNN, (keep existing compat) (#1207) (@github-actions[bot])

Closed issues: - No Grad option for TrainState single_train_step(!) (#1181) - Parallel is incompatible with Zygote nested gradient (#1199)

Published by github-actions[bot] about 1 year ago

Lux - WeightInitializers-v1.1.1

WeightInitializers WeightInitializers-v1.1.1

Diff since WeightInitializers-v1.1.0

Merged pull requests: - CompatHelper: bump compat for GPUArraysCore to 0.2, (keep existing compat) (#1127) (@github-actions[bot]) - feat: allow no grad option for reactant (#1190) (@avik-pal) - CompatHelper: add new compat entry for Enzyme at version 0.13 for package PINN2DPDE, (keep existing compat) (#1191) (@github-actions[bot]) - CompatHelper: add new compat entry for Reactant at version 0.2 for package PINN2DPDE, (keep existing compat) (#1192) (@github-actions[bot]) - CompatHelper: add new compat entry for Reactant at version 0.2 for package SimpleChains, (keep existing compat) (#1193) (@github-actions[bot]) - CompatHelper: add new compat entry for Reactant at version 0.2 for package SimpleRNN, (keep existing compat) (#1194) (@github-actions[bot]) - CompatHelper: bump compat for oneAPI in [weakdeps] to 2 for package MLDataDevices, (keep existing compat) (#1195) (@github-actions[bot]) - CompatHelper: bump compat for oneAPI in [weakdeps] to 2 for package WeightInitializers, (keep existing compat) (#1196) (@github-actions[bot])

Closed issues: - No Grad option for TrainState single_train_step(!) (#1181)

Published by github-actions[bot] about 1 year ago

Lux - MLDataDevices-v1.6.8

MLDataDevices MLDataDevices-v1.6.8

Diff since MLDataDevices-v1.6.7

Merged pull requests: - CompatHelper: bump compat for GPUArraysCore to 0.2, (keep existing compat) (#1127) (@github-actions[bot]) - docs: migrate most examples to Reactant (#1180) (@avik-pal) - chore: bump crate-ci/typos from 1.28.4 to 1.29.4 (#1183) (@dependabot[bot]) - fix: pass in RNG to shuffle (#1188) (@avik-pal) - feat: allow no grad option for reactant (#1190) (@avik-pal) - CompatHelper: add new compat entry for Enzyme at version 0.13 for package PINN2DPDE, (keep existing compat) (#1191) (@github-actions[bot]) - CompatHelper: add new compat entry for Reactant at version 0.2 for package PINN2DPDE, (keep existing compat) (#1192) (@github-actions[bot]) - CompatHelper: add new compat entry for Reactant at version 0.2 for package SimpleChains, (keep existing compat) (#1193) (@github-actions[bot]) - CompatHelper: add new compat entry for Reactant at version 0.2 for package SimpleRNN, (keep existing compat) (#1194) (@github-actions[bot]) - CompatHelper: bump compat for oneAPI in [weakdeps] to 2 for package MLDataDevices, (keep existing compat) (#1195) (@github-actions[bot]) - CompatHelper: bump compat for oneAPI in [weakdeps] to 2 for package WeightInitializers, (keep existing compat) (#1196) (@github-actions[bot])

Closed issues: - No Grad option for TrainState single_train_step(!) (#1181) - sparse_init doesn't use provided rng fully (#1185) - Incorrect IR generated for some neural networks (#1186) - WeightInitializers.DeviceAgnostic doesn't respect Reactant (#1187) - Export utilities in partial.jl (#1189)

Published by github-actions[bot] about 1 year ago

Lux - v1.5.1

Lux v1.5.1

Diff since v1.5.0

Merged pull requests: - CompatHelper: bump compat for GPUArraysCore to 0.2, (keep existing compat) (#1127) (@github-actions[bot]) - CompatHelper: add new compat entry for Enzyme at version 0.13 for package PINN2DPDE, (keep existing compat) (#1191) (@github-actions[bot]) - CompatHelper: add new compat entry for Reactant at version 0.2 for package PINN2DPDE, (keep existing compat) (#1192) (@github-actions[bot]) - CompatHelper: add new compat entry for Reactant at version 0.2 for package SimpleChains, (keep existing compat) (#1193) (@github-actions[bot]) - CompatHelper: add new compat entry for Reactant at version 0.2 for package SimpleRNN, (keep existing compat) (#1194) (@github-actions[bot]) - CompatHelper: bump compat for oneAPI in [weakdeps] to 2 for package MLDataDevices, (keep existing compat) (#1195) (@github-actions[bot]) - CompatHelper: bump compat for oneAPI in [weakdeps] to 2 for package WeightInitializers, (keep existing compat) (#1196) (@github-actions[bot])

Published by github-actions[bot] about 1 year ago

Lux - v1.5.0

Lux v1.5.0

Diff since v1.4.4

Merged pull requests: - CompatHelper: add new compat entry for ProgressTables at version 0.1 for package CIFAR10, (keep existing compat) (#1154) (@github-actions[bot]) - device(NN) should only give warning once (#1156) (@vpuri3) - feat: conditional VAE (#1157) (@avik-pal) - fix: ConditionalVAE on CI (#1159) (@avik-pal) - CompatHelper: add new compat entry for Optimisers at version 0.4 for package ConditionalVAE, (keep existing compat) (#1161) (@github-actions[bot]) - CompatHelper: add new compat entry for StableRNGs at version 1 for package ConditionalVAE, (keep existing compat) (#1162) (@github-actions[bot]) - CompatHelper: add new compat entry for Comonicon at version 1 for package ConditionalVAE, (keep existing compat) (#1163) (@github-actions[bot]) - [MLDataDevices] Bump deps (#1164) (@pxl-th) - docs: migrate most examples to Reactant (#1180) (@avik-pal) - chore: bump crate-ci/typos from 1.28.4 to 1.29.4 (#1183) (@dependabot[bot]) - fix: pass in RNG to shuffle (#1188) (@avik-pal) - feat: allow no grad option for reactant (#1190) (@avik-pal)

Closed issues: - Recurrent cells cannot be chained with other layers (#1155) - No Grad option for TrainState single_train_step(!) (#1181) - sparse_init doesn't use provided rng fully (#1185) - Incorrect IR generated for some neural networks (#1186) - WeightInitializers.DeviceAgnostic doesn't respect Reactant (#1187) - Export utilities in partial.jl (#1189)
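Closed issue #1155 above concerns chaining recurrent cells with other layers. In Lux the usual pattern is to wrap a cell in `Recurrence` before composing it in a `Chain`; a hedged sketch, assuming Lux 1.x and the documented default input ordering of `(features, time, batch)` for 3D inputs:

```julia
using Lux, Random

# Wrap the cell so it consumes a whole sequence, then chain a Dense head on top.
# By default Recurrence returns only the final hidden state.
model = Chain(
    Recurrence(LSTMCell(3 => 8)),
    Dense(8 => 2),
)

ps, st = Lux.setup(Random.default_rng(), model)

# Input: feature_dim × sequence_length × batch.
x = rand(Float32, 3, 10, 16)
y, st = model(x, ps, st)
```

This is an illustrative sketch of the public `Recurrence` interface rather than the exact fix merged for #1155.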

Published by github-actions[bot] about 1 year ago

Lux - LuxCore-v1.2.2

LuxCore LuxCore-v1.2.2

Diff since LuxCore-v1.2.1

Merged pull requests: - feat: update ConvMixer to support reactant (#1063) (@avik-pal) - test: re-enable flux testing (#1123) (@avik-pal) - chore: bump minimum Reactant version (#1125) (@avik-pal) - fix: try fixing cuda install in tests (#1126) (@avik-pal) - docs: run partial dataset only on CI (#1128) (@avik-pal) - chore: bump crate-ci/typos from 1.28.1 to 1.28.2 (#1130) (@dependabot[bot]) - fix: preserve object when device is same (#1133) (@avik-pal) - fix: use functors for testing wrapped arrays (#1134) (@avik-pal) - fix: remove old patches around reactant bug (#1135) (@avik-pal) - CompatHelper: bump compat for Flux in [weakdeps] to 0.16, (keep existing compat) (#1136) (@github-actions[bot]) - chore: bump crate-ci/typos from 1.28.2 to 1.28.3 (#1137) (@dependabot[bot]) - fix: update to new reactant changes (#1140) (@avik-pal) - chore: bump crate-ci/typos from 1.28.3 to 1.28.4 (#1144) (@dependabot[bot]) - don't declare implicitly exported functions public (#1147) (@simeonschaub) - use return_type instead of _return_type (#1148) (@simeonschaub) - feat: more nested AD rules (#1151) (@avik-pal) - fix: update default rng for reactant (#1152) (@avik-pal) - Change update step to thousands in PINN 2D PDE (#1153) (@abhro) - CompatHelper: add new compat entry for ProgressTables at version 0.1 for package CIFAR10, (keep existing compat) (#1154) (@github-actions[bot]) - device(NN) should only give warning once (#1156) (@vpuri3) - feat: conditional VAE (#1157) (@avik-pal) - fix: ConditionalVAE on CI (#1159) (@avik-pal) - CompatHelper: add new compat entry for Optimisers at version 0.4 for package ConditionalVAE, (keep existing compat) (#1161) (@github-actions[bot]) - CompatHelper: add new compat entry for StableRNGs at version 1 for package ConditionalVAE, (keep existing compat) (#1162) (@github-actions[bot]) - CompatHelper: add new compat entry for Comonicon at version 1 for package ConditionalVAE, (keep existing compat) (#1163) (@github-actions[bot]) - [MLDataDevices] Bump 
deps (#1164) (@pxl-th) - docs: migrate most examples to Reactant (#1180) (@avik-pal) - chore: bump crate-ci/typos from 1.28.4 to 1.29.4 (#1183) (@dependabot[bot]) - fix: pass in RNG to shuffle (#1188) (@avik-pal)

Closed issues: - Immutable Arrays (#8) - Downstream Compat Updates (#880) - Re-enable Flux compatibility testing (#1070) - Documentation Build Stalls (#1120) - CUDA Test CI is broken (#1121) - [MLDataDevices] devices don't preserve identity (#1129) - Random Numbers & Reactant (#1131) - Problem with Lux & SymbolicsLuxExt (#1132) - How to implement a detach operation similar to Pytorch? (#1138) - Unexpected handling of LR Schedulers in TrainState (#1143) - Directly construct Optimiser state on Reactant buffers (#1145) - (AbstractDevice)(x) should respect Adapt.adapt_structure (#1149) - CUDA 2nd order AD with MaxPool and logsoftmax (#1150) - Recurrent cells cannot be chained with other layers (#1155) - sparse_init doesn't use provided rng fully (#1185) - Incorrect IR generated for some neural networks (#1186) - WeightInitializers.DeviceAgnostic doesn't respect Reactant (#1187)

Published by github-actions[bot] about 1 year ago

Lux - LuxLib-v1.4.1

LuxLib LuxLib-v1.4.1

Diff since LuxLib-v1.4.0

Merged pull requests: - feat: update ConvMixer to support reactant (#1063) (@avik-pal) - fix: update default rng for reactant (#1152) (@avik-pal) - Change update step to thousands in PINN 2D PDE (#1153) (@abhro) - CompatHelper: add new compat entry for ProgressTables at version 0.1 for package CIFAR10, (keep existing compat) (#1154) (@github-actions[bot]) - device(NN) should only give warning once (#1156) (@vpuri3) - feat: conditional VAE (#1157) (@avik-pal) - fix: ConditionalVAE on CI (#1159) (@avik-pal) - CompatHelper: add new compat entry for Optimisers at version 0.4 for package ConditionalVAE, (keep existing compat) (#1161) (@github-actions[bot]) - CompatHelper: add new compat entry for StableRNGs at version 1 for package ConditionalVAE, (keep existing compat) (#1162) (@github-actions[bot]) - CompatHelper: add new compat entry for Comonicon at version 1 for package ConditionalVAE, (keep existing compat) (#1163) (@github-actions[bot]) - [MLDataDevices] Bump deps (#1164) (@pxl-th) - docs: migrate most examples to Reactant (#1180) (@avik-pal) - chore: bump crate-ci/typos from 1.28.4 to 1.29.4 (#1183) (@dependabot[bot]) - fix: pass in RNG to shuffle (#1188) (@avik-pal)

Closed issues: - Random Numbers & Reactant (#1131) - Recurrent cells cannot be chained with other layers (#1155) - sparse_init doesn't use provided rng fully (#1185) - Incorrect IR generated for some neural networks (#1186) - WeightInitializers.DeviceAgnostic doesn't respect Reactant (#1187)

Published by github-actions[bot] about 1 year ago

Lux - WeightInitializers-v1.1.0

WeightInitializers WeightInitializers-v1.1.0

Diff since WeightInitializers-v1.0.5

Merged pull requests: - docs: migrate most examples to Reactant (#1180) (@avik-pal)

Closed issues: - WeightInitializers.DeviceAgnostic doesn't respect Reactant (#1187)
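The WeightInitializers releases above ship the standalone initializer functions (e.g. the kaiming initialization added back in #138). A minimal sketch of the call convention, assuming WeightInitializers 1.x where every initializer takes an RNG, an element type, and the dimensions, and also supports partial application:

```julia
using WeightInitializers, Random

rng = Random.default_rng()

# Initializers follow init(rng, eltype, dims...) and return a plain Array.
W = kaiming_normal(rng, Float32, 8, 4)
b = zeros32(rng, 8)

# Partial application: fix the RNG and hyperparameters first, supply dims later.
init = kaiming_uniform(rng; gain = 1.0f0)
W2 = init(8, 4)
```

The names here are the documented public initializers; the issue closed above (#1187) concerned how these functions interact with Reactant device placement, which this sketch does not exercise.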

Published by github-actions[bot] about 1 year ago

Lux - WeightInitializers-v1.0.5

WeightInitializers WeightInitializers-v1.0.5

Merged pull requests: - Rewrite (#7) (@avik-pal) - Rename to Lux (#11) (@avik-pal) - Initial Documentation (#14) (@avik-pal) - Minor Updates (#15) (@avik-pal) - Better CUDNN Dispatches (#16) (@avik-pal) - Tutorials (#21) (@avik-pal) - Proper dispatch for types not supported by CUDNN (#23) (@avik-pal) - [WIP] Recurrent Neural Networks (#24) (@avik-pal) - Fix math display in docs (#27) (@gdalle) - Initial ViT Implementation & Pretrained ImageNet Models (#29) (@avik-pal) - CompatHelper: bump compat for Setfield to 1, (keep existing compat) (#30) (@github-actions[bot]) - Code Formatting -- SciMLStyle (#31) (@avik-pal) - Cleanup generated function style (#33) (@avik-pal) - Update README.md (#37) (@zsz00) - Fix doc for PairwiseFusion (#39) (@theabhirath) - Extending Scale to allow for multiple dimension inputs (#40) (@theabhirath) - Fix Zygote error caused due to fill! (#41) (@theabhirath) - CompatHelper: bump compat for ComponentArrays to 0.12, (keep existing compat) (#43) (@github-actions[bot]) - Update JET tests to allow julia v1.6 (#47) (@avik-pal) - Formatting updates and relax parameter type (#48) (@avik-pal) - Enable doctests in CI (#51) (@avik-pal) - fix quickstart example (#52) (@visr) - Test on 1.8 (#54) (@avik-pal) - Separate out testing unreleased julia versions (#55) (@avik-pal) - Cleaner and Better Documentation (#56) (@avik-pal) - Bump Pkg Compats (#66) (@avik-pal) - CompatHelper: bump compat for MLDatasets to 0.7 for package examples, (keep existing compat) (#67) (@github-actions[bot]) - Manual to translate Flux to Lux (#69) (@avik-pal) - Try codecov for doctests (#70) (@avik-pal) - Add tests for utility functions (#74) (@avik-pal) - Add tip to install packages (#76) (@Karthik-d-k) - More Testing + Deprecate Nonsensical Functions + Better Naming for Kwargs (#80) (@avik-pal) - CompatHelper: add new compat entry for Optimisers at version 0.2, (keep existing compat) (#82) (@github-actions[bot]) - Update rrules so that we can support Yota (#85) (@avik-pal) - 
CompatHelper: bump compat for FluxMPI to 0.6 for package examples, (keep existing compat) (#86) (@github-actions[bot]) - Update comparison section in overview.md (#88) (@ToucheSir) - Fix typos (#89) (@claforte) - Fix minor typos in the docs (#93) (@gabrevaya) - making x Float32 in migrate from Flux example (#97) (@gabrevaya) - add init_hidden_state function (#101) (@gabrevaya) - JLArray is now registered (#103) (@YichengDWu) - [LuxTraining] Wrappers for less clunky training loops (#104) (@avik-pal) - Use OneHotArrays (#105) (@YichengDWu) - Fixes WeightNorm with zero Parameter bug (#106) (@avik-pal) - fix state update in NeuralODE example (#107) (@gabrevaya) - Deprecate elementwise_* and applyactivation (#113) (@avik-pal) - Go through the dense bias deprecation (#114) (@avik-pal) - Fix Scale's paramlength (#116) (@lungd) - Trainable hidden states (#117) (@lungd) - Rnn bias deprecation (#120) (@lungd) - Add use_bias kwarg to LSTMCell and GRUCell (#121) (@lungd) - Update docs for dense layer (#124) (@avik-pal) - Upper bound ComponentArrays (#125) (@avik-pal) - Relax ComponentArrays compat (#126) (@avik-pal) - Layer Normalization Implementation (#127) (@avik-pal) - LSTM docs: don't go over first element in sequence twice (#132) (@visr) - fix PairwiseFusion docs (#133) (@YichengDWu) - Generic recurrent cells (#136) (@jumerckx) - relu tests with finite diff is too unreliable (#137) (@avik-pal) - Add kaiming initialization (#138) (@YichengDWu) - Remove Val in typeinfo of WeightNorm (#140) (@avik-pal) - Named Layers inside Generic Containers (#143) (@avik-pal) - Allow fmapping over the model (#144) (@avik-pal) - Update Imagenet example (#147) (@avik-pal) - Make normalization more AD friendly (Diffractor) (#148) (@avik-pal) - Fix CuArray -> Array rrule (#149) (@avik-pal) - Allow indexing into Chains (#150) (@avik-pal) - API for freezing layers (#151) (@avik-pal) - Allow controlling fast activation transformation (#153) (@avik-pal) - Introducing LuxLib.jl: Effectively pullout
some of the custom layer implementations from Lux.jl (#154) (@avik-pal) - Try relaxing JET version (#155) (@avik-pal) - Update to use LuxLib (#156) (@avik-pal) - Allow dispatch using Lux.apply (#158) (@avik-pal) - Mark non differentiable code paths (#160) (@avik-pal) - Fix generic GN dispatch for non 4D arrays (#161) (@avik-pal) - Add dispatch for subarray (#162) (@avik-pal) - Add More Layers (#163) (@avik-pal) - Fix type stability in normalization implementation (#164) (@avik-pal) - Codecov for lib directories Take 2 (#165) (@avik-pal) - Add freeze tests to runtests (#166) (@avik-pal) - Precompile common workflows + check invalidations (#167) (@avik-pal) - Make normalization typestable (#168) (@avik-pal) - Add a manual page on precompilation (#169) (@avik-pal) - Deprecate Lux.transform in favor of Flux2Lux.jl (#170) (@avik-pal) - Remove dead code and improve var for Tracker.jl support (#171) (@avik-pal) - Hyper Network Example (#172) (@avik-pal) - Modify mkdocs settings (#173) (@avik-pal) - Make ViT work on GPUs (#174) (@avik-pal) - Add sensible recurrent layer wrappers (#175) (@avik-pal) - setup only on AbstractRules (#176) (@avik-pal) - Start using Flux2Lux (#177) (@avik-pal) - Fix some displays (#178) (@avik-pal) - Relax dropout types (#179) (@avik-pal) - Add instancenorm and alphadropout implementations (#180) (@avik-pal) - Add InstanceNorm and AlphaDropout (#181) (@avik-pal) - CompatHelper: bump compat for MLUtils to 0.3 for package examples, (keep existing compat) (#184) (@github-actions[bot]) - remove convert rrule (#185) (@ArnoStrouwen) - CompatHelper: bump compat for OneHotArrays to 0.2 for package examples, (keep existing compat) (#186) (@github-actions[bot]) - CompatHelper: bump compat for Turing to 0.22 for package examples, (keep existing compat) (#188) (@github-actions[bot]) - Fix layermap for custom layers (#189) (@avik-pal) - add example of DDIM implementation (#190) (@yng87) - LuxCore.jl: Extremely light dependency for Lux Compatibility (#191) 
(@avik-pal) - Revert github workflows for merged LuxCore.jl (#193) (@avik-pal) - CompatHelper: bump compat for MLUtils to 0.3 for package ImageNet, (keep existing compat) (#194) (@github-actions[bot]) - CompatHelper: bump compat for Setfield to 1 for package ImageNet, (keep existing compat) (#195) (@github-actions[bot]) - CompatHelper: bump compat for OneHotArrays to 0.2 for package ImageNet, (keep existing compat) (#196) (@github-actions[bot]) - ADAM -> Adam (#197) (@cossio) - CompatHelper: bump compat for Functors to 0.4, (keep existing compat) (#199) (@github-actions[bot]) - CompatHelper: bump compat for Functors to 0.4 for package examples, (keep existing compat) (#200) (@github-actions[bot]) - CompatHelper: bump compat for Functors to 0.4 for package ImageNet, (keep existing compat) (#201) (@github-actions[bot]) - Add easy tied weights/parameter sharing support (#202) (@avik-pal) - CompatHelper: bump compat for Functors to 0.4 for package LuxCore, (keep existing compat) (#203) (@github-actions[bot]) - CompatHelper: add new compat entry for Zygote at version 0.6 for package DDIM, (keep existing compat) (#218) (@github-actions[bot]) - Update DDIM compat requirements (#219) (@avik-pal) - Update examples (#221) (@avik-pal) - CompatHelper: bump compat for Turing to 0.23 for package examples, (keep existing compat) (#222) (@github-actions[bot]) - Fix docs (#223) (@avik-pal) - CompatHelper: bump compat for MLUtils to 0.4 for package examples, (keep existing compat) (#226) (@github-actions[bot]) - CompatHelper: bump compat for MLUtils to 0.4 for package ImageNet, (keep existing compat) (#227) (@github-actions[bot]) - CompatHelper: bump compat for MLUtils to 0.4 for package DDIM, (keep existing compat) (#228) (@github-actions[bot]) - Functor ambiguity fix (#229) (@avik-pal) - Add all compats together (#238) (@avik-pal) - CompatHelper: bump compat for Turing to 0.24 for package examples, (keep existing compat) (#241) (@github-actions[bot]) - CompatHelper: bump compat 
for JET to 0.7 for package test, (keep existing compat) (#251) (@github-actions[bot]) - [WIP] Use Extensions for Flux2Lux (#261) (@avik-pal) - Cleaner test workflow (#262) (@avik-pal) - Add a patch for #243 (#263) (@avik-pal) - Update LuxLib dependencies (#265) (@avik-pal) - Dropping Julia 1.6 support for Lux (#266) (@avik-pal) - Purge unnecessary dependencies into weak dependencies (#267) (@avik-pal) - Add ForwardDiff Extension: Dropout (#269) (@avik-pal) - Add Tracker as an Extension (#272) (@avik-pal) - CompatHelper: bump compat for AbstractDifferentiation to 0.5 for package examples, (keep existing compat) (#273) (@github-actions[bot]) - Some Improvements (#274) (@avik-pal) - Tracker has some of the rules (#275) (@avik-pal) - Temporary CA + Tracker Patches (#276) (@avik-pal) - Add CUDA and AMDGPU trigger packages (#277) (@avik-pal) - ReverseDiff Extension (#280) (@avik-pal) - Bump peter-evans/create-pull-request from 3 to 4 (#283) (@dependabot[bot]) - Bump actions/cache from 1 to 3 (#284) (@dependabot[bot]) - Bump actions/checkout from 1 to 3 (#285) (@dependabot[bot]) - Return the history for Recurrence (#287) (@avik-pal) - Truncate tuples and namedtuples (#290) (@avik-pal) - [WIP] Remove projects from lib to LuxDL (#291) (@avik-pal) - Patch freeze (#292) (@avik-pal) - Add dispatch for no activation (#293) (@avik-pal) - Remove weakdeps from deps (#295) (@avik-pal) - Try restoring lts support (#296) (@avik-pal) - Testing using LuxTestUtils.jl (#297) (@avik-pal) - CompatHelper: bump compat for Boltz to 0.2 for package ImageNet, (kee… (#298) (@avik-pal) - Bump peter-evans/create-pull-request from 4 to 5 (#299) (@dependabot[bot]) - remove Dataloaders (#300) (@avik-pal) - Update docs (#301) (@avik-pal) - Fix bug in recurrence ordering (#303) (@avik-pal) - Update LuxComponentArraysExt.jl (#304) (@avik-pal) - CompatHelper: bump compat for Turing to 0.25 for package examples, (keep existing compat) (#306) (@github-actions[bot]) - propertynames of CA from type (#307) 
(@avik-pal) - Fix GRUCell docstring (#309) (@andreuvall) - Fix enzyme doc to reflect custom rules (#310) (@wsmoses) - Fixed link to sciml book in NeuralODE example (#311) (@MartinuzziFrancesco) - Move documentation build to buildkite (#314) (@avik-pal) - Fixed Boltz.jl link in docs (#316) (@MartinuzziFrancesco) - Allow container layers to have custom names (#317) (@avik-pal) - Small grammar and style fixes (#318) (@MartinuzziFrancesco) - Added 'applyactivation' to 'RNNCell's (#319) (@MartinuzziFrancesco) - Added AbstractRecurrentCell (#322) (@MartinuzziFrancesco) - Towards v0.5 Take II (@avik-pal) - Fix errors in applying bilinear layer to ND arrays (#333) (@vpuri3) - Use WeightInitializers.jl (#334) (@avik-pal) - Use PackageExtensionCompat (#335) (@avik-pal) - CompatHelper: add new compat entry for LuxCUDA at version 0.1 for package ImageNet, (keep existing compat) (#337) (@github-actions[bot]) - CompatHelper: add new compat entry for LuxAMDGPU at version 0.1 for package ImageNet, (keep existing compat) (#338) (@github-actions[bot]) - Basic 2nd order support (#339) (@avik-pal) - Use LuxLib 0.3 (#340) (@avik-pal) - Workaround https://github.com/cjdoris/PackageExtensionCompat.jl/issues/9 (#344) (@avik-pal) - Merge pull request #344 from LuxDL/ap/lux0.4 (#346) (@avik-pal) - Fixes for compat (#350) (@avik-pal) - Fix ext docs (#351) (@avik-pal) - Allow modifying ordering of data for recurrence (#353) (@avik-pal) - CompatHelper: bump compat for ComponentArrays to 0.14 for package examples, (keep existing compat) (#355) (@github-actions[bot]) - Fix AMDGPU tests and versions (#356) (@avik-pal) - Clean up the codebase (#357) (@avik-pal) - Add example on how to save the models (#358) (@avik-pal) - DOCFIX: LayerNorm's affine default value was incorrectly noted as 'false' in doc. 
(#359) (@srikumarks) - CompatHelper: bump compat for Lux to 0.5 for package ImageNet, (keep existing compat) (#362) (@github-actions[bot]) - CompatHelper: bump compat for Lux to 0.5 for package DDIM, (keep existing compat) (#363) (@github-actions[bot]) - CompatHelper: bump compat for Images to 0.26 for package ImageNet, (keep existing compat) (#365) (@github-actions[bot]) - CompatHelper: bump compat for Images to 0.26 for package DDIM, (keep existing compat) (#366) (@github-actions[bot]) - Fix url link to Deep learning with Flux tutorial (#367) (@pnavaro) - CompatHelper: bump compat for Turing to 0.27 for package examples, (keep existing compat) (#368) (@github-actions[bot]) - CompatHelper: bump compat for Turing to 0.28 for package examples, (keep existing compat) (#372) (@github-actions[bot]) - Boltz Link was not working, updated (#373) (@ashwanirathee) - Formatting fix (#379) (@avik-pal) - CompatHelper: bump compat for ADTypes to 0.2, (keep existing compat) (#380) (@github-actions[bot]) - Move experimental code to Experimental (#381) (@avik-pal) - CompatHelper: bump compat for Boltz to 0.3 for package ImageNet, (keep existing compat) (#382) (@github-actions[bot]) - Migrate Docs to using Vitepress (#383) (@avik-pal) - Add Potential CUDA Grouped Conv segfault test (#388) (@avik-pal) - Add Tutorial on modeling gravitational waveforms (#389) (@avik-pal) - CompatHelper: bump compat for Optimisers to 0.3, (keep existing compat) (#390) (@github-actions[bot]) - CompatHelper: add new compat entry for CSV at version 0.10 for package examples, (keep existing compat) (#391) (@github-actions[bot]) - CompatHelper: add new compat entry for Optimization at version 3 for package examples, (keep existing compat) (#392) (@github-actions[bot]) - CompatHelper: bump compat for Optimisers to 0.3 for package examples, (keep existing compat) (#393) (@github-actions[bot]) - CompatHelper: add new compat entry for LineSearches at version 7 for package examples, (keep existing compat) 
(#394) (@github-actions[bot]) - CompatHelper: add new compat entry for OptimizationOptimJL at version 0.1 for package examples, (keep existing compat) (#395) (@github-actions[bot]) - CompatHelper: bump compat for Optimisers to 0.3 for package ImageNet, (keep existing compat) (#396) (@github-actions[bot]) - CompatHelper: bump compat for Optimisers to 0.3 for package DDIM, (keep existing compat) (#397) (@github-actions[bot]) - Restructure for autosidebar (#398) (@avik-pal) - Use separate Project and Manifest files (#399) (@avik-pal) - Use separate processes to generate the tutorials (#400) (@avik-pal) - Add f16, f32, f64 functions for easy parameter eltype conversions (#401) (@avik-pal) - Add a @debug_mode for debugging NaNs and Errors (#402) (@avik-pal) - Add a stateful layer which prevents boxing in SciML Layers (#404) (@avik-pal) - CompatHelper: bump compat for Turing to 0.29 for package BayesianNN, (keep existing compat) (#405) (@github-actions[bot]) - CompatHelper: bump compat for ComponentArrays to 0.15 for package Basics, (keep existing compat) (#408) (@github-actions[bot]) - CompatHelper: bump compat for ComponentArrays to 0.15 for package GravitationalWaveForm, (keep existing compat) (#409) (@github-actions[bot]) - CompatHelper: bump compat for ComponentArrays to 0.15 for package HyperNet, (keep existing compat) (#410) (@github-actions[bot]) - CompatHelper: bump compat for ComponentArrays to 0.15 for package NeuralODE, (keep existing compat) (#411) (@github-actions[bot]) - Bump actions/checkout from 3 to 4 (#412) (@dependabot[bot]) - Change Mean to Max Pooling layer in docstring skip ci (@roflmaostc) - Upstream CA patches for AD Packages (#414) (@avik-pal) - docs: fix the ecosystem link (#419) (@sathvikbhagavan) - GPU Downstream testing (#421) (@avik-pal) - Neural PDE downstream (#422) (@avik-pal) - Minor Fixes (#425) (@avik-pal) - Ensure ReverseDiff and Gauss Adjoint is also tested (#431) (@avik-pal) - CompatHelper: bump compat for LuxAMDGPU to 0.2 for 
package DDIM, (keep existing compat) (#433) (@github-actions[bot]) - CompatHelper: bump compat for LuxAMDGPU to 0.2 for package GravitationalWaveForm, (keep existing compat) (#434) (@github-actions[bot]) - CompatHelper: bump compat for LuxAMDGPU to 0.2 for package HyperNet, (keep existing compat) (#435) (@github-actions[bot]) - CompatHelper: bump compat for LuxAMDGPU to 0.2 for package ImageNet, (keep existing compat) (#436) (@github-actions[bot]) - CompatHelper: bump compat for LuxAMDGPU to 0.2 for package NeuralODE, (keep existing compat) (#437) (@github-actions[bot]) - CompatHelper: bump compat for LuxAMDGPU to 0.2 for package PolynomialFitting, (keep existing compat) (#438) (@github-actions[bot]) - CompatHelper: bump compat for LuxAMDGPU to 0.2 for package SimpleRNN, (keep existing compat) (#439) (@github-actions[bot]) - Update Project.toml (#440) (@avik-pal) - Emergency patch the ChainRules bug for Vector of CuArrays (#442) (@avik-pal) - CompatHelper: add new compat entry for Statistics at version 1, (keep existing compat) (#443) (@github-actions[bot]) - CompatHelper: add new compat entry for Statistics at version 1 for package DDIM, (keep existing compat) (#444) (@github-actions[bot]) - CompatHelper: add new compat entry for Statistics at version 1 for package HyperNet, (keep existing compat) (#445) (@github-actions[bot]) - CompatHelper: add new compat entry for Statistics at version 1 for package ImageNet, (keep existing compat) (#446) (@github-actions[bot]) - CompatHelper: add new compat entry for Statistics at version 1 for package NeuralODE, (keep existing compat) (#447) (@github-actions[bot]) - CompatHelper: add new compat entry for Statistics at version 1 for package PolynomialFitting, (keep existing compat) (#448) (@github-actions[bot]) - CompatHelper: add new compat entry for Statistics at version 1 for package SimpleRNN, (keep existing compat) (#449) (@github-actions[bot]) - Add periodic padding to documentation (#452) (@maximilian-gelbrecht) - Fix 
link to documentation in README.md (#454) (@pierre-haessig) - Add CA test for Nested AutoDiff (#458) (@avik-pal) - CompatHelper: bump compat for CairoMakie to 0.11 for package BayesianNN, (keep existing compat) (#459) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.11 for package GravitationalWaveForm, (keep existing compat) (#460) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.11 for package PolynomialFitting, (keep existing compat) (#461) (@github-actions[bot]) - Update WeightInitializers documentation (#465) (@avik-pal) - Allow dispatch on compact layers and use let blocks for faster closures (#466) (@avik-pal) - Add a RepeatedLayer (#467) (@avik-pal) - Fix check (#469) (@avik-pal) - CompatHelper: bump compat for Adapt to 4, (keep existing compat) (#470) (@github-actions[bot]) - Patch Metal Recurrent Neural Networks (#474) (@avik-pal) - Bump actions/cache from 3 to 4 (#479) (@dependabot[bot]) - Bump codecov/codecov-action from 3 to 4 (#484) (@dependabot[bot]) - Bump peter-evans/create-pull-request from 5 to 6 (#485) (@dependabot[bot]) - Drop 1.6 support + Patches to Fix Tests (#487) (@avik-pal) - Remove extensions in favor of GPUArraysCore (#488) (@avik-pal) - Parallel Testing + Distributed Docs build (#490) (@avik-pal) - Add output lengths for layers (#491) (@SebastianM-C) - Format code (#493) (@avik-pal) - Try using DocumenterVitepress.jl (#496) (@avik-pal) - Move Stateful lux layer out of experimental (#497) (@avik-pal) - Inbuilt-Distributed Setup (#500) (@avik-pal) - Remove ComponentArrays type-piracies (#501) (@avik-pal) - Add outputsize for Chain (#503) (@SebastianM-C) - fixes ImageNet, SimpleRNN examples (#504) (@avik-pal) - Documentation Fixes (#505) (@avik-pal) - Fix tutorial numbering (#509) (@avik-pal) - CompatHelper: add new compat entry for LuxAMDGPU at version 0.2 for package Basics, (keep existing compat) (#510) (@github-actions[bot]) - CompatHelper: add new compat entry for Metalhead at version 0.9 
for package ImageNet, (keep existing compat) (#511) (@github-actions[bot]) - CompatHelper: add new compat entry for Flux at version 0.14 for package ImageNet, (keep existing compat) (#512) (@github-actions[bot]) - Patches (#519) (@avik-pal) - Docs Again (#520) (@avik-pal) - General Quality of Life Enhancements (#521) (@avik-pal) - CompatHelper: add new compat entry for Literate at version 2 for package Basics, (keep existing compat) (#522) (@github-actions[bot]) - CompatHelper: add new compat entry for Literate at version 2 for package BayesianNN, (keep existing compat) (#523) (@github-actions[bot]) - CompatHelper: add new compat entry for Literate at version 2 for package GravitationalWaveForm, (keep existing compat) (#524) (@github-actions[bot]) - CompatHelper: add new compat entry for Literate at version 2 for package HyperNet, (keep existing compat) (#525) (@github-actions[bot]) - CompatHelper: add new compat entry for Literate at version 2 for package NeuralODE, (keep existing compat) (#526) (@github-actions[bot]) - CompatHelper: add new compat entry for Literate at version 2 for package PolynomialFitting, (keep existing compat) (#527) (@github-actions[bot]) - CompatHelper: add new compat entry for Literate at version 2 for package SimpleRNN, (keep existing compat) (#528) (@github-actions[bot]) - New Interface to switch between frameworks (#529) (@avik-pal) - CompatHelper: add new compat entry for MLUtils at version 0.4 for package SimpleChains, (keep existing compat) (#530) (@github-actions[bot]) - Move replicate to LuxCore (#532) (@MartinuzziFrancesco) - Test for implicit imports (#533) (@avik-pal) - Fix https://github.com/LuxDL/Lux.jl/issues/534 (#535) (@avik-pal) - Fix Dense documentation (#539) (@Sleort) - Fix typo: l to layer (#546) (@prbzrg) - Minor fixes (#547) (@avik-pal) - QoL improvements for tracing based AD (#548) (@avik-pal) - Fix SimpleChains for single dims (#552) (@avik-pal) - Standardize the handling of states (#553) (@avik-pal) - 
CompatHelper: add new compat entry for ADTypes at version 0.2 for package HyperNet, (keep existing compat) (#555) (@github-actions[bot]) - CompatHelper: add new compat entry for ADTypes at version 0.2 for package PolynomialFitting, (keep existing compat) (#556) (@github-actions[bot]) - CompatHelper: add new compat entry for ADTypes at version 0.2 for package SimpleChains, (keep existing compat) (#557) (@github-actions[bot]) - LuxSimpleChainsExt: specify rng when initializing (#559) (@pao) - Update SimpleRNN docs (#561) (@avik-pal) - Remove TruncatedStacktraces (#562) (@avik-pal) - Use @closure to make closures type-stable (#563) (@avik-pal) - Add set_device! to docs (#569) (@avik-pal) - Fuse the activation and bias (#570) (@avik-pal) - Try fixing the hydration error (#571) (@avik-pal) - Test continuous benchmarking (#572) (@avik-pal) - Add more benchmarks (#574) (@avik-pal) - More Continuous Benchmarks (#575) (@avik-pal) - Make the AD benchmarks type stable (#576) (@avik-pal) - Bump julia-actions/setup-julia from 1 to 2 (#577) (@dependabot[bot]) - Fix numbering in the docs (#578) (@avik-pal) - Add a gallery component (#579) (@avik-pal) - AD Housekeeping (#580) (@avik-pal) - Update style.css to disable 'calt' feature for monospace (#581) (@cormullion) - Improvement to the @compact API (#584) (@avik-pal) - Add dynamic expressions extension (#585) (@avik-pal) - Convert examples to doctests (#586) (@avik-pal) - Bump crate-ci/typos from 1.18.0 to 1.20.8 (#587) (@dependabot[bot]) - CompatHelper: add new compat entry for Lux at version 0.5 for package SymbolicOptimalControl, (keep existing compat) (#589) (@github-actions[bot]) - Allow @set! 
for Stateful Layers (#590) (@avik-pal) - Used New Fused Ops from LuxLib (#591) (@avik-pal) - CompatHelper: bump compat for ADTypes to 1, (keep existing compat) (#592) (@github-actions[bot]) - CompatHelper: bump compat for ADTypes to 1 for package HyperNet, (keep existing compat) (#593) (@github-actions[bot]) - CompatHelper: bump compat for ADTypes to 1 for package PolynomialFitting, (keep existing compat) (#594) (@github-actions[bot]) - CompatHelper: bump compat for ADTypes to 1 for package SimpleChains, (keep existing compat) (#595) (@github-actions[bot]) - CompatHelper: bump compat for ADTypes to 1 for package SimpleRNN, (keep existing compat) (#596) (@github-actions[bot]) - Bump crate-ci/typos from 1.20.8 to 1.20.9 (#597) (@dependabot[bot]) - Native Nested AD support for Lux Models (#598) (@avik-pal) - CompatHelper: bump compat for Turing to 0.31 for package BayesianNN, (keep existing compat) (#599) (@github-actions[bot]) - Faster testing (#601) (@avik-pal) - Unstructure structured inputs for reasonable broadcasting (#603) (@avik-pal) - Bump crate-ci/typos from 1.20.9 to 1.20.10 (#607) (@dependabot[bot]) - Add 3rd party tutorial (#609) (@agdestein) - CompatHelper: bump compat for DynamicExpressions to 0.17 for package SymbolicOptimalControl, (keep existing compat) (#611) (@github-actions[bot]) - Improvements to Nested AD (#612) (@avik-pal) - Add missing table of contents entry (#613) (@agdestein) - Attempt to build the tutorials in parallel (#616) (@avik-pal) - Add field access syntax to Chain (#619) (@Sleort) - Add vector_jacobian_product and jacobian_vector_product functions (#623) (@avik-pal) - Bump crate-ci/typos from 1.20.10 to 1.21.0 (#624) (@dependabot[bot]) - Bring in batched_jacobian (#625) (@avik-pal) - Added layer for periodic inputs (#626) (@nicholaskl97) - Cleanup (#629) (@avik-pal) - CompatHelper: bump compat for CairoMakie to 0.12 for package BayesianNN, (keep existing compat) (#631) (@github-actions[bot]) - CompatHelper: bump compat for 
CairoMakie to 0.12 for package GravitationalWaveForm, (keep existing compat) (#632) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.12 for package PolynomialFitting, (keep existing compat) (#633) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.12 for package SymbolicOptimalControl, (keep existing compat) (#634) (@github-actions[bot]) - Fixes to type stability of Zygote (#635) (@avik-pal) - Reduce max chunksize (#637) (@avik-pal) - missing keyword in docstring (#638) (@RoyCCWang) - Adding Enzyme Tests (#639) (@avik-pal) - Enzyme Testing + Caching in compute_gradients (#640) (@avik-pal) - Add Enzyme to benchmark infra (#641) (@wsmoses) - Add Enzyme to benchmark infra (#643) (@avik-pal) - Add a warning on using Tracker with SimpleChains (#645) (@avik-pal) - Improvements to Batched Jacobian (#646) (@avik-pal) - Patch a compact bug (#648) (@avik-pal) - update makie (#649) (@avik-pal) - Test on multiple os (#650) (@avik-pal) - Fix DocumenterVitepress compat (#651) (@avik-pal) - Prevent infinite loop in Tracker (#652) (@avik-pal) - Test ComponentArrays with Enzyme (#653) (@avik-pal) - Update DocumenterVitepress compat in docs (#654) (@asinghvi17) - Use ArgCheck.jl for helpful error messages (#655) (@avik-pal) - CompatHelper: bump compat for OptimizationOptimJL to 0.3 for package GravitationalWaveForm, (keep existing compat) (#656) (@github-actions[bot]) - CompatHelper: bump compat for OptimizationOptimJL to 0.3 for package SymbolicOptimalControl, (keep existing compat) (#657) (@github-actions[bot]) - CompatHelper: bump compat for Turing to 0.32 for package BayesianNN, (keep existing compat) (#658) (@github-actions[bot]) - Restore the rrule for merge (#659) (@avik-pal) - Bump julia-actions/julia-format from 2 to 3 (#660) (@dependabot[bot]) - Update & Rewrite the DDIM example (#661) (@avik-pal) - Quality of Life Improvements (#666) (@avik-pal) - CompatHelper: bump compat for SymbolicUtils to 2 for package SymbolicOptimalControl, 
(keep existing compat) (#669) (@github-actions[bot]) - Add Cartesian Embedding methods (#670) (@ldeso) - More principled rewrite of layermap (#671) (@avik-pal) - Clean up the code for debug mode (#674) (@avik-pal) - CompatHelper: add new compat entry for TensorBoardLogger at version 0.1 for package DDIM, (keep existing compat) (#676) (@github-actions[bot]) - CompatHelper: add new compat entry for CairoMakie at version 0.12 for package DDIM, (keep existing compat) (#677) (@github-actions[bot]) - Remove rrule for merge (#679) (@avik-pal) - Minor optimizations (#681) (@avik-pal) - CompatHelper: bump compat for Turing to 0.33 for package BayesianNN, (keep existing compat) (#688) (@github-actions[bot]) - Newer public functions (#690) (@avik-pal) - Update Boltz API Docs (#691) (@avik-pal) - Bump crate-ci/typos from 1.21.0 to 1.22.3 (#693) (@dependabot[bot]) - More API updates (#696) (@avik-pal) - Add ReverseSequence (#698) (@NeroBlackstone) - Training ConvMixer on CIFAR10 in 10mins (#700) (@avik-pal) - Add activation functions doc reference (Rebase #694) (#702) (@avik-pal) - Clean up the CI scripts (#703) (@avik-pal) - Loss functions module (#704) (@avik-pal) - Add test guide documentation (#705) (@NeroBlackstone) - Add ReverseSequence() docs (#706) (@NeroBlackstone) - Bidirectional RNN (#708) (@NeroBlackstone) - Run doctests in the test CI + Lazy install test dependencies (#710) (@avik-pal) - Bump crate-ci/typos from 1.22.3 to 1.22.7 (#711) (@dependabot[bot]) - Mark unexported symbols as public (#712) (@avik-pal) - Install packages before loading (#713) (@avik-pal) - Extend training API and update examples (#714) (@avik-pal) - Try fixing AMDGPU test stalling (#716) (@avik-pal) - CompatHelper: bump compat for AMDGPU in [weakdeps] to 0.9, (keep existing compat) (#717) (@github-actions[bot]) - Try to improve coverage (#718) (@avik-pal) - Try wider docs (#721) (@avik-pal) - Compiled ReverseDiff for training on CPU (#722) (@avik-pal) - Makes name concrete types (#723) 
(@avik-pal) - CompatHelper: add new compat entry for StaticArrays at version 1 for package docs, (keep existing compat) (#724) (@github-actions[bot]) - CompatHelper: add new compat entry for KernelAbstractions at version 0.9 for package docs, (keep existing compat) (#725) (@github-actions[bot]) - Bump crate-ci/typos from 1.22.7 to 1.22.9 (#726) (@dependabot[bot]) - Performance Pitfalls and How to Catch them (#727) (@avik-pal) - CompatHelper: bump compat for DynamicExpressions in [weakdeps] to 0.18, (keep existing compat) (#728) (@github-actions[bot]) - CompatHelper: bump compat for DynamicExpressions to 0.18 for package SymbolicOptimalControl, (keep existing compat) (#729) (@github-actions[bot]) - Store the optimizer in TrainState (#731) (@avik-pal) - Simplify show implementations and make them round-trippable (#732) (@avik-pal) - Try removing the type assert with this (#734) (@avik-pal) - Add enzyme support for loss functions from LossFunctions.jl (#736) (@avik-pal) - Mark cartesian index tests on cuda broken for now (#737) (@avik-pal) - Run CI on pre (#739) (@avik-pal) - Revert bee2de7-1188db7 (#740) (@avik-pal) - Use shorthand syntax of @concrete (#741) (@avik-pal) - Check status of broken tests (#742) (@avik-pal) - Aggregate changes for v1 (#744) (@avik-pal) - fix: nested ad when using direct eval in function (#745) (@avik-pal) - CompatHelper: add new compat entry for GPUArraysCore at version 0.1 for package docs, (keep existing compat) (#746) (@github-actions[bot]) - Bump crate-ci/typos from 1.22.9 to 1.23.1 (#748) (@dependabot[bot]) - chore: bump simplechains version (#749) (@avik-pal) - CompatHelper: bump compat for SciMLSensitivity to 7 for package NeuralODE, (keep existing compat) (#750) (@github-actions[bot]) - docs: restructure the manual entries a bit (#751) (@avik-pal) - refactor: bring Optimisers.jl into main deps (#754) (@avik-pal) - refactor: drop the AMDGPU extension (#755) (@avik-pal) - rearrange code in extensions (#756) (@avik-pal) - fix: use 
proper qualified accesses for modules (#757) (@avik-pal) - docs: remove redundant old preferences (#759) (@avik-pal) - feat: allow multiple @return (#760) (@avik-pal) - Making all eltypes Float32 in Fitting a Polynomial using MLP (#761) (@Sleort) - docs: fix inline math rendering (#762) (@avik-pal) - refactor: use the faster `get_device_type` (#763) (@avik-pal) - refactor: move ForwardDiff.jl into main deps (#764) (@avik-pal) - test: set st to training (#765) (@avik-pal) - chore(deps): bump crate-ci/typos from 1.23.1 to 1.23.2 (#766) (@dependabot[bot]) - Update docstring dropout (#770) (@dmetivie) - chore: recommend GH Discussions for Q/A (#774) (@avik-pal) - Allow 2d input if RNN order is BatchLastIndex (#778) (@NeroBlackstone) - test: remove `@test_nowarn` testing (#781) (@avik-pal) - fix: don't reuse pullback for safety (#782) (@avik-pal) - improvements to compact macro (#783) (@avik-pal) - test: wrap `@inferred` with `@test` (#784) (@avik-pal) - chore: add NNlib as a direct dep (#785) (@avik-pal) - fix: update to latest LuxLib API + deprecations (#786) (@avik-pal) - perf: fix enzyme benchmarks (#787) (@avik-pal) - test: trigger enzyme tests (#788) (@avik-pal) - docs: fix typo in "JVP & VJP Wrappers" (#789) (@ldeso) - docs: update docs from downstream changes (#790) (@avik-pal) - CompatHelper: bump compat for WeightInitializers to 1, (keep existing compat) (#791) (@github-actions[bot]) - CompatHelper: bump compat for WeightInitializers to 1 for package docs, (keep existing compat) (#792) (@github-actions[bot]) - test: improved testing (#793) (@avik-pal) - feat: improvements to the Training API (#794) (@avik-pal) - feat: easy mechanism to set preferences (#798) (@avik-pal) - CompatHelper: bump compat for SymbolicUtils to 3 for package SymbolicOptimalControl, (keep existing compat) (#799) (@github-actions[bot]) - test: update to the newer LuxTestUtils (#800) (@avik-pal) - chore(deps): bump crate-ci/typos from 1.23.2 to 1.23.5 (#804) (@dependabot[bot]) - refactor: move TrackerExt 
in a directory (#806) (@avik-pal) - feat: `NilArray` for fast size propagation (#811) (@avik-pal) - docs: add new function to docs (#813) (@avik-pal) - fix: update Dynamic Expressions to 0.19 (#814) (@avik-pal) - docs: add documentation for `MLDataDevices` (#815) (@avik-pal) - CompatHelper: add new compat entry for MLDataDevices at version 1 for package docs, (keep existing compat) (#818) (@github-actions[bot]) - test: try separating the test Project files (#819) (@avik-pal) - feat: use faster version of batched matmul (#820) (@avik-pal) - ci: setup benchmarking CI (#821) (@avik-pal) - ci: add CI to benchmark load times (#822) (@avik-pal) - chore(deps): bump actions/checkout from 2 to 4 (#823) (@dependabot[bot]) - chore(deps): bump peter-evans/create-or-update-comment from 3 to 4 (#824) (@dependabot[bot]) - chore(deps): bump julia-actions/setup-julia from 1 to 2 (#825) (@dependabot[bot]) - chore(deps): bump peter-evans/find-comment from 2 to 3 (#826) (@dependabot[bot]) - chore(deps): bump julia-actions/cache from 1 to 2 (#827) (@dependabot[bot]) - fix: mark objective function as `Const` (#835) (@avik-pal) - ci: separate testing for groups in buildkite (#836) (@avik-pal) - chore: update all AMDGPU compats (#837) (@avik-pal) - test: remove Flux as a direct test dep (#838) (@avik-pal) - test: remove some of the unnecessary Flux tests (#839) (@avik-pal) - refactor: cleanup of internals (#840) (@avik-pal) - fix: remove type pirated functions from Lux (#843) (@avik-pal) - chore(deps): bump actions/upload-artifact from 2 to 4 (#844) (@dependabot[bot]) - chore(deps): bump crate-ci/typos from 1.23.5 to 1.23.6 (#845) (@dependabot[bot]) - CompatHelper: add new compat entry for Static at version 1 for package test, (keep existing compat) (#846) (@github-actions[bot]) - feat: improve batched jacobian (#848) (@avik-pal) - chore: bump minimum LuxTestUtils version (#850) (@avik-pal) - docs: minor documentation changes (#855) (@avik-pal) - chore: marking layers as deprecated (#856) (@avik-pal) 
- chore(deps): bump crate-ci/typos from 1.23.6 to 1.24.1 (#857) (@dependabot[bot]) - docs: more details in performance pitfalls (#859) (@avik-pal) - fix: remove hacky usage of module getproperty rrules (#865) (@avik-pal) - feat: expand `trainmode`, `testmode`, `update_state` to support Stateful Layers (#866) (@avik-pal) - CompatHelper: bump compat for Turing to 0.34 for package BayesianNN, (keep existing compat) (#870) (@github-actions[bot]) - chore(deps): bump crate-ci/typos from 1.24.1 to 1.24.3 (#871) (@dependabot[bot]) - test: don't run doctests on pre-releases (#873) (@avik-pal) - test: run with DD error mode (#874) (@avik-pal) - refactor: static fields in layers (#875) (@avik-pal) - CompatHelper: bump compat for DataAugmentation to 0.3 for package ConvMixer, (keep existing compat) (#876) (@github-actions[bot]) - CompatHelper: bump compat for DataAugmentation to 0.3 for package DDIM, (keep existing compat) (#877) (@github-actions[bot]) - ci(buildkite): run some of the tutorials on CPU runners (#879) (@avik-pal) - CompatHelper: add new compat entry for StableRNGs at version 1 for package docs, (keep existing compat) (#881) (@github-actions[bot]) - CompatHelper: bump compat for JLD2 to 0.5 for package DDIM, (keep existing compat) (#885) (@github-actions[bot]) - CompatHelper: bump compat for JLD2 to 0.5 for package ImageNet, (keep existing compat) (#886) (@github-actions[bot]) - CompatHelper: bump compat for JLD2 to 0.5 for package SimpleRNN, (keep existing compat) (#887) (@github-actions[bot]) - chore(deps): bump peter-evans/create-pull-request from 6 to 7 (#888) (@dependabot[bot]) - chore(deps): bump crate-ci/typos from 1.24.3 to 1.24.5 (#889) (@dependabot[bot]) - Fixed updating_to_v1 link in README.md (#890) (@MartinuzziFrancesco) - fix: pretty printing of MaxPool Layer (#891) (@avik-pal) - docs: add a PINN tutorial with nested AD (#894) (@avik-pal) - fix: remove UnrolledUtilities dep (#895) (@avik-pal) - refactor: cleanup Training and preserve type-stability in Enzyme 
(#896) (@avik-pal) - docs: add an Optimization.jl tutorial showcasing lazy data movement (#897) (@avik-pal) - CompatHelper: add new compat entry for Literate at version 2 for package PINN2DPDE, (keep existing compat) (#899) (@github-actions[bot]) - feat: update imagenet training script (#909) (@avik-pal) - docs: simplify getting started docs (#930) (@avik-pal) - fix: forceinline inside generated functions to avoid recursion issues (#931) (@avik-pal) - fix: update to use `test_gradients` macro (#932) (@avik-pal) - test: froggie tests are broken on gpu (#933) (@avik-pal) - fix: static vector input to dense (#936) (@avik-pal) - ci(buildkite): debugging CUDA segfaults on CI (#937) (@avik-pal) - docs: try using the new documenter vitepress (#943) (@avik-pal) - docs: collapse docstrings by default (#949) (@avik-pal) - feat: update minimum version of Enzyme (#950) (@avik-pal) - docs: fix version picker path (#951) (@avik-pal) - fix: update Optimization compats (#952) (@avik-pal) - fix: update GravitationalWaveform tutorial (#953) (@avik-pal) - chore(deps): bump crate-ci/typos from 1.24.5 to 1.24.6 (#955) (@dependabot[bot]) - docs: update README example (#956) (@avik-pal) - fix: patch optimization tutorial (#959) (@avik-pal) - Added to Nested AD example how to use `batched_jacobian` (#964) (@facusapienza21) - Remove line about "not saving the model" (#965) (@asinghvi17) - fix: optimization integration for gravitational waveform (#966) (@avik-pal) - docs: add compilation example using Reactant (#967) (@avik-pal) - docs: add the new `xla_device` (#968) (@avik-pal) - feat: compile training loop automatically using reactant (#969) (@avik-pal) - chore(deps): bump crate-ci/typos from 1.24.6 to 1.25.0 (#971) (@dependabot[bot]) - ci: run tests only on 1.10 for now (#975) (@avik-pal) - refactor: make `LossFunctions` an optional dep (#976) (@avik-pal) - chore(deps): bump crate-ci/typos from 1.25.0 to 1.26.0 (#978) (@dependabot[bot]) - CompatHelper: bump compat for GPUArraysCore to 0.2, (keep 
existing compat) (#984) (@github-actions[bot]) - CompatHelper: bump compat for GPUArraysCore to 0.2 for package docs, (keep existing compat) (#985) (@github-actions[bot]) - fix: `LV`/`Octavian` moved to optional deps (#986) (@avik-pal) - docs(reactant): simplify the enzyme call (#987) (@avik-pal) - CompatHelper: bump compat for Turing to 0.35 for package BayesianNN, (keep existing compat) (#989) (@github-actions[bot]) - chore(deps): bump crate-ci/typos from 1.26.0 to 1.26.8 (#992) (@dependabot[bot]) - perf: load `LoopVectorization` and `Octavian` for benchmarks (#994) (@avik-pal) - refactor: use Lux primitives for AD (#995) (@avik-pal) - Move code blocks inside bullet list (#996) (@abhro) - Fix images.jl link (#997) (@NeroBlackstone) - Fix broken link in Recurrence docs (#1001) (@MartinuzziFrancesco) - refactor: move all subpackages into a mono-repo (#1002) (@avik-pal) - feat: support passing in device and client to XLA (#1020) (@avik-pal) - fix: avoid tracing through Lux models (#1021) (@avik-pal) - chore: bump crate-ci/typos from 1.26.8 to 1.27.0 (#1022) (@dependabot[bot]) - ci: combine workflows (#1023) (@avik-pal) - fix: init hidden state for reactant (#1026) (@avik-pal) - fix for Zygote and ChainRules OneElement (#1038) (@CarloLucibello) - Link to quickstart explaining calling models in interface (#1040) (@oxinabox) - fix: make enzyme testing opt-in for now (#1041) (@avik-pal) - test: try re-enabling enzyme testing on 0.13.16 (#1042) (@avik-pal) - fix: missing zero leads to NaNs (#1044) (@avik-pal) - chore: bump all `Optimisers` versions (#1058) (@avik-pal) - CompatHelper: bump compat for Optimisers to 0.4 for package DDIM, (keep existing compat) (#1059) (@github-actions[bot]) - feat: update ConvMixer to support reactant (#1063) (@avik-pal) - fix: gracefully handle `OneHotArrays` (#1064) (@avik-pal) - chore: bump crate-ci/typos from 1.27.0 to 1.27.3 (#1065) (@dependabot[bot]) - fix: unsafe free for OneHotArrays (#1067) (@avik-pal) - feat: update to Functors v0.5 (#1069) (@avik-pal) 
- ci: generate tags for subdir projects (#1071) (@avik-pal) - docs: restructure the docs a bit (#1083) (@avik-pal) - fix: `dataloaders` use `adapt_structure` (#1084) (@avik-pal) - fix: mark kwargs in functor as leaf (#1085) (@avik-pal) - docs: trigger build for docs (#1087) (@avik-pal) - docs: initial prototype of exporting Lux models to Jax (#1088) (@avik-pal) - nondifferentiable `gpu_device` and `cpu_device` (#1089) (@CarloLucibello) - chore: use [sources] in Project.toml (#1090) (@avik-pal) - fix: add lineinfo to compact (#1091) (@avik-pal) - docs: highlight Reactant in landing page (#1092) (@avik-pal) - chore: bump codecov/codecov-action from 4 to 5 (#1093) (@dependabot[bot]) - ci: install specific AMDGPU version (#1096) (@avik-pal) - ci: use sources for docs (#1100) (@avik-pal) - Add Reactant and TPU to autodiff.md (#1101) (@wsmoses) - refactor: cleanup some old pre-1.0 hacks (#1102) (@avik-pal) - feat: add bf16 function (#1104) (@avik-pal) - docs: add CUDA.CURAND.default_rng() to docs (#1105) (@avik-pal) - fix: use generic broadcasting for complex numbers (#1106) (@avik-pal) - Update exporting_to_jax.md (#1107) (@wsmoses) - CompatHelper: bump compat for LossFunctions in [weakdeps] to 1, (keep existing compat) (#1108) (@github-actions[bot]) - Add TrainState docstring with Optimisers API (#1110) (@abhro) - Fix markdown list in docstring (#1111) (@abhro) - chore: bump crate-ci/typos from 1.27.3 to 1.28.1 (#1113) (@dependabot[bot]) - fix: handle debug leafs with dispatch (#1115) (@avik-pal) - test: allow the latest AMDGPU to be installed (#1116) (@avik-pal) - test: add `unsafe_free` to skip list (#1117) (@avik-pal) - fix: use the correct dispatches for device overloads (#1118) (@avik-pal) - test: try fixing enzyme test (#1119) (@avik-pal) - ci(github-actions): use julia-actions/cache (#1122) (@avik-pal) - test: re-enable flux testing (#1123) (@avik-pal) - chore: bump minimum Reactant version (#1125) (@avik-pal) - fix: try fixing cuda install in tests (#1126) (@avik-pal) - docs: 
run partial dataset only on CI (#1128) (@avik-pal) - chore: bump crate-ci/typos from 1.28.1 to 1.28.2 (#1130) (@dependabot[bot]) - fix: preserve object when device is same (#1133) (@avik-pal) - fix: use functors for testing wrapped arrays (#1134) (@avik-pal) - fix: remove old patches around reactant bug (#1135) (@avik-pal) - CompatHelper: bump compat for Flux in [weakdeps] to 0.16, (keep existing compat) (#1136) (@github-actions[bot]) - chore: bump crate-ci/typos from 1.28.2 to 1.28.3 (#1137) (@dependabot[bot]) - fix: update to new reactant changes (#1140) (@avik-pal) - chore: bump crate-ci/typos from 1.28.3 to 1.28.4 (#1144) (@dependabot[bot]) - don't declare implicitly exported functions public (#1147) (@simeonschaub) - use `return_type` instead of return_type (#1148) (@simeonschaub) - feat: more nested AD rules (#1151) (@avik-pal) - fix: update default rng for reactant (#1152) (@avik-pal) - Change update step to thousands in PINN 2D PDE (#1153) (@abhro) - CompatHelper: add new compat entry for ProgressTables at version 0.1 for package CIFAR10, (keep existing compat) (#1154) (@github-actions[bot]) - `device(NN)` should only give warning once (#1156) (@vpuri3) - feat: conditional VAE (#1157) (@avik-pal) - fix: ConditionalVAE on CI (#1159) (@avik-pal) - CompatHelper: add new compat entry for Optimisers at version 0.4 for package ConditionalVAE, (keep existing compat) (#1161) (@github-actions[bot]) - CompatHelper: add new compat entry for StableRNGs at version 1 for package ConditionalVAE, (keep existing compat) (#1162) (@github-actions[bot]) - CompatHelper: add new compat entry for Comonicon at version 1 for package ConditionalVAE, (keep existing compat) (#1163) (@github-actions[bot]) - [MLDataDevices] Bump deps (#1164) (@pxl-th) - chore: bump crate-ci/typos from 1.28.4 to 1.29.4 (#1183) (@dependabot[bot]) - fix: pass in RNG to shuffle (#1188) (@avik-pal)
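Taken together, several of the merged changes above define the current training workflow: the loss functions module (#704), the extended Training API (#714, #794) with the optimizer stored in `TrainState` (#731), Optimisers.jl as a direct dependency (#754), and the `f32` eltype helpers (#401). A minimal sketch of how these pieces fit together, assuming the Lux v1 API names (`Training.TrainState`, `Training.single_train_step!`, `MSELoss`, `AutoZygote`); exact signatures may differ on older releases:

```julia
using Lux, Optimisers, Random, Zygote

rng = Random.default_rng()

# A small model; f32 (from #401) coerces parameter/state eltypes to Float32.
model = Chain(Dense(2 => 16, tanh), Dense(16 => 1))
ps, st = Lux.setup(rng, model) .|> f32

x = randn(rng, Float32, 2, 32)
y = randn(rng, Float32, 1, 32)

# TrainState (from #714/#731) bundles model, parameters, states, and optimizer.
tstate = Training.TrainState(model, ps, st, Adam(0.01f0))
for _ in 1:100
    # MSELoss comes from the loss functions module (#704);
    # AutoZygote selects the AD backend via ADTypes.
    _, loss, _, tstate = Training.single_train_step!(
        AutoZygote(), MSELoss(), (x, y), tstate)
end
```

On GPU, the same loop works after moving `ps`, `st`, and the data with `gpu_device()` from MLDataDevices (#815).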

Closed issues: - TagBot trigger issue (#6) - Immutable Arrays (#8) - Suboptimal GroupNorm Implementation on GPUs (#10) - Recurrent Neural Networks (#12) - Flux Feature Parity (#13) - Front page example broken (#17) - Distributed Data Parallel Training on examples/ImageNet error (#18) - ] add Lux doesn't work (#19) - Support for non-CUDNN data types (#22) - Hope to add more examples (#25) - Train examples/NeuralODE error (#26) - Thoughts on docs & tutorials (#28) - Available architectures (#34) - Register (#36) - PairwiseFusion takes more inputs than documented (#38) - Remove Requires.jl (#45) - Performance regressions with ComponentArrays (#49) - How do I extend Chain to have multiple inputs (#53) - Nested Lists broken with the current Documentation (#68) - Remove ActivationFunction? (#71) - Quickstart Example: using Optimisers, Zygote do not work unless we explicitly add those to current environment. (#75) - Remove track_stats from GroupNorm (#78) - Named Layers for Container Types (#79) - Tracking support for Enzyme.jl (#81) - Lighter syntax for stateless networks? (#83) - Improve Julia & Lux for the uninitiated (#90) - Remaining Deprecations (#91) - Scalar indexing problem for the NeuralODE example (#92) - Basic example from Migrating from Flux to Lux is broken || normalization issue (#94) - WeightNorm causes NaN for Conv layer gradients (#95) - [Feature request] Another type of Chain that sequentially passing x and st (#96) - Generalize normalization to work for unconstrained types (#98) - RNN and LSTM break when using GPU (#100) - Can one compose lux layers with graph neural network (#102) - optimising parameters with Optimization.jl (#108) - add OrdinaryDiffEq downstream test (#110) - Make it easier to pass empty state st = (;) (#118) - is there transposed convolution (#122) - Support for multidimensional data? 
(#123) - Inconsistent description of PairwiseFusion (#130) - getindex for Chain (#131) - No method matching with argument IRTools.Inner.Undefined in gradient computation. (#134) - checkpointing for backpropagation (#139) - CUDNNError during backpropagation in simple CNN (#141) - Proposal of Lux + Enzyme + CUDA differential programming example (#145) - concat input and output of a layer (#146) - How to avoid the activation function conversion (#152) - Allow dispatch on custom array types (#157) - Nondeterministic method error for some gradients... (#159) - Tied Weights (#182) - Frozen Weights (#183) - layer_map fails on custom containers (#187) - Remove LuxCore manual installation in workflows (#192) - Custom layers (#220) - Lux.setup not found (#224) - Support for CuArray{Float64} (#237) - How to create a chain of LSTMcells in Lux.jl? (#239) - Constrain the output layer! (#242) - On using ComponentArray for L2 regularization (#243) - Shared Lux Testing Package (#270) - Automatic Differentiation Backends (#271) - Get the full run of a recurrent cell using Lux (#282) - Nested AD doesn't work with ComponentArrays (#286) - Remove weak dependencies (#294) - Lux Recurrence history is not in the correct order (I think) (#302) - tanh activation function in GRUCell docstring (#308) - WARNING: Wrapping Vararg directly in UnionAll is deprecated (wrap the tuple instead). (#312) - Adding AbstractRecurrentCell (#320) - Splitting weights initializers in own package (#321) - Include documentation on how to save models with Lux (#329) - network with multiple inputs (#330) - Working with NamedTuples (#331) - bilinear doesn't work for AbstractArray{T,3} (#332) - Use ADTypes (#354) - Add ability to load weights into Dense (#361) - Initialize weights of network from csv file (#369) - BatchNorm(; affine = false) in a Chain missing _getproperty(::SubArray...
when ps = ComponentArray(ps) (#371) - Slightly broken example Polynomial Fitting (#374) - Fixing the testing on buildkite (#375) - Implementation of custom layer in Lux (#376) - deploy versions (#384) - DocumenterVitepress module into package (#385) - Segfault when using Lux.Conv with CUDA (#386) - Documentation Enhancement Suggestions (#387) - @save not defined? (#403) - The MNIST Neural ODE example does not work with ReverseDiffAdjoint (#407) - Update Documentation to mention loading AD Packages for Training (#415) - ComponentArrays makes coupling layers type-unstable unexpectedly (#416) - ComponentArrays makes Custom Layers containing Chains type-unstable (#417) - Custom Layer, Differential Equation as Activation Function. (#418) - Gradients of shared parameters do not behave as expected (#420) - inconsistent LSTM results in time series forecast between Flux.jl and Lux.jl (#424) - Broadcast Layer (#426) - Can't use freeze with ComponentArray. (#427) - Lux.testmode resorts to scalar indexing with frozen params (#432) - Custom Model for Neural ODE (#441) - Periodic Padding (#451) - Bug in ConvTranspose? (#455) - Generating Parameters with CUDA (#456) - Zygote gradient fails for Custom Layer (#457) - Adaptors should not change the dtype (#462) - Any equivalency to torch.nn.Parameter? (#464) - Support for MultiRNNCell (#472) - GPU evaluation of Recurrence() broken on Metal (#473) - Recurrent Layers don't take Vectors as Input (#478) - How to choose a specific GPU device (#480) - Training in batches and building gradient as mean of individual gradients (#481) - ComponentArrays type piracy (#482) - No Gradients with respect to parameters using Custom Layers (#483) - Where is the API doc for activations (#486) - Distributed Training (#494) - AMDGPU CI takes a lot of time (#495) - SimpleRNN example is broken on AMDGPU (#498) - Support for multi-core CPUs?
(#502) - Bayesian NN example throws Pkg Extension load errors (#507) - 404 many Tutorial links are invalid (#508) - uninitiated tutorial replicate part shows different numbers but should show the same (#513) - uninitiated tutorial - Code Font confusing for pipe |> (#514) - Documentation Request: Standardize the handling of the state st (#515) - Let @compact return the updated state (#516) - Documentation Request: Have a section about Loss Functions (#517) - Documentation Request: Also list GeometricML.jl and SciML.ai under Ecosystem (#518) - Should replicate be part of LuxCore? (#531) - pad=SamePad() does not work as intended in ConvTranspose. (#534) - Array of Structs to Struct of Array transformation for some AD backends (#538) - Documentation on main is broken (#541) - Lux.AMDGPU: type cast throws error (#542) - l should be clarified. Maybe a typo? (#543) - Bug when converting model with single layer to SimpleChains (#545) - Improve broadcasting via FastBroadcast.jl (#549) - FYI: Comment and question (#550) - TypeError using SimpleChains integration (#551) - SimpleChains-backed models do not setup consistently with fixed RNG seeding (#554) - Stable docs missing (#566) - Tutorial links too small (#567) - Constraint on weights and bias (#568) - Continuous Benchmarking (#573) - Allow "const" arrays as inputs to @compact (#588) - Pullback over jacobian (with CUDA) (#602) - Zygote nested AD failure (#604) - Meta-Issue for improvements to @compact (#606) - Nested AD for Parameter Gradient/Jacobian (#610) - Rewrite `@layer_map` to use KeyPath from Functors (#615) - Extracting part of a model, with the corresponding parameters and states (#617) - Differentiating `Zygote.pullback` (#621) - Batched Jacobian Functions (#622) - Error for JVP by Enzyme (#628) - [Nested AD] Incorrect gradient when taking a gradient over a gradient using StatefulLuxLayer (#630) - batched_jacobian + CUDA => InvalidIRError (#636) - Add a compiled tape version for ReverseDiff (#642) - Simple MLP requires
Enzyme runtimeActivity (#647) - Using `swish` as `Conv` activation function errors on the GPU (#662) - Fast activation error (#663) - Definition and implementation of 'Loss' in Linear Regression Tutorial "Julia & Lux for the Uninitiated" (#664) - Add improper qualified accesses checks (#667) - `rrule` for `Base.merge` defined in `ChainRulesCore` (#678) - Different activation functions in one layer (#680) - Remove Auto-Flattening of Chains (#682) - Add type-stability checks via `DispatchDoctor.jl` (#683) - Support for inactive arguments in DifferentiationInterface (#685) - Feature request: Bidirectional for RNN layer. (#687) - Predefined loss functions (#689) - Static Type Parameters not accessible inside `@compact` (#692) - Auto detect and warn against performance pitfalls (#699) - Add documentation about how to partial tests. (#701) - Feature request: 1D CNN, i.e. keras.layer.Conv1d (#709) - AMDGPU CI stalls (#715) - Inference using `NN :: Chain` inside a GPU kernel (#720) - custom `show` is often not valid julia syntax to reconstruct (#730) - Roadmap to v1 (#735) - Error in `compute_gradients` when loss already has a `Zygote.gradient` (#743) - NCCL Complex wrapper (#747) - Drop `Tracker.jl` support for SimpleChains (#753) - Feature request: TimeDistributed Layer (#758) - Feature Request: Allow recurrent layers with 2D input (features * seqlength), even if the order is BatchLastIndex (#767) - Missing statistics tracking in normalization layers (#780) - unexpected parameter type for AbstractExplicitContainer with single trainable field (#795) - Test with DispatchDoctor error mode (#797) - Change defaults for Layers to match Pytorch (#808) - Gradient checkpointing/ rematerialization (#816) - how to use Lux.jl utility 'BinaryCrossEntropy' (#841) - Mixed-Precision Matrix Multiply Performance Regression (#847) - Lux.testmode not updating state for BatchNorm layers for nested models?
(#849) - Add Float128 support (#851) - Add multiple cpu cores and multiple Julia computers support (#852) - Enzyme.Forward hits Octavian dispatch in Dense (#853) - Move uncommon layers to Boltz.jl (#854) - Update the ImageNet example (#878) - Downstream Compat Updates (#880) - MethodError: no method matching applychain (#884) - Question: how can one use TrainState.cache? (#892) - Problem with Enzyme AD and SArray parameters (#935) - Is AbstractLuxContainerLayer abandoned in Lux 1.0.4? (#942) - Docs build is broken (#957) - Encoder-Decoder RNNs (#961) - Efficient way to compute Jacobian in nested AD (#963) - Zygote + ForwardDiff support for complex differentiation (#977) - The returned values loss and train_state of `single_train_step!` are not compatible (#979) - Segfault for simple Zygote pullback (#980) - Add `CUDA.CURAND.default_rng()` to the table (#1003) - Question on initialization after breaking changes (#988) - Documentation: Using MLFlow with Lux.jl (#990) - Documentation of Layer Freezing might need small update (#991) - scalar indexing of gpu array in Zygote gradient (#1016) - sending to devices tuples, named tuples and arrays does not keep track of identical objects (#1017) - Enzyme 0.13 fails with batched matrix multiply (#1024) - Compiling Recurrent Models with Reactant (#1025) - Getting NaNs in the pullback of ReverseSequence (#1043) - Simplify recursive code with Functors v0.5 (#1061) - `unsafe_free!` from MLDataDevices fails for OneHotArrays (#1066) - `getkeypath` and `layer_map` not fully working with model with `Parallel` layers (#1068) - Re-enable Flux compatibility testing (#1070) - [AMDGPU CI] Circular dependencies disabling precompilation (#1095) - `LuxTestUtils.Constant` clashes with `DifferentiationInterface.Constant` (#1103) - Error in trying to use Optimization.jl for LSTM training based on Lux.jl (#1114) - Documentation Build Stalls (#1120) - CUDA Test CI is broken (#1121) - [MLDataDevices] devices don't preserve identity (#1129) - Random Numbers & Reactant (#1131) - 
Problem with Lux & SymbolicsLuxExt (#1132) - How to implement a detach operation similar to Pytorch? (#1138) - Unexpected handling of LR Schedulers in TrainState (#1143) - Directly construct Optimiser state on Reactant buffers (#1145) - (AbstractDevice)(x) should respect Adapt.adapt_structure (#1149) - CUDA 2nd order AD with MaxPool and logsoftmax (#1150) - Recurrent cells cannot be chained with other layers (#1155) - sparse_init doesn't use provided rng fully (#1185) - Incorrect IR generated for some neural networks (#1186)

Published by github-actions[bot] about 1 year ago

Lux - MLDataDevices-v1.6.7

MLDataDevices MLDataDevices-v1.6.7

Diff since MLDataDevices-v1.6.6

Merged pull requests: - feat: update ConvMixer to support reactant (#1063) (@avik-pal) - Change update step to thousands in PINN 2D PDE (#1153) (@abhro) - CompatHelper: add new compat entry for ProgressTables at version 0.1 for package CIFAR10, (keep existing compat) (#1154) (@github-actions[bot]) - device(NN) should only give warning once (#1156) (@vpuri3) - feat: conditional VAE (#1157) (@avik-pal) - fix: ConditionalVAE on CI (#1159) (@avik-pal) - CompatHelper: add new compat entry for Optimisers at version 0.4 for package ConditionalVAE, (keep existing compat) (#1161) (@github-actions[bot]) - CompatHelper: add new compat entry for StableRNGs at version 1 for package ConditionalVAE, (keep existing compat) (#1162) (@github-actions[bot]) - CompatHelper: add new compat entry for Comonicon at version 1 for package ConditionalVAE, (keep existing compat) (#1163) (@github-actions[bot]) - [MLDataDevices] Bump deps (#1164) (@pxl-th)

Closed issues: - Recurrent cells cannot be chained with other layers (#1155)

Published by github-actions[bot] about 1 year ago

Lux - v1.4.4

Lux v1.4.4

Diff since v1.4.3

Merged pull requests: - feat: update ConvMixer to support reactant (#1063) (@avik-pal) - feat: more nested AD rules (#1151) (@avik-pal) - fix: update default rng for reactant (#1152) (@avik-pal) - Change update step to thousands in PINN 2D PDE (#1153) (@abhro)

Closed issues: - Random Numbers & Reactant (#1131) - CUDA 2nd order AD with MaxPool and logsoftmax (#1150)
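Several entries in this release (the nested AD rules of #1151 and the second-order AD issue #1150) relate to Lux's nested automatic differentiation support. A minimal sketch of that pattern, assuming Lux, Zygote, and Random are installed; the model shape and loss here are illustrative only:

```julia
using Lux, Random, Zygote

model = Chain(Dense(2 => 8, gelu), Dense(8 => 1))
ps, st = Lux.setup(Random.default_rng(), model)
x = rand(Float32, 2, 4)

# StatefulLuxLayer carries the state `st` along, so the model can be called
# as `smodel(x)` inside a function that is itself being differentiated.
smodel = StatefulLuxLayer{true}(model, ps, st)

# Outer gradient (w.r.t. x) of a loss that contains an inner Zygote gradient;
# Lux's nested AD rules switch the inner call to forward mode automatically.
loss(x) = sum(abs2, only(Zygote.gradient(x -> sum(abs2, smodel(x)), x)))
∇x = only(Zygote.gradient(loss, x))
```

Without those rules, Zygote-over-Zygote of this form is exactly the kind of computation that used to fail or silently return wrong gradients (see #630 in the closed issues above).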

Published by github-actions[bot] about 1 year ago

Lux - MLDataDevices-v1.6.6

MLDataDevices MLDataDevices-v1.6.6

Diff since MLDataDevices-v1.6.5

Merged pull requests: - fix: remove old patches around reactant bug (#1135) (@avik-pal) - CompatHelper: bump compat for Flux in [weakdeps] to 0.16, (keep existing compat) (#1136) (@github-actions[bot]) - chore: bump crate-ci/typos from 1.28.2 to 1.28.3 (#1137) (@dependabot[bot]) - fix: update to new reactant changes (#1140) (@avik-pal) - chore: bump crate-ci/typos from 1.28.3 to 1.28.4 (#1144) (@dependabot[bot]) - don't declare implicitly exported functions public (#1147) (@simeonschaub) - use return_type instead of _return_type (#1148) (@simeonschaub) - feat: more nested AD rules (#1151) (@avik-pal) - fix: update default rng for reactant (#1152) (@avik-pal)

Closed issues: - Random Numbers & Reactant (#1131) - Problem with Lux & SymbolicsLuxExt (#1132) - How to implement a detach operation similar to Pytorch? (#1138) - Unexpected handling of LR Schedulers in TrainState (#1143) - Directly construct Optimiser state on Reactant buffers (#1145) - (AbstractDevice)(x) should respect Adapt.adapt_structure (#1149) - CUDA 2nd order AD with MaxPool and logsoftmax (#1150)
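The device-transfer fixes above (#1133, #1149) concern the MLDataDevices calling convention, in which a device object is applied directly to data. A minimal sketch of that interface, assuming MLDataDevices is installed; the array and NamedTuple are illustrative only:

```julia
using MLDataDevices

dev = cpu_device()   # gpu_device() instead picks a loaded GPU backend, if any

x  = rand(Float32, 3, 4)
ps = (weight = rand(Float32, 2, 3), bias = zeros(Float32, 2))

x_dev  = dev(x)      # plain arrays are transferred to the device
ps_dev = dev(ps)     # nested containers are traversed recursively
```

After #1133, applying a device to data already resident on that device returns the original object rather than a copy, which is what the identity-preservation issue (#1129) asked for.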

Published by github-actions[bot] about 1 year ago

Lux - LuxLib-v1.4.0

LuxLib LuxLib-v1.4.0

Diff since LuxLib-v1.3.11

Merged pull requests: - feat: more nested AD rules (#1151) (@avik-pal)

Closed issues: - CUDA 2nd order AD with MaxPool and logsoftmax (#1150)

Published by github-actions[bot] about 1 year ago

Lux - LuxLib-v1.3.11

LuxLib LuxLib-v1.3.11

Diff since LuxLib-v1.3.10

Merged pull requests: - Update exporting_to_jax.md (#1107) (@wsmoses) - CompatHelper: bump compat for LossFunctions in [weakdeps] to 1, (keep existing compat) (#1108) (@github-actions[bot]) - Add TrainState docstring with Optimisers API (#1110) (@abhro) - Fix markdown list in docstring (#1111) (@abhro) - chore: bump crate-ci/typos from 1.27.3 to 1.28.1 (#1113) (@dependabot[bot]) - fix: handle debug leafs with dispatch (#1115) (@avik-pal) - test: allow the latest AMDGPU to be installed (#1116) (@avik-pal) - test: add unsafe_free to skip list (#1117) (@avik-pal) - fix: use the correct dispatches for device overloads (#1118) (@avik-pal) - test: try fixing enzyme test (#1119) (@avik-pal) - ci(github-actions): use julia-actions/cache (#1122) (@avik-pal) - test: re-enable flux testing (#1123) (@avik-pal) - chore: bump minimum Reactant version (#1125) (@avik-pal) - fix: try fixing cuda install in tests (#1126) (@avik-pal) - docs: run partial dataset only on CI (#1128) (@avik-pal) - chore: bump crate-ci/typos from 1.28.1 to 1.28.2 (#1130) (@dependabot[bot]) - fix: preserve object when device is same (#1133) (@avik-pal) - fix: use functors for testing wrapped arrays (#1134) (@avik-pal) - fix: remove old patches around reactant bug (#1135) (@avik-pal) - CompatHelper: bump compat for Flux in [weakdeps] to 0.16, (keep existing compat) (#1136) (@github-actions[bot]) - chore: bump crate-ci/typos from 1.28.2 to 1.28.3 (#1137) (@dependabot[bot]) - fix: update to new reactant changes (#1140) (@avik-pal) - chore: bump crate-ci/typos from 1.28.3 to 1.28.4 (#1144) (@dependabot[bot]) - don't declare implicitly exported functions public (#1147) (@simeonschaub) - use `return_type` instead of `_return_type` (#1148) (@simeonschaub)

Closed issues: - Immutable Arrays (#8) - Downstream Compat Updates (#880) - Enzyme 0.13 fails with batched matrix multiply (#1024) - getkeypath and layer_map not fully working with model with Parallel layers (#1068) - Re-enable Flux compatibility testing (#1070) - [AMDGPU CI] Circular dependencies disabling precompilation (#1095) - LuxTestUtils.Constant clashes with DifferentiationInterface.Constant (#1103) - Error in trying to use Optimization.jl for LSTM training based on Lux.jl (#1114) - Documentation Build Stalls (#1120) - CUDA Test CI is broken (#1121) - [MLDataDevices] devices don't preserve identity (#1129) - Problem with Lux & SymbolicsLuxExt (#1132) - How to implement a detach operation similar to Pytorch? (#1138) - Unexpected handling of LR Schedulers in TrainState (#1143) - Directly construct Optimiser state on Reactant buffers (#1145) - (AbstractDevice)(x) should respect Adapt.adapt_structure (#1149)

Published by github-actions[bot] about 1 year ago

Lux - v1.4.3

Lux v1.4.3

Diff since v1.4.2

Merged pull requests: - CompatHelper: bump compat for Flux in [weakdeps] to 0.16, (keep existing compat) (#1136) (@github-actions[bot]) - chore: bump crate-ci/typos from 1.28.2 to 1.28.3 (#1137) (@dependabot[bot]) - fix: update to new reactant changes (#1140) (@avik-pal) - chore: bump crate-ci/typos from 1.28.3 to 1.28.4 (#1144) (@dependabot[bot]) - don't declare implicitly exported functions public (#1147) (@simeonschaub) - use return_type instead of _return_type (#1148) (@simeonschaub)

Closed issues: - Problem with Lux & SymbolicsLuxExt (#1132) - How to implement a detach operation similar to Pytorch? (#1138) - Unexpected handling of LR Schedulers in TrainState (#1143) - Directly construct Optimiser state on Reactant buffers (#1145) - (AbstractDevice)(x) should respect Adapt.adapt_structure (#1149)

Published by github-actions[bot] about 1 year ago

Lux - v1.4.2

Lux v1.4.2

Diff since v1.4.1

Merged pull requests: - docs: run partial dataset only on CI (#1128) (@avik-pal) - chore: bump crate-ci/typos from 1.28.1 to 1.28.2 (#1130) (@dependabot[bot]) - fix: preserve object when device is same (#1133) (@avik-pal) - fix: use functors for testing wrapped arrays (#1134) (@avik-pal) - fix: remove old patches around reactant bug (#1135) (@avik-pal)

Closed issues: - Immutable Arrays (#8) - Downstream Compat Updates (#880) - [MLDataDevices] devices don't preserve identity (#1129)

Published by github-actions[bot] about 1 year ago

Lux - MLDataDevices-v1.6.5

MLDataDevices MLDataDevices-v1.6.5

Diff since MLDataDevices-v1.6.4

Merged pull requests: - fix: use functors for testing wrapped arrays (#1134) (@avik-pal)

Published by github-actions[bot] about 1 year ago

Lux - MLDataDevices-v1.6.4

MLDataDevices MLDataDevices-v1.6.4

Diff since MLDataDevices-v1.6.3

Merged pull requests: - test: re-enable flux testing (#1123) (@avik-pal) - chore: bump minimum Reactant version (#1125) (@avik-pal) - fix: try fixing cuda install in tests (#1126) (@avik-pal) - docs: run partial dataset only on CI (#1128) (@avik-pal) - chore: bump crate-ci/typos from 1.28.1 to 1.28.2 (#1130) (@dependabot[bot]) - fix: preserve object when device is same (#1133) (@avik-pal)

Closed issues: - Immutable Arrays (#8) - Downstream Compat Updates (#880) - Re-enable Flux compatibility testing (#1070) - Documentation Build Stalls (#1120) - CUDA Test CI is broken (#1121) - [MLDataDevices] devices don't preserve identity (#1129)

Published by github-actions[bot] about 1 year ago

Lux - v1.4.1

Lux v1.4.1

Diff since v1.4.0

Merged pull requests: - Update exporting_to_jax.md (#1107) (@wsmoses) - CompatHelper: bump compat for LossFunctions in [weakdeps] to 1, (keep existing compat) (#1108) (@github-actions[bot]) - Add TrainState docstring with Optimisers API (#1110) (@abhro) - Fix markdown list in docstring (#1111) (@abhro) - chore: bump crate-ci/typos from 1.27.3 to 1.28.1 (#1113) (@dependabot[bot]) - fix: handle debug leafs with dispatch (#1115) (@avik-pal) - test: allow the latest AMDGPU to be installed (#1116) (@avik-pal) - test: add unsafe_free to skip list (#1117) (@avik-pal) - fix: use the correct dispatches for device overloads (#1118) (@avik-pal) - test: try fixing enzyme test (#1119) (@avik-pal) - ci(github-actions): use julia-actions/cache (#1122) (@avik-pal) - test: re-enable flux testing (#1123) (@avik-pal) - chore: bump minimum Reactant version (#1125) (@avik-pal) - fix: try fixing cuda install in tests (#1126) (@avik-pal)

Closed issues: - Enzyme 0.13 fails with batched matrix multiply (#1024) - getkeypath and layer_map not fully working with model with Parallel layers (#1068) - Re-enable Flux compatibility testing (#1070) - [AMDGPU CI] Circular dependencies disabling precompilation (#1095) - LuxTestUtils.Constant clashes with DifferentiationInterface.Constant (#1103) - Error in trying to use Optimization.jl for LSTM training based on Lux.jl (#1114) - Documentation Build Stalls (#1120) - CUDA Test CI is broken (#1121)

Published by github-actions[bot] about 1 year ago

Lux - MLDataDevices-v1.6.3

MLDataDevices MLDataDevices-v1.6.3

Diff since MLDataDevices-v1.6.2

Merged pull requests: - test: try re-enabling enzyme testing on 0.13.16 (#1042) (@avik-pal) - chore: bump codecov/codecov-action from 4 to 5 (#1093) (@dependabot[bot]) - ci: install specific AMDGPU version (#1096) (@avik-pal) - ci: use sources for docs (#1100) (@avik-pal) - Add Reactant and TPU to autodiff.md (#1101) (@wsmoses) - refactor: cleanup some old pre-1.0 hacks (#1102) (@avik-pal) - feat: add bf16 function (#1104) (@avik-pal) - docs: add CUDA.CURAND.default_rng() to docs (#1105) (@avik-pal) - fix: use generic broadcasting for complex numbers (#1106) (@avik-pal) - Update exporting_to_jax.md (#1107) (@wsmoses) - CompatHelper: bump compat for LossFunctions in [weakdeps] to 1, (keep existing compat) (#1108) (@github-actions[bot]) - Add TrainState docstring with Optimisers API (#1110) (@abhro) - Fix markdown list in docstring (#1111) (@abhro) - chore: bump crate-ci/typos from 1.27.3 to 1.28.1 (#1113) (@dependabot[bot]) - fix: handle debug leafs with dispatch (#1115) (@avik-pal) - test: allow the latest AMDGPU to be installed (#1116) (@avik-pal) - test: add unsafe_free to skip list (#1117) (@avik-pal) - fix: use the correct dispatches for device overloads (#1118) (@avik-pal) - test: try fixing enzyme test (#1119) (@avik-pal) - ci(github-actions): use julia-actions/cache (#1122) (@avik-pal)

Closed issues: - Zygote + ForwardDiff support for complex differentiation (#977) - Add CUDA.CURAND.default_rng() to the table (#1003) - getkeypath and layer_map not fully working with model with Parallel layers (#1068) - [AMDGPU CI] Circular dependencies disabling precompilation (#1095) - LuxTestUtils.Constant clashes with DifferentiationInterface.Constant (#1103) - Error in trying to use Optimization.jl for LSTM training based on Lux.jl (#1114)

Published by github-actions[bot] about 1 year ago

Lux - LuxCore-v1.2.1

LuxCore LuxCore-v1.2.1

Diff since LuxCore-v1.2.0

Merged pull requests: - test: try re-enabling enzyme testing on 0.13.16 (#1042) (@avik-pal) - docs: restructure the docs a bit (#1083) (@avik-pal) - fix: dataloaders use adapt_structure (#1084) (@avik-pal) - fix: mark kwargs in functor as leaf (#1085) (@avik-pal) - docs: trigger build for docs (#1087) (@avik-pal) - docs: initial prototype of exporting Lux models to Jax (#1088) (@avik-pal) - nondifferentiable gpu_device and cpu_device (#1089) (@CarloLucibello) - chore: use [sources] in Project.toml (#1090) (@avik-pal) - fix: add lineinfo to compact (#1091) (@avik-pal) - docs: highlight Reactant in landing page (#1092) (@avik-pal) - chore: bump codecov/codecov-action from 4 to 5 (#1093) (@dependabot[bot]) - ci: install specific AMDGPU version (#1096) (@avik-pal) - ci: use sources for docs (#1100) (@avik-pal) - Add Reactant and TPU to autodiff.md (#1101) (@wsmoses) - refactor: cleanup some old pre-1.0 hacks (#1102) (@avik-pal) - feat: add bf16 function (#1104) (@avik-pal) - docs: add CUDA.CURAND.default_rng() to docs (#1105) (@avik-pal) - fix: use generic broadcasting for complex numbers (#1106) (@avik-pal) - Update exporting_to_jax.md (#1107) (@wsmoses) - CompatHelper: bump compat for LossFunctions in [weakdeps] to 1, (keep existing compat) (#1108) (@github-actions[bot]) - Add TrainState docstring with Optimisers API (#1110) (@abhro) - Fix markdown list in docstring (#1111) (@abhro) - chore: bump crate-ci/typos from 1.27.3 to 1.28.1 (#1113) (@dependabot[bot]) - fix: handle debug leafs with dispatch (#1115) (@avik-pal) - test: allow the latest AMDGPU to be installed (#1116) (@avik-pal) - test: add unsafe_free to skip list (#1117) (@avik-pal) - fix: use the correct dispatches for device overloads (#1118) (@avik-pal) - test: try fixing enzyme test (#1119) (@avik-pal) - ci(github-actions): use julia-actions/cache (#1122) (@avik-pal)

Closed issues: - Zygote + ForwardDiff support for complex differentiation (#977) - Add CUDA.CURAND.default_rng() to the table (#1003) - getkeypath and layer_map not fully working with model with Parallel layers (#1068) - [AMDGPU CI] Circular dependencies disabling precompilation (#1095) - LuxTestUtils.Constant clashes with DifferentiationInterface.Constant (#1103) - Error in trying to use Optimization.jl for LSTM training based on Lux.jl (#1114)

Published by github-actions[bot] about 1 year ago

Lux - LuxLib-v1.3.10

LuxLib LuxLib-v1.3.10

Diff since LuxLib-v1.3.9

Merged pull requests: - ci: use sources for docs (#1100) (@avik-pal) - Add Reactant and TPU to autodiff.md (#1101) (@wsmoses) - refactor: cleanup some old pre-1.0 hacks (#1102) (@avik-pal) - feat: add bf16 function (#1104) (@avik-pal) - docs: add CUDA.CURAND.default_rng() to docs (#1105) (@avik-pal) - fix: use generic broadcasting for complex numbers (#1106) (@avik-pal)

Closed issues: - Zygote + ForwardDiff support for complex differentiation (#977) - Add CUDA.CURAND.default_rng() to the table (#1003)

Published by github-actions[bot] about 1 year ago

Lux - v1.4.0

Lux v1.4.0

Diff since v1.3.4

Merged pull requests: - ci: use sources for docs (#1100) (@avik-pal) - Add Reactant and TPU to autodiff.md (#1101) (@wsmoses) - refactor: cleanup some old pre-1.0 hacks (#1102) (@avik-pal) - feat: add bf16 function (#1104) (@avik-pal) - docs: add CUDA.CURAND.default_rng() to docs (#1105) (@avik-pal) - fix: use generic broadcasting for complex numbers (#1106) (@avik-pal)

Closed issues: - Zygote + ForwardDiff support for complex differentiation (#977) - Add CUDA.CURAND.default_rng() to the table (#1003)

Published by github-actions[bot] about 1 year ago

Lux - v1.3.4

Lux v1.3.4

Diff since v1.3.3

Merged pull requests: - test: try re-enabling enzyme testing on 0.13.16 (#1042) (@avik-pal) - chore: bump codecov/codecov-action from 4 to 5 (#1093) (@dependabot[bot]) - ci: install specific AMDGPU version (#1096) (@avik-pal)

Published by github-actions[bot] over 1 year ago

Lux - LuxTestUtils-v1.7.0

LuxTestUtils LuxTestUtils-v1.7.0

Diff since LuxTestUtils-v1.6.0

Merged pull requests: - test: try re-enabling enzyme testing on 0.13.16 (#1042) (@avik-pal) - docs: restructure the docs a bit (#1083) (@avik-pal) - fix: dataloaders use adapt_structure (#1084) (@avik-pal) - fix: mark kwargs in functor as leaf (#1085) (@avik-pal) - docs: trigger build for docs (#1087) (@avik-pal) - docs: initial prototype of exporting Lux models to Jax (#1088) (@avik-pal) - nondifferentiable gpu_device and cpu_device (#1089) (@CarloLucibello) - chore: use [sources] in Project.toml (#1090) (@avik-pal) - fix: add lineinfo to compact (#1091) (@avik-pal) - docs: highlight Reactant in landing page (#1092) (@avik-pal) - chore: bump codecov/codecov-action from 4 to 5 (#1093) (@dependabot[bot]) - ci: install specific AMDGPU version (#1096) (@avik-pal)

Published by github-actions[bot] over 1 year ago

Lux - LuxLib-v1.3.9

LuxLib LuxLib-v1.3.9

Diff since LuxLib-v1.3.8

Merged pull requests: - test: try re-enabling enzyme testing on 0.13.16 (#1042) (@avik-pal) - docs: restructure the docs a bit (#1083) (@avik-pal) - fix: dataloaders use adapt_structure (#1084) (@avik-pal) - fix: mark kwargs in functor as leaf (#1085) (@avik-pal) - docs: trigger build for docs (#1087) (@avik-pal) - docs: initial prototype of exporting Lux models to Jax (#1088) (@avik-pal) - nondifferentiable gpu_device and cpu_device (#1089) (@CarloLucibello) - chore: use [sources] in Project.toml (#1090) (@avik-pal) - fix: add lineinfo to compact (#1091) (@avik-pal) - docs: highlight Reactant in landing page (#1092) (@avik-pal) - chore: bump codecov/codecov-action from 4 to 5 (#1093) (@dependabot[bot]) - ci: install specific AMDGPU version (#1096) (@avik-pal)

Published by github-actions[bot] over 1 year ago

Lux - MLDataDevices-v1.6.2

MLDataDevices MLDataDevices-v1.6.2

Diff since MLDataDevices-v1.6.1

Merged pull requests: - fix: mark kwargs in functor as leaf (#1085) (@avik-pal) - docs: trigger build for docs (#1087) (@avik-pal) - docs: initial prototype of exporting Lux models to Jax (#1088) (@avik-pal) - nondifferentiable gpu_device and cpu_device (#1089) (@CarloLucibello) - chore: use [sources] in Project.toml (#1090) (@avik-pal) - fix: add lineinfo to compact (#1091) (@avik-pal) - docs: highlight Reactant in landing page (#1092) (@avik-pal)

Published by github-actions[bot] over 1 year ago

Lux - v1.3.3

Lux v1.3.3

Diff since v1.3.2

Merged pull requests: - docs: initial prototype of exporting Lux models to Jax (#1088) (@avik-pal) - chore: use [sources] in Project.toml (#1090) (@avik-pal) - fix: add lineinfo to compact (#1091) (@avik-pal)

Published by github-actions[bot] over 1 year ago

Lux - v1.3.2

Lux v1.3.2

Diff since v1.3.1

Merged pull requests: - docs: trigger build for docs (#1087) (@avik-pal)

Published by github-actions[bot] over 1 year ago

Lux - v1.3.1

Lux v1.3.1

Diff since v1.3.0

Merged pull requests: - docs: restructure the docs a bit (#1083) (@avik-pal) - fix: dataloaders use adapt_structure (#1084) (@avik-pal) - fix: mark kwargs in functor as leaf (#1085) (@avik-pal)

Published by github-actions[bot] over 1 year ago

Lux - MLDataDevices-v1.6.1

MLDataDevices MLDataDevices-v1.6.1

Diff since MLDataDevices-v1.6.0

Merged pull requests: - fix: dataloaders use adapt_structure (#1084) (@avik-pal)

Published by github-actions[bot] over 1 year ago

Lux - MLDataDevices-v1.6.0

MLDataDevices MLDataDevices-v1.6.0

Diff since MLDataDevices-v1.5.3

Merged pull requests: - fix: init hidden state for reactant (#1026) (@avik-pal) - feat: update to Functors v0.5 (#1069) (@avik-pal)

Closed issues: - sending to devices tuples, named tuples and arrays does not keep track of identical objects (#1017) - Compiling Recurrent Models with Reactant (#1025) - Simplify recursive code with Functors v0.5 (#1061)

Published by github-actions[bot] over 1 year ago

Lux - LuxTestUtils-v1.6.0

LuxTestUtils LuxTestUtils-v1.6.0

Merged pull requests: - Rewrite (#7) (@avik-pal) - Rename to Lux (#11) (@avik-pal) - Initial Documentation (#14) (@avik-pal) - Minor Updates (#15) (@avik-pal) - Better CUDNN Dispatches (#16) (@avik-pal) - Tutorials (#21) (@avik-pal) - Proper dispatch for types not supported by CUDNN (#23) (@avik-pal) - [WIP] Recurrent Neural Networks (#24) (@avik-pal) - Fix math display in docs (#27) (@gdalle) - Initial ViT Implementation & Pretrained ImageNet Models (#29) (@avik-pal) - CompatHelper: bump compat for Setfield to 1, (keep existing compat) (#30) (@github-actions[bot]) - Code Formatting -- SciMLStyle (#31) (@avik-pal) - Cleanup generated function style (#33) (@avik-pal) - Update README.md (#37) (@zsz00) - Fix doc for PairwiseFusion (#39) (@theabhirath) - Extending Scale to allow for multiple dimension inputs (#40) (@theabhirath) - Fix Zygote error caused due to fill! (#41) (@theabhirath) - CompatHelper: bump compat for ComponentArrays to 0.12, (keep existing compat) (#43) (@github-actions[bot]) - Update JET tests to allow julia v1.6 (#47) (@avik-pal) - Formatting updates and relax parameter type (#48) (@avik-pal) - Enable doctests in CI (#51) (@avik-pal) - fix quickstart example (#52) (@visr) - Test on 1.8 (#54) (@avik-pal) - Separate out testing unreleased julia versions (#55) (@avik-pal) - Cleaner and Better Documentation (#56) (@avik-pal) - Bump Pkg Compats (#66) (@avik-pal) - CompatHelper: bump compat for MLDatasets to 0.7 for package examples, (keep existing compat) (#67) (@github-actions[bot]) - Manual to translate Flux to Lux (#69) (@avik-pal) - Try codecov for doctests (#70) (@avik-pal) - Add tests for utility functions (#74) (@avik-pal) - Add tip to install packages (#76) (@Karthik-d-k) - More Testing + Deprecate Nonsensical Functions + Better Naming for Kwargs (#80) (@avik-pal) - CompatHelper: add new compat entry for Optimisers at version 0.2, (keep existing compat) (#82) (@github-actions[bot]) - Update rrules so that we can support Yota (#85) (@avik-pal) - 
CompatHelper: bump compat for FluxMPI to 0.6 for package examples, (keep existing compat) (#86) (@github-actions[bot]) - Update comparison section in overview.md (#88) (@ToucheSir) - Fix typos (#89) (@claforte) - Fix minor typos in the docs (#93) (@gabrevaya) - making x Float32 in migrate from Flux example (#97) (@gabrevaya) - add inithiddenstate function (#101) (@gabrevaya) - JLArray is now registered (#103) (@YichengDWu) - [LuxTraining] Wrappers for less clunky training loops (#104) (@avik-pal) - Use OneHotArrays (#105) (@YichengDWu) - Fixes WeightNorm with zero Parameter bug (#106) (@avik-pal) - fix state update in NeuralODE example (#107) (@gabrevaya) - Deprecate elementwise_* and applyactivation (#113) (@avik-pal) - Go through the dense bias deprecation (#114) (@avik-pal) - Fix Scale's paramlength (#116) (@lungd) - Trainable hidden states (#117) (@lungd) - Rnn bias deprecation (#120) (@lungd) - Add usebias kwarg to LSTMCell and GRUCell (#121) (@lungd) - Update docs for dense layer (#124) (@avik-pal) - Upper bound ComponentArrays (#125) (@avik-pal) - Relax ComponentArrays compat (#126) (@avik-pal) - Layer Normalization Implementation (#127) (@avik-pal) - LSTM docs: don't go over first element in sequence twice (#132) (@visr) - fix PairwiseFusion docs (#133) (@YichengDWu) - Generic recurrent cells (#136) (@jumerckx) - relu tests with finite diff is too unreliable (#137) (@avik-pal) - Add kaiming initialization (#138) (@YichengDWu) - Remove Val in typeinfo of WeightNorm (#140) (@avik-pal) - Named Layers inside Generic Containers (#143) (@avik-pal) - Allow fmapping over the model (#144) (@avik-pal) - Update Imagenet example (#147) (@avik-pal) - Make normalization more AD friendly (Diffractor) (#148) (@avik-pal) - Fix CuArray -> Array rrule (#149) (@avik-pal) - Allow indexing into Chains (#150) (@avik-pal) - API for freezing layers (#151) (@avik-pal) - Allow controlling fast activation transformation (#153) (@avik-pal) - Introducing LuxLib.jl: Effectively pullout 
some of the custom layer implementations from Lux.jl (#154) (@avik-pal) - Try relaxing JET version (#155) (@avik-pal) - Update to use LuxLib (#156) (@avik-pal) - Allow dispatch using Lux.apply (#158) (@avik-pal) - Mark non differentiable code paths (#160) (@avik-pal) - Fix generic GN dispatch for non 4D arrays (#161) (@avik-pal) - Add dispatch for subarray (#162) (@avik-pal) - Add More Layers (#163) (@avik-pal) - Fix type stability in normalization implementation (#164) (@avik-pal) - Codecov for lib directories Take 2 (#165) (@avik-pal) - Add freeze tests to runtests (#166) (@avik-pal) - Precompile common workflows + check invalidations (#167) (@avik-pal) - Make normalization typestable (#168) (@avik-pal) - Add a manual page on precompilation (#169) (@avik-pal) - Deprecate Lux.transform in favor of Flux2Lux.jl (#170) (@avik-pal) - Remove dead code and improve var for Tracker.jl support (#171) (@avik-pal) - Hyper Network Example (#172) (@avik-pal) - Modify mkdocs settings (#173) (@avik-pal) - Make ViT work on GPUs (#174) (@avik-pal) - Add sensible recurrent layer wrappers (#175) (@avik-pal) - setup only on AbstractRules (#176) (@avik-pal) - Start using Flux2Lux (#177) (@avik-pal) - Fix some displays (#178) (@avik-pal) - Relax dropout types (#179) (@avik-pal) - Add instancenorm and alphadropout implementations (#180) (@avik-pal) - Add InstanceNorm and AlphaDropout (#181) (@avik-pal) - CompatHelper: bump compat for MLUtils to 0.3 for package examples, (keep existing compat) (#184) (@github-actions[bot]) - remove convert rrule (#185) (@ArnoStrouwen) - CompatHelper: bump compat for OneHotArrays to 0.2 for package examples, (keep existing compat) (#186) (@github-actions[bot]) - CompatHelper: bump compat for Turing to 0.22 for package examples, (keep existing compat) (#188) (@github-actions[bot]) - Fix layermap for custom layers (#189) (@avik-pal) - add example of DDIM implementation (#190) (@yng87) - LuxCore.jl: Extremely light dependency for Lux Compatibility (#191) 
(@avik-pal) - Revert github workflows for merged LuxCore.jl (#193) (@avik-pal) - CompatHelper: bump compat for MLUtils to 0.3 for package ImageNet, (keep existing compat) (#194) (@github-actions[bot]) - CompatHelper: bump compat for Setfield to 1 for package ImageNet, (keep existing compat) (#195) (@github-actions[bot]) - CompatHelper: bump compat for OneHotArrays to 0.2 for package ImageNet, (keep existing compat) (#196) (@github-actions[bot]) - ADAM -> Adam (#197) (@cossio) - CompatHelper: bump compat for Functors to 0.4, (keep existing compat) (#199) (@github-actions[bot]) - CompatHelper: bump compat for Functors to 0.4 for package examples, (keep existing compat) (#200) (@github-actions[bot]) - CompatHelper: bump compat for Functors to 0.4 for package ImageNet, (keep existing compat) (#201) (@github-actions[bot]) - Add easy tied weights/parameter sharing support (#202) (@avik-pal) - CompatHelper: bump compat for Functors to 0.4 for package LuxCore, (keep existing compat) (#203) (@github-actions[bot]) - CompatHelper: add new compat entry for Zygote at version 0.6 for package DDIM, (keep existing compat) (#218) (@github-actions[bot]) - Update DDIM compat requirements (#219) (@avik-pal) - Update examples (#221) (@avik-pal) - CompatHelper: bump compat for Turing to 0.23 for package examples, (keep existing compat) (#222) (@github-actions[bot]) - Fix docs (#223) (@avik-pal) - CompatHelper: bump compat for MLUtils to 0.4 for package examples, (keep existing compat) (#226) (@github-actions[bot]) - CompatHelper: bump compat for MLUtils to 0.4 for package ImageNet, (keep existing compat) (#227) (@github-actions[bot]) - CompatHelper: bump compat for MLUtils to 0.4 for package DDIM, (keep existing compat) (#228) (@github-actions[bot]) - Functor ambiguity fix (#229) (@avik-pal) - Add all compats together (#238) (@avik-pal) - CompatHelper: bump compat for Turing to 0.24 for package examples, (keep existing compat) (#241) (@github-actions[bot]) - CompatHelper: bump compat 
for JET to 0.7 for package test, (keep existing compat) (#251) (@github-actions[bot]) - [WIP] Use Extensions for Flux2Lux (#261) (@avik-pal) - Cleaner test workflow (#262) (@avik-pal) - Add a patch for #243 (#263) (@avik-pal) - Update LuxLib dependencies (#265) (@avik-pal) - Dropping Julia 1.6 support for Lux (#266) (@avik-pal) - Purge unnecessary dependencies into weak dependencies (#267) (@avik-pal) - Add ForwardDiff Extension: Dropout (#269) (@avik-pal) - Add Tracker as an Extension (#272) (@avik-pal) - CompatHelper: bump compat for AbstractDifferentiation to 0.5 for package examples, (keep existing compat) (#273) (@github-actions[bot]) - Some Improvements (#274) (@avik-pal) - Tracker has some of the rules (#275) (@avik-pal) - Temporary CA + Tracker Patches (#276) (@avik-pal) - Add CUDA and AMDGPU trigger packages (#277) (@avik-pal) - ReverseDiff Extension (#280) (@avik-pal) - Bump peter-evans/create-pull-request from 3 to 4 (#283) (@dependabot[bot]) - Bump actions/cache from 1 to 3 (#284) (@dependabot[bot]) - Bump actions/checkout from 1 to 3 (#285) (@dependabot[bot]) - Return the history for Recurrence (#287) (@avik-pal) - Truncate tuples and namedtuples (#290) (@avik-pal) - [WIP] Remove projects from lib to LuxDL (#291) (@avik-pal) - Patch freeze (#292) (@avik-pal) - Add dispatch for no activation (#293) (@avik-pal) - Remove weakdeps from deps (#295) (@avik-pal) - Try restoring lts support (#296) (@avik-pal) - Testing using LuxTestUtils.jl (#297) (@avik-pal) - CompatHelper: bump compat for Boltz to 0.2 for package ImageNet, (kee… (#298) (@avik-pal) - Bump peter-evans/create-pull-request from 4 to 5 (#299) (@dependabot[bot]) - remove Dataloaders (#300) (@avik-pal) - Update docs (#301) (@avik-pal) - Fix bug in recurrence ordering (#303) (@avik-pal) - Update LuxComponentArraysExt.jl (#304) (@avik-pal) - CompatHelper: bump compat for Turing to 0.25 for package examples, (keep existing compat) (#306) (@github-actions[bot]) - propertynames of CA from type (#307) 
(@avik-pal) - Fix GRUCell docstring (#309) (@andreuvall) - Fix enzyme doc to reflect custom rules (#310) (@wsmoses) - Fixed link to sciml book in NeuralODE example (#311) (@MartinuzziFrancesco) - Move documentation build to buildkite (#314) (@avik-pal) - Fixed Boltz.jl link in docs (#316) (@MartinuzziFrancesco) - Allow container layers to have custom names (#317) (@avik-pal) - Small grammar and style fixes (#318) (@MartinuzziFrancesco) - Added 'applyactivation' to 'RNNCell's (#319) (@MartinuzziFrancesco) - Added AbstractRecurrentCell (#322) (@MartinuzziFrancesco) - Towards v0.5 Take II (@avik-pal) - Fix errors in applying bilinear layer to ND arrays (#333) (@vpuri3) - Use WeightInitializers.jl (#334) (@avik-pal) - Use PackageExtensionCompat (#335) (@avik-pal) - CompatHelper: add new compat entry for LuxCUDA at version 0.1 for package ImageNet, (keep existing compat) (#337) (@github-actions[bot]) - CompatHelper: add new compat entry for LuxAMDGPU at version 0.1 for package ImageNet, (keep existing compat) (#338) (@github-actions[bot]) - Basic 2nd order support (#339) (@avik-pal) - Use LuxLib 0.3 (#340) (@avik-pal) - Workaround https://github.com/cjdoris/PackageExtensionCompat.jl/issues/9 (#344) (@avik-pal) - Merge pull request #344 from LuxDL/ap/lux0.4 (#346) (@avik-pal) - Fixes for compat (#350) (@avik-pal) - Fix ext docs (#351) (@avik-pal) - Allow modifying ordering of data for recurrence (#353) (@avik-pal) - CompatHelper: bump compat for ComponentArrays to 0.14 for package examples, (keep existing compat) (#355) (@github-actions[bot]) - Fix AMDGPU tests and versions (#356) (@avik-pal) - Clean up the codebase (#357) (@avik-pal) - Add example on how to save the models (#358) (@avik-pal) - DOCFIX: LayerNorm's affine default value was incorrectly noted as 'false' in doc. 
(#359) (@srikumarks) - CompatHelper: bump compat for Lux to 0.5 for package ImageNet, (keep existing compat) (#362) (@github-actions[bot]) - CompatHelper: bump compat for Lux to 0.5 for package DDIM, (keep existing compat) (#363) (@github-actions[bot]) - CompatHelper: bump compat for Images to 0.26 for package ImageNet, (keep existing compat) (#365) (@github-actions[bot]) - CompatHelper: bump compat for Images to 0.26 for package DDIM, (keep existing compat) (#366) (@github-actions[bot]) - Fix url link to Deep learning with Flux tutorial (#367) (@pnavaro) - CompatHelper: bump compat for Turing to 0.27 for package examples, (keep existing compat) (#368) (@github-actions[bot]) - CompatHelper: bump compat for Turing to 0.28 for package examples, (keep existing compat) (#372) (@github-actions[bot]) - Boltz Link was not working, updated (#373) (@ashwani-rathee) - Formatting fix (#379) (@avik-pal) - CompatHelper: bump compat for ADTypes to 0.2, (keep existing compat) (#380) (@github-actions[bot]) - Move experimental code to Experimental (#381) (@avik-pal) - CompatHelper: bump compat for Boltz to 0.3 for package ImageNet, (keep existing compat) (#382) (@github-actions[bot]) - Migrate Docs to using Vitepress (#383) (@avik-pal) - Add Potential CUDA Grouped Conv segfault test (#388) (@avik-pal) - Add Tutorial on modeling gravitational waveforms (#389) (@avik-pal) - CompatHelper: bump compat for Optimisers to 0.3, (keep existing compat) (#390) (@github-actions[bot]) - CompatHelper: add new compat entry for CSV at version 0.10 for package examples, (keep existing compat) (#391) (@github-actions[bot]) - CompatHelper: add new compat entry for Optimization at version 3 for package examples, (keep existing compat) (#392) (@github-actions[bot]) - CompatHelper: bump compat for Optimisers to 0.3 for package examples, (keep existing compat) (#393) (@github-actions[bot]) - CompatHelper: add new compat entry for LineSearches at version 7 for package examples, (keep existing compat) 
(#394) (@github-actions[bot]) - CompatHelper: add new compat entry for OptimizationOptimJL at version 0.1 for package examples, (keep existing compat) (#395) (@github-actions[bot]) - CompatHelper: bump compat for Optimisers to 0.3 for package ImageNet, (keep existing compat) (#396) (@github-actions[bot]) - CompatHelper: bump compat for Optimisers to 0.3 for package DDIM, (keep existing compat) (#397) (@github-actions[bot]) - Restructure for autosidebar (#398) (@avik-pal) - Use separate Project and Manifest files (#399) (@avik-pal) - Use separate processes to generate the tutorials (#400) (@avik-pal) - Add f16, f32, f64 functions for easy parameter eltype conversions (#401) (@avik-pal) - Add a @debug_mode for debugging NaNs and Errors (#402) (@avik-pal) - Add a stateful layer which prevents boxing in SciML Layers (#404) (@avik-pal) - CompatHelper: bump compat for Turing to 0.29 for package BayesianNN, (keep existing compat) (#405) (@github-actions[bot]) - CompatHelper: bump compat for ComponentArrays to 0.15 for package Basics, (keep existing compat) (#408) (@github-actions[bot]) - CompatHelper: bump compat for ComponentArrays to 0.15 for package GravitationalWaveForm, (keep existing compat) (#409) (@github-actions[bot]) - CompatHelper: bump compat for ComponentArrays to 0.15 for package HyperNet, (keep existing compat) (#410) (@github-actions[bot]) - CompatHelper: bump compat for ComponentArrays to 0.15 for package NeuralODE, (keep existing compat) (#411) (@github-actions[bot]) - Bump actions/checkout from 3 to 4 (#412) (@dependabot[bot]) - Change Mean to Max Pooling layer in docstring skip ci (@roflmaostc) - Upstream CA patches for AD Packages (#414) (@avik-pal) - docs: fix the ecosystem link (#419) (@sathvikbhagavan) - GPU Downstream testing (#421) (@avik-pal) - Neural PDE downstream (#422) (@avik-pal) - Minor Fixes (#425) (@avik-pal) - Ensure ReverseDiff and Gauss Adjoint is also tested (#431) (@avik-pal) - CompatHelper: bump compat for LuxAMDGPU to 0.2 for 
package DDIM, (keep existing compat) (#433) (@github-actions[bot]) - CompatHelper: bump compat for LuxAMDGPU to 0.2 for package GravitationalWaveForm, (keep existing compat) (#434) (@github-actions[bot]) - CompatHelper: bump compat for LuxAMDGPU to 0.2 for package HyperNet, (keep existing compat) (#435) (@github-actions[bot]) - CompatHelper: bump compat for LuxAMDGPU to 0.2 for package ImageNet, (keep existing compat) (#436) (@github-actions[bot]) - CompatHelper: bump compat for LuxAMDGPU to 0.2 for package NeuralODE, (keep existing compat) (#437) (@github-actions[bot]) - CompatHelper: bump compat for LuxAMDGPU to 0.2 for package PolynomialFitting, (keep existing compat) (#438) (@github-actions[bot]) - CompatHelper: bump compat for LuxAMDGPU to 0.2 for package SimpleRNN, (keep existing compat) (#439) (@github-actions[bot]) - Update Project.toml (#440) (@avik-pal) - Emergency patch the ChainRules bug for Vector of CuArrays (#442) (@avik-pal) - CompatHelper: add new compat entry for Statistics at version 1, (keep existing compat) (#443) (@github-actions[bot]) - CompatHelper: add new compat entry for Statistics at version 1 for package DDIM, (keep existing compat) (#444) (@github-actions[bot]) - CompatHelper: add new compat entry for Statistics at version 1 for package HyperNet, (keep existing compat) (#445) (@github-actions[bot]) - CompatHelper: add new compat entry for Statistics at version 1 for package ImageNet, (keep existing compat) (#446) (@github-actions[bot]) - CompatHelper: add new compat entry for Statistics at version 1 for package NeuralODE, (keep existing compat) (#447) (@github-actions[bot]) - CompatHelper: add new compat entry for Statistics at version 1 for package PolynomialFitting, (keep existing compat) (#448) (@github-actions[bot]) - CompatHelper: add new compat entry for Statistics at version 1 for package SimpleRNN, (keep existing compat) (#449) (@github-actions[bot]) - Add periodic padding to documentation (#452) (@maximilian-gelbrecht) - Fix 
link to documentation in README.md (#454) (@pierre-haessig) - Add CA test for Nested AutoDiff (#458) (@avik-pal) - CompatHelper: bump compat for CairoMakie to 0.11 for package BayesianNN, (keep existing compat) (#459) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.11 for package GravitationalWaveForm, (keep existing compat) (#460) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.11 for package PolynomialFitting, (keep existing compat) (#461) (@github-actions[bot]) - Update WeightInitializers documentation (#465) (@avik-pal) - Allow dispatch on compact layers and use let blocks for faster closures (#466) (@avik-pal) - Add a RepeatedLayer (#467) (@avik-pal) - Fix check (#469) (@avik-pal) - CompatHelper: bump compat for Adapt to 4, (keep existing compat) (#470) (@github-actions[bot]) - Patch Metal Recurrent Neural Networks (#474) (@avik-pal) - Bump actions/cache from 3 to 4 (#479) (@dependabot[bot]) - Bump codecov/codecov-action from 3 to 4 (#484) (@dependabot[bot]) - Bump peter-evans/create-pull-request from 5 to 6 (#485) (@dependabot[bot]) - Drop 1.6 support + Patches to Fix Tests (#487) (@avik-pal) - Remove extensions in favor of GPUArraysCore (#488) (@avik-pal) - Parallel Testing + Distributed Docs build (#490) (@avik-pal) - Add output lengths for layers (#491) (@SebastianM-C) - Format code (#493) (@avik-pal) - Try using DocumenterVitepress.jl (#496) (@avik-pal) - Move Stateful lux layer out of experimental (#497) (@avik-pal) - Inbuilt-Distributed Setup (#500) (@avik-pal) - Remove ComponentArrays type-piracies (#501) (@avik-pal) - Add outputsize for Chain (#503) (@SebastianM-C) - fixes ImageNet, SimpleRNN examples (#504) (@avik-pal) - Documentation Fixes (#505) (@avik-pal) - Fix tutorial numbering (#509) (@avik-pal) - CompatHelper: add new compat entry for LuxAMDGPU at version 0.2 for package Basics, (keep existing compat) (#510) (@github-actions[bot]) - CompatHelper: add new compat entry for Metalhead at version 0.9 
for package ImageNet, (keep existing compat) (#511) (@github-actions[bot]) - CompatHelper: add new compat entry for Flux at version 0.14 for package ImageNet, (keep existing compat) (#512) (@github-actions[bot]) - Patches (#519) (@avik-pal) - Docs Again (#520) (@avik-pal) - General Quality of Life Enhancements (#521) (@avik-pal) - CompatHelper: add new compat entry for Literate at version 2 for package Basics, (keep existing compat) (#522) (@github-actions[bot]) - CompatHelper: add new compat entry for Literate at version 2 for package BayesianNN, (keep existing compat) (#523) (@github-actions[bot]) - CompatHelper: add new compat entry for Literate at version 2 for package GravitationalWaveForm, (keep existing compat) (#524) (@github-actions[bot]) - CompatHelper: add new compat entry for Literate at version 2 for package HyperNet, (keep existing compat) (#525) (@github-actions[bot]) - CompatHelper: add new compat entry for Literate at version 2 for package NeuralODE, (keep existing compat) (#526) (@github-actions[bot]) - CompatHelper: add new compat entry for Literate at version 2 for package PolynomialFitting, (keep existing compat) (#527) (@github-actions[bot]) - CompatHelper: add new compat entry for Literate at version 2 for package SimpleRNN, (keep existing compat) (#528) (@github-actions[bot]) - New Interface to switch between frameworks (#529) (@avik-pal) - CompatHelper: add new compat entry for MLUtils at version 0.4 for package SimpleChains, (keep existing compat) (#530) (@github-actions[bot]) - Move replicate to LuxCore (#532) (@MartinuzziFrancesco) - Test for implicit imports (#533) (@avik-pal) - Fix https://github.com/LuxDL/Lux.jl/issues/534 (#535) (@avik-pal) - Fix Dense documentation (#539) (@Sleort) - Fix typo: l to layer (#546) (@prbzrg) - Minor fixes (#547) (@avik-pal) - QoL improvements for tracing based AD (#548) (@avik-pal) - Fix SimpleChains for single dims (#552) (@avik-pal) - Standardize the handling of states (#553) (@avik-pal) - 
CompatHelper: add new compat entry for ADTypes at version 0.2 for package HyperNet, (keep existing compat) (#555) (@github-actions[bot]) - CompatHelper: add new compat entry for ADTypes at version 0.2 for package PolynomialFitting, (keep existing compat) (#556) (@github-actions[bot]) - CompatHelper: add new compat entry for ADTypes at version 0.2 for package SimpleChains, (keep existing compat) (#557) (@github-actions[bot]) - LuxSimpleChainsExt: specify rng when initializing (#559) (@pao) - Update SimpleRNN docs (#561) (@avik-pal) - Remove TruncatedStacktraces (#562) (@avik-pal) - Use @closure to make closures type-stable (#563) (@avik-pal) - Add set_device! to docs (#569) (@avik-pal) - Fuse the activation and bias (#570) (@avik-pal) - Try fixing the hydration error (#571) (@avik-pal) - Test continuous benchmarking (#572) (@avik-pal) - Add more benchmarks (#574) (@avik-pal) - More Continuous Benchmarks (#575) (@avik-pal) - Make the AD benchmarks type stable (#576) (@avik-pal) - Bump julia-actions/setup-julia from 1 to 2 (#577) (@dependabot[bot]) - Fix numbering in the docs (#578) (@avik-pal) - Add a gallery component (#579) (@avik-pal) - AD Housekeeping (#580) (@avik-pal) - Update style.css to disable 'calt' feature for monospace (#581) (@cormullion) - Improvement to the @compact API (#584) (@avik-pal) - Add dynamic expressions extension (#585) (@avik-pal) - Convert examples to doctests (#586) (@avik-pal) - Bump crate-ci/typos from 1.18.0 to 1.20.8 (#587) (@dependabot[bot]) - CompatHelper: add new compat entry for Lux at version 0.5 for package SymbolicOptimalControl, (keep existing compat) (#589) (@github-actions[bot]) - Allow @set! 
for Stateful Layers (#590) (@avik-pal) - Used New Fused Ops from LuxLib (#591) (@avik-pal) - CompatHelper: bump compat for ADTypes to 1, (keep existing compat) (#592) (@github-actions[bot]) - CompatHelper: bump compat for ADTypes to 1 for package HyperNet, (keep existing compat) (#593) (@github-actions[bot]) - CompatHelper: bump compat for ADTypes to 1 for package PolynomialFitting, (keep existing compat) (#594) (@github-actions[bot]) - CompatHelper: bump compat for ADTypes to 1 for package SimpleChains, (keep existing compat) (#595) (@github-actions[bot]) - CompatHelper: bump compat for ADTypes to 1 for package SimpleRNN, (keep existing compat) (#596) (@github-actions[bot]) - Bump crate-ci/typos from 1.20.8 to 1.20.9 (#597) (@dependabot[bot]) - Native Nested AD support for Lux Models (#598) (@avik-pal) - CompatHelper: bump compat for Turing to 0.31 for package BayesianNN, (keep existing compat) (#599) (@github-actions[bot]) - Faster testing (#601) (@avik-pal) - Unstructure structured inputs for reasonable broadcasting (#603) (@avik-pal) - Bump crate-ci/typos from 1.20.9 to 1.20.10 (#607) (@dependabot[bot]) - Add 3rd party tutorial (#609) (@agdestein) - CompatHelper: bump compat for DynamicExpressions to 0.17 for package SymbolicOptimalControl, (keep existing compat) (#611) (@github-actions[bot]) - Improvements to Nested AD (#612) (@avik-pal) - Add missing table of contents entry (#613) (@agdestein) - Attempt to build the tutorials in parallel (#616) (@avik-pal) - Add field access syntax to Chain (#619) (@Sleort) - Add vector_jacobian_product and jacobian_vector_product functions (#623) (@avik-pal) - Bump crate-ci/typos from 1.20.10 to 1.21.0 (#624) (@dependabot[bot]) - Bring in batched_jacobian (#625) (@avik-pal) - Added layer for periodic inputs (#626) (@nicholaskl97) - Cleanup (#629) (@avik-pal) - CompatHelper: bump compat for CairoMakie to 0.12 for package BayesianNN, (keep existing compat) (#631) (@github-actions[bot]) - CompatHelper: bump compat for 
CairoMakie to 0.12 for package GravitationalWaveForm, (keep existing compat) (#632) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.12 for package PolynomialFitting, (keep existing compat) (#633) (@github-actions[bot]) - CompatHelper: bump compat for CairoMakie to 0.12 for package SymbolicOptimalControl, (keep existing compat) (#634) (@github-actions[bot]) - Fixes to type stability of Zygote (#635) (@avik-pal) - Reduce max chunksize (#637) (@avik-pal) - missing keyword in docstring (#638) (@RoyCCWang) - Adding Enzyme Tests (#639) (@avik-pal) - Enzyme Testing + Caching in compute_gradients (#640) (@avik-pal) - Add Enzyme to benchmark infra (#641) (@wsmoses) - Add Enzyme to benchmark infra (#643) (@avik-pal) - Add a warning on using Tracker with SimpleChains (#645) (@avik-pal) - Improvements to Batched Jacobian (#646) (@avik-pal) - Patch a compact bug (#648) (@avik-pal) - update makie (#649) (@avik-pal) - Test on multiple os (#650) (@avik-pal) - Fix DocumenterVitepress compat (#651) (@avik-pal) - Prevent infinite loop in Tracker (#652) (@avik-pal) - Test ComponentArrays with Enzyme (#653) (@avik-pal) - Update DocumenterVitepress compat in docs (#654) (@asinghvi17) - Use ArgCheck.jl for helpful error messages (#655) (@avik-pal) - CompatHelper: bump compat for OptimizationOptimJL to 0.3 for package GravitationalWaveForm, (keep existing compat) (#656) (@github-actions[bot]) - CompatHelper: bump compat for OptimizationOptimJL to 0.3 for package SymbolicOptimalControl, (keep existing compat) (#657) (@github-actions[bot]) - CompatHelper: bump compat for Turing to 0.32 for package BayesianNN, (keep existing compat) (#658) (@github-actions[bot]) - Restore the rrule for merge (#659) (@avik-pal) - Bump julia-actions/julia-format from 2 to 3 (#660) (@dependabot[bot]) - Update & Rewrite the DDIM example (#661) (@avik-pal) - Quality of Life Improvements (#666) (@avik-pal) - CompatHelper: bump compat for SymbolicUtils to 2 for package SymbolicOptimalControl, 
(keep existing compat) (#669) (@github-actions[bot]) - Add Cartesian Embedding methods (#670) (@ldeso) - More principled rewrite of layermap (#671) (@avik-pal) - Clean up the code for debug mode (#674) (@avik-pal) - CompatHelper: add new compat entry for TensorBoardLogger at version 0.1 for package DDIM, (keep existing compat) (#676) (@github-actions[bot]) - CompatHelper: add new compat entry for CairoMakie at version 0.12 for package DDIM, (keep existing compat) (#677) (@github-actions[bot]) - Remove rrule for merge (#679) (@avik-pal) - Minor optimizations (#681) (@avik-pal) - CompatHelper: bump compat for Turing to 0.33 for package BayesianNN, (keep existing compat) (#688) (@github-actions[bot]) - Newer public functions (#690) (@avik-pal) - Update Boltz API Docs (#691) (@avik-pal) - Bump crate-ci/typos from 1.21.0 to 1.22.3 (#693) (@dependabot[bot]) - More API updates (#696) (@avik-pal) - Add ReverseSequence (#698) (@NeroBlackstone) - Training ConvMixer on CIFAR10 in 10mins (#700) (@avik-pal) - Add activation functions doc reference (Rebase #694) (#702) (@avik-pal) - Clean up the CI scripts (#703) (@avik-pal) - Loss functions module (#704) (@avik-pal) - Add test guide documentation (#705) (@NeroBlackstone) - Add ReverseSequence() docs (#706) (@NeroBlackstone) - Bidirectional RNN (#708) (@NeroBlackstone) - Run doctests in the test CI + Lazy install test dependencies (#710) (@avik-pal) - Bump crate-ci/typos from 1.22.3 to 1.22.7 (#711) (@dependabot[bot]) - Mark unexported symbols as public (#712) (@avik-pal) - Install packages before loading (#713) (@avik-pal) - Extend training API and update examples (#714) (@avik-pal) - Try fixing AMDGPU test stalling (#716) (@avik-pal) - CompatHelper: bump compat for AMDGPU in [weakdeps] to 0.9, (keep existing compat) (#717) (@github-actions[bot]) - Try to improve coverage (#718) (@avik-pal) - Try wider docs (#721) (@avik-pal) - Compiled ReverseDiff for training on CPU (#722) (@avik-pal) - Makes name concrete types (#723) 
(@avik-pal) - CompatHelper: add new compat entry for StaticArrays at version 1 for package docs, (keep existing compat) (#724) (@github-actions[bot]) - CompatHelper: add new compat entry for KernelAbstractions at version 0.9 for package docs, (keep existing compat) (#725) (@github-actions[bot]) - Bump crate-ci/typos from 1.22.7 to 1.22.9 (#726) (@dependabot[bot]) - Performance Pitfalls and How to Catch them (#727) (@avik-pal) - CompatHelper: bump compat for DynamicExpressions in [weakdeps] to 0.18, (keep existing compat) (#728) (@github-actions[bot]) - CompatHelper: bump compat for DynamicExpressions to 0.18 for package SymbolicOptimalControl, (keep existing compat) (#729) (@github-actions[bot]) - Store the optimizer in TrainState (#731) (@avik-pal) - Simplify show implementations and make them round-trippable (#732) (@avik-pal) - Try removing the type assert with this (#734) (@avik-pal) - Add enzyme support for loss functions from LossFunctions.jl (#736) (@avik-pal) - Mark cartesian index tests on cuda broken for now (#737) (@avik-pal) - Run CI on pre (#739) (@avik-pal) - Revert bee2de7-1188db7 (#740) (@avik-pal) - Use shorthand syntax of @concrete (#741) (@avik-pal) - Check status of broken tests (#742) (@avik-pal) - Aggregate changes for v1 (#744) (@avik-pal) - fix: nested ad when using direct eval in function (#745) (@avik-pal) - CompatHelper: add new compat entry for GPUArraysCore at version 0.1 for package docs, (keep existing compat) (#746) (@github-actions[bot]) - Bump crate-ci/typos from 1.22.9 to 1.23.1 (#748) (@dependabot[bot]) - chore: bump simplechains version (#749) (@avik-pal) - CompatHelper: bump compat for SciMLSensitivity to 7 for package NeuralODE, (keep existing compat) (#750) (@github-actions[bot]) - docs: restructure the manual entries a bit (#751) (@avik-pal) - refactor: bring Optimisers.jl into main deps (#754) (@avik-pal) - refactor: drop the AMDGPU extension (#755) (@avik-pal) - rearrange code in extensions (#756) (@avik-pal) - fix: use 
proper qualified accesses for modules (#757) (@avik-pal) - docs: remove redundant old preferences (#759) (@avik-pal) - feat: allow multiple @return (#760) (@avik-pal) - Making all eltypes Float32 in Fitting a Polynomial using MLP (#761) (@Sleort) - docs: fix inline math rendering (#762) (@avik-pal) - refactor: use the faster `get_device_type` (#763) (@avik-pal) - refactor: move ForwardDiff.jl into main deps (#764) (@avik-pal) - test: set st to training (#765) (@avik-pal) - chore(deps): bump crate-ci/typos from 1.23.1 to 1.23.2 (#766) (@dependabot[bot]) - Update docstring dropout (#770) (@dmetivie) - chore: recommend GH Discussions for Q/A (#774) (@avik-pal) - Allow 2d input if RNN order is BatchLastIndex (#778) (@NeroBlackstone) - test: remove `@test_nowarn` testing (#781) (@avik-pal) - fix: don't reuse pullback for safety (#782) (@avik-pal) - improvements to compact macro (#783) (@avik-pal) - test: wrap `@inferred` with `@test` (#784) (@avik-pal) - chore: add NNlib as a direct dep (#785) (@avik-pal) - fix: update to latest LuxLib API + deprecations (#786) (@avik-pal) - perf: fix enzyme benchmarks (#787) (@avik-pal) - test: trigger enzyme tests (#788) (@avik-pal) - docs: fix typo in "JVP & VJP Wrappers" (#789) (@ldeso) - docs: update docs from downstream changes (#790) (@avik-pal) - CompatHelper: bump compat for WeightInitializers to 1, (keep existing compat) (#791) (@github-actions[bot]) - CompatHelper: bump compat for WeightInitializers to 1 for package docs, (keep existing compat) (#792) (@github-actions[bot]) - test: improved testing (#793) (@avik-pal) - feat: improvements to the Training API (#794) (@avik-pal) - feat: easy mechanism to set preferences (#798) (@avik-pal) - CompatHelper: bump compat for SymbolicUtils to 3 for package SymbolicOptimalControl, (keep existing compat) (#799) (@github-actions[bot]) - test: update to the newer LuxTestUtils (#800) (@avik-pal) - chore(deps): bump crate-ci/typos from 1.23.2 to 1.23.5 (#804) (@dependabot[bot]) - refactor: move TrackerExt 
in a directory (#806) (@avik-pal) - feat: `NilArray` for fast size propagation (#811) (@avik-pal) - docs: add new function to docs (#813) (@avik-pal) - fix: update Dynamic Expressions to 0.19 (#814) (@avik-pal) - docs: add documentation for `MLDataDevices` (#815) (@avik-pal) - CompatHelper: add new compat entry for MLDataDevices at version 1 for package docs, (keep existing compat) (#818) (@github-actions[bot]) - test: try separating the test Project files (#819) (@avik-pal) - feat: use faster version of batched matmul (#820) (@avik-pal) - ci: setup benchmarking CI (#821) (@avik-pal) - ci: add CI to benchmark load times (#822) (@avik-pal) - chore(deps): bump actions/checkout from 2 to 4 (#823) (@dependabot[bot]) - chore(deps): bump peter-evans/create-or-update-comment from 3 to 4 (#824) (@dependabot[bot]) - chore(deps): bump julia-actions/setup-julia from 1 to 2 (#825) (@dependabot[bot]) - chore(deps): bump peter-evans/find-comment from 2 to 3 (#826) (@dependabot[bot]) - chore(deps): bump julia-actions/cache from 1 to 2 (#827) (@dependabot[bot]) - fix: mark objective function as `Const` (#835) (@avik-pal) - ci: separate testing for groups in buildkite (#836) (@avik-pal) - chore: update all AMDGPU compats (#837) (@avik-pal) - test: remove Flux as a direct test dep (#838) (@avik-pal) - test: remove some of the unnecessary Flux tests (#839) (@avik-pal) - refactor: cleanup of internals (#840) (@avik-pal) - fix: remove type pirated functions from Lux (#843) (@avik-pal) - chore(deps): bump actions/upload-artifact from 2 to 4 (#844) (@dependabot[bot]) - chore(deps): bump crate-ci/typos from 1.23.5 to 1.23.6 (#845) (@dependabot[bot]) - CompatHelper: add new compat entry for Static at version 1 for package test, (keep existing compat) (#846) (@github-actions[bot]) - feat: improve batched jacobian (#848) (@avik-pal) - chore: bump minimum LuxTestUtils version (#850) (@avik-pal) - docs: minor documentation changes (#855) (@avik-pal) - chore: marking layers as deprecated (#856) (@avik-pal) 
- chore(deps): bump crate-ci/typos from 1.23.6 to 1.24.1 (#857) (@dependabot[bot]) - docs: more details in performance pitfalls (#859) (@avik-pal) - fix: remove hacky usage of module getproperty rrules (#865) (@avik-pal) - feat: expand `trainmode`, `testmode`, `update_state` to support Stateful Layers (#866) (@avik-pal) - CompatHelper: bump compat for Turing to 0.34 for package BayesianNN, (keep existing compat) (#870) (@github-actions[bot]) - chore(deps): bump crate-ci/typos from 1.24.1 to 1.24.3 (#871) (@dependabot[bot]) - test: don't run doctests on pre-releases (#873) (@avik-pal) - test: run with DD error mode (#874) (@avik-pal) - refactor: static fields in layers (#875) (@avik-pal) - CompatHelper: bump compat for DataAugmentation to 0.3 for package ConvMixer, (keep existing compat) (#876) (@github-actions[bot]) - CompatHelper: bump compat for DataAugmentation to 0.3 for package DDIM, (keep existing compat) (#877) (@github-actions[bot]) - ci(buildkite): run some of the tutorials on CPU runners (#879) (@avik-pal) - CompatHelper: add new compat entry for StableRNGs at version 1 for package docs, (keep existing compat) (#881) (@github-actions[bot]) - CompatHelper: bump compat for JLD2 to 0.5 for package DDIM, (keep existing compat) (#885) (@github-actions[bot]) - CompatHelper: bump compat for JLD2 to 0.5 for package ImageNet, (keep existing compat) (#886) (@github-actions[bot]) - CompatHelper: bump compat for JLD2 to 0.5 for package SimpleRNN, (keep existing compat) (#887) (@github-actions[bot]) - chore(deps): bump peter-evans/create-pull-request from 6 to 7 (#888) (@dependabot[bot]) - chore(deps): bump crate-ci/typos from 1.24.3 to 1.24.5 (#889) (@dependabot[bot]) - Fixed `updating_to_v1` link in README.md (#890) (@MartinuzziFrancesco) - fix: pretty printing of MaxPool Layer (#891) (@avik-pal) - docs: add a PINN tutorial with nested AD (#894) (@avik-pal) - fix: remove UnrolledUtilities dep (#895) (@avik-pal) - refactor: cleanup Training and preserve type-stability in Enzyme 
(#896) (@avik-pal) - docs: add an Optimization.jl tutorial showcasing lazy data movement (#897) (@avik-pal) - CompatHelper: add new compat entry for Literate at version 2 for package PINN2DPDE, (keep existing compat) (#899) (@github-actions[bot]) - feat: update imagenet training script (#909) (@avik-pal) - docs: simplify getting started docs (#930) (@avik-pal) - fix: forceinline inside generated functions to avoid recursion issues (#931) (@avik-pal) - fix: update to use `test_gradients` macro (#932) (@avik-pal) - test: froggie tests are broken on gpu (#933) (@avik-pal) - fix: static vector input to dense (#936) (@avik-pal) - ci(buildkite): debugging CUDA segfaults on CI (#937) (@avik-pal) - docs: try using the new documenter vitepress (#943) (@avik-pal) - docs: collapse docstrings by default (#949) (@avik-pal) - feat: update minimum version of Enzyme (#950) (@avik-pal) - docs: fix version picker path (#951) (@avik-pal) - fix: update Optimization compats (#952) (@avik-pal) - fix: update GravitationalWaveform tutorial (#953) (@avik-pal) - chore(deps): bump crate-ci/typos from 1.24.5 to 1.24.6 (#955) (@dependabot[bot]) - docs: update README example (#956) (@avik-pal) - fix: patch optimization tutorial (#959) (@avik-pal) - Added to Nested AD example how to use `batched_jacobian` (#964) (@facusapienza21) - Remove line about "not saving the model" (#965) (@asinghvi17) - fix: optimization integration for gravitational waveform (#966) (@avik-pal) - docs: add compilation example using Reactant (#967) (@avik-pal) - docs: add the new `xla_device` (#968) (@avik-pal) - feat: compile training loop automatically using reactant (#969) (@avik-pal) - chore(deps): bump crate-ci/typos from 1.24.6 to 1.25.0 (#971) (@dependabot[bot]) - ci: run tests only on `1.10` for now (#975) (@avik-pal) - refactor: make `LossFunctions` an optional dep (#976) (@avik-pal) - chore(deps): bump crate-ci/typos from 1.25.0 to 1.26.0 (#978) (@dependabot[bot]) - CompatHelper: bump compat for GPUArraysCore to 0.2, (keep 
existing compat) (#984) (@github-actions[bot]) - CompatHelper: bump compat for GPUArraysCore to 0.2 for package docs, (keep existing compat) (#985) (@github-actions[bot]) - fix: `LV`/`Octavian` moved to optional deps (#986) (@avik-pal) - docs(reactant): simplify the enzyme call (#987) (@avik-pal) - CompatHelper: bump compat for Turing to 0.35 for package BayesianNN, (keep existing compat) (#989) (@github-actions[bot]) - chore(deps): bump crate-ci/typos from 1.26.0 to 1.26.8 (#992) (@dependabot[bot]) - perf: load `LoopVectorization` and `Octavian` for benchmarks (#994) (@avik-pal) - refactor: use Lux primitives for AD (#995) (@avik-pal) - Move code blocks inside bullet list (#996) (@abhro) - Fix images.jl link (#997) (@NeroBlackstone) - Fix broken link in Recurrence docs (#1001) (@MartinuzziFrancesco) - refactor: move all subpackages into a mono-repo (#1002) (@avik-pal) - feat: support passing in device and client to XLA (#1020) (@avik-pal) - fix: avoid tracing through Lux models (#1021) (@avik-pal) - chore: bump crate-ci/typos from 1.26.8 to 1.27.0 (#1022) (@dependabot[bot]) - ci: combine workflows (#1023) (@avik-pal) - fix: init hidden state for reactant (#1026) (@avik-pal) - fix for Zygote and ChainRules OneElement (#1038) (@CarloLucibello) - Link to quickstart explaining calling models in interface (#1040) (@oxinabox) - fix: make enzyme testing opt-in for now (#1041) (@avik-pal) - fix: missing zero leads to NaNs (#1044) (@avik-pal) - chore: bump all `Optimisers` version (#1058) (@avik-pal) - CompatHelper: bump compat for Optimisers to 0.4 for package DDIM, (keep existing compat) (#1059) (@github-actions[bot]) - fix: gracefully handle `OneHotArrays` (#1064) (@avik-pal) - chore: bump crate-ci/typos from 1.27.0 to 1.27.3 (#1065) (@dependabot[bot]) - fix: unsafe free for OneHotArrays (#1067) (@avik-pal) - feat: update to Functors v0.5 (#1069) (@avik-pal) - ci: generate tags for subdir projects (#1071) (@avik-pal)

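Several of the merged PRs above revolve around the Training API (#731, #794, #969, #979). As a hedged sketch of how that API is typically driven in Lux v1 (the model, optimizer, and loss below are illustrative choices, not taken from this release):

```julia
using Lux, Optimisers, Random, Zygote

# Illustrative two-layer MLP; any Lux model works here.
model = Chain(Dense(2 => 16, relu), Dense(16 => 1))
rng = Random.default_rng()
ps, st = Lux.setup(rng, model)

# TrainState stores the optimizer alongside parameters and state (see #731).
train_state = Training.TrainState(model, ps, st, Adam(0.001f0))

x = rand(rng, Float32, 2, 32)
y = rand(rng, Float32, 1, 32)

# single_train_step! returns (grads, loss, stats, updated_train_state);
# the updated TrainState must be carried into the next iteration (see #979).
grads, loss, stats, train_state = Training.single_train_step!(
    AutoZygote(), MSELoss(), (x, y), train_state)
```

With #969, swapping `AutoZygote()` for a Reactant-based backend lets the same loop be compiled, per the release notes above.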
Closed issues: - TagBot trigger issue (#6) - Suboptimal GroupNorm Implementation on GPUs (#10) - Recurrent Neural Networks (#12) - Flux Feature Parity (#13) - Front page example broken (#17) - Distributed Data Parallel Training on examples/ImageNet error (#18) - `] add Lux` doesn't work (#19) - Support for non-CUDNN data types (#22) - Hope to add more examples (#25) - Train examples/NeuralODE error (#26) - Thoughts on docs & tutorials (#28) - Available architectures (#34) - Register (#36) - PairwiseFusion takes more inputs than documented (#38) - Remove Requires.jl (#45) - Performance regressions with ComponentArrays (#49) - How do I extend Chain to have multiple inputs (#53) - Nested Lists broken with the current Documentation (#68) - Remove ActivationFunction? (#71) - Quickstart Example: using Optimisers, Zygote do not work unless we explicitly add those to current environment. (#75) - Remove track_stats from GroupNorm (#78) - Named Layers for Container Types (#79) - Tracking support for Enzyme.jl (#81) - Lighter syntax for stateless networks? (#83) - Improve Julia & Lux for the uninitiated (#90) - Remaining Deprecations (#91) - Scalar indexing problem for the NeuralODE example (#92) - Basic example from Migrating from Flux to Lux is broken || normalization issue (#94) - WeightNorm causes NaN for Conv layer gradients (#95) - [Feature request] Another type of Chain that sequentially passes x and st (#96) - Generalize normalization to work for unconstrained types (#98) - RNN and LSTM break when using GPU (#100) - Can one compose lux layers with graph neural network (#102) - optimising parameters with Optimization.jl (#108) - add OrdinaryDiffEq downstream test (#110) - Make it easier to pass empty state st = (;) (#118) - is there transposed convolution (#122) - Support for multidimensional data? (#123) - Inconsistent description of PairwiseFusion (#130) - getindex for Chain (#131) - No method matching with argument IRTools.Inner.Undefined in gradient computation. 
(#134) - checkpointing for backpropagation (#139) - CUDNNError during backpropagation in simple CNN (#141) - Proposal of Lux + Enzyme + CUDA differential programming example (#145) - concat input and output of a layer (#146) - How to avoid the activation function conversion (#152) - Allow dispatch on custom array types (#157) - Nondeterministic method error for some gradients... (#159) - Tied Weights (#182) - Frozen Weights (#183) - `layer_map` fails on custom containers (#187) - Remove LuxCore manual installation in workflows (#192) - Custom layers (#220) - Lux.setup not found (#224) - Support for CuArray{Float64} (#237) - How to create a chain of LSTMcells in Lux.jl? (#239) - Constrain the output layer! (#242) - On using ComponentArray for L2 regularization (#243) - Shared Lux Testing Package (#270) - Automatic Differentiation Backends (#271) - Get the full run of a recurrent cell using Lux (#282) - Nested AD doesn't work with ComponentArrays (#286) - Remove weak dependencies (#294) - Lux Recurrence history is not in the correct order (I think) (#302) - tanh activation function in GRUCell docstring (#308) - WARNING: Wrapping Vararg directly in UnionAll is deprecated (wrap the tuple instead). (#312) - Adding AbstractRecurrentCell (#320) - Splitting weights initializers in own package (#321) - Include documentation on how to save models with Lux (#329) - network with multiple inputs (#330) - Working with NamedTuples (#331) - bilinear doesn't work for AbstractArray{T,3} (#332) - Use ADTypes (#354) - Add ability to load weights into Dense (#361) - Initialize weights of network from csv file (#369) - BatchNorm(; affine = false) in a Chain missing _getproperty(::SubArray... 
when ps = ComponentArray(ps) (#371) - Slightly broken example Polynomial Fitting (#374) - Fixing the testing on buildkite (#375) - Implementation of custom layer in Lux (#376) - deploy versions (#384) - DocumenterVitepress module into package (#385) - Segfault when using Lux.Conv with CUDA (#386) - Documentation Enhancement Suggestions (#387) - @save not defined? (#403) - The MNIST Neural ODE example does not work with ReverseDiffAdjoint (#407) - Update Documentation to mention loading AD Packages for Training (#415) - ComponentArrays makes coupling layers type-unstable unexpectedly (#416) - ComponentArrays makes Custom Layers containing Chains type-unstable (#417) - Custom Layer, Differential Equation as Activation Function. (#418) - Gradients of shared parameters do not behave as expected (#420) - inconsistent LSTM results in time series forecast between Flux.jl and Lux.jl (#424) - Broadcast Layer (#426) - Can't use freeze with ComponentArray. (#427) - Lux.testmode resorts to scalar indexing with frozen params (#432) - Custom Model for Neural ODE (#441) - Periodic Padding (#451) - Bug in ConvTranspose? (#455) - Generating Parameters with CUDA (#456) - Zygote gradient fails for Custom Layer (#457) - Adaptors should not change the dtype (#462) - Any equivalency to torch.nn.Parameter? (#464) - Support for MultiRNNCell (#472) - GPU evaluation of Recurrence() broken on Metal (#473) - Recurrent Layers don't take Vectors as Input (#478) - How to choose a specific GPU device (#480) - Training in batches and building gradient as mean of individual gradients (#481) - ComponentArrays type piracy (#482) - No Gradients with respect to parameters using Custom Layers (#483) - Where is the API doc for activations (#486) - Distributed Training (#494) - AMDGPU CI takes a lot of time (#495) - SimpleRNN example is broken on AMDGPU (#498) - Support for multi-core CPUs? 
(#502) - Bayesian NN example throws Pkg Extension load errors (#507) - 404 many Tutorial links are invalid (#508) - uninitiated tutorial replicate part shows different numbers but should show the same (#513) - uninitiated tutorial - Code Font confusing for pipe |> (#514) - Documentation Request: Standardize the handling of the state st (#515) - Let @compact return the updated state (#516) - Documentation Request: Have a section about Loss Functions (#517) - Documentation Request: Also list GeometricML.jl and SciML.ai under Ecosystem (#518) - Should replicate be part of LuxCore? (#531) - pad=SamePad() does not work as intended in ConvTranspose. (#534) - Array of Structs to Struct of Array transformation for some AD backends (#538) - Documentation on main is broken (#541) - Lux.AMDGPU: type cast throws error (#542) - l should be clarified. Maybe a typo? (#543) - Bug when converting model with single layer to SimpleChains (#545) - Improve broadcasting via FastBroadcast.jl (#549) - FYI: Comment and question (#550) - TypeError using SimpleChains integration (#551) - SimpleChains-backed models do not setup consistently with fixed RNG seeding (#554) - Stable docs missing (#566) - Tutorial links too small (#567) - Constraint on weights and bias (#568) - Continuous Benchmarking (#573) - Allow "const" arrays as inputs to @compact (#588) - Pullback over jacobian (with CUDA) (#602) - Zygote nested AD failure (#604) - Meta-Issue for improvements to @compact (#606) - Nested AD for Parameter Gradient/Jacobian (#610) - Rewrite `@layer_map` to use KeyPath from Functors (#615) - Extracting part of a model, with the corresponding parameters and states (#617) - Differentiating `Zygote.pullback` (#621) - Batched Jacobian Functions (#622) - Error for JVP by Enzyme (#628) - [Nested AD] Incorrect gradient when taking a gradient over a gradient using StatefulLuxLayer (#630) - batched_jacobian + CUDA => InvalidIRError (#636) - Add a compiled tape version for ReverseDiff (#642) - Simple MLP requires 
Enzyme `runtimeActivity` (#647) - Using `swish` as `Conv` activation function errors on the GPU (#662) - Fast activation error (#663) - Definition and implementation of 'Loss' in Linear Regression Tutorial "Julia & Lux for the Uninitiated" (#664) - Add improper qualified accesses checks (#667) - `rrule` for `Base.merge` defined in `ChainRulesCore` (#678) - Different activation functions in one layer (#680) - Remove Auto-Flattening of Chains (#682) - Add type-stability checks via `DispatchDoctor.jl` (#683) - Support for inactive arguments in DifferentiationInterface (#685) - Feature request: Bidirectional for RNN layer. (#687) - Predefined loss functions (#689) - Static Type Parameters not accessible inside `@compact` (#692) - Auto detect and warn against performance pitfalls (#699) - Add documentation about how to run partial tests. (#701) - Feature request: 1D CNN, i.e. keras.layer.Conv1d (#709) - AMDGPU CI stalls (#715) - Inference using `NN :: Chain` inside a GPU kernel (#720) - custom `show` is often not valid julia syntax to reconstruct (#730) - Roadmap to v1 (#735) - Error in `compute_gradients` when loss already has a `Zygote.gradient` (#743) - NCCL Complex wrapper (#747) - Drop `Tracker.jl` support for SimpleChains (#753) - Feature request: TimeDistributed Layer (#758) - Feature Request: Allow recurrent layers with 2D input (features * seq_length), even if the order is BatchLastIndex (#767) - Missing statistics tracking in normalization layers (#780) - unexpected parameter type for AbstractExplicitContainer with single trainable field (#795) - Test with DispatchDoctor error mode (#797) - Change defaults for Layers to match Pytorch (#808) - Gradient checkpointing/ rematerialization (#816) - how to use Lux.jl utility 'BinaryCrossEntropy' (#841) - Mixed-Precision Matrix Multiply Performance Regression (#847) - Lux.testmode not updating state for BatchNorm layers for nested models? 
(#849) - Add Float128 support (#851) - Add multiple cpu cores and multiple Julia computers support (#852) - Enzyme.Forward hits Octavian dispatch in Dense (#853) - Move uncommon layers to Boltz.jl (#854) - Update the ImageNet example (#878) - MethodError: no method matching applychain (#884) - Question: how can one use TrainState.cache? (#892) - Problem with Enzyme AD and SArray parameters (#935) - Is AbstractLuxContainerLayer abandoned in Lux 1.0.4? (#942) - Docs build is broken (#957) - Encoder-Decoder RNNs (#961) - Efficient way to compute Jacobian in nested AD (#963) - The returned values `loss` and `train_state` of `single_train_step!` are not compatible (#979) - Segfault for simple Zygote pullback (#980) - Question on initialization after breaking changes (#988) - Documentation: Using MLFlow with Lux.jl (#990) - Documentation of Layer Freezing might need small update (#991) - scalar indexing of gpu array in Zygote gradient (#1016) - sending to devices tuples, named tuples and arrays does not keep track of identical objects (#1017) - Compiling Recurrent Models with Reactant (#1025) - Getting NaNs in the pullback of ReverseSequence (#1043) - Simplify recursive code with Functors v0.5 (#1061) - `unsafe_free!` from MLDataDevices fails for OneHotArrays (#1066)

Published by github-actions[bot] over 1 year ago