Recent Releases of tensorly

tensorly - Release 0.9.0

This latest release brings several improvements and new features, and updates our supported Python versions to 3.9 through 3.13.

New Paddle backend

PaddlePaddle is a deep learning framework similar to PyTorch, both in functionality and in API design. Users can leverage its dynamic graph mode for algorithm development and then use Paddle's tooling for model export, model compression, and inference; Paddle integrates tightly with deployment tools such as ONNX and TensorRT. Thanks to @HydrogenSulfate, https://github.com/tensorly/tensorly/pull/565
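Switching to the new backend follows TensorLy's usual backend mechanism. A minimal sketch, assuming the backend is registered under the name 'paddle':

```python
import tensorly as tl

# Switch the global backend to PaddlePaddle (backend name 'paddle' assumed)
tl.set_backend('paddle')

# TensorLy operations now create and consume Paddle tensors
x = tl.tensor([[1.0, 2.0], [3.0, 4.0]])
print(tl.get_backend())  # 'paddle'
```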

Package reorganization

We now have a solvers submodule that contains all our optimization-related code, thanks to @cohenjer, https://github.com/tensorly/tensorly/pull/550.
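As an illustration, the non-negative least-squares routines can now be imported from this submodule. A minimal sketch; the exact module path (tensorly.solvers.nnls) and the hals_nnls call below are assumptions, since these routines previously lived under tensorly.tenalg.proximal:

```python
import numpy as np

# Assumed new location of the HALS non-negative least-squares solver
from tensorly.solvers.nnls import hals_nnls

# Non-negative least squares: min_V ||M - U V||_F^2 subject to V >= 0.
# hals_nnls is assumed to take the precomputed cross-products U^T M and U^T U.
U = np.random.rand(10, 4)
M = np.random.rand(10, 6)
V = hals_nnls(U.T @ M, U.T @ U)
```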

Algorithmic improvements

The CP regression now supports multi-dimensional output thanks to @merajhashemi in https://github.com/tensorly/tensorly/pull/252.
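For instance, the regressor can now be fit against tensor-valued responses. A hedged sketch on random data; the import path and the weight_rank keyword are assumptions:

```python
import numpy as np
import tensorly as tl
from tensorly.regression.cp_regression import CPRegressor  # import path assumed

X = tl.tensor(np.random.rand(50, 8, 8))   # 50 samples, each an 8x8 measurement
Y = tl.tensor(np.random.rand(50, 3, 2))   # multi-dimensional output per sample

estimator = CPRegressor(weight_rank=2)    # low-rank weights (keyword name assumed)
estimator.fit(X, Y)
Y_pred = estimator.predict(X)
```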

Quality of life improvements and other changes

The documentation has been improved, unit tests have been made more reliable, and small improvements have been made to several of our algorithms.

  • Add checks for data structure and rank feasibility by @aarmey in https://github.com/tensorly/tensorly/pull/566
  • Fix the documentation and remove last reference to mxnet by @aarmey in https://github.com/tensorly/tensorly/pull/567
  • Virtually all tests working across backends by @aarmey in https://github.com/tensorly/tensorly/pull/572
  • Fix badge in documentation by @JeanKossaifi in https://github.com/tensorly/tensorly/commit/edd74d150bf46ad205005c8b0f773c6d625e81f5
  • [Paddle] Update index.html and pyramid png by @HydrogenSulfate in https://github.com/tensorly/tensorly/pull/573
  • Fix f-string In backend error message by @characat0 in https://github.com/tensorly/tensorly/pull/574
  • Add dummy keyword arguments to Backend svd to throw a more informative error by @characat0 in https://github.com/tensorly/tensorly/pull/575
  • Upgrade Python version by @aarmey in https://github.com/tensorly/tensorly/pull/576
  • Cleanup PARAFAC2 tolerances by @aarmey in https://github.com/tensorly/tensorly/pull/564

Many thanks to the whole TensorLy team and all contributors: @aarmey, @cohenjer, @yngvem, @JeanKossaifi, @merajhashemi, and our new contributors @HydrogenSulfate and @characat0.

- Python
Published by JeanKossaifi over 1 year ago

tensorly - 0.8.2

New 0.8.2 features:

We're excited to release version 0.8.2 of TensorLy. As always, a huge thank you to the core team and all the contributors!

This version adds many improvements to TensorLy 0.8, including:

Tensor-Ring via ALS

We now provide an ALS-based method for tensor ring decomposition, as well as a randomized sampling-based version, thanks to @OsmanMalik in https://github.com/tensorly/tensorly/pull/501 and https://github.com/tensorly/tensorly/pull/511

Big improvements to PARAFAC2

  • SVD compression of PARAFAC2 by @MarieRoald in https://github.com/tensorly/tensorly/pull/530
  • Bro's line search for PARAFAC2 by @aarmey in https://github.com/tensorly/tensorly/pull/525
  • PARAFAC2: Avoid reprojection of X during the error calculation by @aarmey in https://github.com/tensorly/tensorly/pull/524
  • Small fixes in NN PARAFAC/PARAFAC2 by @aarmey in https://github.com/tensorly/tensorly/pull/535
  • Avoid large concatenation within PARAFAC2 upon SVD initialization by @aarmey in https://github.com/tensorly/tensorly/pull/539
  • More efficient error calculation in PARAFAC2 by @aarmey in https://github.com/tensorly/tensorly/pull/502
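For reference, a typical PARAFAC2 call on ragged data (illustrative only, using random slices that share their second mode):

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac2

# A list of matrices with varying first mode and a shared second mode
slices = [tl.tensor(np.random.rand(np.random.randint(10, 20), 15)) for _ in range(5)]

# Decompose with rank 3; returns weights, factors and per-slice projections
weights, factors, projections = parafac2(slices, rank=3, n_iter_max=100, init='random')
```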

MXNet deprecated

We are now deprecating MXNet, and both the MXNet and TensorFlow backends will be removed in the near future.

Better interfaces for SVD

We provide a clean, simple-to-use interface covering all the major variants of SVD, and it keeps improving!

  • Fixed svd_flip() when used on GPU w/ PyTorch backend by @AtomicCactus in https://github.com/tensorly/tensorly/pull/504
  • Also provide H back from make_svd_non_negative() by @aarmey in https://github.com/tensorly/tensorly/pull/517
  • Add max rank argument to svd_compress_tensor_slices by @aarmey in https://github.com/tensorly/tensorly/pull/536

Improvements to CP

  • Fix CP Partial Least Square by @cyrillustan in https://github.com/tensorly/tensorly/pull/492
  • Move choices about Khatri-Rao to tenalg backend by @aarmey in https://github.com/tensorly/tensorly/pull/495
  • Provide faster implementation of the MTTKRP by @aarmey in https://github.com/tensorly/tensorly/pull/549

Other changes and other quality of life improvements

  • Add logsumexp function by @braun-steven in https://github.com/tensorly/tensorly/pull/491
  • Add mxnet warning and remove old deprecations by @aarmey in https://github.com/tensorly/tensorly/pull/494
  • Remove padding in Pf2 by @aarmey in https://github.com/tensorly/tensorly/pull/496
  • Documentation update for svd missing values imputation by @Kiord in https://github.com/tensorly/tensorly/pull/508
  • Miscellaneous housekeeping improvements by @aarmey in https://github.com/tensorly/tensorly/pull/513
  • feat: add pip caching to CI by @SauravMaheshkar in https://github.com/tensorly/tensorly/pull/514
  • Fix initialize_tucker by @hello-fri-end in https://github.com/tensorly/tensorly/pull/519
  • Use math.pi by @JeanKossaifi in https://github.com/tensorly/tensorly/pull/505
  • Fully deprecate mxnet by @aarmey in https://github.com/tensorly/tensorly/pull/532
  • Simplify proximal operator code by @aarmey in https://github.com/tensorly/tensorly/pull/534
  • Start testing Python 3.12 and resolve JAX deprecation by @aarmey in https://github.com/tensorly/tensorly/pull/540
  • Add missing targets to .PHONY directive by @FBen3 in https://github.com/tensorly/tensorly/pull/548
  • Fix torch tensor creation dtype/device by @braun-steven in https://github.com/tensorly/tensorly/pull/538
  • add normalization method to tuckertensor class (similar to cptensor) by @cohenjer in https://github.com/tensorly/tensorly/pull/551
  • Remove mxnet by @aarmey in https://github.com/tensorly/tensorly/pull/533

New Contributors

  • @braun-steven made their first contribution in https://github.com/tensorly/tensorly/pull/491
  • @OsmanMalik made their first contribution in https://github.com/tensorly/tensorly/pull/501
  • @AtomicCactus made their first contribution in https://github.com/tensorly/tensorly/pull/504
  • @Kiord made their first contribution in https://github.com/tensorly/tensorly/pull/508
  • @SauravMaheshkar made their first contribution in https://github.com/tensorly/tensorly/pull/514
  • @hello-fri-end made their first contribution in https://github.com/tensorly/tensorly/pull/519
  • @FBen3 made their first contribution in https://github.com/tensorly/tensorly/pull/548

Full Changelog: https://github.com/tensorly/tensorly/compare/0.8.1...0.8.2

- Python
Published by JeanKossaifi over 1 year ago

tensorly - Release 0.8.0

We are releasing a new version of TensorLy, long in the making, with a long list of major improvements, new features, better documentation, bug fixes and overall quality of life improvements!

New features

Transparent support for einsum

There are two main ways to implement tensor algebraic methods:

1. Perhaps the most common: using existing matrix-based algebraic methods, which typically involves unfolding the tensor (reshaping it into a matrix and permuting its dimensions).
2. Directly leveraging tensor contraction, e.g. through an einsum interface, in which case einsum itself performs the tensor contraction.
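To make the distinction concrete, here is a small, self-contained comparison (assuming the NumPy backend): a mode-0 product computed once via unfolding and matrix multiplication, and once as a single einsum contraction.

```python
import numpy as np
import tensorly as tl

X = np.random.rand(3, 4, 5)
M = np.random.rand(6, 3)

# 1. Matrix-based: unfold along mode 0, multiply, fold back into a tensor
out_unfold = tl.fold(M @ tl.unfold(tl.tensor(X), mode=0), mode=0, shape=(6, 4, 5))

# 2. Contraction-based: the same operation as a single einsum call
out_einsum = np.einsum('ia,ajk->ijk', M, X)

assert np.allclose(tl.to_numpy(out_unfold), out_einsum)
```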

We improved the tenalg backend: you can now transparently dispatch all tensor algebraic operations to the backend's einsum:

```python
import tensorly as tl

# Tensor algebra
from tensorly import tenalg

# Dispatch all operations to einsum
tenalg.set_backend('einsum')
```

Now all tenalg functions will call einsum under the hood!

Opt-Einsum support

In addition, for each einsum call, you can now use opt-einsum to compute a (near) optimal contraction path and cache it with just one call!

```python
# New opt-einsum plugin
from tensorly.plugins import use_opt_einsum

# Transparently compute and cache the contraction path using opt-einsum
use_opt_einsum('optimal')
```

Switch back to the original backend's einsum:

```python
from tensorly.plugins import use_default_einsum

# Revert to the backend's default einsum
use_default_einsum()
```

Efficient contraction on GPU with cuQuantum

If you want to accelerate your computation, you probably want to use the GPU. TensorLy has supported GPUs transparently for a while through its MXNet, CuPy, TensorFlow, PyTorch and, more recently, JAX backends.

Now you can also get efficient tensor contractions on GPU using NVIDIA's cuQuantum library!

Now any function in the `tenalg` module will use cuQuantum's optimized contractions on GPU:

```python
import tensorly as tl
from tensorly.decomposition import parafac

# New cuQuantum plugin
from tensorly.plugins import use_cuquantum

# Transparently compute and cache the contraction path using cuQuantum
use_cuquantum('optimal')

# Create a new tensor on GPU
tensor = tl.randn((32, 256, 256, 3), device='cuda')

# Decompose it with CP, keeping 5% of the parameters
parafac(tensor, rank=0.05, init='random', n_iter_max=10)
```

Similarity measure

We now provide CorrIndex, a correlation-invariant index for comparing decompositions (see the accompanying paper for details).

CP-partial least square regression

This release brings a new multi-linear partial least squares regression, as first introduced by Rasmus Bro, exposed in a convenient scikit-learn-like class, CP_PLSR.
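A hedged usage sketch on random data; the import path and the n_components keyword are assumptions based on the scikit-learn-like design described above:

```python
import numpy as np
import tensorly as tl
from tensorly.regression import CP_PLSR  # import path assumed

X = tl.tensor(np.random.rand(30, 6, 4))  # 30 samples of 6x4 measurements
Y = tl.tensor(np.random.rand(30, 2))     # 2 response variables per sample

pls = CP_PLSR(n_components=3)            # keyword name assumed
pls.fit(X, Y)
Y_pred = pls.predict(X)
```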

Tensor-Train via orthogonal iteration

We now provide tensor_train_OI, a new tensor-train decomposition via orthogonal iteration.

Unified SVD interface

We now have a unified interface for Singular Value Decomposition: svd_interface.

It has support for resolving sign indeterminacy, returning a non-negative output, missing values (masked input), and various computation methods, all in one, neat interface!
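A minimal sketch of how this might look; the import path and keyword names (n_eigenvecs, flip_sign, non_negative) are assumptions based on the features listed above:

```python
import numpy as np
import tensorly as tl
from tensorly.tenalg.svd import svd_interface  # import path assumed

M = tl.tensor(np.random.rand(20, 10))

# Truncated SVD keeping 5 components, with sign correction (keyword names assumed)
U, S, V = svd_interface(M, n_eigenvecs=5, flip_sign=True, non_negative=False)
```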

New datasets

TensorLy now includes real-world datasets well suited for tensor analysis, which you can directly load or download in a ready-to-use form!

COVID-19 Serology Dataset

Systems serology is a new technology that examines the antibodies from a patient's serum, aiming to comprehensively profile the interactions between antibodies and Fc receptors, alongside other immunological and demographic data. Here, we apply CP decomposition to a COVID-19 systems serology dataset. In this dataset, serum antibodies of 438 samples collected from COVID-19 patients were systematically profiled by their binding behavior to SARS-CoV-2 (the virus that causes COVID-19) antigens and to Fc receptors. The data is formatted as a three-mode tensor of samples, antigens, and receptors; samples are labeled by patient status.

IL2

IL-2 signals through the Jak/STAT pathway and transmits a signal into immune cells by phosphorylating STAT5 (pSTAT5). When phosphorylated, STAT5 causes various immune cell types to proliferate, and depending on whether regulatory cells (regulatory T cells, or Tregs) or effector cells (helper T cells, natural killer cells, and cytotoxic T cells, i.e. Thelpers, NKs, and CD8+ cells) respond, IL-2 signaling can result in immunosuppression or immunostimulation, respectively. Thus, when designing a drug meant to repress the immune system, potentially for the treatment of autoimmune diseases, an IL-2 variant that primarily elicits a response in Tregs is desirable. Conversely, when designing a drug meant to stimulate the immune system, potentially for the treatment of cancer, an IL-2 variant that primarily elicits a response in effector cells is desirable. To achieve either signaling bias, IL-2 variants with altered affinity for its various receptors (IL2Rα or IL2Rβ) have been designed. Furthermore, IL-2 variants with multiple binding domains have been designed, as multivalent IL-2 may act as a more effective therapeutic.

The data contains the responses of 8 different cell types to 13 different IL-2 mutants, at 4 different timepoints, at 12 standardized IL-2 concentrations. It is formatted as a 4th order tensor of shape (13 x 4 x 12 x 8), with dimensions representing IL-2 mutant, stimulation time, dose, and cell type respectively.

Kinetic

A kinetic fluorescence dataset, well suited for PARAFAC and multi-way partial least squares regression (N-PLS). The data is represented as a four-way array with modes: concentration, excitation wavelength, emission wavelength, and time.

Indian Pines

Airborne Visible / Infrared Imaging Spectrometer (AVIRIS) hyperspectral sensor data. It consists of 145 × 145 pixels and 220 spectral reflectance bands in the 0.4–2.5 µm wavelength range.
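Each dataset comes with a loader in tensorly.datasets. A hedged example; the loader name and the attribute holding the data are assumptions:

```python
import tensorly as tl
from tensorly.datasets import load_IL2data  # loader name assumed

# Load the IL-2 response dataset described above; the returned object is
# assumed to expose the measurements as a `tensor` attribute.
data = load_IL2data()
print(tl.shape(data.tensor))  # expected: (13, 4, 12, 8) mutants x times x doses x cell types
```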

Black linting

We now automatically check code formatting: the CI tests the code style against the Black style guide.

List of merged pull requests in this release

In addition to these big features, this release also comes with a whole lot of improvements, better documentation and bug fixes!

Non-exhaustive list of changes:

  • solves hals rec_error0 issue and does some pep improvements by @caglayantuna in https://github.com/tensorly/tensorly/pull/339
  • Copy cptensor by @caglayantuna in https://github.com/tensorly/tensorly/pull/324
  • Normalization by @caglayantuna in https://github.com/tensorly/tensorly/pull/281
  • Add random_state for non negative PARAFAC HALS by @MarieRoald in https://github.com/tensorly/tensorly/pull/344
  • fix tensorflow dtype issue by @caglayantuna in https://github.com/tensorly/tensorly/pull/340
  • nn_tucker hals class, doc, api improvements by @caglayantuna in https://github.com/tensorly/tensorly/pull/345
  • Constrained parafac example and improved docstrings by @caglayantuna in https://github.com/tensorly/tensorly/pull/347
  • IL-2 stimulation dataset by @borcuttjahns in https://github.com/tensorly/tensorly/pull/348
  • Adds matricize function by @JeanKossaifi in https://github.com/tensorly/tensorly/pull/366
  • copy tucker tensor by @caglayantuna in https://github.com/tensorly/tensorly/pull/367
  • simplex projection issue by @caglayantuna in https://github.com/tensorly/tensorly/pull/363
  • Make tl.shape return tuple for PyTorch backend by @MarieRoald in https://github.com/tensorly/tensorly/pull/357
  • Add keepdims to tl.sum with the PyTorch backend by @MarieRoald in https://github.com/tensorly/tensorly/pull/356
  • Fix bug with tl.clip for the PyTorch and TensorFlow backends by @MarieRoald in https://github.com/tensorly/tensorly/pull/355
  • Import COVID-19 systems serology example dataset by @cyrillustan in https://github.com/tensorly/tensorly/pull/359
  • Add Covid example notebook by @cyrillustan in https://github.com/tensorly/tensorly/pull/361
  • Adds exp to the backend by @JeanKossaifi in https://github.com/tensorly/tensorly/pull/377
  • Adds digamma fun to backend by @JeanKossaifi in https://github.com/tensorly/tensorly/pull/378
  • Adds log function to tensorly by @caglayantuna in https://github.com/tensorly/tensorly/pull/381
  • Clip function to sparse backend with a_max=None by @caglayantuna in https://github.com/tensorly/tensorly/pull/379
  • CorrIndex implementation for comparing decomposition outputs by @hmbaghdassarian in https://github.com/tensorly/tensorly/pull/364
  • Adds pad_tt_rank by @JeanKossaifi in https://github.com/tensorly/tensorly/pull/387
  • fix normalized sparsity test by @caglayantuna in https://github.com/tensorly/tensorly/pull/385
  • User defined indices list for sample kr by @caglayantuna in https://github.com/tensorly/tensorly/pull/382
  • Drop nosetests by @yan12125 in https://github.com/tensorly/tensorly/pull/388
  • default axis for tensorflow concatenate by @caglayantuna in https://github.com/tensorly/tensorly/pull/389
  • Add exp and log to top-level backend exports by @j6k4m8 in https://github.com/tensorly/tensorly/pull/393
  • Add trig functions and constants by @j6k4m8 in https://github.com/tensorly/tensorly/pull/398
  • Add black code style and linting to CI by @j6k4m8 in https://github.com/tensorly/tensorly/pull/400
  • JAX backend - v > 0.3.0 by @JeanKossaifi in https://github.com/tensorly/tensorly/pull/397
  • Permute cp factors by @caglayantuna in https://github.com/tensorly/tensorly/pull/380
  • Return errors for tucker by @caglayantuna in https://github.com/tensorly/tensorly/pull/396
  • Constrained cp class by @caglayantuna in https://github.com/tensorly/tensorly/pull/390
  • Added IL2 PARAFAC Analysis Example Script by @borcuttjahns in https://github.com/tensorly/tensorly/pull/362
  • Update applications examples by @borcuttjahns in https://github.com/tensorly/tensorly/pull/405
  • Tensor permutation fix for Jax by @aarmey in https://github.com/tensorly/tensorly/pull/406
  • 2 new dataset by @caglayantuna in https://github.com/tensorly/tensorly/pull/408
  • Permute factors api by @caglayantuna in https://github.com/tensorly/tensorly/pull/404
  • removed numpy copy bug by @Mahmood-Hussain in https://github.com/tensorly/tensorly/pull/415
  • Test for reproducibility of CP by @aarmey in https://github.com/tensorly/tensorly/pull/371
  • reorder modes and ranks in partial_tucker by @caglayantuna in https://github.com/tensorly/tensorly/pull/418
  • Remove descending argument in sorting functions by @aarmey in https://github.com/tensorly/tensorly/pull/419
  • Moves tf to the numpy interface by @aarmey in https://github.com/tensorly/tensorly/pull/407
  • change decimal for randomized_svd by @caglayantuna in https://github.com/tensorly/tensorly/pull/421
  • Add assert allclose and tests for test utils by @MarieRoald in https://github.com/tensorly/tensorly/pull/420
  • Make parafac() robust to complex tensors by @maximeguillaud in https://github.com/tensorly/tensorly/pull/298
  • Apply black-style formatting to repository by @j6k4m8 in https://github.com/tensorly/tensorly/pull/401
  • Better initialization of CMTF ALS by @aarmey in https://github.com/tensorly/tensorly/pull/424
  • Fix mxnet testing by @aarmey in https://github.com/tensorly/tensorly/pull/425
  • Add testing for complex values in CP by @aarmey in https://github.com/tensorly/tensorly/pull/423
  • Move SVD to a common frontend interface by @aarmey in https://github.com/tensorly/tensorly/pull/429
  • Warning when fixing last mode by @caglayantuna in https://github.com/tensorly/tensorly/pull/437
  • Doc fix typo in tensor_basics.rst by @ssnio in https://github.com/tensorly/tensorly/pull/445
  • Tensor PLSR by @aarmey in https://github.com/tensorly/tensorly/pull/435
  • Fix the documentation build by @aarmey in https://github.com/tensorly/tensorly/pull/450
  • Callback interface for CP decomposition by @aarmey in https://github.com/tensorly/tensorly/pull/417
  • Adds svd interface to TT and TR, as well as TensorRing class by @JeanKossaifi in https://github.com/tensorly/tensorly/pull/453
  • TT improvements + doc by @JeanKossaifi in https://github.com/tensorly/tensorly/pull/454
  • Raise error for users trying to use tl.partial_svd. by @JeanKossaifi in https://github.com/tensorly/tensorly/pull/455
  • Tt rank errors by @JeanKossaifi in https://github.com/tensorly/tensorly/pull/456
  • FIX cp_norm: preserve context by @JeanKossaifi in https://github.com/tensorly/tensorly/pull/461
  • Finished f-string formatting by @aarmey in https://github.com/tensorly/tensorly/pull/464
  • Temporarily skip indian_pines test by @JeanKossaifi in https://github.com/tensorly/tensorly/pull/466
  • Adds opt-einsum path caching plugin by @JeanKossaifi in https://github.com/tensorly/tensorly/pull/462
  • Use a secure link to tensorly.org by @johnthagen in https://github.com/tensorly/tensorly/pull/467
  • Add TTOI functions by @Lili-Zheng-stat in https://github.com/tensorly/tensorly/pull/411
  • Decorator for backend specific implementations by @MarieRoald in https://github.com/tensorly/tensorly/pull/434
  • Remove in-place projection operations in PARAFAC2 by @aarmey in #474
  • Adding indian pines locally and updating loader by @cohenjer in #472

Credit

This release is only possible thanks to a lot of voluntary work by the whole TensorLy team, who work hard to maintain and improve the library! Thanks in particular to the core devs: @aarmey, @caglayantuna, @cohenjer, @JeanKossaifi, @MarieRoald, and @yngvem.

New Contributors

Big thanks to all the new contributors and welcome to the TensorLy community!

  • @borcuttjahns made their first contribution in https://github.com/tensorly/tensorly/pull/348
  • @cyrillustan made their first contribution in https://github.com/tensorly/tensorly/pull/359
  • @hmbaghdassarian made their first contribution in https://github.com/tensorly/tensorly/pull/364
  • @yan12125 made their first contribution in https://github.com/tensorly/tensorly/pull/388
  • @j6k4m8 made their first contribution in https://github.com/tensorly/tensorly/pull/393
  • @Mahmood-Hussain made their first contribution in https://github.com/tensorly/tensorly/pull/415
  • @maximeguillaud made their first contribution in https://github.com/tensorly/tensorly/pull/298
  • @ssnio made their first contribution in https://github.com/tensorly/tensorly/pull/445
  • @johnthagen made their first contribution in https://github.com/tensorly/tensorly/pull/467
  • @Lili-Zheng-stat made their first contribution in https://github.com/tensorly/tensorly/pull/411

Full Changelog: https://github.com/tensorly/tensorly/compare/0.7.0...0.8.0

- Python
Published by JeanKossaifi about 3 years ago

tensorly - TensorLy Release 0.7.0

TensorLy 0.7.0 is out!

In this new version of TensorLy, the whole team has been working hard to bring you lots of improvements, from new decompositions to new functions, faster code and better documentation.

Major improvements and new features

New decompositions

We added some great new tensor decompositions, including

  • Coupled Matrix-Tensor Factorisation (CMTF-ALS), #293, thanks to @IsabellLehmann and @aarmey
  • Tensor-Ring (e.g. MPS with periodic boundary conditions), #229, thanks to @merajhashemi
  • Non-negative Tucker decomposition via Hierarchical ALS (NN-HALS Tucker), #254, thanks to @caglayantuna and @cohenjer
  • A new CP decomposition that supports various constraints on each mode, including monotonicity, non-negativity, l1/l2 regularization, smoothness, and sparsity! Constrained parafac, #284, thanks to @caglayantuna and @cohenjer

Brand new features

We added a brand new tensordot that supports batching! [ Adding a new Batched Tensor Dot + API simplification #309 ]

Normalization for Tucker factors, #283 thanks to @caglayantuna and @cohenjer!

Added a convenient function to compute the gradient of the difference norm between a CP and dense tensor, #294, thanks to @aarmey

Backend refactoring

In an effort to make the TensorLy backend even more flexible and fast, we refactored the main backend as well as the tensor algebra backend, and made lots of small quality-of-life improvements in the process! In particular, reconstructing a tt-matrix is now a lot more efficient. [ Backend refactoring: use a BackendManager class and use it directly as tensorly.backend's Module class, #330, @JeanKossaifi ]

Enhancements

  • Improvements to PARAFAC2 (convergence criteria, etc.), #267, thanks to @MarieRoald
  • HALS convergence fix, #271, thanks to @MarieRoald and @IsabellLehmann
  • Ensured consistency between the object-oriented API and the functional one, #268, thanks to @yngvem
  • Added lstsq to the backend, #305, thanks to @merajhashemi
  • Fixed documentation for case-insensitive clashes between functions and classes: https://github.com/tensorly/tensorly/issues/219
  • Added random seed for TT-cross, #304, thanks to @yngvem
  • Fixed SVD sign indeterminacy, #216, thanks to @merajhashemi
  • Rewrote vonneumann_entropy to handle multidimensional tensors, #270, thanks to @taylorpatti
  • Added a check for the case where all modes are fixed, in which case the initialization is returned directly, #325, thanks to @ParvaH
  • We now provide a prod function that works like math.prod for users on Python < 3.8, in tensorly.utils.prod

New backend functions

All backends now support matmul and tensordot (#306), as well as sin, cos, flip, argsort, count_nonzero, cumsum, any, lstsq and trace.
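A small illustration of the kind of calls this enables (assuming these functions are exposed at TensorLy's top level and dispatched to the active backend):

```python
import tensorly as tl

a = tl.tensor([[1.0, 2.0], [3.0, 4.0]])
b = tl.tensor([[0.0, 1.0], [1.0, 0.0]])

# These calls are assumed to dispatch to the active backend (NumPy, PyTorch, ...)
print(tl.matmul(a, b))
print(tl.trace(a))
print(tl.cumsum(tl.tensor([1.0, 2.0, 3.0])))
```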

Bug Fixes

  • Fixed NN-Tucker HALS sparsity coefficient issue, #295, thanks to @caglayantuna
  • Fix SVD for PyTorch < 1.8, #312, thanks to @merajhashemi
  • Fix dot and matmul in PyTorch and TF, #313, thanks to @merajhashemi
  • Fix tl.partial_unfold, #315, thanks to @merajhashemi
  • Fixed behaviour of diag for the TensorFlow backend
  • Fix tl.partial_svd: now explicitly checks for NaN values, #318, thanks to @merajhashemi
  • Fix diag function for the TensorFlow and PyTorch backends, #321, thanks to @caglayantuna
  • Fix singular vectors to be orthonormal, #320, thanks to @merajhashemi
  • Fix active set and HALS tests, #323, thanks to @caglayantuna
  • Add test for matmul, #322, thanks to @merajhashemi
  • Sparse backend usage fix by @caglayantuna in #280

- Python
Published by JeanKossaifi over 4 years ago

tensorly - Release 0.5.1

  • CP: l2 regularization
  • CP: sparsity
  • Added fixed_modes for CP and Tucker
  • Masked Tucker

Sparse backend

And many small improvements and bug fixes!

Standardisation of names: Kruskal tensors have been renamed to cp_tensors, and matrix-product-state has been renamed to tensor-train.

Rank selection: validate_cp_rank, with the option to set rank='same' or rank=<float> to automatically determine the rank.
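For example, a hedged sketch of how the automatic rank selection might be used (the import path is an assumption):

```python
from tensorly.cp_tensor import validate_cp_rank  # import path assumed

shape = (10, 20, 30)

# rank='same' picks the CP rank whose parameter count roughly matches the
# number of tensor elements; a float targets that fraction of the elements.
print(validate_cp_rank(shape, rank='same'))
print(validate_cp_rank(shape, rank=0.5))
```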

- Python
Published by JeanKossaifi almost 5 years ago

tensorly - Release 0.6.0

What's new

This version brings lots of new functionality and improvements, and fixes many small bugs and issues. We have a new theme for the TensorLy project's documentation: our new TensorLy Sphinx theme, which we've open-sourced and which you can also easily use in your own projects! We've also switched testing from Travis to GitHub Actions, and coverage from Coveralls to CodeCov.

New features

  • Non-negative CP decomposition via Hierarchical Alternating Least Square (HALS), thanks to @caglayantuna and @cohenjer, #224
  • Non-negative SVD and refactoring of non-negative CP, thanks to @aarmey, #204
  • Randomized SVD thanks to @merajhashemi, #215
  • Sign correction method for CP thanks to @aarmey, #226
  • Support for complex tensors, thanks to @merajhashemi #213, #247
  • Entropy metrics, thanks to @taylorpatti, #231

Backend refactoring

  • Various SVD forms are backend-agnostic.
  • Norm method is backend-agnostic.
  • CuPy tests now pass thanks to @aarmey, #217
  • Random number generation seed (check_random_state) was moved to the backend, and random functions now use NumPy's global seed by default thanks to @merajhashemi, #209, #212
  • Using mxnet.np interface in MXNet backend, thanks to @aarmey, #225, #207

Bug fixes and improvements

  • PARAFAC2: fixed SVD init and improved naming consistency, thanks to @MarieRoald, #220
  • Improved CP initialisation thanks to @caglayantuna, #230
  • Improved testing for check_tucker_rank
  • Support for order-1 and 2 CP and TT/TTM tensors.
  • Fixed issues in CP via robust tensor power iteration, thanks to @chrisyeh96, #244
  • Added support for tensordot in all backends
  • Fix for partial_tucker when provided with mask and applied only to a subset of modes, 691f7859e9983981ec7820641a97ec0967acb9fe
  • Documentation and user-guide improvements, switched to new theme, e328d8eb65e9a72e0cc469545f32efdba0da95c7

And many other small improvements!

- Python
Published by JeanKossaifi almost 5 years ago