https://github.com/xtensor-stack/xtensor
C++ tensors with broadcasting and lazy computing
Science Score: 36.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ✓ Committers with academic emails: 11 of 131 committers (8.4%) from academic institutions
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (11.4%) to scientific vocabulary
Keywords
Keywords from Contributors
Repository
C++ tensors with broadcasting and lazy computing
Basic Info
Statistics
- Stars: 3,598
- Watchers: 86
- Forks: 422
- Open Issues: 429
- Releases: 0
Topics
Metadata Files
README.md
Multi-dimensional arrays with broadcasting and lazy computing.
Introduction
xtensor is a C++ library meant for numerical analysis with multi-dimensional
array expressions.
xtensor provides
- an extensible expression system enabling lazy broadcasting.
- an API following the idioms of the C++ standard library.
- tools to manipulate array expressions and build upon
xtensor.
Containers of xtensor are inspired by NumPy, the
Python array programming library. Adaptors for existing data structures to
be plugged into our expression system can easily be written.
In fact, xtensor can be used to process NumPy data structures inplace
using Python's buffer protocol.
Similarly, we can operate on Julia and R arrays. For more details on the NumPy,
Julia and R bindings, check out the xtensor-python,
xtensor-julia and
xtensor-r projects respectively.
Versions of xtensor prior to 0.26.0 require a C++ compiler supporting C++14.
xtensor 0.26.x requires a C++ compiler supporting C++17.
xtensor 0.27.x requires a C++ compiler supporting C++20.
Installation
Package managers
We provide a package for the mamba (or conda) package manager:
```bash
mamba install -c conda-forge xtensor
```
Install from sources
xtensor is a header-only library.
You can directly install it from the sources:
```bash
cmake -DCMAKE_INSTALL_PREFIX=your_install_prefix
make install
```
Installing xtensor using vcpkg
You can download and install xtensor using the vcpkg dependency manager:
```bash
git clone https://github.com/Microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.sh
./vcpkg integrate install
./vcpkg install xtensor
```
The xtensor port in vcpkg is kept up to date by Microsoft team members and community contributors. If the version is out of date, please create an issue or pull request on the vcpkg repository.
Trying it online
You can play with xtensor interactively in a Jupyter notebook right now! Just click on the binder link below:
The C++ support in Jupyter is powered by the xeus-cling C++ kernel. Together with xeus-cling, xtensor enables a similar workflow to that of NumPy with the IPython Jupyter kernel.

Documentation
For more information on using xtensor, check out the reference documentation
http://xtensor.readthedocs.io/
Dependencies
xtensor depends on the xtl library and
has an optional dependency on the xsimd
library:
| xtensor | xtl |xsimd (optional) |
|-----------|---------|-------------------|
| master | ^0.8.0 | ^13.2.0 |
| 0.27.0 | ^0.8.0 | ^13.2.0 |
| 0.26.0 | ^0.8.0 | ^13.2.0 |
| 0.25.0 | ^0.7.5 | ^11.0.0 |
| 0.24.7 | ^0.7.0 | ^10.0.0 |
| 0.24.6 | ^0.7.0 | ^10.0.0 |
| 0.24.5 | ^0.7.0 | ^10.0.0 |
| 0.24.4 | ^0.7.0 | ^10.0.0 |
| 0.24.3 | ^0.7.0 | ^8.0.3 |
| 0.24.2 | ^0.7.0 | ^8.0.3 |
| 0.24.1 | ^0.7.0 | ^8.0.3 |
| 0.24.0 | ^0.7.0 | ^8.0.3 |
| 0.23.x | ^0.7.0 | ^7.4.8 |
| 0.22.0 | ^0.6.23 | ^7.4.8 |
The dependency on xsimd is required if you want to enable SIMD acceleration
in xtensor. This can be done by defining the macro XTENSOR_USE_XSIMD
before including any header of xtensor.
Usage
Basic usage
Initialize a 2-D array and compute the sum of one of its rows and a 1-D array.
```cpp
#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"
#include "xtensor/xview.hpp"

xt::xarray<double> arr1
  {{1.0, 2.0, 3.0},
   {2.0, 5.0, 7.0},
   {2.0, 5.0, 7.0}};

xt::xarray<double> arr2
  {5.0, 6.0, 7.0};

xt::xarray<double> res = xt::view(arr1, 1) + arr2;

std::cout << res;
```
Outputs:
{7, 11, 14}
Initialize a 1-D array and reshape it inplace.
```cpp
#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"

xt::xarray<int> arr
  {1, 2, 3, 4, 5, 6, 7, 8, 9};

arr.reshape({3, 3});

std::cout << arr;
```
Outputs:
{{1, 2, 3},
{4, 5, 6},
{7, 8, 9}}
Index Access
```cpp
#include <iostream>
#include "xtensor/xarray.hpp"
#include "xtensor/xio.hpp"

xt::xarray<double> arr1
  {{1.0, 2.0, 3.0},
   {2.0, 5.0, 7.0},
   {2.0, 5.0, 7.0}};

std::cout << arr1(0, 0) << std::endl;

xt::xarray<int> arr2
  {1, 2, 3, 4, 5, 6, 7, 8, 9};

std::cout << arr2(0);
```
Outputs:
1.0
1
The NumPy to xtensor cheat sheet
If you are familiar with NumPy APIs, and you are interested in xtensor, you can check out the NumPy to xtensor cheat sheet provided in the documentation.
Lazy broadcasting with xtensor
Xtensor can operate on arrays of different shapes and dimensions in an element-wise fashion. Broadcasting rules of xtensor are similar to those of NumPy and libdynd.
Broadcasting rules
In an operation involving two arrays of different dimensions, the array with the fewer dimensions is broadcast across the leading dimensions of the other.
For example, if A has shape (2, 3), and B has shape (4, 2, 3), the
result of a broadcasted operation with A and B has shape (4, 2, 3).
```
   (2, 3) # A
(4, 2, 3) # B
(4, 2, 3) # Result
```
The same rule holds for scalars, which are handled as 0-D expressions. If A
is a scalar, the equation becomes:
```
       () # A
(4, 2, 3) # B
(4, 2, 3) # Result
```
If matched up dimensions of two input arrays are different, and one of them has
size 1, it is broadcast to match the size of the other. Let's say B has the
shape (4, 2, 1) in the previous example, so the broadcasting happens as
follows:
```
   (2, 3) # A
(4, 2, 1) # B
(4, 2, 3) # Result
```
Universal functions, laziness and vectorization
With xtensor, if x, y and z are arrays of broadcastable shapes, the
return type of an expression such as x + y * sin(z) is not an array. It
is an xexpression object offering the same interface as an N-dimensional
array, which does not hold the result. Values are only computed upon access
or when the expression is assigned to an xarray object. This makes it possible to
operate symbolically on very large arrays and only compute the result for the
indices of interest.
We provide utilities to vectorize any scalar function (taking multiple
scalar arguments) into a function that will perform on xexpressions, applying
the lazy broadcasting rules which we just described. These functions are called
xfunctions. They are xtensor's counterpart to NumPy's universal functions.
In xtensor, arithmetic operations (+, -, *, /) and all special
functions are xfunctions.
Iterating over xexpressions and broadcasting Iterators
All xexpressions offer two sets of functions to retrieve iterator pairs (and
their const counterpart).
- `begin()` and `end()` provide instances of `xiterator`s which can be used to iterate over all the elements of the expression. The order in which elements are listed is row-major in that the index of the last dimension is incremented first.
- `begin(shape)` and `end(shape)` are similar but take a broadcasting shape as an argument. Elements are iterated upon in a row-major way, but certain dimensions are repeated to match the provided shape as per the rules described above. For an expression `e`, `e.begin(e.shape())` and `e.begin()` are equivalent.
Runtime vs compile-time dimensionality
Two container classes implementing multi-dimensional arrays are provided:
xarray and xtensor.
- `xarray` can be reshaped dynamically to any number of dimensions. It is the container that is the most similar to NumPy arrays.
- `xtensor` has a dimension set at compilation time, which enables many optimizations. For example, shapes and strides of `xtensor` instances are allocated on the stack instead of the heap.
`xarray` and `xtensor` containers are both xexpressions and can be involved
and mixed in universal functions, assigned to each other, etc.
Besides, two access operators are provided:
- The variadic template `operator()`, which can take multiple integral arguments or none.
- `operator[]`, which takes a single multi-index argument whose size can be determined at runtime. `operator[]` also supports access with braced initializers.
Performance
Xtensor operations make use of SIMD acceleration depending on what instruction sets are available on the platform at hand (SSE, AVX, AVX512, Neon).
The xsimd project underlies the detection of the available instruction sets, and provides generic high-level wrappers and memory allocators for client libraries such as xtensor.
Continuous benchmarking
Xtensor operations are continuously benchmarked and are significantly improved with each new version. Current performance on statically dimensioned tensors matches that of the Eigen library. Dynamically dimensioned tensors, whose shapes are heap-allocated, come at a small additional cost.
Stack allocation for shapes and strides
More generally, the library implements a promote_shape mechanism at build time
to determine the optimal sequence type to hold the shape of an expression. The
shape type of a broadcasting expression whose members all have a dimensionality
determined at compile time is a stack-allocated sequence type. If at least one
node of a broadcasting expression has a dynamic dimension (for example an
xarray), this bubbles up to the entire broadcasting expression, which will have
a heap-allocated shape. The same holds for views, broadcast expressions, etc.
Therefore, when building an application with xtensor, we recommend using statically-dimensioned containers whenever possible to improve the overall performance of the application.
Language bindings
The xtensor-python project
provides the implementation of two xtensor containers, pyarray and
pytensor which effectively wrap NumPy arrays, allowing inplace modification,
including reshapes.
Utilities to automatically generate NumPy-style universal functions from scalar functions, exposed to Python, are also provided.
The xtensor-julia project
provides the implementation of two xtensor containers, jlarray and
jltensor, which effectively wrap Julia arrays, allowing inplace modification,
including reshapes.
Like in the Python case, utilities to generate NumPy-style universal functions are provided.
The xtensor-r project provides the
implementation of two xtensor containers, rarray and rtensor which
effectively wrap R arrays, allowing inplace modification, including reshapes.
Like for the Python and Julia bindings, utilities to generate NumPy-style universal functions are provided.
Library bindings
The xtensor-blas project provides bindings to BLAS libraries, enabling linear-algebra operations on xtensor expressions.
The xtensor-io project enables the loading of a variety of file formats into xtensor expressions, such as image files, sound files, HDF5 files, as well as NumPy npy and npz files.
Building and running the tests
Building the tests requires the GTest testing framework and cmake.
gtest and cmake are available as packages for most Linux distributions.
They can also be installed with the conda package manager (even on Windows):

```bash
conda install -c conda-forge gtest cmake
```
Once gtest and cmake are installed, you can build and run the tests:
```bash
mkdir build
cd build
cmake -DBUILD_TESTS=ON ../
make xtest
```
You can also use CMake to download the source of gtest, build it, and use the
generated libraries:
```bash
mkdir build
cd build
cmake -DBUILD_TESTS=ON -DDOWNLOAD_GTEST=ON ../
make xtest
```
Building the HTML documentation
xtensor's documentation is built with three tools: doxygen, sphinx, and breathe.
While doxygen must be installed separately, you can install breathe by typing

```bash
pip install breathe sphinx_rtd_theme
```
Breathe can also be installed with conda:

```bash
conda install -c conda-forge breathe
```
Finally, go to the docs subdirectory and build the documentation with the
following command:

```bash
make html
```
License
We use a shared copyright model that enables all contributors to maintain the copyright on their contributions.
This software is licensed under the BSD-3-Clause license. See the LICENSE file for details.
Owner
- Name: Xtensor Stack
- Login: xtensor-stack
- Kind: organization
- Repositories: 24
- Profile: https://github.com/xtensor-stack
Data structures for data sciences
GitHub Events
Total
- Issues event: 23
- Watch event: 242
- Issue comment event: 92
- Push event: 41
- Pull request review comment event: 39
- Pull request review event: 23
- Pull request event: 52
- Fork event: 29
- Create event: 1
Last Year
- Issues event: 23
- Watch event: 242
- Issue comment event: 92
- Push event: 41
- Pull request review comment event: 39
- Pull request review event: 23
- Pull request event: 52
- Fork event: 29
- Create event: 1
Committers
Last synced: 9 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Johan Mabille | j****e@g****m | 881 |
| Sylvain Corlay | s****y@g****m | 373 |
| Wolf Vollprecht | w****t@g****m | 322 |
| Tom de Geus | t****m@g****e | 116 |
| AntoinePrv | A****v | 71 |
| David Brochart | d****t@g****m | 53 |
| martinRenou | m****u@g****m | 40 |
| Drew Hubley | d****y@h****a | 30 |
| Ewoud | w****e@a****l | 26 |
| serge-sans-paille | s****n@t****u | 19 |
| zhujun98 | z****1@g****m | 18 |
| Loic Gouarin | l****n@g****m | 18 |
| kolibri91 | n****1@g****m | 15 |
| Mario Emmenlauer | m****r@b****e | 13 |
| jvce92 | j****2@h****m | 13 |
| Ullrich Koethe | u****e@i****e | 13 |
| DerThorsten | tb@q****m | 11 |
| Adrien DELSALLE | a****e@g****m | 11 |
| Mykola Vankovych | m****h@b****e | 10 |
| Ray Zhang | p****5@g****m | 9 |
| Stuart Berg | b****s@j****g | 8 |
| E. G. Patrick Bos | e****s@g****m | 7 |
| Kenneth Hanley | k****y@g****m | 6 |
| Taras Kolomatski | t****i@g****m | 6 |
| SoundDev | 4****v | 6 |
| ray_zhang | r****g@a****m | 6 |
| Jörn Starruß | j****s@t****e | 6 |
| Matwey V. Kornilov | m****v@g****m | 5 |
| DavisVaughan | d****s@r****m | 5 |
| Evgeniy Zheltonozhskiy | z****y@g****m | 5 |
| and 101 more... | ||
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 159
- Total pull requests: 141
- Average time to close issues: 7 months
- Average time to close pull requests: 4 months
- Total issue authors: 93
- Total pull request authors: 38
- Average comments per issue: 3.32
- Average comments per pull request: 2.53
- Merged pull requests: 85
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 19
- Pull requests: 59
- Average time to close issues: about 2 months
- Average time to close pull requests: 8 days
- Issue authors: 17
- Pull request authors: 13
- Average comments per issue: 1.16
- Average comments per pull request: 0.97
- Merged pull requests: 33
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- spectre-ns (17)
- tdegeus (13)
- drew-parsons (6)
- wolfv (5)
- faze-geek (4)
- emmenlau (3)
- ThibHlln (3)
- JohanMabille (3)
- mnijhuis-tos (3)
- amwink (3)
- OUCyf (3)
- themightyoarfish (3)
- AntoinePrv (3)
- DavisVaughan (2)
- SomeoneSerge (2)
Pull Request Authors
- spectre-ns (29)
- JohanMabille (28)
- tdegeus (19)
- vakokako (7)
- AntoinePrv (7)
- sbstndb (6)
- Ivorforce (5)
- ThisIsFineTM (4)
- laramiel (4)
- alexandrehoffmann (4)
- matwey (4)
- SylvainCorlay (3)
- starboerg (3)
- Biokastin (2)
- jarkkokoivikko-code-q (2)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 3
- Total downloads: cran 279 last-month
- Total docker downloads: 8
- Total dependent packages: 27 (may contain duplicates)
- Total dependent repositories: 94 (may contain duplicates)
- Total versions: 128
- Total maintainers: 2
conda-forge.org: xtensor
Multi dimensional arrays with broadcasting and lazy computing
- Homepage: https://github.com/xtensor-stack/xtensor
- License: BSD-3-Clause
- Latest release: 0.24.3 (published over 3 years ago)
Rankings
alpine-edge: xtensor
C++ tensors with broadcasting and lazy computing
- Homepage: https://github.com/xtensor-stack/xtensor
- License: BSD-3-Clause
- Latest release: 0.27.0-r0 (published 6 months ago)
Rankings
Maintainers (1)
cran.r-project.org: xtensor
Headers for the 'xtensor' Library
- Homepage: https://github.com/xtensor-stack/xtensor
- Documentation: http://cran.r-project.org/web/packages/xtensor/xtensor.pdf
- License: BSD_3_clause + file LICENSE
- Status: removed
- Latest release: 0.14.1-0 (published about 3 years ago)
Rankings
Maintainers (1)
Dependencies
- xeus-cling 0.13.0.*
- xtensor 0.24.2.*
- xtensor-blas 0.20.0.*
- actions/checkout v2 composite
- conda-incubator/setup-miniconda v2 composite
- crazy-max/ghaction-github-pages v2 composite
- actions/checkout v3 composite
- mamba-org/provision-with-micromamba v13 composite
- pre-commit/action v3.0.0 composite