einops
Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others)
Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file (found)
- ✓ codemeta.json file (found)
- ✓ .zenodo.json file (found)
- ○ DOI references
- ○ Academic publication links
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (13.7%) to scientific vocabulary
Repository
Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others)
Basic Info
- Host: GitHub
- Owner: arogozhnikov
- License: mit
- Language: Python
- Default Branch: main
- Homepage: https://einops.rocks
- Size: 3.11 MB
Statistics
- Stars: 9,104
- Watchers: 65
- Forks: 377
- Open Issues: 36
- Releases: 16
Metadata Files
README.md
https://user-images.githubusercontent.com/6318811/177030658-66f0eb5d-e136-44d8-99c9-86ae298ead5b.mp4
einops
Flexible and powerful tensor operations for readable and reliable code.
Supports numpy, pytorch, tensorflow, jax, and others.
Recent updates:
- 0.8.0: tinygrad backend added, small fixes
- 0.7.0: no-hassle torch.compile, support of array api standard and more
- 10'000 🎉: github reports that more than 10k projects use einops
- einops 0.6.1: paddle backend added
- einops 0.6 introduces packing and unpacking
- einops 0.5: einsum is now a part of einops
- Einops paper is accepted for oral presentation at ICLR 2022 (yes, it's worth reading). Talk recordings are available
Previous updates
- flax and oneflow backends added
- torch.jit.script is supported for pytorch layers
- powerful EinMix added to einops. [Einmix tutorial notebook](https://github.com/arogozhnikov/einops/blob/main/docs/3-einmix-layer.ipynb)
Tweets
- "In case you need convincing arguments for setting aside time to learn about einsum and einops..." (Tim Rocktäschel)
- "Writing better code with PyTorch and einops 👌" (Andrej Karpathy)
- "Slowly but surely, einops is seeping into every nook and cranny of my code. If you find yourself shuffling around bazillion dimensional tensors, this might change your life" (Nasim Rahaman)
Contents
- Installation
- Documentation
- Tutorial
- API micro-reference
- Why use einops
- Supported frameworks
- Citing
- Repository and discussions
Installation
Plain and simple:
```bash
pip install einops
```
Tutorials
Tutorials are the most convenient way to see einops in action
- part 1: einops fundamentals
- part 2: einops for deep learning
- part 3: packing and unpacking
- part 4: improve pytorch code with einops
Kapil Sachdeva recorded a small intro to einops.
API
einops has a minimalistic yet powerful API.
Three core operations are provided (the einops tutorial shows that these cover stacking, reshape, transposition, squeeze/unsqueeze, repeat, tile, concatenate, view and numerous reductions):
```python
from einops import rearrange, reduce, repeat

# rearrange elements according to the pattern
output_tensor = rearrange(input_tensor, 't b c -> b c t')
# combine rearrangement and reduction
output_tensor = reduce(input_tensor, 'b c (h h2) (w w2) -> b h w c', 'mean', h2=2, w2=2)
# copy along a new axis
output_tensor = repeat(input_tensor, 'h w -> h w c', c=3)
```
Later additions to the family are pack and unpack functions (better than stack/split/concatenate):
```python
from einops import pack, unpack

# pack and unpack allow reversibly 'packing' multiple tensors into one.
# Packed tensors may be of different dimensionality:
packed, ps = pack([class_token_bc, image_tokens_bhwc, text_tokens_btc], 'b * c')
class_emb_bc, image_emb_bhwc, text_emb_btc = unpack(transformer(packed), ps, 'b * c')
```
Finally, einops provides einsum with support for multi-lettered axis names:
```python
from einops import einsum, pack, unpack

# einsum is like ... einsum, a generic and flexible dot-product,
# but 1) axes can be multi-lettered, 2) the pattern goes last,
# 3) it works with multiple frameworks
C = einsum(A, B, 'b t1 head c, b t2 head c -> b head t1 t2')
```
EinMix
EinMix is a generic linear layer, perfect for MLP Mixers and similar architectures.
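As a rough sketch of how EinMix reads in practice (the sizes and variable names below are invented for illustration, not taken from the einops docs): the token-mixing step of an MLP-Mixer can be written as one layer that mixes along the sequence axis while leaving channels untouched.

```python
# A minimal sketch with made-up sizes: MLP-Mixer-style token mixing.
# The pattern mixes along the sequence axis (n -> n2) and keeps
# batch b and channels c independent.
import torch
from einops.layers.torch import EinMix as Mix

mix_tokens = Mix('b n c -> b n2 c', weight_shape='n n2', bias_shape='n2',
                 n=196, n2=196)
y = mix_tokens(torch.randn(32, 196, 768))  # shape: (32, 196, 768)
```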
Layers
Einops provides layers (einops keeps a separate version for each framework) that reflect corresponding functions
```python
from einops.layers.torch import Rearrange, Reduce
from einops.layers.tensorflow import Rearrange, Reduce
from einops.layers.flax import Rearrange, Reduce
from einops.layers.paddle import Rearrange, Reduce
```
Example of using layers within a pytorch model
Example given for pytorch, but code in other frameworks is almost identical:

```python
from torch.nn import Sequential, Conv2d, MaxPool2d, Linear, ReLU
from einops.layers.torch import Rearrange

model = Sequential(
    ...,
    Conv2d(6, 16, kernel_size=5),
    MaxPool2d(kernel_size=2),
    # flattening without need to write forward
    Rearrange('b c h w -> b (c h w)'),
    Linear(16 * 5 * 5, 120),
    ReLU(),
    Linear(120, 10),
)
```

No more flatten needed! Additionally, torch layers such as these are script-able and compile-able. Operations [are torch.compile-able](https://github.com/arogozhnikov/einops/wiki/Using-torch.compile-with-einops), but not script-able due to limitations of torch.jit.script.
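To illustrate the compile note above, here is a short sketch (the function name and shapes are invented for the example; it assumes torch >= 2.0 and einops >= 0.7): einops operations can sit inside a torch.compile-d function as-is.

```python
# A short sketch, assuming torch >= 2.0: einops ops under torch.compile.
import torch
from einops import rearrange

@torch.compile
def to_tokens(x):  # x: (batch, channels, height, width)
    return rearrange(x, 'b c h w -> b (h w) c')

tokens = to_tokens(torch.randn(2, 3, 4, 4))  # shape: (2, 16, 3)
```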
Naming
einops stands for Einstein-Inspired Notation for operations
(though "Einstein operations" is more attractive and easier to remember).
Notation was loosely inspired by Einstein summation (in particular by numpy.einsum operation).
Why use einops notation?!
Semantic information (being verbose in expectations)
```python
y = x.view(x.shape[0], -1)
y = rearrange(x, 'b c h w -> b (c h w)')
```
While these two lines are doing the same job in some context,
the second one provides information about the input and output.
In other words, einops focuses on interface: what is the input and output, not how the output is computed.
The next operation looks similar:
```python
y = rearrange(x, 'time c h w -> time (c h w)')
```
but it gives the reader a hint:
this is not an independent batch of images we are processing,
but rather a sequence (video).
Semantic information makes the code easier to read and maintain.
Convenient checks
Reconsider the same example:
```python
y = x.view(x.shape[0], -1)  # x: (batch, 256, 19, 19)
y = rearrange(x, 'b c h w -> b (c h w)')
```
The second line checks that the input has four dimensions,
but you can also specify particular dimensions.
That's opposed to just writing comments about shapes: comments don't prevent mistakes, aren't tested, and without code review tend to become outdated.
```python
y = x.view(x.shape[0], -1)  # x: (batch, 256, 19, 19)
y = rearrange(x, 'b c h w -> b (c h w)', c=256, h=19, w=19)
```
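To make the check concrete, a small sketch (array contents are arbitrary): a wrong expectation fails loudly with einops.EinopsError instead of silently reshaping to the wrong layout.

```python
# A quick sketch: declared axis sizes are verified at call time.
import numpy as np
from einops import rearrange

x = np.zeros((32, 256, 19, 19))
rearrange(x, 'b c h w -> b (c h w)', c=256, h=19, w=19)  # ok: (32, 92416)
rearrange(x, 'b c h w -> b (c h w)', c=512)  # raises einops.EinopsError
```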
Result is strictly determined
Below we have at least two ways to define the depth-to-space operation:

```python
# depth-to-space
rearrange(x, 'b c (h h2) (w w2) -> b (c h2 w2) h w', h2=2, w2=2)
rearrange(x, 'b c (h h2) (w w2) -> b (h2 w2 c) h w', h2=2, w2=2)
```

There are at least four more ways to do it. Which one is used by the framework?
Such details are usually glossed over, since most of the time the choice makes no difference; but it can matter (e.g. if you use grouped convolutions in the next stage), and you'd like to specify this in your code.
Uniformity
```python
reduce(x, 'b c (x dx) -> b c x', 'max', dx=2)
reduce(x, 'b c (x dx) (y dy) -> b c x y', 'max', dx=2, dy=3)
reduce(x, 'b c (x dx) (y dy) (z dz) -> b c x y z', 'max', dx=2, dy=3, dz=4)
```
These examples demonstrate that we don't need separate operations for 1d/2d/3d pooling; they are all defined in a uniform way.
Space-to-depth and depth-to-space are defined in many frameworks, but how about width-to-height? Here you go:
```python
rearrange(x, 'b c h (w w2) -> b c (h w2) w', w2=2)
```
Framework independent behavior
Even simple functions are defined differently by different frameworks
```python
y = x.flatten()  # or flatten(x)
```
Suppose x's shape was (3, 4, 5), then y has shape ...
- numpy, pytorch, cupy, chainer, jax: (60,)
- keras, tensorflow.layers, gluon: (3, 20)
einops works the same way in all frameworks.
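A tiny sketch of that claim (numpy shown; a torch or jax array behaves identically): the explicit pattern removes the ambiguity that flatten leaves to the framework.

```python
# A tiny sketch: one pattern, one result, whatever the backend.
import numpy as np
from einops import rearrange

x = np.zeros((3, 4, 5))
y = rearrange(x, 'a b c -> (a b c)')  # shape (60,) in every framework
```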
Independence of framework terminology
Example: tile vs repeat causes lots of confusion. To copy an image along its width:
```python
np.tile(image, (1, 2))  # in numpy
image.repeat(1, 2)      # pytorch's repeat ~ numpy's tile
```
With einops you don't need to decipher which axis was repeated:
```python
repeat(image, 'h w -> h (tile w)', tile=2)  # in numpy
repeat(image, 'h w -> h (tile w)', tile=2)  # in pytorch
repeat(image, 'h w -> h (tile w)', tile=2)  # in tf
repeat(image, 'h w -> h (tile w)', tile=2)  # in jax
repeat(image, 'h w -> h (tile w)', tile=2)  # in cupy
... (etc.)
```
Testimonials provide users' perspective on the same question.
Supported frameworks
Einops works with ...
- numpy
- pytorch
- tensorflow
- jax
- cupy
- flax (community)
- paddle (community)
- oneflow (community)
- tinygrad (community)
- pytensor (community)
Additionally, einops can be used with any framework that supports the Python array API standard, which includes
- numpy >= 2.0
- MLX # yes, einops works with apple's framework
- pydata/sparse >= 0.15 # and works with sparse tensors
- cubed # and with distributed tensors too
- quantco/ndonnx
- recent releases of jax and cupy.
- dask is supported via array-api-compat
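As a minimal sketch of what that means in practice (numpy >= 2.0 shown here, since it is on the list above), the same call works unchanged across any of these array-API libraries:

```python
# A minimal sketch: the same reduce call works for any array-API-compliant
# library (numpy >= 2.0 here as a stand-in for MLX, ndonnx, cubed, ...).
import numpy as np  # assumes numpy >= 2.0
from einops import reduce

x = np.arange(24).reshape(2, 3, 4)
y = reduce(x, 'b h w -> b h', 'max')  # shape (2, 3)
```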
Development
A devcontainer is provided; this environment can be used locally, on your server, or within GitHub Codespaces. To start with devcontainers in VS Code, clone the repo and click 'Reopen in Devcontainer'.
Starting from einops 0.8.1, tests are distributed as part of the package.
```bash
pip install einops pytest
python -m einops.tests.run_tests numpy pytorch jax --pip-install
```
`numpy pytorch jax` is an example; any subset of testable frameworks can be provided.
Every framework is tested against numpy, so numpy is a requirement for tests.
Specifying --pip-install will install requirements in the current virtualenv; omit it if dependencies are already installed.
To build/test docs:
```bash
hatch run docs:serve  # Serving on http://localhost:8000/
```
Citing einops
Please use the following bibtex record
```text
@inproceedings{
    rogozhnikov2022einops,
    title={Einops: Clear and Reliable Tensor Manipulations with Einstein-like Notation},
    author={Alex Rogozhnikov},
    booktitle={International Conference on Learning Representations},
    year={2022},
    url={https://openreview.net/forum?id=oapKSVM2bcj}
}
```
Supported python versions
einops works with python 3.9 or later.
Owner
- Name: Alex Rogozhnikov
- Login: arogozhnikov
- Kind: user
- Location: San Francisco
- Company: Aperture Science
- Website: https://arogozhnikov.github.io
- Repositories: 9
- Profile: https://github.com/arogozhnikov
- Bio: ML + Science, einops, scientific tools
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
title: einops
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Alex
    family-names: Rogozhnikov
repository-code: 'https://github.com/arogozhnikov/einops'
abstract: >-
  Flexible and powerful tensor operations for readable and
  reliable code (for pytorch, jax, TF and others)
license: MIT
preferred-citation:
  type: article
  authors:
    - given-names: Alex
      family-names: Rogozhnikov
  journal: "International Conference on Learning Representations"
  title: "Einops: Clear and Reliable Tensor Manipulations with Einstein-like Notation"
  year: 2022
  url: https://openreview.net/forum?id=oapKSVM2bcj
```
GitHub Events
Total
- Create event: 15
- Release event: 3
- Issues event: 30
- Watch event: 655
- Delete event: 14
- Issue comment event: 70
- Push event: 51
- Pull request review comment event: 12
- Pull request review event: 14
- Pull request event: 26
- Fork event: 32
Last Year
- Create event: 15
- Release event: 3
- Issues event: 30
- Watch event: 655
- Delete event: 14
- Issue comment event: 70
- Push event: 51
- Pull request review comment event: 12
- Pull request review event: 14
- Pull request event: 26
- Fork event: 32
Committers
Last synced: 9 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Alex Rogozhnikov | i****m@g****m | 543 |
| MilesCranmer | m****r@g****m | 39 |
| Cristian Garcia | c****8@g****m | 9 |
| Dmitriy (Dima) Serdyuk | d****k@g****m | 3 |
| Ldpe2G | l****g@g****m | 3 |
| zhouwei25 | z****5@b****m | 3 |
| 59rentainhe | 5****7@q****m | 2 |
| Adam Kowalski | a****k@g****m | 2 |
| Davi Silva | b****s@g****m | 2 |
| Olle Månsson | 3****a | 2 |
| Ștefan Săftescu | s****u@g****m | 1 |
| eadadi | 3****i | 1 |
| Vladyslav Khaitov | V****a | 1 |
| Robin Kahlow | r****w@w****e | 1 |
| Atwam | c****d@h****m | 1 |
| Christoph Boeddeker | b****r | 1 |
| Daniel Havir | h****l@g****m | 1 |
| David Rubinstein | d****n | 1 |
| Ethan Pronovost | e****1@g****m | 1 |
| Garrett Mooney | 4****y | 1 |
| HydrogenSulfate | 4****1@q****m | 1 |
| Jerry Wu | n****l@g****m | 1 |
| Lilian Besson | N****n | 1 |
| Luke Carlson | j****n@g****m | 1 |
| Manan Shah | m****h@h****m | 1 |
| Maxwell Clarke | m****x@g****m | 1 |
| NelsonGon | g****o@h****m | 1 |
| Phil Wang | l****s@g****m | 1 |
| Piotr Żelasko | p****r@g****m | 1 |
| Ravi Kalia | r****a@g****m | 1 |
| and 1 more... | ||
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 135
- Total pull requests: 100
- Average time to close issues: 9 months
- Average time to close pull requests: about 2 months
- Total issue authors: 107
- Total pull request authors: 33
- Average comments per issue: 2.94
- Average comments per pull request: 1.16
- Merged pull requests: 80
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 28
- Pull requests: 37
- Average time to close issues: 20 days
- Average time to close pull requests: 5 days
- Issue authors: 22
- Pull request authors: 8
- Average comments per issue: 1.54
- Average comments per pull request: 0.65
- Merged pull requests: 34
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- arogozhnikov (14)
- yurivict (4)
- shoyer (3)
- MilesCranmer (2)
- fzimmermann89 (2)
- nicolas-dufour (2)
- lucidrains (2)
- befelix (2)
- lkhphuc (2)
- alisterburt (2)
- ahatamiz (2)
- elenishor (1)
- jdavidls (1)
- cleong110 (1)
- rehno-lindeque (1)
Pull Request Authors
- arogozhnikov (76)
- ifsheldon (2)
- luke-carlson (2)
- pzread (2)
- project-delphi (2)
- lvyufeng (2)
- HydrogenSulfate (2)
- jzhang533 (2)
- VladKha (2)
- MilesCranmer (2)
- jm12138 (2)
- tran-khoa (2)
- ricardoV94 (2)
- blueridanus (2)
- eadadi (2)
Packages
- Total packages: 4
- Total downloads: pypi 11,892,531 last-month
- Total docker downloads: 378,307
- Total dependent packages: 888 (may contain duplicates)
- Total dependent repositories: 14,524 (may contain duplicates)
- Total versions: 39
- Total maintainers: 2
pypi.org: einops
A new flavour of deep learning operations
- Homepage: https://github.com/arogozhnikov/einops
- Documentation: https://einops.readthedocs.io/
- License: MIT
- Latest release: 0.8.1 (published about 1 year ago)
- Maintainers: 1
proxy.golang.org: github.com/arogozhnikov/einops
- Documentation: https://pkg.go.dev/github.com/arogozhnikov/einops#section-documentation
- License: mit
- Latest release: v0.8.1 (published about 1 year ago)
spack.io: py-einops
Flexible and powerful tensor operations for readable and reliable code. Supports numpy, pytorch, tensorflow, and others.
- Homepage: https://github.com/arogozhnikov/einops
- License: []
- Latest release: 0.8.1 (published about 1 year ago)
- Maintainers: 1
conda-forge.org: einops
Flexible and powerful tensor operations for readable and reliable code. Supports numpy, pytorch, tensorflow, and others.
- Homepage: https://github.com/arogozhnikov/einops
- License: MIT
- Latest release: 0.4.1 (published almost 4 years ago)
Dependencies
- actions/checkout v2 composite
- actions/setup-python v2 composite
- actions/checkout v3 composite
- actions/setup-python v4 composite