einops

Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others)

https://github.com/arogozhnikov/einops

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (13.7%) to scientific vocabulary

Keywords

chainer cupy deep-learning einops jax keras numpy pytorch tensor tensorflow
Last synced: 6 months ago

Repository

Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others)

Basic Info
  • Host: GitHub
  • Owner: arogozhnikov
  • License: mit
  • Language: Python
  • Default Branch: main
  • Homepage: https://einops.rocks
  • Size: 3.11 MB
Statistics
  • Stars: 9,104
  • Watchers: 65
  • Forks: 377
  • Open Issues: 36
  • Releases: 16
Topics
chainer cupy deep-learning einops jax keras numpy pytorch tensor tensorflow
Created over 7 years ago · Last pushed 7 months ago
Metadata Files
Readme · License · Citation

README.md

https://user-images.githubusercontent.com/6318811/177030658-66f0eb5d-e136-44d8-99c9-86ae298ead5b.mp4

einops


Flexible and powerful tensor operations for readable and reliable code.
Supports numpy, pytorch, tensorflow, jax, and others.

Recent updates:

  • 0.8.0: tinygrad backend added, small fixes
  • 0.7.0: no-hassle torch.compile, support of array api standard and more
  • 10'000🎉: github reports that more than 10k projects use einops
  • einops 0.6.1: paddle backend added
  • einops 0.6 introduces packing and unpacking
  • einops 0.5: einsum is now a part of einops
  • Einops paper is accepted for oral presentation at ICLR 2022 (yes, it is worth reading). Talk recordings are available
Previous updates:
  • flax and oneflow backends added
  • torch.jit.script is supported for pytorch layers
  • powerful EinMix added to einops. [Einmix tutorial notebook](https://github.com/arogozhnikov/einops/blob/main/docs/3-einmix-layer.ipynb)

Tweets

"In case you need convincing arguments for setting aside time to learn about einsum and einops..." (Tim Rocktäschel)

"Writing better code with PyTorch and einops 👌" (Andrej Karpathy)

"Slowly but surely, einops is seeping into every nook and cranny of my code. If you find yourself shuffling around bazillion dimensional tensors, this might change your life" (Nasim Rahaman)

More testimonials

Installation

Plain and simple:

```bash
pip install einops
```

Tutorials

Tutorials are the most convenient way to see einops in action.

Kapil Sachdeva recorded a small intro to einops.

API

einops has a minimalistic yet powerful API.

Three core operations are provided (the einops tutorial shows how these cover stacking, reshape, transposition, squeeze/unsqueeze, repeat, tile, concatenate, view and numerous reductions):

```python
from einops import rearrange, reduce, repeat

# rearrange elements according to the pattern
output_tensor = rearrange(input_tensor, 't b c -> b c t')

# combine rearrangement and reduction
output_tensor = reduce(input_tensor, 'b c (h h2) (w w2) -> b h w c', 'mean', h2=2, w2=2)

# copy along a new axis
output_tensor = repeat(input_tensor, 'h w -> h w c', c=3)
```
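For orientation, a hedged sketch of how a few of those classic operations translate into the three primitives (patterns below are chosen purely for illustration):

```python
# sketch: classic tensor ops expressed with the three einops primitives
# (x is any supported tensor of shape (b, c, h, w); patterns are illustrative)
from einops import rearrange, reduce, repeat

rearrange(x, 'b c h w -> b h w c')     # transpose / permute
rearrange(x, 'b c h w -> b (c h w)')   # flatten / view
rearrange(x, 'b c h w -> b c h w 1')   # unsqueeze: add a unit axis
reduce(x, 'b c h w -> b c', 'mean')    # global average pooling
repeat(x, 'b c h w -> b c (2 h) w')    # tile twice along height
```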

Later additions to the family are pack and unpack functions (better than stack/split/concatenate):

```python
from einops import pack, unpack

# pack and unpack allow reversibly 'packing' multiple tensors into one.
# Packed tensors may be of different dimensionality:
packed, ps = pack([class_token_bc, image_tokens_bhwc, text_tokens_btc], 'b * c')
class_emb_bc, image_emb_bhwc, text_emb_btc = unpack(transformer(packed), ps, 'b * c')
```
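To make the round trip concrete, here is a minimal self-contained sketch with numpy (shapes are invented for the example):

```python
import numpy as np
from einops import pack, unpack

# a 2d and a 3d tensor sharing the leading 'b' and trailing 'c' axes
cls = np.zeros((3, 5))     # b c
img = np.zeros((3, 7, 5))  # b h c

packed, ps = pack([cls, img], 'b * c')    # packed.shape == (3, 8, 5)
cls2, img2 = unpack(packed, ps, 'b * c')  # shapes (3, 5) and (3, 7, 5) restored
```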

Finally, einops provides einsum with support for multi-letter axis names:

```python
from einops import einsum, pack, unpack

# einsum is like ... einsum, generic and flexible dot-product
# but 1) axes can be multi-lettered 2) pattern goes last 3) works with multiple frameworks
C = einsum(A, B, 'b t1 head c, b t2 head c -> b head t1 t2')
```

EinMix

EinMix is a generic linear layer, perfect for MLP Mixers and similar architectures.
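A hedged sketch of what that looks like in pytorch (the axis names and sizes below are illustrative, not from the source):

```python
# EinMix: a dense layer whose input/output layout is spelled out in the pattern
from einops.layers.torch import EinMix as Mix

# token-mixing step of an MLP-Mixer: mixes across tokens, independently per channel
token_mixer = Mix('b tokens ch -> b tokens_out ch',
                  weight_shape='tokens tokens_out',
                  tokens=196, tokens_out=196)

# an ordinary channel-wise linear layer is just as terse
channel_mixer = Mix('b t ch -> b t ch_out',
                    weight_shape='ch ch_out', bias_shape='ch_out',
                    ch=256, ch_out=512)
```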

Layers

Einops provides layers (einops keeps a separate version for each framework) that reflect the corresponding functions:

```python
from einops.layers.torch import Rearrange, Reduce
from einops.layers.tensorflow import Rearrange, Reduce
from einops.layers.flax import Rearrange, Reduce
from einops.layers.paddle import Rearrange, Reduce
```

Example of using layers within a pytorch model (code in other frameworks is almost identical):

```python
from torch.nn import Sequential, Conv2d, MaxPool2d, Linear, ReLU
from einops.layers.torch import Rearrange

model = Sequential(
    ...,
    Conv2d(6, 16, kernel_size=5),
    MaxPool2d(kernel_size=2),
    # flattening without need to write forward
    Rearrange('b c h w -> b (c h w)'),
    Linear(16 * 5 * 5, 120),
    ReLU(),
    Linear(120, 10),
)
```

No more flatten needed! Additionally, torch layers like these are script-able and compile-able. Operations [are torch.compile-able](https://github.com/arogozhnikov/einops/wiki/Using-torch.compile-with-einops), but not script-able due to limitations of torch.jit.script.

Naming

einops stands for Einstein-Inspired Notation for operations (though "Einstein operations" is more attractive and easier to remember).

Notation was loosely inspired by Einstein summation (in particular by the numpy.einsum operation).

Why use einops notation?!

Semantic information (being verbose in expectations)

```python
y = x.view(x.shape[0], -1)
y = rearrange(x, 'b c h w -> b (c h w)')
```

While these two lines do the same job in some context, the second one provides information about the input and output. In other words, einops focuses on the interface: what the input and output are, not how the output is computed.

The next operation looks similar:

```python
y = rearrange(x, 'time c h w -> time (c h w)')
```

but it gives the reader a hint: this is not an independent batch of images we are processing, but rather a sequence (video).

Semantic information makes the code easier to read and maintain.

Convenient checks

Reconsider the same example:

```python
y = x.view(x.shape[0], -1)  # x: (batch, 256, 19, 19)
y = rearrange(x, 'b c h w -> b (c h w)')
```

The second line checks that the input has four dimensions, but you can also specify particular dimensions. That's opposed to just writing comments about shapes: comments don't prevent mistakes, are not tested, and without code review tend to become outdated.

```python
y = x.view(x.shape[0], -1)  # x: (batch, 256, 19, 19)
y = rearrange(x, 'b c h w -> b (c h w)', c=256, h=19, w=19)
```
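As a hedged illustration of what such a check catches (array sizes here are invented for the example):

```python
import numpy as np
from einops import rearrange, EinopsError

x = np.zeros((32, 256, 19, 19))
y = rearrange(x, 'b c h w -> b (c h w)', c=256, h=19, w=19)  # shapes agree: fine

try:
    rearrange(x, 'b c h w -> b (c h w)', c=128)  # wrong channel count
except EinopsError as e:
    print(e)  # the error message names the mismatched axis
```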

Result is strictly determined

Below we have at least two ways to define the depth-to-space operation:

```python
# depth-to-space
rearrange(x, 'b c (h h2) (w w2) -> b (c h2 w2) h w', h2=2, w2=2)
rearrange(x, 'b c (h h2) (w w2) -> b (h2 w2 c) h w', h2=2, w2=2)
```

There are at least four more ways to do it. Which one is used by the framework?

Such details are usually ignored, since most of the time they make no difference; but they can matter a great deal (e.g. if you use grouped convolutions in the next stage), and you'd like to specify this in your code.

Uniformity

```python
reduce(x, 'b c (x dx) -> b c x', 'max', dx=2)
reduce(x, 'b c (x dx) (y dy) -> b c x y', 'max', dx=2, dy=3)
reduce(x, 'b c (x dx) (y dy) (z dz) -> b c x y z', 'max', dx=2, dy=3, dz=4)
```

These examples demonstrate that we don't use separate operations for 1d/2d/3d pooling; they are all defined in a uniform way.

Space-to-depth and depth-to-space are defined in many frameworks, but how about width-to-height? Here you go:

```python
rearrange(x, 'b c h (w w2) -> b c (h w2) w', w2=2)
```

Framework independent behavior

Even simple functions are defined differently by different frameworks:

```python
y = x.flatten()  # or flatten(x)
```

Suppose x's shape was (3, 4, 5); then y has shape ...

  • numpy, pytorch, cupy, chainer, jax: (60,)
  • keras, tensorflow.layers, gluon: (3, 20)

einops works the same way in all frameworks.
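A hedged sketch of the unambiguous einops spellings of both results (axis names chosen for illustration):

```python
# with einops, the result shape is explicit in the pattern, whatever the framework
from einops import rearrange

y = rearrange(x, 'a b c -> (a b c)')  # always (60,)
y = rearrange(x, 'a b c -> a (b c)')  # always (3, 20)
```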

Independence of framework terminology

Example: tile vs repeat causes lots of confusion. To copy an image along its width:

```python
np.tile(image, (1, 2))  # in numpy
image.repeat(1, 2)      # pytorch's repeat ~ numpy's tile
```

With einops you don't need to decipher which axis was repeated:

```python
repeat(image, 'h w -> h (tile w)', tile=2)  # in numpy
repeat(image, 'h w -> h (tile w)', tile=2)  # in pytorch
repeat(image, 'h w -> h (tile w)', tile=2)  # in tf
repeat(image, 'h w -> h (tile w)', tile=2)  # in jax
repeat(image, 'h w -> h (tile w)', tile=2)  # in cupy
... (etc.)
```

Testimonials provide users' perspective on the same question.

Supported frameworks

Einops works with numpy, pytorch, tensorflow, jax, cupy, chainer, tinygrad, flax, paddle, and oneflow.

Additionally, einops can be used with any framework that supports the Python array API standard.

Development

A devcontainer is provided; this environment can be used locally, on your server, or within GitHub Codespaces. To start with devcontainers in VS Code, clone the repo and click 'Reopen in Devcontainer'.

Starting from einops 0.8.1, einops distributes tests as part of the package.

```bash
pip install einops pytest
python -m einops.tests.run_tests numpy pytorch jax --pip-install
```

`numpy pytorch jax` is an example; any subset of testable frameworks can be provided. Every framework is tested against numpy, so numpy is a requirement for tests.

Specifying --pip-install will install requirements in the current virtualenv; it should be omitted if dependencies are already installed.

To build/test docs:

```bash
hatch run docs:serve  # Serving on http://localhost:8000/
```

Citing einops

Please use the following BibTeX record:

```text
@inproceedings{rogozhnikov2022einops,
    title={Einops: Clear and Reliable Tensor Manipulations with Einstein-like Notation},
    author={Alex Rogozhnikov},
    booktitle={International Conference on Learning Representations},
    year={2022},
    url={https://openreview.net/forum?id=oapKSVM2bcj}
}
```

Supported python versions

einops works with python 3.9 or later.

Owner

  • Name: Alex Rogozhnikov
  • Login: arogozhnikov
  • Kind: user
  • Location: San Francisco
  • Company: Aperture Science

ML + Science, einops, scientific tools

Citation (CITATION.cff)

cff-version: 1.2.0
title: einops
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Alex
    family-names: Rogozhnikov
repository-code: 'https://github.com/arogozhnikov/einops'
abstract: >-
  Flexible and powerful tensor operations for readable and reliable code (for pytorch, jax, TF and others)
license: MIT
preferred-citation:
  type: article
  authors:
  - given-names: Alex
    family-names: Rogozhnikov
  journal: "International Conference on Learning Representations"
  title: "Einops: Clear and Reliable Tensor Manipulations with Einstein-like Notation"
  year: 2022
  url: https://openreview.net/forum?id=oapKSVM2bcj

GitHub Events

Total
  • Create event: 15
  • Release event: 3
  • Issues event: 30
  • Watch event: 655
  • Delete event: 14
  • Issue comment event: 70
  • Push event: 51
  • Pull request review comment event: 12
  • Pull request review event: 14
  • Pull request event: 26
  • Fork event: 32
Last Year
  • Create event: 15
  • Release event: 3
  • Issues event: 30
  • Watch event: 655
  • Delete event: 14
  • Issue comment event: 70
  • Push event: 51
  • Pull request review comment event: 12
  • Pull request review event: 14
  • Pull request event: 26
  • Fork event: 32

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 629
  • Total Committers: 31
  • Avg Commits per committer: 20.29
  • Development Distribution Score (DDS): 0.137
Past Year
  • Commits: 35
  • Committers: 7
  • Avg Commits per committer: 5.0
  • Development Distribution Score (DDS): 0.171
Top Committers
Name Email Commits
Alex Rogozhnikov i****m@g****m 543
MilesCranmer m****r@g****m 39
Cristian Garcia c****8@g****m 9
Dmitriy (Dima) Serdyuk d****k@g****m 3
Ldpe2G l****g@g****m 3
zhouwei25 z****5@b****m 3
59rentainhe 5****7@q****m 2
Adam Kowalski a****k@g****m 2
Davi Silva b****s@g****m 2
Olle Månsson 3****a 2
Ștefan Săftescu s****u@g****m 1
eadadi 3****i 1
Vladyslav Khaitov V****a 1
Robin Kahlow r****w@w****e 1
Atwam c****d@h****m 1
Christoph Boeddeker b****r 1
Daniel Havir h****l@g****m 1
David Rubinstein d****n 1
Ethan Pronovost e****1@g****m 1
Garrett Mooney 4****y 1
HydrogenSulfate 4****1@q****m 1
Jerry Wu n****l@g****m 1
Lilian Besson N****n 1
Luke Carlson j****n@g****m 1
Manan Shah m****h@h****m 1
Maxwell Clarke m****x@g****m 1
NelsonGon g****o@h****m 1
Phil Wang l****s@g****m 1
Piotr Żelasko p****r@g****m 1
Ravi Kalia r****a@g****m 1
and 1 more...

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 135
  • Total pull requests: 100
  • Average time to close issues: 9 months
  • Average time to close pull requests: about 2 months
  • Total issue authors: 107
  • Total pull request authors: 33
  • Average comments per issue: 2.94
  • Average comments per pull request: 1.16
  • Merged pull requests: 80
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 28
  • Pull requests: 37
  • Average time to close issues: 20 days
  • Average time to close pull requests: 5 days
  • Issue authors: 22
  • Pull request authors: 8
  • Average comments per issue: 1.54
  • Average comments per pull request: 0.65
  • Merged pull requests: 34
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • arogozhnikov (14)
  • yurivict (4)
  • shoyer (3)
  • MilesCranmer (2)
  • fzimmermann89 (2)
  • nicolas-dufour (2)
  • lucidrains (2)
  • befelix (2)
  • lkhphuc (2)
  • alisterburt (2)
  • ahatamiz (2)
  • elenishor (1)
  • jdavidls (1)
  • cleong110 (1)
  • rehno-lindeque (1)
Pull Request Authors
  • arogozhnikov (76)
  • ifsheldon (2)
  • luke-carlson (2)
  • pzread (2)
  • project-delphi (2)
  • lvyufeng (2)
  • HydrogenSulfate (2)
  • jzhang533 (2)
  • VladKha (2)
  • MilesCranmer (2)
  • jm12138 (2)
  • tran-khoa (2)
  • ricardoV94 (2)
  • blueridanus (2)
  • eadadi (2)
Top Labels
Issue Labels
feature suggestion (40) bug (30) question (9) enhancement (8) backend bug (5) wontfix (3) context required (1) good first issue (1)

Packages

  • Total packages: 4
  • Total downloads:
    • pypi 11,892,531 last-month
  • Total docker downloads: 378,307
  • Total dependent packages: 888
    (may contain duplicates)
  • Total dependent repositories: 14,524
    (may contain duplicates)
  • Total versions: 39
  • Total maintainers: 2
pypi.org: einops

A new flavour of deep learning operations

  • Versions: 16
  • Dependent Packages: 871
  • Dependent Repositories: 14,487
  • Downloads: 11,892,531 Last month
  • Docker Downloads: 378,307
Rankings
Dependent packages count: 0.0%
Dependent repos count: 0.1%
Downloads: 0.1%
Stargazers count: 0.3%
Average: 0.8%
Docker downloads count: 1.4%
Forks count: 2.9%
Maintainers (1)
Last synced: 6 months ago
proxy.golang.org: github.com/arogozhnikov/einops
  • Versions: 10
  • Dependent Packages: 0
  • Dependent Repositories: 1
Rankings
Stargazers count: 0.8%
Forks count: 1.7%
Average: 4.2%
Dependent repos count: 4.7%
Dependent packages count: 9.6%
Last synced: 7 months ago
spack.io: py-einops

Flexible and powerful tensor operations for readable and reliable code. Supports numpy, pytorch, tensorflow, and others.

  • Versions: 7
  • Dependent Packages: 6
  • Dependent Repositories: 0
Rankings
Dependent repos count: 0.0%
Stargazers count: 1.7%
Average: 4.8%
Forks count: 5.9%
Dependent packages count: 11.6%
Maintainers (1)
Last synced: 7 months ago
conda-forge.org: einops

Flexible and powerful tensor operations for readable and reliable code. Supports numpy, pytorch, tensorflow, and others.

  • Versions: 6
  • Dependent Packages: 11
  • Dependent Repositories: 36
Rankings
Stargazers count: 4.2%
Dependent packages count: 5.5%
Dependent repos count: 6.1%
Average: 6.5%
Forks count: 10.2%
Last synced: 7 months ago

Dependencies

.github/workflows/deploy_to_pypi.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
.github/workflows/run_tests.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
scripts/setup.py pypi
  • no *
pyproject.toml pypi