https://github.com/jorenham/optype
Opinionated typing package for precise type hints in Python
optype
Building blocks for precise & flexible type hints.
Installation
PyPI
Optype is available as optype on PyPI:
```shell
pip install optype
```
For optional NumPy support, install the `optype[numpy]` extra.
This ensures that the installed `numpy` and the required `numpy-typing-compat`
versions are compatible with each other.
```shell
pip install "optype[numpy]"
```
See the optype.numpy docs for more info.
Conda
Optype can also be installed with conda from the conda-forge channel:
```shell
conda install conda-forge::optype
```
If you want to use optype.numpy, you should instead install
optype-numpy:
```shell
conda install conda-forge::optype-numpy
```
Example
Let's say you're writing a `twice(x)` function that evaluates `2 * x`.
Implementing it is trivial, but what about the type annotations?
Because `twice(2) == 4`, `twice(3.14) == 6.28`, and `twice('I') == 'II'`, it
might seem like a good idea to type it as `twice[T](x: T) -> T: ...`.
However, that wouldn't cover cases such as `twice(True) == 2` or
`twice((42, True)) == (42, True, 42, True)`, where the input and output types
differ.
Moreover, twice should accept any type with a custom __rmul__ method
that accepts 2 as argument.
This is where optype comes in handy, which has single-method protocols for
all the builtin special methods.
For twice, we can use optype.CanRMul[T, R], which, as the name suggests,
is a protocol with (only) the def __rmul__(self, lhs: T) -> R: ... method.
With this, the `twice` function can be written as:
Python 3.11:

```python
from typing import Literal, TypeAlias, TypeVar

from optype import CanRMul

R = TypeVar("R")
Two: TypeAlias = Literal[2]
RMul2: TypeAlias = CanRMul[Two, R]


def twice(x: RMul2[R]) -> R:
    return 2 * x
```

Python 3.12+:

```python
from typing import Literal

from optype import CanRMul

type Two = Literal[2]
type RMul2[R] = CanRMul[Two, R]


def twice[R](x: RMul2[R]) -> R:
    return 2 * x
```
But what about types that implement __add__ but not __radd__?
In this case, we could return x * 2 as fallback (assuming commutativity).
Because the optype.Can* protocols are runtime-checkable, the revised
twice2 function can be compactly written as:
Python 3.11:

```python
from optype import CanMul

Mul2: TypeAlias = CanMul[Two, R]
CMul2: TypeAlias = Mul2[R] | RMul2[R]


def twice2(x: CMul2[R]) -> R:
    if isinstance(x, CanRMul):
        return 2 * x
    else:
        return x * 2
```

Python 3.12+:

```python
from optype import CanMul

type Mul2[R] = CanMul[Two, R]
type CMul2[R] = Mul2[R] | RMul2[R]


def twice2[R](x: CMul2[R]) -> R:
    if isinstance(x, CanRMul):
        return 2 * x
    else:
        return x * 2
```
See examples/twice.py for the full example.
Reference
The API of optype is flat; a single import optype as opt is all you need
(except for optype.numpy).
- `optype`
- `optype.copy`
- `optype.dataclasses`
- `optype.inspect`
- `optype.io`
- `optype.json`
- `optype.pickle`
- `optype.string`
- `optype.typing`
- `optype.dlpack`
- `optype.numpy`
optype
There are five flavors of things that live within `optype`:

- `optype.Just[T]` and its `optype.Just{Int,Float,Complex}` subtypes only
  accept instances of the type itself, while rejecting instances of strict
  subtypes. This can be used to e.g. work around the `float` and `complex`
  type promotions, annotate `object()` sentinels with `Just[object]`, or
  reject `bool` in functions that accept `int`.
- `optype.Can{}` types describe what can be *done* with a value. For
  instance, any `CanAbs[T]` type can be used as argument to the `abs()`
  builtin function, with return type `T`. Most `Can{}` protocols implement a
  single special method, whose name directly matches that of the type:
  `CanAbs` implements `__abs__`, `CanAdd` implements `__add__`, etc.
- `optype.Has{}` is the analogue of `Can{}`, but for special *attributes*:
  `HasName` has a `__name__` attribute, `HasDict` has a `__dict__`, etc.
- `optype.Does{}` types describe the type of *operators*: `DoesAbs` is the
  type of the `abs({})` builtin function, and `DoesPos` the type of the
  `+{}` prefix operator.
- `optype.do_{}` are the correctly-typed implementations of `Does{}`. For
  each `do_{}` there is a `Does{}`, and vice versa: `do_abs: DoesAbs` is the
  typed alias of `abs({})`, and `do_pos: DoesPos` is a typed version of
  `operator.pos`. The `optype.do_` operators are more complete than
  `operator`'s, have runtime-accessible type annotations, and have names you
  don't need to know by heart.
The reference docs are structured as follows:
All typing protocols here live in the root optype namespace.
They are runtime-checkable so that you can do e.g.
isinstance('snail', optype.CanAdd), in case you want to check whether
snail implements __add__.
Unlike `collections.abc`, optype's protocols aren't abstract base classes,
i.e. they don't extend `abc.ABC`, only `typing.Protocol`.
This allows the optype protocols to be used as building blocks for .pyi
type stubs.
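Conceptually, each of these protocols is just a runtime-checkable `typing.Protocol` with a single member. A minimal stdlib-only sketch of the pattern (`CanRMulSketch` is a hypothetical stand-in, not optype's actual implementation):

```python
from typing import Protocol, TypeVar, runtime_checkable

T_contra = TypeVar("T_contra", contravariant=True)
R_co = TypeVar("R_co", covariant=True)


@runtime_checkable
class CanRMulSketch(Protocol[T_contra, R_co]):
    """A single-method structural type, mirroring the shape of a Can* protocol."""

    def __rmul__(self, lhs: T_contra, /) -> R_co: ...


# str implements __rmul__ (e.g. 2 * "ab" == "abab"), so the structural
# runtime check succeeds; a plain object() has no __rmul__, so it fails.
print(isinstance("ab", CanRMulSketch))      # True
print(isinstance(object(), CanRMulSketch))  # False
```

Because the protocol carries no `abc.ABC` machinery, `isinstance` simply checks for the presence of the single method.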
Just
Just is an invariant type "wrapper", where Just[T] only accepts instances of T,
and rejects instances of any strict subtypes of T.
Note that e.g. `Literal[""]` and `LiteralString` are not strict `str`
subtypes, and are therefore assignable to `Just[str]`, but instances of
`class S(str): ...` are not assignable to `Just[str]`.
Disallow passing bool as int:
```py
import optype as op


def assert_int(x: op.Just[int]) -> int:
    assert type(x) is int
    return x


assert_int(42)     # ok
assert_int(False)  # rejected
```
Annotating a sentinel:
```py
import optype as op

_DEFAULT = object()


def intmap(
    value: int,
    # same as dict[int, int] | op.Just[object]
    mapping: dict[int, int] | op.JustObject = _DEFAULT,
    /,
) -> int:
    # same as type(mapping) is object
    if isinstance(mapping, op.JustObject):
        return value
    return mapping[value]


intmap(1)                 # ok
intmap(1, {1: 42})        # ok
intmap(1, "some object")  # rejected
```
> [!TIP]
> The `Just{Bytes,Int,Float,Complex,Date,Object}` protocols are
> runtime-checkable, so that `isinstance(42, JustInt) is True` and
> `isinstance(bool(), JustInt) is False`. This is implemented through
> metaclasses, and type-checkers have no problem with it.
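The metaclass trick boils down to overriding `__instancecheck__` so that only the exact type matches. A stdlib-only sketch of the idea (`JustIntSketch` is a hypothetical stand-in, not optype's actual implementation):

```python
class _JustIntMeta(type):
    # Override isinstance() semantics: compare the exact runtime type,
    # instead of isinstance's usual subclass-friendly check.
    def __instancecheck__(cls, obj: object) -> bool:
        return type(obj) is int


class JustIntSketch(metaclass=_JustIntMeta):
    pass


print(isinstance(42, JustIntSketch))     # True
print(isinstance(False, JustIntSketch))  # False: bool is a strict int subtype
```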
| optype type | accepts instances of |
| ------------- | -------------------- |
| Just[T] | T |
| JustInt | builtins.int |
| JustFloat | builtins.float |
| JustComplex | builtins.complex |
| JustBytes | builtins.bytes |
| JustObject | builtins.object |
| JustDate | datetime.date |
Builtin type conversion
The return type of these special methods is invariant: Python raises an
error if some other (sub)type is returned. This is why these optype
interfaces don't accept arbitrary generic type arguments; where a
return-type parameter exists, it is bounded to the builtin type itself.
| expression | function | function type | method | method type |
|---|---|---|---|---|
| `complex(_)` | `do_complex` | `DoesComplex` | `__complex__` | `CanComplex` |
| `float(_)` | `do_float` | `DoesFloat` | `__float__` | `CanFloat` |
| `int(_)` | `do_int` | `DoesInt` | `__int__` | `CanInt[+R: int = int]` |
| `bool(_)` | `do_bool` | `DoesBool` | `__bool__` | `CanBool[+R: bool = bool]` |
| `bytes(_)` | `do_bytes` | `DoesBytes` | `__bytes__` | `CanBytes[+R: bytes = bytes]` |
| `str(_)` | `do_str` | `DoesStr` | `__str__` | `CanStr[+R: str = str]` |
> [!NOTE]
> The `Can*` interfaces of the types that can be used as `typing.Literal`
> accept an optional type parameter `R`. This can be used to indicate a
> literal return type, for surgically precise typing: e.g. `None`, `True`,
> and `42` are instances of `CanBool[Literal[False]]`, `CanInt[Literal[1]]`,
> and `CanStr[Literal['42']]`, respectively.
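To make the literal-return idea concrete, here is a hypothetical class whose `__int__` is annotated with a literal return type (at runtime `int()` simply calls it; the precision is purely a typing-level gain):

```python
from typing import Literal


class AlwaysOne:
    # At the type level this class is assignable to CanInt[Literal[1]]:
    # __int__ always returns the literal 1.
    def __int__(self) -> Literal[1]:
        return 1


print(int(AlwaysOne()))  # 1
```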
These formatting methods are allowed to return instances that are a subtype
of the str builtin. The same holds for the __format__ argument.
So if you're a 10x developer that wants to hack Python's f-strings, but only
if your type hints are spot-on: optype is your friend.
| expression | function | function type | method | method type |
|---|---|---|---|---|
| `repr(_)` | `do_repr` | `DoesRepr` | `__repr__` | `CanRepr[+R: str = str]` |
| `format(_, x)` | `do_format` | `DoesFormat` | `__format__` | `CanFormat[-T: str = str, +R: str = str]` |
Additionally, optype provides protocols for types with (custom) hash or
index methods:
| expression | function | function type | method | method type |
|---|---|---|---|---|
| `hash(_)` | `do_hash` | `DoesHash` | `__hash__` | `CanHash` |
| `_.__index__()` (docs) | `do_index` | `DoesIndex` | `__index__` | `CanIndex[+R: int = int]` |
Rich relations
The "rich" comparison special methods often return a bool.
However, instances of any type can be returned (e.g. a numpy array).
This is why the corresponding optype.Can* interfaces accept a second type
argument for the return type, that defaults to bool when omitted.
The first type parameter matches the passed method argument, i.e. the
right-hand side operand, denoted here as x.
| expression | reflected | function | function type | method | method type |
|---|---|---|---|---|---|
| `_ == x` | `x == _` | `do_eq` | `DoesEq` | `__eq__` | `CanEq[-T = object, +R = bool]` |
| `_ != x` | `x != _` | `do_ne` | `DoesNe` | `__ne__` | `CanNe[-T = object, +R = bool]` |
| `_ < x` | `x > _` | `do_lt` | `DoesLt` | `__lt__` | `CanLt[-T, +R = bool]` |
| `_ <= x` | `x >= _` | `do_le` | `DoesLe` | `__le__` | `CanLe[-T, +R = bool]` |
| `_ > x` | `x < _` | `do_gt` | `DoesGt` | `__gt__` | `CanGt[-T, +R = bool]` |
| `_ >= x` | `x <= _` | `do_ge` | `DoesGe` | `__ge__` | `CanGe[-T, +R = bool]` |
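A non-`bool` comparison result is exactly what elementwise containers do. This hypothetical vector (a stdlib-only stand-in for e.g. a numpy array) shows why the `Can*` comparison protocols need a return-type parameter at all:

```python
class BoolVec:
    """A vector whose __lt__ compares elementwise and returns a list of bools."""

    def __init__(self, values: list[int]) -> None:
        self.values = values

    def __lt__(self, other: int) -> list[bool]:
        # Returns list[bool], not bool -- matching CanLt[int, list[bool]].
        return [v < other for v in self.values]


print(BoolVec([1, 5, 3]) < 4)  # [True, False, True]
```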
Binary operations
In the Python docs these are referred to as "arithmetic operations". But the operands aren't limited to numeric types, the operations aren't required to be commutative, they might be non-deterministic, and they could have side effects. Classifying them as "arithmetic" is, at the very least, a bit of a stretch.
| expression | function | function type | method | method type |
|---|---|---|---|---|
| `_ + x` | `do_add` | `DoesAdd` | `__add__` | `CanAdd[-T, +R = T]`<br>`CanAddSelf[-T]`<br>`CanAddSame[-T?, +R?]` |
| `_ - x` | `do_sub` | `DoesSub` | `__sub__` | `CanSub[-T, +R = T]`<br>`CanSubSelf[-T]`<br>`CanSubSame[-T?, +R?]` |
| `_ * x` | `do_mul` | `DoesMul` | `__mul__` | `CanMul[-T, +R = T]`<br>`CanMulSelf[-T]`<br>`CanMulSame[-T?, +R?]` |
| `_ @ x` | `do_matmul` | `DoesMatmul` | `__matmul__` | `CanMatmul[-T, +R = T]`<br>`CanMatmulSelf[-T]`<br>`CanMatmulSame[-T?, +R?]` |
| `_ / x` | `do_truediv` | `DoesTruediv` | `__truediv__` | `CanTruediv[-T, +R = T]`<br>`CanTruedivSelf[-T]`<br>`CanTruedivSame[-T?, +R?]` |
| `_ // x` | `do_floordiv` | `DoesFloordiv` | `__floordiv__` | `CanFloordiv[-T, +R = T]`<br>`CanFloordivSelf[-T]`<br>`CanFloordivSame[-T?, +R?]` |
| `_ % x` | `do_mod` | `DoesMod` | `__mod__` | `CanMod[-T, +R = T]`<br>`CanModSelf[-T]`<br>`CanModSame[-T?, +R?]` |
| `divmod(_, x)` | `do_divmod` | `DoesDivmod` | `__divmod__` | `CanDivmod[-T, +R]` |
| `_ ** x`<br>`pow(_, x)` | `do_pow/2` | `DoesPow` | `__pow__` | `CanPow2[-T, +R = T]`<br>`CanPowSelf[-T]`<br>`CanPowSame[-T?, +R?]` |
| `pow(_, x, m)` | `do_pow/3` | `DoesPow` | `__pow__` | `CanPow3[-T, -M, +R = int]` |
| `_ << x` | `do_lshift` | `DoesLshift` | `__lshift__` | `CanLshift[-T, +R = T]`<br>`CanLshiftSelf[-T]`<br>`CanLshiftSame[-T?, +R?]` |
| `_ >> x` | `do_rshift` | `DoesRshift` | `__rshift__` | `CanRshift[-T, +R = T]`<br>`CanRshiftSelf[-T]`<br>`CanRshiftSame[-T?, +R?]` |
| `_ & x` | `do_and` | `DoesAnd` | `__and__` | `CanAnd[-T, +R = T]`<br>`CanAndSelf[-T]`<br>`CanAndSame[-T?, +R?]` |
| `_ ^ x` | `do_xor` | `DoesXor` | `__xor__` | `CanXor[-T, +R = T]`<br>`CanXorSelf[-T]`<br>`CanXorSame[-T?, +R?]` |
| `_ \| x` | `do_or` | `DoesOr` | `__or__` | `CanOr[-T, +R = T]`<br>`CanOrSelf[-T]`<br>`CanOrSame[-T?, +R?]` |
> [!TIP]
> Because `pow()` can take an optional third argument, optype provides
> separate interfaces for `pow()` with two and three arguments.
> Additionally, there is the overloaded intersection type
> `type CanPow[-T, -M, +R, +RM] = CanPow2[T, R] & CanPow3[T, M, RM]`, as
> interface for types that can take an optional third argument.
> [!NOTE]
> The `Can*Self` protocols' methods return `typing.Self` and optionally
> accept `T` and `R`. The `Can*Same` protocols also return `Self`, but
> instead accept `Self | T`, with `T` and `R` optional generic type
> parameters that default to `typing.Never`. To illustrate:
> `CanAddSelf[T]` implements `__add__` as `(self, rhs: T, /) -> Self`, while
> `CanAddSame[T, R]` implements it as `(self, rhs: Self | T, /) -> Self | R`,
> and `CanAddSame` (without `T` and `R`) as `(self, rhs: Self, /) -> Self`.
Reflected operations
For the binary infix operators above, optype additionally provides
interfaces with reflected (swapped) operands, e.g. __radd__ is a reflected
__add__.
They are named like the original, but with a `CanR` prefix instead of `Can`,
i.e. `__name__.replace('Can', 'CanR')`.
| expression | function | function type | method | method type |
|---|---|---|---|---|
| `x + _` | `do_radd` | `DoesRAdd` | `__radd__` | `CanRAdd[-T, +R = T]`<br>`CanRAddSelf[-T]` |
| `x - _` | `do_rsub` | `DoesRSub` | `__rsub__` | `CanRSub[-T, +R = T]`<br>`CanRSubSelf[-T]` |
| `x * _` | `do_rmul` | `DoesRMul` | `__rmul__` | `CanRMul[-T, +R = T]`<br>`CanRMulSelf[-T]` |
| `x @ _` | `do_rmatmul` | `DoesRMatmul` | `__rmatmul__` | `CanRMatmul[-T, +R = T]`<br>`CanRMatmulSelf[-T]` |
| `x / _` | `do_rtruediv` | `DoesRTruediv` | `__rtruediv__` | `CanRTruediv[-T, +R = T]`<br>`CanRTruedivSelf[-T]` |
| `x // _` | `do_rfloordiv` | `DoesRFloordiv` | `__rfloordiv__` | `CanRFloordiv[-T, +R = T]`<br>`CanRFloordivSelf[-T]` |
| `x % _` | `do_rmod` | `DoesRMod` | `__rmod__` | `CanRMod[-T, +R = T]`<br>`CanRModSelf[-T]` |
| `divmod(x, _)` | `do_rdivmod` | `DoesRDivmod` | `__rdivmod__` | `CanRDivmod[-T, +R]` |
| `x ** _`<br>`pow(x, _)` | `do_rpow` | `DoesRPow` | `__rpow__` | `CanRPow[-T, +R = T]`<br>`CanRPowSelf[-T]` |
| `x << _` | `do_rlshift` | `DoesRLshift` | `__rlshift__` | `CanRLshift[-T, +R = T]`<br>`CanRLshiftSelf[-T]` |
| `x >> _` | `do_rrshift` | `DoesRRshift` | `__rrshift__` | `CanRRshift[-T, +R = T]`<br>`CanRRshiftSelf[-T]` |
| `x & _` | `do_rand` | `DoesRAnd` | `__rand__` | `CanRAnd[-T, +R = T]`<br>`CanRAndSelf[-T]` |
| `x ^ _` | `do_rxor` | `DoesRXor` | `__rxor__` | `CanRXor[-T, +R = T]`<br>`CanRXorSelf[-T]` |
| `x \| _` | `do_ror` | `DoesROr` | `__ror__` | `CanROr[-T, +R = T]`<br>`CanROrSelf[-T]` |
> [!NOTE]
> `CanRPow` corresponds to `CanPow2`; the 3-argument "modulo" `pow` does not
> reflect in Python. According to the relevant Python docs:
>
> > Note that ternary `pow()` will not try calling `__rpow__()` (the
> > coercion rules would become too complicated).
Inplace operations
Similar to the reflected ops, the inplace/augmented ops are prefixed with
CanI, namely:
| expression | function | function type | method | method types |
|---|---|---|---|---|
| `_ += x` | `do_iadd` | `DoesIAdd` | `__iadd__` | `CanIAdd[-T, +R]`<br>`CanIAddSelf[-T]`<br>`CanIAddSame[-T?]` |
| `_ -= x` | `do_isub` | `DoesISub` | `__isub__` | `CanISub[-T, +R]`<br>`CanISubSelf[-T]`<br>`CanISubSame[-T?]` |
| `_ *= x` | `do_imul` | `DoesIMul` | `__imul__` | `CanIMul[-T, +R]`<br>`CanIMulSelf[-T]`<br>`CanIMulSame[-T?]` |
| `_ @= x` | `do_imatmul` | `DoesIMatmul` | `__imatmul__` | `CanIMatmul[-T, +R]`<br>`CanIMatmulSelf[-T]`<br>`CanIMatmulSame[-T?]` |
| `_ /= x` | `do_itruediv` | `DoesITruediv` | `__itruediv__` | `CanITruediv[-T, +R]`<br>`CanITruedivSelf[-T]`<br>`CanITruedivSame[-T?]` |
| `_ //= x` | `do_ifloordiv` | `DoesIFloordiv` | `__ifloordiv__` | `CanIFloordiv[-T, +R]`<br>`CanIFloordivSelf[-T]`<br>`CanIFloordivSame[-T?]` |
| `_ %= x` | `do_imod` | `DoesIMod` | `__imod__` | `CanIMod[-T, +R]`<br>`CanIModSelf[-T]`<br>`CanIModSame[-T?]` |
| `_ **= x` | `do_ipow` | `DoesIPow` | `__ipow__` | `CanIPow[-T, +R]`<br>`CanIPowSelf[-T]`<br>`CanIPowSame[-T?]` |
| `_ <<= x` | `do_ilshift` | `DoesILshift` | `__ilshift__` | `CanILshift[-T, +R]`<br>`CanILshiftSelf[-T]`<br>`CanILshiftSame[-T?]` |
| `_ >>= x` | `do_irshift` | `DoesIRshift` | `__irshift__` | `CanIRshift[-T, +R]`<br>`CanIRshiftSelf[-T]`<br>`CanIRshiftSame[-T?]` |
| `_ &= x` | `do_iand` | `DoesIAnd` | `__iand__` | `CanIAnd[-T, +R]`<br>`CanIAndSelf[-T]`<br>`CanIAndSame[-T?]` |
| `_ ^= x` | `do_ixor` | `DoesIXor` | `__ixor__` | `CanIXor[-T, +R]`<br>`CanIXorSelf[-T]`<br>`CanIXorSame[-T?]` |
| `_ \|= x` | `do_ior` | `DoesIOr` | `__ior__` | `CanIOr[-T, +R]`<br>`CanIOrSelf[-T]`<br>`CanIOrSame[-T?]` |
These inplace operators usually return themselves (after some in-place mutation).
But unfortunately, it currently isn't possible to use Self for this (i.e.
something like type MyAlias[T] = optype.CanIAdd[T, Self] isn't allowed).
So to help ease this unbearable pain, optype comes equipped with ready-made
aliases for you to use. They bear the same name, with an additional *Self
suffix, e.g. optype.CanIAddSelf[T].
> [!NOTE]
> The `CanI*Self` protocols' methods return `typing.Self` and optionally
> accept `T`. The `CanI*Same` protocols also return `Self`, but instead
> accept `rhs: Self | T`. Since `T` defaults to `Never`, omitting it results
> in `rhs: Self | Never`, which is equivalent to `rhs: Self`.
>
> Available since `0.12.1`.
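The "returns itself after an in-place mutation" shape that these aliases describe looks like this in plain Python (`Accumulator` is a hypothetical illustration, not part of optype):

```python
class Accumulator:
    """Its __iadd__ mutates in place and returns self: the CanIAddSelf shape."""

    def __init__(self) -> None:
        self.total = 0

    def __iadd__(self, rhs: int) -> "Accumulator":
        self.total += rhs
        return self  # returning self is what makes `acc += n` keep the object


acc = Accumulator()
acc += 3
acc += 4
print(acc.total)  # 7
```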
Unary operations
| expression | function | function type | method | method types |
|---|---|---|---|---|
| `+_` | `do_pos` | `DoesPos` | `__pos__` | `CanPos[+R]`<br>`CanPosSelf[+R?]` |
| `-_` | `do_neg` | `DoesNeg` | `__neg__` | `CanNeg[+R]`<br>`CanNegSelf[+R?]` |
| `~_` | `do_invert` | `DoesInvert` | `__invert__` | `CanInvert[+R]`<br>`CanInvertSelf[+R?]` |
| `abs(_)` | `do_abs` | `DoesAbs` | `__abs__` | `CanAbs[+R]`<br>`CanAbsSelf[+R?]` |
The Can*Self variants return -> Self instead of R. Since optype 0.12.1 these
also accept an optional R type parameter (with a default of Never), which, when
provided, will result in a return type of -> Self | R.
Rounding
The round() built-in function takes an optional second argument.
From a typing perspective, round() has two overloads: one with one
parameter, and one with two.
For both overloads, optype provides separate operand interfaces:
CanRound1[R] and CanRound2[T, RT].
Additionally, optype also provides their (overloaded) intersection type:
CanRound[-T, +R1, +R2] = CanRound1[R1] & CanRound2[T, R2].
| expression | function | function type | method | method type |
|---|---|---|---|---|
| `round(_)` | `do_round/1` | `DoesRound` | `__round__/1` | `CanRound1[+R = int]` |
| `round(_, n)` | `do_round/2` | `DoesRound` | `__round__/2` | `CanRound2[-T = int, +R = float]` |
| `round(_, n=...)` | `do_round` | `DoesRound` | `__round__` | `CanRound[-T = int, +R1 = int, +R2 = float]` |
For example, type-checkers will mark the following code as valid (tested with pyright in strict mode):
```python
x: float = 3.14

x1: CanRound1[int] = x
x2: CanRound2[int, float] = x
x3: CanRound[int, int, float] = x
```
Furthermore, there are the alternative rounding functions from the
math standard library:
| expression | function | function type | method | method type |
|---|---|---|---|---|
| `math.trunc(_)` | `do_trunc` | `DoesTrunc` | `__trunc__` | `CanTrunc[+R = int]` |
| `math.floor(_)` | `do_floor` | `DoesFloor` | `__floor__` | `CanFloor[+R = int]` |
| `math.ceil(_)` | `do_ceil` | `DoesCeil` | `__ceil__` | `CanCeil[+R = int]` |
Almost all implementations use int for R.
In fact, if no type for R is specified, it defaults to int.
But technically speaking, these methods can be made to return anything.
Callables
Unlike operator, optype provides an operator for callable objects:
optype.do_call(f, *args, **kwargs).
CanCall is similar to collections.abc.Callable, but is runtime-checkable,
and doesn't use esoteric hacks.
| expression | function | function type | method | method type |
|---|---|---|---|---|
| `_(*args, **kwargs)` | `do_call` | `DoesCall` | `__call__` | `CanCall[**Tss, +R]` |
> [!NOTE]
> Pyright (and probably other type-checkers) tends to accept
> `collections.abc.Callable` in more places than `optype.CanCall`.
> This could be related to the lack of co/contravariance specification for
> `typing.ParamSpec` (they should almost always be contravariant, but
> currently they can only be invariant).
>
> In case you encounter such a situation, please open an issue about it, so
> we can investigate further.
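The core idea of a runtime-checkable `__call__` protocol can be sketched with the stdlib alone (`CanCallSketch` is a hypothetical, non-generic stand-in; the real `optype.CanCall` is generic over a `ParamSpec` and a return type):

```python
from typing import Any, Protocol, runtime_checkable


@runtime_checkable
class CanCallSketch(Protocol):
    # isinstance() merely checks for the presence of __call__.
    def __call__(self, *args: Any, **kwargs: Any) -> Any: ...


print(isinstance(len, CanCallSketch))  # True: functions implement __call__
print(isinstance(42, CanCallSketch))   # False: int instances aren't callable
```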
Iteration
The operand x of iter(_) is within Python known as an iterable, which is
what collections.abc.Iterable[V] is often used for (e.g. as base class, or
for instance checking).
The optype analogue is CanIter[R], which, as the name suggests,
also implements __iter__. But unlike Iterable[V], its type parameter R
binds to the return type of iter(_) -> R. This makes it possible to annotate
the specific type of the iterable that iter(_) returns; Iterable[V] can
only annotate the type of the iterated value. To see why that isn't
possible, see python/typing#548.
The collections.abc.Iterator[V] is even more awkward: it is a subtype of
Iterable[V]. For those familiar with collections.abc this might come as a
surprise, but an iterator only needs to implement __next__; __iter__ isn't
needed. This means that Iterator[V] is unnecessarily restrictive.
Apart from being theoretically "ugly", this has significant performance
implications, because the time complexity of isinstance on a
typing.Protocol is $O(n)$, with $n$ the number of members.
So even if the overhead of the inheritance and the abc.ABC usage is ignored,
collections.abc.Iterator is twice as slow as it needs to be.
That's one of the (many) reasons that optype.CanNext[V] and
optype.CanIter[R] are the better alternatives to Iterable and Iterator
from the abracadabra collections. This is how they are defined:
| expression | function | function type | method | method type |
|---|---|---|---|---|
| `next(_)` | `do_next` | `DoesNext` | `__next__` | `CanNext[+V]` |
| `iter(_)` | `do_iter` | `DoesIter` | `__iter__` | `CanIter[+R: CanNext[object]]` |
For the sake of compatibility with collections.abc, there is
optype.CanIterSelf[V], which is a protocol whose __iter__ returns
typing.Self, as well as a __next__ method that returns V.
I.e. it is equivalent to collections.abc.Iterator[V], but without the abc
nonsense.
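To see the distinction at runtime, here is a hypothetical counter that implements only `__next__` (the `CanNext[int]` shape): `next()` works on it, even though it is not a `collections.abc.Iterator` (which would also demand `__iter__`):

```python
class Countdown:
    """Implements only __next__, so next() works but a for-loop would not."""

    def __init__(self, start: int) -> None:
        self.n = start

    def __next__(self) -> int:
        if self.n <= 0:
            raise StopIteration
        self.n -= 1
        return self.n + 1


c = Countdown(3)
print(next(c), next(c), next(c))  # 3 2 1
```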
Awaitables
The optype.CanAwait[R] is almost the same as collections.abc.Awaitable[R], except
that optype.CanAwait[R] is a pure interface, whereas Awaitable is
also an abstract base class (making it absolutely useless when writing stubs).
| expression | method | method type |
|---|---|---|
| `await _` | `__await__` | `CanAwait[+R]` |
Async Iteration
Yes, you guessed it right; the abracadabra collections made the exact same mistakes for the async iterablors (or was it "iteramblers"...?).
But fret not; the optype alternatives are right here:
| expression | function | function type | method | method type |
|---|---|---|---|---|
| `anext(_)` | `do_anext` | `DoesANext` | `__anext__` | `CanANext[+V]` |
| `aiter(_)` | `do_aiter` | `DoesAIter` | `__aiter__` | `CanAIter[+R: CanANext[object]]` |
But wait, shouldn't V be a CanAwait? Well, only if you don't want to get
fired...
Technically speaking, __anext__ can return any type, and anext will pass
it along without nagging. For details, see the discussion at python/typeshed#7491.
Just because something is legal, doesn't mean it's a good idea (don't eat the
yellow snow).
Additionally, there is optype.CanAIterSelf[R], with both the
__aiter__() -> Self and the __anext__() -> V methods.
Containers
| expression | function | function type | method | method type |
|---|---|---|---|---|
| `len(_)` | `do_len` | `DoesLen` | `__len__` | `CanLen[+R: int = int]` |
| `_.__length_hint__()` (docs) | `do_length_hint` | `DoesLengthHint` | `__length_hint__` | `CanLengthHint[+R: int = int]` |
| `_[k]` | `do_getitem` | `DoesGetitem` | `__getitem__` | `CanGetitem[-K, +V]` |
| `_.__missing__()` (docs) | `do_missing` | `DoesMissing` | `__missing__` | `CanMissing[-K, +D]` |
| `_[k] = v` | `do_setitem` | `DoesSetitem` | `__setitem__` | `CanSetitem[-K, -V]` |
| `del _[k]` | `do_delitem` | `DoesDelitem` | `__delitem__` | `CanDelitem[-K]` |
| `k in _` | `do_contains` | `DoesContains` | `__contains__` | `CanContains[-K = object]` |
| `reversed(_)` | `do_reversed` | `DoesReversed` | `__reversed__` | `CanReversed[+R]`, or<br>`CanSequence[-I, +V, +N = int]` |
Because CanMissing[K, D] generally doesn't show itself without
CanGetitem[K, V] there to hold its hand, optype conveniently stitched them
together as optype.CanGetMissing[K, V, D=V].
Similarly, there is optype.CanSequence[K: CanIndex | slice, V], which is the
combination of both CanLen and CanGetitem[I, V], and serves as a more
specific and flexible collections.abc.Sequence[V].
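`__len__` plus an int-indexable `__getitem__` is all that e.g. `reversed()` and the `in` operator need at runtime, which is the combination such a sequence protocol describes. A stdlib-only illustration (`Pair` is hypothetical):

```python
class Pair:
    """A minimal sequence: only __len__ and __getitem__."""

    def __init__(self, a: str, b: str) -> None:
        self._items = (a, b)

    def __len__(self) -> int:
        return len(self._items)

    def __getitem__(self, i: int) -> str:
        return self._items[i]


p = Pair("x", "y")
print(list(reversed(p)))  # ['y', 'x']  (reversed() uses __len__ + __getitem__)
print("x" in p)           # True        (in falls back to __getitem__ iteration)
```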
Attributes
| expression | function | function type | method | method type |
|---|---|---|---|---|
| `v = _.k` or<br>`v = getattr(_, k)` | `do_getattr` | `DoesGetattr` | `__getattr__` | `CanGetattr[-K: str = str, +V = object]` |
| `_.k = v` or<br>`setattr(_, k, v)` | `do_setattr` | `DoesSetattr` | `__setattr__` | `CanSetattr[-K: str = str, -V = object]` |
| `del _.k` or<br>`delattr(_, k)` | `do_delattr` | `DoesDelattr` | `__delattr__` | `CanDelattr[-K: str = str]` |
| `dir(_)` | `do_dir` | `DoesDir` | `__dir__` | `CanDir[+R: CanIter[CanIterSelf[str]]]` |
Context managers
Support for the with statement.
| expression | method(s) | type(s) |
|---|---|---|
| | `__enter__` | `CanEnter[+C]`, or `CanEnterSelf` |
| | `__exit__` | `CanExit[+R = None]` |
| `with _ as c:` | `__enter__` and `__exit__` | `CanWith[+C, +R = None]`, or<br>`CanWithSelf[+R = None]` |
CanEnterSelf and CanWithSelf are (runtime-checkable) aliases for
CanEnter[Self] and CanWith[Self, R], respectively.
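As a runtime illustration of those shapes, here is a hypothetical context manager whose `__enter__` returns `self` (the `CanEnterSelf` shape) and whose `__exit__` returns `None` (the `CanExit[None]` shape):

```python
class Tracked:
    """A context manager that records whether it is currently open."""

    def __init__(self) -> None:
        self.open = False

    def __enter__(self) -> "Tracked":
        self.open = True
        return self  # returning self is the CanEnterSelf shape

    def __exit__(self, *exc_info: object) -> None:
        self.open = False  # returning None: don't suppress exceptions


with Tracked() as t:
    print(t.open)  # True
print(t.open)      # False
```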
For the async with statement the interfaces look very similar:
| expression | method(s) | type(s) |
|---|---|---|
| | `__aenter__` | `CanAEnter[+C]`, or `CanAEnterSelf` |
| | `__aexit__` | `CanAExit[+R = None]` |
| `async with _ as c:` | `__aenter__` and `__aexit__` | `CanAsyncWith[+C, +R = None]`, or<br>`CanAsyncWithSelf[+R = None]` |
Descriptors
Interfaces for descriptors.
| expression | method | method type |
|---|---|---|
| `v: V = T().d`<br>`vt: VT = T.d` | `__get__` | `CanGet[-T, +V, +VT = V]` |
| `v: V = T().d`<br>`vt: Self = T.d` | `__get__` | `CanGetSelf[-T, +V]` |
| `T().k = v` | `__set__` | `CanSet[-T, -V]` |
| `del T().k` | `__delete__` | `CanDelete[-T]` |
| `class T: d = _` | `__set_name__` | `CanSetName[-T, -N: str = str]` |
Buffer types
Interfaces for emulating buffer types using the buffer protocol.
| expression | method | method type |
|---|---|---|
| `v = memoryview(_)` | `__buffer__` | `CanBuffer[-T: int = int]` |
| `del v` | `__release_buffer__` | `CanReleaseBuffer` |
optype.copy
For the copy standard library, optype.copy provides the following
runtime-checkable interfaces:
| `copy` function | method | `optype.copy` type |
|---|---|---|
| `copy.copy(_) -> R` | `__copy__() -> R` | `CanCopy[+R]` |
| `copy.deepcopy(_, memo={}) -> R` | `__deepcopy__(memo, /) -> R` | `CanDeepcopy[+R]` |
| `copy.replace(_, /, **changes: V) -> R` [1] | `__replace__(**changes: V) -> R` | `CanReplace[-V, +R]` |

[1] `copy.replace` requires `python>=3.13` (but `optype.copy.CanReplace`
doesn't).
In practice, it makes sense that a copy of an instance has the same type as
the original.
But because typing.Self cannot be used as a type argument, this is
difficult to type properly.
Instead, you can use the optype.copy.Can{}Self types, which are the
runtime-checkable equivalents of the following (recursive) type aliases:
```python
type CanCopySelf = CanCopy[CanCopySelf]
type CanDeepcopySelf = CanDeepcopy[CanDeepcopySelf]
type CanReplaceSelf[V] = CanReplace[V, CanReplaceSelf[V]]
```
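At runtime, the "copy of an instance is the same type as the original" shape simply means `__copy__` returns the class's own type, as in this hypothetical example:

```python
import copy


class Box:
    """Its __copy__ returns its own type: the recursive CanCopySelf shape."""

    def __init__(self, value: int) -> None:
        self.value = value

    def __copy__(self) -> "Box":
        return Box(self.value)


b = Box(7)
b2 = copy.copy(b)            # dispatches to Box.__copy__
print(b2.value, b2 is b)     # 7 False
```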
optype.dataclasses
For the dataclasses standard library, optype.dataclasses provides the
HasDataclassFields[V: Mapping[str, Field]] interface.
It can conveniently be used to check whether a type or instance is a
dataclass, i.e. isinstance(obj, HasDataclassFields).
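The idea behind such a check can be sketched with the stdlib alone: dataclasses expose a `__dataclass_fields__` mapping, so a runtime-checkable protocol with that attribute distinguishes them structurally (`HasFieldsSketch` is a hypothetical stand-in, not optype's actual protocol):

```python
from collections.abc import Mapping
from dataclasses import Field, dataclass
from typing import Protocol, runtime_checkable


@runtime_checkable
class HasFieldsSketch(Protocol):
    # isinstance() checks for the presence of this attribute.
    __dataclass_fields__: Mapping[str, "Field[object]"]


@dataclass
class Point:
    x: int
    y: int


print(isinstance(Point(1, 2), HasFieldsSketch))  # True
print(isinstance((1, 2), HasFieldsSketch))       # False
```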
optype.inspect
A collection of functions for runtime inspection of types, modules, and other objects.
**`get_args(_)`**

A better alternative to [`typing.get_args()`][GET_ARGS], that

- unpacks `typing.Annotated` and Python 3.12 `type _` alias types
  (i.e. `typing.TypeAliasType`),
- recursively flattens unions and nested `typing.Literal` types, and
- raises `TypeError` if the input is not a type expression.

Returns a `tuple[type | object, ...]` of type arguments or parameters.

To illustrate one of the (many) issues with `typing.get_args`:

```pycon
>>> from typing import Literal, TypeAlias, get_args
>>> Falsy: TypeAlias = Literal[None] | Literal[False, 0] | Literal["", b""]
>>> get_args(Falsy)
(typing.Literal[None], typing.Literal[False, 0], typing.Literal['', b''])
```

But this is in direct contradiction with the
[official typing documentation][LITERAL-DOCS]:

> When a Literal is parameterized with more than one value, it's treated as
> exactly equivalent to the union of those types.
> That is, `Literal[v1, v2, v3]` is equivalent to
> `Literal[v1] | Literal[v2] | Literal[v3]`.

So this is why `optype.inspect.get_args` should be used instead:

```pycon
>>> import optype as opt
>>> opt.inspect.get_args(Falsy)
(None, False, 0, '', b'')
```

Another issue of `typing.get_args` is with Python 3.12 `type _ = ...`
aliases, which are meant as a replacement for `_: typing.TypeAlias = ...`,
and should therefore be treated equally:

```pycon
>>> import typing
>>> import optype as opt
>>> type StringLike = str | bytes
>>> typing.get_args(StringLike)
()
>>> opt.inspect.get_args(StringLike)
(<class 'str'>, <class 'bytes'>)
```

**`get_protocol_members(_)`**

A better alternative to [`typing.get_protocol_members()`][PROTO_MEM], that

- doesn't require Python 3.13 or above,
- supports [PEP 695][PEP695] `type _` alias types on Python 3.12 and above,
- unpacks unions of `typing.Literal` ...
- ... and flattens them if nested within another `typing.Literal`,
- treats `typing.Annotated[T]` as `T`, and
- raises a `TypeError` if the passed value isn't a type expression.

Returns a `frozenset[str]` with member names.

**`get_protocols(_)`**

Returns a `frozenset[type]` of the public protocols within the passed
module. Pass `private=True` to also return the private protocols.

**`is_iterable(_)`**

Checks whether the object can be iterated over, i.e. whether it can be used
in a `for` loop, without attempting to do so. If `True` is returned, the
object is an `optype.typing.AnyIterable` instance.

**`is_final(_)`**

Checks whether the type, method / classmethod / staticmethod / property, is
decorated with [`@typing.final`][@FINAL]. Note that a `@property` won't be
recognized unless the `@final` decorator is placed *below* the `@property`
decorator. See the function docstring for more information.

**`is_protocol(_)`**

A backport of [`typing.is_protocol`][IS_PROTO], which was added in
Python 3.13; a re-export of [`typing_extensions.is_protocol`][IS_PROTO_EXT].

**`is_runtime_protocol(_)`**

Checks whether the type expression is a *runtime-protocol*, i.e. a
`typing.Protocol` *type* decorated with `@typing.runtime_checkable` (also
supports `typing_extensions`).

**`is_union_type(_)`**

Checks whether the type is a [`typing.Union`][UNION] type, e.g. `str | int`.
Unlike `isinstance(_, types.UnionType)`, this function also returns `True`
for unions of user-defined `Generic` or `Protocol` types (because those use
a different union type for some reason).

**`is_generic_alias(_)`**

Checks whether the type is a *subscripted* type, e.g. `list[str]` or
`optype.CanNext[int]`, but not `list` or `CanNext`. Unlike
`isinstance(_, types.GenericAlias)`, this function also returns `True` for
user-defined `Generic` or `Protocol` types (because those use a different
generic alias for some reason). Even though `T1 | T2` is technically
represented as `typing.Union[T1, T2]` (which is a (special) generic alias),
`is_generic_alias` returns `False` for such union types, because calling
`T1 | T2` a subscripted type just doesn't make much sense.
> [!NOTE]
> All functions in `optype.inspect` also work for Python 3.12 `type _`
> aliases (i.e. `types.TypeAliasType`) and with `typing.Annotated`.
optype.io
A collection of protocols and type-aliases that, unlike their analogues in _typeshed,
are accessible at runtime, and use a consistent naming scheme.
| `optype.io` protocol | implements | replaces |
|---|---|---|
| `CanFSPath[+T: str \| bytes =]` | `__fspath__: () -> T` | `os.PathLike[AnyStr: (str, bytes)]` |
| `CanRead[+T]` | `read: () -> T` | |
| `CanReadN[+T]` | `read: (int) -> T` | `_typeshed.SupportsRead[+T]` |
| `CanReadline[+T]` | `readline: () -> T` | `_typeshed.SupportsNoArgReadline[+T]` |
| `CanReadlineN[+T]` | `readline: (int) -> T` | `_typeshed.SupportsReadline[+T]` |
| `CanWrite[-T, +RT = object]` | `write: (T) -> RT` | `_typeshed.SupportsWrite[-T]` |
| `CanFlush[+RT = object]` | `flush: () -> RT` | `_typeshed.SupportsFlush` |
| `CanFileno` | `fileno: () -> int` | `_typeshed.HasFileno` |

| `optype.io` type alias | expression | replaces |
|---|---|---|
| `ToPath[+T: str \| bytes =]` | `T \| CanFSPath[T]` | `_typeshed.StrPath`<br>`_typeshed.BytesPath`<br>`_typeshed.StrOrBytesPath`<br>`_typeshed.GenericPath[AnyStr]` |
| `ToFileno` | `int \| CanFileno` | `_typeshed.FileDescriptorLike` |
optype.json
Type aliases for the json standard library:
| `json.load(s)` return type | `json.dumps(s)` input type |
|---|---|
| `Value` | `AnyValue` |
| `Array[V: Value = Value]` | `AnyArray[~V: AnyValue = AnyValue]` |
| `Object[V: Value = Value]` | `AnyObject[~V: AnyValue = AnyValue]` |
The `(Any)Value` can be any JSON input, i.e. `Value | Array | Object` is
equivalent to `Value`.
It's also worth noting that `Value` is a subtype of `AnyValue`, which means
that `AnyValue | Value` is equivalent to `AnyValue`.
optype.pickle
For the pickle standard library, optype.pickle provides the following
interfaces:
| method(s) | signature (bound) | type |
|---|---|---|
| `__reduce__` | `() -> R` | `CanReduce[+R: str \| tuple =]` |
| `__reduce_ex__` | `(CanIndex) -> R` | `CanReduceEx[+R: str \| tuple =]` |
| `__getstate__` | `() -> S` | `CanGetstate[+S]` |
| `__setstate__` | `(S) -> None` | `CanSetstate[-S]` |
| `__getnewargs__`<br>`__new__` | `() -> tuple[V, ...]`<br>`(V) -> Self` | `CanGetnewargs[+V]` |
| `__getnewargs_ex__`<br>`__new__` | `() -> tuple[tuple[V, ...], dict[str, KV]]`<br>`(*tuple[V, ...], **dict[str, KV]) -> Self` | `CanGetnewargsEx[+V, ~KV]` |
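For example, a class implementing the `__getstate__`/`__setstate__` pair described by `CanGetstate[+S]` and `CanSetstate[-S]` (with `S = tuple[int, int]` here) could look like this:

```python
import pickle


class Point:
    """Pickles via __getstate__/__setstate__ instead of __dict__."""

    def __init__(self, x: int, y: int) -> None:
        self.x, self.y = x, y

    def __getstate__(self) -> tuple[int, int]:  # CanGetstate[tuple[int, int]]
        return self.x, self.y

    def __setstate__(self, state: tuple[int, int], /) -> None:  # CanSetstate[...]
        self.x, self.y = state


p = pickle.loads(pickle.dumps(Point(1, 2)))
assert (p.x, p.y) == (1, 2)
```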
optype.string
The string standard library contains practical constants, but it has two issues:

- The constants contain a collection of characters, but are represented as a single string. This makes it practically impossible to type-hint the individual characters, so typeshed currently types these constants as a `LiteralString`.
- The names of the constants are inconsistent, and don't follow PEP 8.

So instead, `optype.string` provides an alternative interface, that is compatible with `string`, but with slight differences:

- For each constant, there is a corresponding `Literal` type alias for the individual characters. Its name matches the name of the constant, but is singular instead of plural.
- Instead of a single string, `optype.string` uses a `tuple` of characters, so that each character has its own `typing.Literal` annotation. Note that this is only tested with (based)pyright / pylance, so it might not work with mypy (it has more bugs than it has lines of code).
- The names of the constants are consistent with PEP 8, and use a postfix notation for variants, e.g. `DIGITS_HEX` instead of `hexdigits`.
- Unlike `string`, `optype.string` has a constant (and type alias) for the binary digits `'0'` and `'1'`: `DIGITS_BIN` (and `DigitBin`). Because besides the `oct` and `hex` functions in `builtins`, there's also `builtins.bin`.
| `string._` constant | char type | `optype.string._` constant | char type |
|---|---|---|---|
| *missing* | | `DIGITS_BIN` | `DigitBin` |
| `octdigits` | `LiteralString` | `DIGITS_OCT` | `DigitOct` |
| `digits` | `LiteralString` | `DIGITS` | `Digit` |
| `hexdigits` | `LiteralString` | `DIGITS_HEX` | `DigitHex` |
| `ascii_letters` | `LiteralString` | `LETTERS` | `Letter` |
| `ascii_lowercase` | `LiteralString` | `LETTERS_LOWER` | `LetterLower` |
| `ascii_uppercase` | `LiteralString` | `LETTERS_UPPER` | `LetterUpper` |
| `punctuation` | `LiteralString` | `PUNCTUATION` | `Punctuation` |
| `whitespace` | `LiteralString` | `WHITESPACE` | `Whitespace` |
| `printable` | `LiteralString` | `PRINTABLE` | `Printable` |
Each of the optype.string constants is exactly the same as the corresponding
string constant (after concatenation / splitting), e.g.
```pycon
>>> import string
>>> import optype as opt
>>> "".join(opt.string.PRINTABLE) == string.printable
True
>>> tuple(string.printable) == opt.string.PRINTABLE
True
```
Similarly, the values within a constant's Literal type exactly match the
values of its constant:
```pycon
>>> import optype as opt
>>> from optype.inspect import get_args
>>> get_args(opt.string.Printable) == opt.string.PRINTABLE
True
```
The `optype.inspect.get_args` is a non-broken variant of `typing.get_args`
that correctly flattens nested literals, type-unions, and PEP 695 type aliases,
so that it matches the official typing specs.
In other words, `typing.get_args` is yet another fundamentally broken
python-typing feature that's useless in the situations where you need it
most.
optype.typing
Any* type aliases
Type aliases for anything that can always be passed to
int, float, complex, iter, or typing.Literal
| Python constructor | `optype.typing` alias |
|---|---|
| `int(_)` | `AnyInt` |
| `float(_)` | `AnyFloat` |
| `complex(_)` | `AnyComplex` |
| `iter(_)` | `AnyIterable` |
| `typing.Literal[_]` | `AnyLiteral` |
[!NOTE] Even though some `str` and `bytes` can be converted to `int`, `float`, or `complex`, most of them can't, and they are therefore not included in these type aliases.
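For instance, `AnyInt` covers not just `int` itself, but also objects that `int()` accepts via `__int__` or `__index__`. A rough stdlib approximation using `typing.SupportsInt` (the function and `Celsius` class are illustrative, not part of optype):

```python
from typing import SupportsInt


def twice_floor(x: SupportsInt) -> int:
    # Roughly what a function annotated with optype.typing.AnyInt accepts:
    # int itself, or any object convertible via __int__ (or __index__).
    return 2 * int(x)


class Celsius:
    def __init__(self, degrees: float) -> None:
        self.degrees = degrees

    def __int__(self) -> int:
        return int(self.degrees)


assert twice_floor(21) == 42
assert twice_floor(Celsius(21.5)) == 42  # int(21.5) == 21
```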
Empty* type aliases
These are builtin types or collections that are empty, i.e. have length 0 or yield no elements.
| instance | `optype.typing` type |
|---|---|
| `''` | `EmptyString` |
| `b''` | `EmptyBytes` |
| `()` | `EmptyTuple` |
| `[]` | `EmptyList` |
| `{}` | `EmptyDict` |
| `set()` | `EmptySet` |
| `(i for i in range(0))` | `EmptyIterable` |
Literal types
| literal values | `optype.typing` type | notes |
|---|---|---|
| `{False, True}` | `LiteralBool` | Similar to `typing.LiteralString`, but for `bool`. |
| `{0, 1, ..., 255}` | `LiteralByte` | Integers in the range 0-255 that make up a `bytes` or `bytearray` object. |
optype.dlpack
A collection of low-level types for working with DLPack.
Protocols
| type signature | bound method |
|---|---|
| ```plain CanDLPack[ +T = int, +D: int = int, ] ``` | ```python def __dlpack__( *, stream: int | None = ..., max_version: tuple[int, int] | None = ..., dl_device: tuple[T, D] | None = ..., copy: bool | None = ..., ) -> types.CapsuleType: ... ``` |
| ```plain CanDLPackDevice[ +T = int, +D: int = int, ] ``` | ```python def __dlpack_device__() -> tuple[T, D]: ... ``` |
The + prefix indicates that the type parameter is covariant.
Enums
There are also two convenient
IntEnums
in optype.dlpack: DLDeviceType for the device types, and DLDataTypeCode for the
internal type-codes of the DLPack data types.
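As a quick sanity check, NumPy arrays implement `__dlpack_device__`, and on CPU it reports device type `1` (`kDLCPU` in the DLPack spec), which is one of the values that `DLDeviceType` enumerates:

```python
import numpy as np

x = np.arange(3)
device_type, device_id = x.__dlpack_device__()

# 1 is the DLPack device-type code for the CPU (DLDeviceType.kDLCPU),
# and the CPU device id is 0.
assert (device_type, device_id) == (1, 0)
```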
| `numpy.typing.NDArray`[^1] | `optype.numpy.Array` | `optype.numpy.ArrayND` |
|---|---|---|
| ```python type NDArray[ # no shape type SCT: generic, # no default ] = ndarray[Any, dtype[SCT]] ``` | ```python type Array[ NDT: (int, ...) = (int, ...), SCT: generic = generic, ] = ndarray[NDT, dtype[SCT]] ``` | ```python type ArrayND[ SCT: generic = generic, NDT: (int, ...) = (int, ...), ] = ndarray[NDT, dtype[SCT]] ``` |
Additionally, there are the four Array{0,1,2,3}D aliases, which are
equivalent to Array with tuple[()], tuple[int], tuple[int, int] and
tuple[int, int, int] as shape-type, respectively.
[^1]: Since numpy>=2.2 the NDArray alias uses tuple[int, ...] as shape-type
instead of Any.
[!TIP] Before NumPy 2.1, the shape type parameter of `ndarray` (i.e. the type of `ndarray.shape`) was invariant. It is therefore recommended not to use `Literal` within shape types on `numpy<2.1`. So with `numpy>=2.1` you can use `tuple[Literal[3], Literal[3]]` without problems, but with `numpy<2.1` you should use `tuple[int, int]` instead. See numpy/numpy#25729 and numpy/numpy#26081 for details.
In the same way as ArrayND for ndarray (shown for reference), its subtypes
np.ma.MaskedArray and np.matrix are also aliased:
| `ArrayND` (`np.ndarray`) | `MArray` (`np.ma.MaskedArray`) | `Matrix` (`np.matrix`) |
|---|---|---|
| ```python type ArrayND[ SCT: generic = generic, NDT: (int, ...) = (int, ...), ] = ndarray[NDT, dtype[SCT]] ``` | ```python type MArray[ SCT: generic = generic, NDT: (int, ...) = (int, ...), ] = ma.MaskedArray[NDT, dtype[SCT]] ``` | ```python type Matrix[ SCT: generic = generic, M: int = int, N: int = M, ] = matrix[(M, N), dtype[SCT]] ``` |
For masked arrays with specific ndim, you could also use one of the four
MArray{0,1,2,3}D aliases.
Array typeguards
To check whether a given object is an instance of Array{0,1,2,3,N}D, in a way that
static type-checkers also understand it, the following PEP 742 typeguards can
be used:
| typeguard (`optype.numpy._`) | narrows to (`optype.numpy._`) | shape type (`builtins._`) |
|---|---|---|
| `is_array_nd` | `ArrayND[ST]` | `tuple[int, ...]` |
| `is_array_0d` | `Array0D[ST]` | `tuple[()]` |
| `is_array_1d` | `Array1D[ST]` | `tuple[int]` |
| `is_array_2d` | `Array2D[ST]` | `tuple[int, int]` |
| `is_array_3d` | `Array3D[ST]` | `tuple[int, int, int]` |
These functions additionally accept an optional dtype argument, that can either be
a np.dtype[ST] instance, a type[ST], or something that has a dtype: np.dtype[ST]
attribute.
The signatures are almost identical to each other, and in the 0d case it roughly
looks like this:
```py
T = TypeVar("T", bound=np.generic, default=Any)
ToDType: TypeAlias = type[T] | np.dtype[T] | HasDType[np.dtype[T]]

def is_array_0d(a: object, /, dtype: ToDType[T] | None = None) -> TypeIs[Array0D[T]]: ...
```
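At runtime such a typeguard boils down to an `ndim` (and optionally dtype) check. A simplified, hypothetical re-implementation of the 2-d case, without the `dtype` parameter or the static `TypeIs` narrowing:

```python
import numpy as np


def is_array_2d(a: object) -> bool:
    # Simplified stand-in for optype.numpy.is_array_2d: at runtime the check
    # is just "is this an ndarray with exactly two dimensions?".
    return isinstance(a, np.ndarray) and a.ndim == 2


assert is_array_2d(np.zeros((2, 3)))
assert not is_array_2d(np.zeros(3))
assert not is_array_2d([[1, 2], [3, 4]])  # a list is not an ndarray
```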
Shape aliases
A shape is nothing more than a tuple of (non-negative) integers, i.e.
an instance of tuple[int, ...] such as (42,), (480, 720, 3) or ().
The length of a shape is often referred to as the number of dimensions
or the dimensionality of the array or scalar.
For arrays this is accessible through `np.ndarray.ndim`, which is
an alias for `len(np.ndarray.shape)`.
[!NOTE] Before NumPy 2, the maximum number of dimensions was 32, but it has since been increased to `ndim <= 64`.
To make typing the shape of an array easier, optype provides two families of
shape type aliases: AtLeast{N}D and AtMost{N}D.
The {N} should be replaced by the number of dimensions, which currently
is limited to 0, 1, 2, and 3.
Both of these families are generic, and their (optional) type parameters must
be either int (default), or a literal (non-negative) integer, i.e. like
typing.Literal[N: int].
The names AtLeast{N}D and AtMost{N}D are pretty much as self-explanatory:
- `AtLeast{N}D` is a `tuple[int, ...]` with `ndim >= N`
- `AtMost{N}D` is a `tuple[int, ...]` with `ndim <= N`
The shape aliases are roughly defined as:
| `N` | `ndim >= N` | `ndim <= N` | |
|---|---|---|---|
| 0 | ```python type AtLeast0D = (int, ...) ``` | ```python type AtMost0D = () ``` | |
| 1 | ```python type AtLeast1D = (int, *AtLeast0D) ``` | ```python type AtMost1D = AtMost0D | (int,) ``` | |
| 2 | ```python type AtLeast2D = ( tuple[int, int] | AtLeast3D[int] ) ``` | ```python type AtMost2D = AtMost1D | (int, int) ``` | |
| 3 | ```python type AtLeast3D = ( tuple[int, int, int] | tuple[int, int, int, int] | tuple[int, int, int, int, int] # etc... ) ``` | ```python type AtMost3D = AtMost2D | (int, int, int) ``` | |
The `AtLeast{N}D` aliases optionally accept a type argument that can either be `int` (default)
or `Any`. Passing `Any` turns it into a gradual tuple type, so that it can also be
assigned to compatible bounded shape-types. So `AtLeast1D[Any]` is assignable to
`tuple[int]`, whereas `AtLeast1D` (equivalent to `AtLeast1D[int]`) is not.
However, mypy currently has a bug that causes it to falsely reject such gradual shape-type assignments for `N >= 1`.
Array-likes
Similar to the numpy._typing._ArrayLike{}_co coercible array-like types,
optype.numpy provides the optype.numpy.To{}ND. Unlike the ones in numpy, these
don't accept "bare" scalar types (the __len__ method is required).
Additionally, there are the To{}1D, To{}2D, and To{}3D for vector-likes,
matrix-likes, and cuboid-likes, and the To{} aliases for "bare" scalar types.
| exact `builtins` scalars | exact `numpy` scalars | scalar-like | {1,2,3,N}-d array-like | strict {1,2,3}-d array-like |
|---|---|---|---|---|
| `False` | `False_` | `ToJustFalse` | | |
| `False \| 0` | `False_` | `ToFalse` | | |
| `True` | `True_` | `ToJustTrue` | | |
| `True \| 1` | `True_` | `ToTrue` | | |
| `bool` | `bool_` | `ToJustBool` | `ToJustBool{}D` | `ToJustBoolStrict{}D` |
| `bool \| 0 \| 1` | `bool_` | `ToBool` | `ToBool{}D` | `ToBoolStrict{}D` |
| `~int` | `integer` | `ToJustInt` | `ToJustInt{}D` | `ToJustIntStrict{}D` |
| `int \| bool` | `integer \| bool_` | `ToInt` | `ToInt{}D` | `ToIntStrict{}D` |
| | `float16` | `ToJustFloat16` | `ToJustFloat16_{}D` | `ToJustFloat16Strict{}D` |
| | `float16 \| int8 \| uint8 \| bool_` | `ToFloat16` | `ToFloat16_{}D` | `ToFloat16Strict{}D` |
| | `float32` | `ToJustFloat32` | `ToJustFloat32_{}D` | `ToJustFloat32Strict{}D` |
| | `float32 \| float16 \| int16 \| uint16 \| int8 \| uint8 \| bool_` | `ToFloat32` | `ToFloat32_{}D` | `ToFloat32Strict{}D` |
| `~float` | `float64` | `ToJustFloat64` | `ToJustFloat64_{}D` | `ToJustFloat64Strict{}D` |
| `float \| int \| bool` | `float64 \| float32 \| float16 \| integer \| bool_` | `ToFloat64` | `ToFloat64_{}D` | `ToFloat64Strict{}D` |
| `~float` | `floating` | `ToJustFloat` | `ToJustFloat{}D` | `ToJustFloatStrict{}D` |
| `float \| int \| bool` | `floating \| integer \| bool_` | `ToFloat` | `ToFloat{}D` | `ToFloatStrict{}D` |
| | `complex64` | `ToJustComplex64` | `ToJustComplex64_{}D` | `ToJustComplex64Strict{}D` |
| | `complex64 \| float32 \| float16 \| int16 \| uint16 \| int8 \| uint8 \| bool_` | `ToComplex64` | `ToComplex64_{}D` | `ToComplex64Strict{}D` |
| `~complex` | `complex128` | `ToJustComplex128` | `ToJustComplex128_{}D` | `ToJustComplex128Strict{}D` |
| `complex \| float \| int \| bool` | `complex128 \| complex64 \| float64 \| float32 \| float16 \| integer \| bool_` | `ToComplex128` | `ToComplex128_{}D` | `ToComplex128Strict{}D` |
| `~complex` | `complexfloating` | `ToJustComplex` | `ToJustComplex{}D` | `ToJustComplexStrict{}D` |
| `complex \| float \| int \| bool` | `number \| bool_` | `ToComplex` | `ToComplex{}D` | `ToComplexStrict{}D` |
| `complex \| float \| int \| bool \| bytes \| str` | `generic` | `ToScalar` | `ToArray{}D` | `ToArrayStrict{}D` |
[!NOTE] The `To*Strict{1,2,3}D` aliases were added in `optype 0.7.3`. These array-likes with strict shape-type require the input to be shape-typed. This means that e.g. `ToFloat1D` and `ToFloat2D` are disjoint (non-overlapping), which makes them suitable for overloading array-likes of a particular dtype for different numbers of dimensions.

[!NOTE] The `ToJust{Bool,Float,Complex}*` type aliases were added in `optype 0.8.0`. See `optype.Just` for more information.

[!NOTE] The `To[Just]{False,True}` type aliases were added in `optype 0.9.1`. These only include the `np.bool` types on `numpy>=2.2`. Before that, `np.bool` wasn't generic, making it impossible to distinguish between `np.False_` and `np.True_` using static typing.

[!NOTE] The `ToArrayStrict{1,2,3}D` types are generic since `optype 0.9.1`, analogous to their non-strict dual types, `ToArray{1,2,3}D`.

[!NOTE] The `To[Just]{Float16,Float32,Complex64}*` type aliases were added in `optype 0.12.0`.
Source code: optype/numpy/_to.py
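The disjointness of the strict aliases is what enables clean `@overload`s on dimensionality. The sketch below uses simplified `Sequence`-based stand-ins (`Vec1D`/`Mat2D` are hypothetical local names, not the real `ToFloatStrict1D`/`ToFloatStrict2D` aliases):

```python
from collections.abc import Sequence
from typing import overload

# Hypothetical stand-ins for disjoint strict 1-d / 2-d float array-likes.
Vec1D = Sequence[float]
Mat2D = Sequence[Sequence[float]]


@overload
def totals(x: Vec1D) -> float: ...
@overload
def totals(x: Mat2D) -> list[float]: ...
def totals(x):
    # Because the 1-d and 2-d input types are disjoint, each overload
    # unambiguously determines the return type.
    if x and isinstance(x[0], Sequence):
        return [sum(row) for row in x]
    return sum(x)


assert totals([1.0, 2.0]) == 3.0
assert totals([[1.0], [2.0, 3.0]]) == [1.0, 5.0]
```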
Literals
| Type Alias | String values |
| --------------- | ------------------------------------------------------------------ |
| ByteOrder | ByteOrderChar \| ByteOrderName \| {L, B, N, I, S} |
| ByteOrderChar | {<, >, =, \|} |
| ByteOrderName | {little, big, native, ignore, swap} |
| Casting | CastingUnsafe \| CastingSafe |
| CastingUnsafe | {unsafe} |
| CastingSafe | {no, equiv, safe, same_kind} |
| ConvolveMode | {full, same, valid} |
| Device | {cpu} |
| IndexMode | {raise, wrap, clip} |
| OrderCF | {C, F} |
| OrderACF | {A, C, F} |
| OrderKACF | {K, A, C, F} |
| PartitionKind | {introselect} |
| SortKind | {Q, quick[sort], M, merge[sort], H, heap[sort], S, stable[sort]} |
| SortSide | {left, right} |
compat submodule
Compatibility module for supporting a wide range of numpy versions (currently >=1.25).
It contains the abstract numeric scalar types, with numpy>=2.2
type-parameter defaults, which I explained in the release notes.
random submodule
SPEC 7-compatible type aliases.
The optype.numpy.random module provides three type aliases: RNG, ToRNG, and
ToSeed.
In general, the most useful one is ToRNG, which describes what can be
passed to numpy.random.default_rng. It is defined as the union of RNG, ToSeed,
and numpy.random.BitGenerator.
The RNG is the union type of numpy.random.Generator and its legacy dual type,
numpy.random.RandomState.
ToSeed accepts integer-like scalars, sequences, and arrays, as well as instances of
numpy.random.SeedSequence.
DType
In NumPy, a dtype (data type) object is an instance of the
`numpy.dtype[ST: np.generic]` type.
It's commonly used to convey metadata of a scalar type, e.g. within arrays.
Because the type parameter of np.dtype isn't optional, it could be more
convenient to use the alias optype.numpy.DType, which is defined as:
python
type DType[ST: np.generic = np.generic] = np.dtype[ST]
Apart from the "CamelCase" name, the only difference with np.dtype is that
the type parameter can be omitted, in which case it's equivalent to
np.dtype[np.generic], but shorter.
Scalar
The optype.numpy.Scalar interface is a generic runtime-checkable protocol,
that can be seen as a "more specific" np.generic, both in name, and from
a typing perspective.
Its type signature looks roughly like this:
python
type Scalar[
# The "Python type", so that `Scalar.item() -> PT`.
PT: object,
# The "N-bits" type (without having to deal with `npt.NBitBase`).
# It matches the `itemsize: NB` property.
NB: int = int,
] = ...
It can be used as e.g.
python
are_birds_real: Scalar[bool, Literal[1]] = np.bool_(True)
the_answer: Scalar[int, Literal[2]] = np.uint16(42)
alpha: Scalar[float, Literal[8]] = np.float64(1 / 137)
[!NOTE] The second type argument for `itemsize` can be omitted, which is equivalent to setting it to `int`, so `Scalar[PT]` and `Scalar[PT, int]` are equivalent.
UFunc
A large portion of numpy's public API consists of universal functions, often
denoted as ufuncs, which are (callable) instances of
np.ufunc.
[!TIP] Custom ufuncs can be created using `np.frompyfunc`, but also through a user-defined class that implements the required attributes and methods (i.e., duck typing).
But np.ufunc has a big issue; it accepts no type parameters.
This makes it very difficult to properly annotate its callable signature and
its literal attributes (e.g. .nin and .identity).
This is where optype.numpy.UFunc comes into play:
It's a runtime-checkable generic typing protocol, that has been thoroughly
type- and unit-tested to ensure compatibility with all of numpy's ufunc
definitions.
Its generic type signature looks roughly like:
python
type UFunc[
# The type of the (bound) `__call__` method.
Fn: CanCall = CanCall,
# The types of the `nin` and `nout` (readonly) attributes.
# Within numpy these match either `Literal[1]` or `Literal[2]`.
Nin: int = int,
Nout: int = int,
# The type of the `signature` (readonly) attribute;
# Must be `None` unless this is a generalized ufunc (gufunc), e.g.
# `np.matmul`.
Sig: str | None = str | None,
# The type of the `identity` (readonly) attribute (used in `.reduce`).
# Unless `Nin: Literal[2]`, `Nout: Literal[1]`, and `Sig: None`,
# this should always be `None`.
# Note that `complex` also includes `bool | int | float`.
Id: complex | bytes | str | None = float | None,
] = ...
[!NOTE] Unfortunately, the extra callable methods of `np.ufunc` (`at`, `reduce`, `reduceat`, `accumulate`, and `outer`) are incorrectly annotated (as `None` attributes, even though at runtime they're methods that raise a `ValueError` when called). This currently makes it impossible to properly type these in `optype.numpy.UFunc`; doing so would make it incompatible with numpy's ufuncs.
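The attributes that `UFunc`'s type parameters capture are plain runtime attributes of every ufunc, e.g.:

```python
import numpy as np

# np.add is a regular ufunc: 2 inputs, 1 output, no gufunc signature,
# and an identity of 0 (used by np.add.reduce).
assert (np.add.nin, np.add.nout) == (2, 1)
assert np.add.signature is None
assert np.add.identity == 0

# np.matmul is a *generalized* ufunc, so its signature is a str.
assert isinstance(np.matmul.signature, str)
```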
Any*Array and Any*DType
The `Any{Scalar}Array` type aliases describe array-likes that are coercible to a
`numpy.ndarray` with a specific dtype.
Unlike `numpy.typing.ArrayLike`, these `optype.numpy` aliases don't
accept "bare" scalar types such as `float` and `np.float64`. However, arrays of
zero dimensions like `onp.Array[tuple[()], np.float64]` are accepted.
This is in line with the behavior of `numpy.isscalar` on NumPy >= 2.
```py
import numpy.typing as npt
import optype.numpy as onp

v_np: npt.ArrayLike = 3.14  # accepted
v_op: onp.AnyArray = 3.14  # rejected

sigma1_np: npt.ArrayLike = [[0, 1], [1, 0]]  # accepted
sigma1_op: onp.AnyArray = [[0, 1], [1, 0]]  # accepted
```
[!NOTE] The `numpy.dtypes` module exists since NumPy 1.25, but its type annotations were incorrect before NumPy 2.1 (see numpy/numpy#27008).
See the docs for more info on the NumPy scalar type hierarchy.
Abstract types
| `numpy._` scalar | `numpy._` scalar base | `optype.numpy._` array-like | `optype.numpy._` dtype-like |
|---|---|---|---|
| `generic` | | `AnyArray` | `AnyDType` |
| `number` | `generic` | `AnyNumberArray` | `AnyNumberDType` |
| `integer` | `number` | `AnyIntegerArray` | `AnyIntegerDType` |
| `inexact` | `number` | `AnyInexactArray` | `AnyInexactDType` |
| `unsignedinteger` | `integer` | `AnyUnsignedIntegerArray` | `AnyUnsignedIntegerDType` |
| `signedinteger` | `integer` | `AnySignedIntegerArray` | `AnySignedIntegerDType` |
| `floating` | `inexact` | `AnyFloatingArray` | `AnyFloatingDType` |
| `complexfloating` | `inexact` | `AnyComplexFloatingArray` | `AnyComplexFloatingDType` |
Unsigned integers
| `numpy._` scalar | scalar base | `numpy.dtypes._` dtype | `optype.numpy._` array-like | `optype.numpy._` dtype-like |
|---|---|---|---|---|
| `uint_`[^5] | `unsignedinteger` | | `AnyUIntArray` | `AnyUIntDType` |
| `uintp` | `unsignedinteger` | | `AnyUIntPArray` | `AnyUIntPDType` |
| `uint8`, `ubyte` | `unsignedinteger` | `UInt8DType` | `AnyUInt8Array` | `AnyUInt8DType` |
| `uint16`, `ushort` | `unsignedinteger` | `UInt16DType` | `AnyUInt16Array` | `AnyUInt16DType` |
| `uint32`[^6] | `unsignedinteger` | `UInt32DType` | `AnyUInt32Array` | `AnyUInt32DType` |
| `uint64` | `unsignedinteger` | `UInt64DType` | `AnyUInt64Array` | `AnyUInt64DType` |
| `uintc`[^6] | `unsignedinteger` | `UIntDType` | `AnyUIntCArray` | `AnyUIntCDType` |
| `ulong`[^7] | `unsignedinteger` | `ULongDType` | `AnyULongArray` | `AnyULongDType` |
| `ulonglong` | `unsignedinteger` | `ULongLongDType` | `AnyULongLongArray` | `AnyULongLongDType` |
Signed integers
| `numpy._` scalar | scalar base | `numpy.dtypes._` dtype | `optype.numpy._` array-like | `optype.numpy._` dtype-like |
|---|---|---|---|---|
| `int_`[^5] | `signedinteger` | | `AnyIntArray` | `AnyIntDType` |
| `intp` | `signedinteger` | | `AnyIntPArray` | `AnyIntPDType` |
| `int8`, `byte` | `signedinteger` | `Int8DType` | `AnyInt8Array` | `AnyInt8DType` |
| `int16`, `short` | `signedinteger` | `Int16DType` | `AnyInt16Array` | `AnyInt16DType` |
| `int32`[^6] | `signedinteger` | `Int32DType` | `AnyInt32Array` | `AnyInt32DType` |
| `int64` | `signedinteger` | `Int64DType` | `AnyInt64Array` | `AnyInt64DType` |
| `intc`[^6] | `signedinteger` | `IntDType` | `AnyIntCArray` | `AnyIntCDType` |
| `long`[^7] | `signedinteger` | `LongDType` | `AnyLongArray` | `AnyLongDType` |
| `longlong` | `signedinteger` | `LongLongDType` | `AnyLongLongArray` | `AnyLongLongDType` |
[^5]: Since NumPy 2, np.uint and np.int_ are aliases for np.uintp and np.intp, respectively.
[^6]: On unix-based platforms np.[u]intc are aliases for np.[u]int32.
[^7]: On NumPy 1, np.uint and np.int_ were what are now (in NumPy 2) the np.ulong and np.long types, respectively.
Real floats
| `numpy._` scalar | scalar base | `numpy.dtypes._` dtype | `optype.numpy._` array-like | `optype.numpy._` dtype-like |
|---|---|---|---|---|
| `float16`, `half` | `np.floating` | `Float16DType` | `AnyFloat16Array` | `AnyFloat16DType` |
| `float32`, `single` | `np.floating` | `Float32DType` | `AnyFloat32Array` | `AnyFloat32DType` |
| `float64`, `double` | `np.floating & builtins.float` | `Float64DType` | `AnyFloat64Array` | `AnyFloat64DType` |
| `longdouble`[^13] | `np.floating` | `LongDoubleDType` | `AnyLongDoubleArray` | `AnyLongDoubleDType` |
[^13]: Depending on the platform, np.longdouble is (almost always) an alias for either float128,
float96, or (sometimes) float64.
Complex floats
| `numpy._` scalar | scalar base | `numpy.dtypes._` dtype | `optype.numpy._` array-like | `optype.numpy._` dtype-like |
|---|---|---|---|---|
| `complex64`, `csingle` | `complexfloating` | `Complex64DType` | `AnyComplex64Array` | `AnyComplex64DType` |
| `complex128`, `cdouble` | `complexfloating & builtins.complex` | `Complex128DType` | `AnyComplex128Array` | `AnyComplex128DType` |
| `clongdouble`[^16] | `complexfloating` | `CLongDoubleDType` | `AnyCLongDoubleArray` | `AnyCLongDoubleDType` |
[^16]: Depending on the platform, np.clongdouble is (almost always) an alias for either complex256,
complex192, or (sometimes) complex128.
"Flexible"
Scalar types with "flexible" length, whose values have a (constant) length
that depends on the specific np.dtype instantiation.
| `numpy._` scalar | scalar base | `numpy.dtypes._` dtype | `optype.numpy._` array-like | `optype.numpy._` dtype-like |
|---|---|---|---|---|
| `str_` | `character` | `StrDType` | `AnyStrArray` | `AnyStrDType` |
| `bytes_` | `character` | `BytesDType` | `AnyBytesArray` | `AnyBytesDType` |
| | | `dtype("c")` | | `AnyBytes8DType` |
| `void` | `flexible` | `VoidDType` | `AnyVoidArray` | `AnyVoidDType` |
Other types
| `numpy._` scalar | scalar base | `numpy.dtypes._` dtype | `optype.numpy._` array-like | `optype.numpy._` dtype-like |
|---|---|---|---|---|
| `bool_`[^0] | `generic` | `BoolDType` | `AnyBoolArray` | `AnyBoolDType` |
| `object_` | `generic` | `ObjectDType` | `AnyObjectArray` | `AnyObjectDType` |
| `datetime64` | `generic` | `DateTime64DType` | `AnyDateTime64Array` | `AnyDateTime64DType` |
| `timedelta64` | *`generic`*[^22] | `TimeDelta64DType` | `AnyTimeDelta64Array` | `AnyTimeDelta64DType` |
| [^2056] | | `StringDType` | `AnyStringArray` | `AnyStringDType` |
[^0]: Since NumPy 2, np.bool is preferred over np.bool_, which only exists for backwards compatibility.
[^22]: At runtime np.timedelta64 is a subclass of np.signedinteger, but this is currently not
reflected in the type annotations.
[^2056]: The `np.dtypes.StringDType` has no associated numpy scalar type, and its `.type` attribute returns the
`builtins.str` type instead. But from a typing perspective, such a `np.dtype[builtins.str]` isn't a valid type.
Low-level interfaces
Within optype.numpy there are several Can* (single-method) and Has*
(single-attribute) protocols, related to the __array_*__ dunders of the
NumPy Python API.
These typing protocols are, just like the optype.Can* and optype.Has* ones,
runtime-checkable and extensible (i.e. not @final).
[!TIP] All type parameters of these protocols can be omitted, which is equivalent to passing its upper type bound.
| Protocol type signature | Implements | NumPy docs |
|---|---|---|
| ```python class CanArray[ ND: tuple[int, ...] = ..., ST: np.generic = ..., ]: ... ``` | ```python def __array__[RT = ST]( _, dtype: DType[RT] | None = ..., ) -> Array[ND, RT] ``` | [User Guide: Interoperability with NumPy][DOC-ARRAY] |
| ```python class CanArrayUFunc[ U: UFunc = ..., R: object = ..., ]: ... ``` | ```python def __array_ufunc__( _, ufunc: U, method: LiteralString, *args: object, **kwargs: object, ) -> R: ... ``` | [NEP 13][NEP13] |
| ```python class CanArrayFunction[ F: CanCall[..., object] = ..., R = object, ]: ... ``` | ```python def __array_function__( _, func: F, types: CanIterSelf[type[CanArrayFunction]], args: tuple[object, ...], kwargs: Mapping[str, object], ) -> R: ... ``` | [NEP 18][NEP18] |
| ```python class CanArrayFinalize[ T: object = ..., ]: ... ``` | ```python def __array_finalize__(_, obj: T): ... ``` | [User Guide: Subclassing ndarray][DOC-AFIN] |
| ```python class CanArrayWrap: ... ``` | ```python def __array_wrap__[ND, ST]( _, array: Array[ND, ST], context: (...) | None = ..., return_scalar: bool = ..., ) -> Self | Array[ND, ST] ``` | [API: Standard array subclasses][REF_ARRAY-WRAP] |
| ```python class HasArrayInterface[ V: Mapping[str, object] = ..., ]: ... ``` | ```python __array_interface__: V ``` | [API: The array interface protocol][REF_ARRAY-INTER] |
| ```python class HasArrayPriority: ... ``` | ```python __array_priority__: float ``` | [API: Standard array subclasses][REF_ARRAY-PRIO] |
| ```python class HasDType[ DT: DType = ..., ]: ... ``` | ```python dtype: DT ``` | [API: Specifying and constructing data types][REF_DTYPE] |
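As with the other optype protocols, these are structural, so e.g. a `HasDType` check passes for any object with a `dtype` attribute. A stdlib-only sketch (the real `optype.numpy.HasDType` is generic over the dtype; this local version is simplified):

```python
import numpy as np
from typing import Protocol, runtime_checkable


@runtime_checkable
class HasDType(Protocol):
    """Simplified, non-generic stand-in for optype.numpy.HasDType."""

    @property
    def dtype(self) -> np.dtype: ...


class Tagged:
    """Not an array, but it still advertises a dtype."""

    dtype = np.dtype(np.float64)


assert isinstance(np.zeros(2), HasDType)  # ndarray has a .dtype attribute
assert isinstance(Tagged(), HasDType)     # so does this plain class
assert not isinstance(object(), HasDType)
```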
Owner
- Name: Joren Hammudoglu
- Login: jorenham
- Kind: user
- Location: Delft, the Netherlands
- Repositories: 123
- Profile: https://github.com/jorenham
NumPy maintainer, author of scipy-stubs, numtype, optype, and Lmo.
GitHub Events
Total
- Create event: 185
- Release event: 13
- Issues event: 64
- Watch event: 46
- Delete event: 183
- Issue comment event: 45
- Push event: 309
- Pull request review comment event: 17
- Pull request review event: 15
- Pull request event: 350
- Fork event: 3
Last Year
- Create event: 185
- Release event: 13
- Issues event: 64
- Watch event: 46
- Delete event: 183
- Issue comment event: 45
- Push event: 309
- Pull request review comment event: 17
- Pull request review event: 15
- Pull request event: 350
- Fork event: 3
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 59
- Total pull requests: 573
- Average time to close issues: 19 days
- Average time to close pull requests: 1 day
- Total issue authors: 8
- Total pull request authors: 4
- Average comments per issue: 0.58
- Average comments per pull request: 0.12
- Merged pull requests: 457
- Bot issues: 1
- Bot pull requests: 99
Past Year
- Issues: 43
- Pull requests: 318
- Average time to close issues: 7 days
- Average time to close pull requests: about 10 hours
- Issue authors: 7
- Pull request authors: 4
- Average comments per issue: 0.6
- Average comments per pull request: 0.06
- Merged pull requests: 265
- Bot issues: 0
- Bot pull requests: 30
Top Authors
Issue Authors
- jorenham (52)
- gdfast (1)
- RandallPittmanOrSt (1)
- kam193 (1)
- jolars (1)
- schirrmacher (1)
- nstarman (1)
- pre-commit-ci[bot] (1)
Pull Request Authors
- jorenham (471)
- dependabot[bot] (58)
- pre-commit-ci[bot] (41)
- kam193 (2)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 2
-
Total downloads:
- pypi 890,101 last-month
-
Total dependent packages: 0
(may contain duplicates) -
Total dependent repositories: 0
(may contain duplicates) - Total versions: 59
- Total maintainers: 2
proxy.golang.org: github.com/jorenham/optype
- Documentation: https://pkg.go.dev/github.com/jorenham/optype#section-documentation
- License: bsd-3-clause
-
Latest release: v0.13.4
published 6 months ago
Rankings
pypi.org: optype
Building Blocks for Precise & Flexible Type Hints
- Documentation: https://github.com/jorenham/optype/blob/master/README.md
- License: bsd-3-clause
-
Latest release: 0.13.4
published 6 months ago
Rankings
Maintainers (2)
Funding
- https://github.com/sponsors/jorenham
Dependencies
- actions/checkout v4 composite
- actions/setup-python v5 composite
- cfgv 3.4.0
- codespell 2.2.6
- colorama 0.4.6
- distlib 0.3.8
- filelock 3.13.1
- identify 2.5.35
- iniconfig 2.0.0
- nodeenv 1.8.0
- packaging 23.2
- platformdirs 4.2.0
- pluggy 1.4.0
- pre-commit 3.6.2
- pyright 1.1.351
- pytest 8.0.1
- pytest-github-actions-annotate-failures 0.2.0
- pyyaml 6.0.1
- ruff 0.2.2
- setuptools 69.1.0
- virtualenv 20.25.1
- codespell ^2.2.6 develop
- pre-commit ^3.6.2 develop
- pyright ^1.1.351 develop
- pytest ^8.0.1 develop
- ruff ^0.2.2 develop
- pytest-github-actions-annotate-failures >=0.2,<1.0 github
- python ^3.12