https://github.com/althonos/nanoset.py

A memory-optimized wrapper for Python sets likely to be empty.


Science Score: 10.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: ncbi.nlm.nih.gov
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.0%) to scientific vocabulary

Keywords

memory-allocation python-library rust-ffi wrapper

Keywords from Contributors

embedded progress-bar bioinformatics obo obo-graphs obofoundry ontology owl semantic-web embedded-rust
Last synced: 5 months ago

Repository

A memory-optimized wrapper for Python sets likely to be empty.

Basic Info
  • Host: GitHub
  • Owner: althonos
  • License: mit
  • Language: Python
  • Default Branch: master
  • Homepage:
  • Size: 396 KB
Statistics
  • Stars: 8
  • Watchers: 2
  • Forks: 2
  • Open Issues: 1
  • Releases: 0
Topics
memory-allocation python-library rust-ffi wrapper
Created over 6 years ago · Last pushed over 3 years ago
Metadata Files
Readme Changelog License

README.md

nanoset.py

A memory-optimized wrapper for Python sets likely to be empty.


⏱️ TL;DR

Save up to 85% of the memory used by empty set instances in your code with a single line of code:

```python
from nanoset import PicoSet as set
```


🗺️ Overview

🐏 About Python memory usage

Python is a great programming language (fight me), but sometimes you start questioning why it does things in certain ways. Since Python 2.3, the standard library provides the set collection, a specialized container for membership testing. Unlike the ubiquitous list collection, set is not ordered (or, more accurately, does not let you access the order in which it stores its elements). The other feature of set is that, just like its mathematical counterpart, it does not allow duplicates, which is very useful for some algorithms. However, sets are memory-expensive:

```python
>>> import sys
>>> sys.getsizeof(list())
56
>>> sys.getsizeof(set())
216
```

An empty set takes more than three times the memory of an empty list! For data structures or objects with many set attributes, empty sets can quickly cause a large part of the memory to be used for nothing. This is even more frustrating when you are used to Rust, where most collections allocate lazily. This is where nanoset comes to the rescue:

```python
>>> import nanoset
>>> sys.getsizeof(nanoset.NanoSet())
48
>>> sys.getsizeof(nanoset.PicoSet())
32
```

Actually, that's a lie, but keep reading.

💡 Simple example use case

Let's imagine we are building an ordered graph data structure, in which we may want to store taxonomic data, or any other kind of hierarchy. We can simply define the graph and its nodes with the two following classes:

```python
class Graph:
    root: Node
    nodes: Dict[str, Node]

class Node:
    neighbors: Set[Node]
```

This makes adding an edge and querying for the existence of an edge between two nodes an O(1) operation, and iterating over all the nodes an O(n) operation, which is most likely what we want here. We use set and not list because we want to avoid storing duplicate edges, which is a sensible choice. But now let's look at the statistics of the NCBITaxon project, the database for Organismal Classification developed by the US National Center for Biotechnology Information:


  • Number of classes*: 1,595,237
  • Number of individuals: 0
  • Number of properties: 0
  • Classes without definition: 1,595,237
  • Classes without label: 0
  • Average number of children: 12
  • Classes with a single child: 40,319
  • Maximum number of children: 41,761
  • Classes with more than 25 children: 0
  • Classes with more than 1 parent: 0
  • Maximum depth: 38
  • Number of leaves**: 1,130,671

According to these, we are going to have 1,130,671 leaves for a total of 1,595,237 nodes, which means 70.8% of empty sets. Now you may think:

Ok, I got this. But in this case, I just need a special case for leaves, where instead of storing an empty set of neighbors, I store a reference to None when that set would be empty. I can then replace that reference with an actual set only when I want to add new edges from that node. Problem solved!

Well, glad we are on the same level: this is what nanoset does for you!
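The manual workaround described in the quote above (storing a reference to None instead of an empty set, and allocating only on the first insertion) might look like the following sketch; the Node class and its method names are hypothetical, for illustration only:

```python
from typing import Optional, Set


class Node:
    """Graph node that lazily allocates its neighbor set.

    `neighbors` stays None until the first edge is added, so leaf
    nodes never pay for an empty set allocation.
    """

    def __init__(self, name: str) -> None:
        self.name = name
        self.neighbors: Optional[Set["Node"]] = None

    def add_edge(self, other: "Node") -> None:
        # Allocate the set only when the first edge appears.
        if self.neighbors is None:
            self.neighbors = set()
        self.neighbors.add(other)

    def has_edge(self, other: "Node") -> bool:
        # Membership testing stays O(1); None means "no edges yet".
        return self.neighbors is not None and other in self.neighbors


root, leaf = Node("root"), Node("leaf")
root.add_edge(leaf)
assert root.has_edge(leaf)
assert leaf.neighbors is None  # the leaf never allocated a set
```

The downside, as the dialogue goes on to note, is that every attribute access in your own code now has to handle the None case, which is exactly the boilerplate nanoset hides behind a set-compatible API.
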

🔨 Implementation

Actually, it's not magic at all. Just imagine a class NanoSet that works as a proxy to an actual Python set it wraps, but which is only allocated when some data actually needs to be stored:

```python
import collections.abc

class NanoSet(collections.abc.Set):

    def __init__(self, iterable=None):
        self.inner = None if iterable is None else set(iterable)

    def add(self, element):
        if self.inner is None:
            self.inner = set()
        self.inner.add(element)

    # ... the rest of the `set` API ...
```

That's about it! However, doing it like so in Python would not be super efficient, as the resulting object would be 48 bytes. Using slots, this can be reduced to 40 bytes, which is on par with what we get with NanoSet.
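For reference, the slotted pure-Python version of the proxy above can be sketched like this (SlottedNanoSet is a hypothetical name used only for this illustration; the real NanoSet is implemented in Rust):

```python
import sys


class SlottedNanoSet:
    """Pure-Python sketch of the lazy proxy, with __slots__ to drop
    the per-instance __dict__ and save a few bytes per object."""

    __slots__ = ("inner",)

    def __init__(self, iterable=None):
        self.inner = None if iterable is None else set(iterable)

    def add(self, element):
        # Allocate the inner set only on first insertion.
        if self.inner is None:
            self.inner = set()
        self.inner.add(element)

    def __contains__(self, element):
        return self.inner is not None and element in self.inner

    def __len__(self):
        return 0 if self.inner is None else len(self.inner)


# Exact figures depend on the CPython version, but the empty wrapper
# stays several times smaller than an empty set (216 bytes here).
print(sys.getsizeof(SlottedNanoSet()), "<", sys.getsizeof(set()))
```
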

Note that these values only hold while the inner set is empty! When actually allocating the set to store our values, we allocate an additional 216 bytes of data. This means that using NanoSet creates an overhead, since a non-empty set will now weigh 264 bytes (248 bytes for PicoSet).

Well, I was way better off with my approach of storing Optional[Set] everywhere then, I don't want to pay any additional cost for nonempty sets!

Sure. But that would mean changing your whole code. And actually, you may not gain that much memory from doing that compared to using nanoset, since the only time the wrapper performs badly is when you have a load factor of more than 90%. Furthermore, just to give you some perspective, sys.getsizeof(1) is 28 bytes.

By the way, you didn't mention PicoSet. How did you manage to get that down to 32 bytes, when a slotted Python object can't be less than 40 bytes?

Easy: PicoSet is basically NanoSet, but without an implementation of the Garbage Collector protocol. This saves us 8 bytes of object memory, but comes with a drawback: the garbage collector cannot see the set allocated inside the PicoSet. This does not change anything for execution, but debugging with a memory profiler will be harder. Here is an example where we allocate 1,000,000 singletons first with NanoSet, then with PicoSet, using guppy3 to check the heap:

```python
>>> l = [nanoset.NanoSet({x}) for x in range(1000000)]
>>> guppy.hpy().heap()
Partition of a set of 3036763 objects. Total size = 304996526 bytes.
 Index  Count   %      Size   % Cumulative  % Kind (class / dict of class)
     0 1000044  33 216105248  71  216105248 71 set
     1 1000000  33  48000000  16  264105248 87 nanoset.NanoSet
   ...
     3     113   0   8716880   3  300851404 99 list
   ...
```

```python
>>> l = [nanoset.PicoSet({x}) for x in range(1000000)]
>>> guppy.hpy().heap()
Partition of a set of 1036905 objects. Total size = 44998965 bytes.
 Index  Count   %      Size   % Cumulative  % Kind (class / dict of class)
     0 1000000  96  32000000  71   32000000 71 nanoset.PicoSet
     1      96   0   8712752  24   32712752 89 list
   ...
```

On the second run, we have about the same order of allocated memory, saving 16 MB (16 bytes saved by switching from NanoSet to PicoSet times 1,000,000 instances). However, the garbage collector has no idea where some of the memory is, because PicoSet hides the sets it allocates (this is fine: it will be deallocated along with the PicoSet).

As such, I'd advise avoiding PicoSet when debugging, which can be done easily with Python's __debug__ flag:

```python
if __debug__:
    from nanoset import NanoSet as set
else:
    from nanoset import PicoSet as set
```

This will cause PicoSet to be used instead of NanoSet when running Python with the -O flag.

📈 Statistics

Okay, so let's do some maths. With S = 216 the size of an allocated set, s the size of the wrapper (48 for NanoSet, 32 for PicoSet), and x the percentage of nonempty sets in our data structure, the relative size of our sets (per 100 sets) is:

  • if we're using set: (100 * S) / (100 * S) = 100% (we use that as a reference, since plain sets are allocated even when empty)
  • if we're using nanoset: ((S + s) * x + s * (100 - x)) / (100 * S)

This gives us the following graph, which shows how much memory you can save depending on the ratio of empty sets you have at runtime:

(figure: relative memory usage of nanoset as a function of the ratio of empty sets)

If we get back to our NCBITaxon example, we have a total of 1,595,237 nodes and 1,130,671 leaves, which means that by using plain sets we are allocating 1,595,237 * 216 bytes ≈ 328.6 MiB of memory for set instances alone after the whole taxonomy is loaded. If we use NanoSet however, we can reduce this to 168.7 MiB, or even to 144.4 MiB with PicoSet! We just saved nearly half the memory simply by using NanoSet in place of set.
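As a quick sanity check, these figures can be recomputed from the sizes measured earlier (216 bytes for an allocated set, 48 and 32 bytes for the two wrappers):

```python
# Back-of-the-envelope check of the NCBITaxon memory figures.
S_SET = 216              # sys.getsizeof(set()) on 64-bit CPython
S_NANO, S_PICO = 48, 32  # wrapper sizes measured above

total_nodes = 1_595_237
leaves = 1_130_671       # nodes whose neighbor set stays empty
nonempty = total_nodes - leaves

MiB = 1024 * 1024
plain = total_nodes * S_SET
nano = nonempty * (S_SET + S_NANO) + leaves * S_NANO
pico = nonempty * (S_SET + S_PICO) + leaves * S_PICO

print(f"set:     {plain / MiB:.1f} MiB")  # -> 328.6 MiB
print(f"NanoSet: {nano / MiB:.1f} MiB")   # -> 168.7 MiB
print(f"PicoSet: {pico / MiB:.1f} MiB")   # -> 144.4 MiB
```
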

🔧 Installing

This module is implemented in Rust, but native Python wheels are compiled for the following platforms:

  • Windows x86-64: CPython 3.5, 3.6, 3.7, 3.8
  • Linux x86-64: CPython 3.5, 3.6, 3.7, 3.8
  • OSX x86-64: CPython 3.5, 3.6, 3.7, 3.8

If your platform is not among these, you will need a working Rust nightly toolchain as well as the setuptools-rust library installed to build the extension module.

Then, simply install with pip:

```console
$ pip install --user nanoset
```

📖 API Reference

Well, this is a comprehensive wrapper for set, so you can just read the standard library documentation. Except for some very particular edge-cases, NanoSet and PicoSet both pass the set test suite of CPython.

There are however things you can't do:

  • Subclassing a PicoSet or a NanoSet.
  • Weakrefing a PicoSet or a NanoSet.
  • Checking for membership in a plain set or frozenset with implicit conversion to frozenset.
  • Creating a dict from a PicoSet or a NanoSet without rehashing keys.

📜 License

This library is provided under the open-source MIT license.

Owner

  • Name: Martin Larralde
  • Login: althonos
  • Kind: user
  • Location: Heidelberg, Germany
  • Company: EMBL / LUMC, @zellerlab

PhD candidate in Bioinformatics, passionate about programming, SIMD-enthusiast, Pythonista, Rustacean. I write poems, and sometimes they are executable.

GitHub Events

Total
Last Year

Committers

Last synced: 10 months ago

All Time
  • Total Commits: 89
  • Total Committers: 4
  • Avg Commits per committer: 22.25
  • Development Distribution Score (DDS): 0.034
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Martin Larralde m****e@e****r 86
imgbot[bot] 3****] 1
dependabot-preview[bot] 2****] 1
ABDUL NIYAS P M a****e@g****m 1
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 4
  • Total pull requests: 3
  • Average time to close issues: 2 days
  • Average time to close pull requests: 20 days
  • Total issue authors: 4
  • Total pull request authors: 3
  • Average comments per issue: 0.75
  • Average comments per pull request: 0.67
  • Merged pull requests: 3
  • Bot issues: 1
  • Bot pull requests: 2
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • Theelx (1)
  • mpreusse (1)
  • alexhenrie (1)
  • dependabot-preview[bot] (1)
Pull Request Authors
  • abdulniyaspm (1)
  • imgbot[bot] (1)
  • dependabot-preview[bot] (1)
Top Labels
Issue Labels
enhancement (2)
Pull Request Labels
dependencies (1)

Packages

  • Total packages: 2
  • Total downloads:
    • pypi 631 last-month
    • cargo 6,456 total
  • Total dependent packages: 1
    (may contain duplicates)
  • Total dependent repositories: 3
    (may contain duplicates)
  • Total versions: 12
  • Total maintainers: 2
pypi.org: nanoset

A memory-optimized wrapper for Python sets likely to be empty.

  • Versions: 7
  • Dependent Packages: 1
  • Dependent Repositories: 3
  • Downloads: 631 Last month
  • Docker Downloads: 0
Rankings
Docker downloads count: 4.0%
Dependent packages count: 4.8%
Dependent repos count: 9.0%
Average: 12.2%
Downloads: 17.6%
Stargazers count: 18.5%
Forks count: 19.2%
Maintainers (1)
Last synced: 6 months ago
crates.io: nanoset-py

A memory-optimized wrapper for Python sets likely to be empty.

  • Versions: 5
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 6,456 Total
Rankings
Forks count: 24.8%
Dependent repos count: 29.3%
Stargazers count: 29.5%
Average: 31.2%
Dependent packages count: 33.8%
Downloads: 38.6%
Maintainers (1)
Last synced: 6 months ago

Dependencies

Cargo.lock cargo
  • autocfg 1.0.0
  • bitflags 1.2.1
  • built 0.4.2
  • cargo-lock 4.0.1
  • cfg-if 0.1.10
  • chrono 0.4.11
  • cloudabi 0.0.3
  • ctor 0.1.15
  • ghost 0.1.1
  • idna 0.2.0
  • indoc 0.3.5
  • indoc-impl 0.3.5
  • inventory 0.1.6
  • inventory-impl 0.1.6
  • lazy_static 1.4.0
  • libc 0.2.71
  • lock_api 0.3.4
  • matches 0.1.8
  • num-integer 0.1.42
  • num-traits 0.2.11
  • parking_lot 0.10.2
  • parking_lot_core 0.7.2
  • paste 0.1.16
  • paste-impl 0.1.16
  • percent-encoding 2.1.0
  • proc-macro-hack 0.5.16
  • proc-macro2 1.0.18
  • pyo3 0.11.0
  • pyo3-built 0.4.2
  • pyo3-derive-backend 0.11.0
  • pyo3cls 0.11.0
  • quote 1.0.7
  • redox_syscall 0.1.56
  • scopeguard 1.1.0
  • semver 0.9.0
  • semver-parser 0.7.0
  • serde 1.0.111
  • serde_derive 1.0.111
  • smallvec 1.4.0
  • syn 1.0.30
  • time 0.1.43
  • toml 0.5.6
  • unicode-bidi 0.3.4
  • unicode-normalization 0.1.12
  • unicode-xid 0.2.0
  • unindent 0.1.5
  • url 2.1.1
  • winapi 0.3.8
  • winapi-i686-pc-windows-gnu 0.4.0
  • winapi-x86_64-pc-windows-gnu 0.4.0
Cargo.toml cargo
  • lazy_static 1.4.0 development
  • pyo3 0.11.0
  • pyo3-built 0.4.2
ci/requirements.txt pypi
  • packaging *
  • semantic_version *
  • setuptools *
  • setuptools-rust *
  • toml *
  • twine *
  • wheel *