esi-acme

Asynchronous Computing Made ESI

https://github.com/esi-neuroscience/acme

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.2%) to scientific vocabulary

Keywords

asynchronous high-performance-computing parallel python
Last synced: 4 months ago

Repository

Asynchronous Computing Made ESI

Basic Info
Statistics
  • Stars: 12
  • Watchers: 5
  • Forks: 2
  • Open Issues: 1
  • Releases: 15
Topics
asynchronous high-performance-computing parallel python
Created about 5 years ago · Last pushed 7 months ago
Metadata Files
Readme Changelog License Citation Security

README.md


ACME: Asynchronous Computing Made ESI


Table of Contents

  1. Summary
  2. Installation
  3. Usage
  4. Handling Results
  5. Debugging
  6. Documentation and Contact

Summary

The objective of ACME (pronounced "ak-mee") is to provide easy-to-use wrappers for calling Python functions concurrently ("embarrassingly parallel workloads"). ACME is developed at the Ernst Strüngmann Institute (ESI) gGmbH for Neuroscience in Cooperation with Max Planck Society and released free of charge under the BSD 3-Clause "New" or "Revised" License. ACME relies heavily on the concurrent processing library dask and was primarily designed to facilitate the use of SLURM on the ESI HPC cluster (although other HPC infrastructure running SLURM can be leveraged as well). Local multi-processing hardware (i.e., multi-core CPUs) is fully supported too. ACME is itself used as the parallelization engine of SyNCoPy.

Installation

ACME can be installed with pip

```shell
pip install esi-acme
```

or via conda

```shell
conda install -c conda-forge esi-acme
```

To get the latest development version, simply clone our GitHub repository:

```shell
git clone https://github.com/esi-neuroscience/acme.git
cd acme/
pip install -e .
```
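
To check that the installation worked, importing the package and printing its version should succeed (a minimal sketch; the presence of a `__version__` attribute is an assumption, not something stated in this README):

```python
import acme

# Assumption: esi-acme exposes its release string as acme.__version__
print(acme.__version__)
```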

Usage

Basic Examples

Simplest use, everything is done automatically.

```python
from acme import ParallelMap

def f(x, y, z=3):
    return (x + y) * z

with ParallelMap(f, [2, 4, 6, 8], 4) as pmap:
    pmap.compute()
```

See also our Quickstart Guide.

Intermediate Examples

Set number of function calls via n_inputs

```python
import numpy as np
from acme import ParallelMap

def f(x, y, z=3, w=np.zeros((3, 1)), **kwargs):
    return (sum(x) + y) * z * w.max()

pmap = ParallelMap(f, [2, 4, 6, 8], [2, 2], z=np.array([1, 2]), w=np.ones((8, 1)), n_inputs=2)

with pmap as p:
    p.compute()
```

More details in Override Automatic Input Argument Distribution
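
As an illustration of what `n_inputs=2` does in the example above (a hypothetical serial equivalent, not output from ACME; the exact distribution rules are documented in the guide linked above): arguments whose length matches `n_inputs` (here `y = [2, 2]` and `z = np.array([1, 2])`) are presumably split across the two calls, while `x` and `w` are passed whole to each call.

```python
import numpy as np

def f(x, y, z=3, w=np.zeros((3, 1)), **kwargs):
    return (sum(x) + y) * z * w.max()

# Hypothetical serial equivalent of the two concurrent calls set up above:
# y and z are distributed element-wise, x and w are broadcast unchanged.
results = [
    f([2, 4, 6, 8], 2, z=1, w=np.ones((8, 1))),
    f([2, 4, 6, 8], 2, z=2, w=np.ones((8, 1))),
]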

Advanced Use

Allocate custom client object and recycle it for several computations (use slurm_cluster_setup on non-ESI HPC infrastructure or local_cluster_setup when working on your local machine)

```python
import numpy as np
from acme import ParallelMap, esi_cluster_setup

def f(x, y, z=3, w=np.zeros((3, 1)), **kwargs):
    return (sum(x) + y) * z * w.max()

def g(x, y, z=3, w=np.zeros((3, 1)), **kwargs):
    return (max(x) + y) * z * w.sum()

n_workers = 200
client = esi_cluster_setup(partition="8GBXS", n_workers=n_workers)

x = [2, 4, 6, 8]
z = range(n_workers)
w = np.ones((8, 1))

pmap = ParallelMap(f, x, np.random.rand(n_workers), z=z, w=w, n_inputs=n_workers)
with pmap as p:
    p.compute()

pmap = ParallelMap(g, x, np.random.rand(n_workers), z=z, w=w, n_inputs=n_workers)
with pmap as p:
    p.compute()
```

For more information see Reuse Worker Clients
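
For working on a local machine, the paragraph above mentions local_cluster_setup. A minimal sketch, assuming it can be called without arguments and that the resulting client is picked up by subsequent ParallelMap calls (consult the online documentation for the exact signature):

```python
from acme import ParallelMap, local_cluster_setup

def f(x, y, z=3):
    return (x + y) * z

# Assumption: a parameter-free call starts a dask client backed by the
# local CPU cores; ParallelMap then reuses this client instead of
# spawning SLURM workers.
client = local_cluster_setup()

with ParallelMap(f, [2, 4, 6, 8], 4) as pmap:
    pmap.compute()
```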

Handling Results

Load Results From Files

By default, results are saved to disk in HDF5 format and can be accessed using the results_container attribute of ParallelMap:

```python
def f(x, y, z=3):
    return (x + y) * z

with ParallelMap(f, [2, 4, 6, 8], 4) as pmap:
    filenames = pmap.compute()
```

Example loading code:

```python
import h5py
import numpy as np

out = np.zeros((4,))

with h5py.File(pmap.results_container, "r") as h5f:
    for k, key in enumerate(h5f.keys()):
        out[k] = h5f[key]["result_0"][()]
```

See also Where Are My Results?

Collect Results in Single HDF5 Dataset

If possible, results can be slotted into a single HDF5 dataset using the result_shape keyword (None denotes the dimension for stacking results):

```python
def f(x, y, z=3):
    return (x + y) * z

with ParallelMap(f, [2, 4, 6, 8], 4, result_shape=(None,)) as pmap:
    pmap.compute()
```

Example loading code:

```python
import h5py

with h5py.File(pmap.results_container, "r") as h5f:
    out = h5f["result_0"][()]  # returns a NumPy array of shape (4,)
```

Datasets support "unlimited" dimensions that do not have to be set a priori (use np.inf in result_shape to denote a dimension of arbitrary size)

```python
# Assume only the channel count but not the number of samples is known
nChannels = 10
nSamples = 1234
mock_data = np.random.rand(nChannels, nSamples)
np.save("mock_data.npy", mock_data)

def mock_processing(val):
    data = np.load("mock_data.npy")
    return val * data

with ParallelMap(mock_processing, [2, 4, 6, 8], result_shape=(None, nChannels, np.inf)) as pmap:
    pmap.compute()

with h5py.File(pmap.results_container, "r") as h5f:
    out = h5f["result_0"][()]  # returns a NumPy array of shape (4, nChannels, nSamples)
```

More examples can be found in Collect Results in Single Dataset

Collect Results in Local Memory

This is possible but not recommended.

```python
def f(x, y, z=3):
    return (x + y) * z

with ParallelMap(f, [2, 4, 6, 8], 4, write_worker_results=False) as pmap:
    result = pmap.compute()  # returns a 4-element list
```

Alternatively, create an in-memory NumPy array

```python
with ParallelMap(f, [2, 4, 6, 8], 4, write_worker_results=False, result_shape=(None,)) as pmap:
    result = pmap.compute()  # returns a NumPy array of shape (4,)
```

Debugging

Use the debug keyword to perform all function calls in the local thread of the active Python interpreter

```python
def f(x, y, z=3):
    return (x + y) * z

with ParallelMap(f, [2, 4, 6, 8], 4, z=None) as pmap:
    results = pmap.compute(debug=True)
```

This way tools like pdb or %debug IPython magics can be used. More information can be found in the FAQ.
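
For illustration, a minimal sketch of one way to use pdb here (an assumption built on the example above: since debug=True runs every call in the local interpreter thread, a breakpoint() placed inside the user function should open the debugger directly):

```python
from acme import ParallelMap

def f(x, y, z=3):
    if z is None:
        breakpoint()  # with debug=True, f runs locally, so pdb opens in this frame
    return (x + y) * z

with ParallelMap(f, [2, 4, 6, 8], 4, z=None) as pmap:
    results = pmap.compute(debug=True)
```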

Documentation and Contact

To report bugs or ask questions please use our GitHub issue tracker. More usage details and background information are available in our online documentation.

Resources

Owner

  • Name: Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society
  • Login: esi-neuroscience
  • Kind: organization
  • Location: Frankfurt, Germany

Citation (CITATION.cff)

authors:
- affiliation: "Ernst Strüngmann Institute for Neuroscience in Cooperation with Max Planck Society"
  family-names: Fuertinger
  given-names: Stefan
  orcid: https://orcid.org/0000-0002-8118-036X
- affiliation: "Ernst Strüngmann Institute for Neuroscience in Cooperation with Max Planck Society"
  family-names: Shapcott
  given-names: Katharine
  orcid: https://orcid.org/0000-0001-8618-5779
- affiliation: "Ernst Strüngmann Institute for Neuroscience in Cooperation with Max Planck Society"
  family-names: Schmiedt
  given-names: Joscha Tapani
  orcid: https://orcid.org/0000-0001-6233-1866
cff-version: 1.1.0
date-released: '2025-06-03'
keywords:
- high-performance computing
- parallel computing
license: BSD-3-Clause
message: If you use this software, please cite it based on metadata found in this
  file. ACME provides functionality to execute Python functions concurrently on HPC
  clusters.
repository-code: https://github.com/esi-neuroscience/acme
title: 'ACME: Asynchronous Computing Made ESI'
version: '2025.6'

GitHub Events

Total
  • Create event: 2
  • Release event: 2
  • Issues event: 5
  • Watch event: 1
  • Issue comment event: 2
  • Push event: 71
  • Pull request event: 5
Last Year
  • Create event: 2
  • Release event: 2
  • Issues event: 5
  • Watch event: 1
  • Issue comment event: 2
  • Push event: 71
  • Pull request event: 5

Committers

Last synced: almost 2 years ago

All Time
  • Total Commits: 359
  • Total Committers: 5
  • Avg Commits per committer: 71.8
  • Development Distribution Score (DDS): 0.036
Past Year
  • Commits: 120
  • Committers: 3
  • Avg Commits per committer: 40.0
  • Development Distribution Score (DDS): 0.05
Top Committers
  • Stefan Fuertinger (s****r@e****e): 346 commits
  • timnaher (t****r@g****m): 5 commits
  • Katharine Shapcott (k****t@g****m): 4 commits
  • KatharineShapcott (6****t): 3 commits
  • Stefan Fuertinger (p****y): 1 commit

Issues and Pull Requests

Last synced: 4 months ago

All Time
  • Total issues: 27
  • Total pull requests: 41
  • Average time to close issues: 2 months
  • Average time to close pull requests: about 13 hours
  • Total issue authors: 5
  • Total pull request authors: 3
  • Average comments per issue: 2.81
  • Average comments per pull request: 0.29
  • Merged pull requests: 41
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 1
  • Pull requests: 6
  • Average time to close issues: 27 days
  • Average time to close pull requests: 11 minutes
  • Issue authors: 1
  • Pull request authors: 1
  • Average comments per issue: 2.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 6
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • KatharineShapcott (14)
  • pantaray (10)
  • timnaher (1)
  • hummuscience (1)
  • atlaie (1)
Pull Request Authors
  • pantaray (36)
  • KatharineShapcott (4)
  • timnaher (1)
Top Labels
Issue Labels
  • bug (15)
  • enhancement (8)
Pull Request Labels
  • bug (12)
  • enhancement (11)
  • documentation (4)

Packages

  • Total packages: 1
  • Total downloads:
    • pypi: 186 last month
  • Total dependent packages: 1
  • Total dependent repositories: 1
  • Total versions: 21
  • Total maintainers: 1
pypi.org: esi-acme

Asynchronous Computing Made ESI

  • Versions: 21
  • Dependent Packages: 1
  • Dependent Repositories: 1
  • Downloads: 186 Last month
Rankings
Dependent packages count: 4.8%
Downloads: 13.4%
Average: 15.3%
Stargazers count: 17.7%
Forks count: 19.1%
Dependent repos count: 21.5%
Maintainers (1)
Last synced: 5 months ago

Dependencies

.github/workflows/tests_workflow.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
pyproject.toml pypi
setup.py pypi