Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: Found CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (11.2%) to scientific vocabulary
Keywords
Repository
Asynchronous Computing Made ESI
Basic Info
- Host: GitHub
- Owner: esi-neuroscience
- License: bsd-3-clause
- Language: Python
- Default Branch: main
- Homepage: https://esi-acme.readthedocs.io/en/latest/
- Size: 4.33 MB
Statistics
- Stars: 12
- Watchers: 5
- Forks: 2
- Open Issues: 1
- Releases: 15
Topics
Metadata Files
README.md

ACME: Asynchronous Computing Made ESI
Summary
The objective of ACME (pronounced "ak-mee") is to provide easy-to-use wrappers for calling Python functions concurrently ("embarrassingly parallel workloads"). ACME is developed at the Ernst Strüngmann Institute (ESI) gGmbH for Neuroscience in Cooperation with Max Planck Society and released free of charge under the BSD 3-Clause "New" or "Revised" License. ACME relies heavily on the concurrent processing library dask and was primarily designed to facilitate the use of SLURM on the ESI HPC cluster (although other HPC infrastructure running SLURM can be leveraged as well). Local multi-processing hardware (i.e., multi-core CPUs) is fully supported too. ACME is itself used as the parallelization engine of SyNCoPy.

Installation
ACME can be installed with pip

```shell
pip install esi-acme
```

or via conda

```shell
conda install -c conda-forge esi-acme
```

To get the latest development version, simply clone our GitHub repository:

```shell
git clone https://github.com/esi-neuroscience/acme.git
cd acme/
pip install -e .
```
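To confirm the installation is picked up by the active Python environment, here is a quick sanity check using only the standard library (a minimal sketch; the distribution name esi-acme and the import name acme are taken from the installation command above and the examples below):

```python
# Verify the install: import the package and query the installed
# distribution's version via the standard library.
from importlib.metadata import version

import acme  # import name of the esi-acme distribution

print(version("esi-acme"))  # e.g. "2025.6"
```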
Usage
Basic Examples
Simplest use, everything is done automatically.
```python
from acme import ParallelMap

def f(x, y, z=3):
    return (x + y) * z

with ParallelMap(f, [2, 4, 6, 8], 4) as pmap:
    pmap.compute()
```
See also our Quickstart Guide.
Intermediate Examples
Set number of function calls via n_inputs
```python
import numpy as np
from acme import ParallelMap

def f(x, y, z=3, w=np.zeros((3, 1)), **kwargs):
    return (sum(x) + y) * z * w.max()

pmap = ParallelMap(f, [2, 4, 6, 8], [2, 2], z=np.array([1, 2]), w=np.ones((8, 1)), n_inputs=2)

with pmap as p:
    p.compute()
```
More details in Override Automatic Input Argument Distribution
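For orientation, here is a rough sequential stand-in for what n_inputs=2 means in the example above: the arguments whose length matches n_inputs (here y and z) are split across the calls, while the remaining arguments are passed whole to every call. This is an illustrative sketch of the argument distribution, not ACME's actual dispatch code:

```python
# Illustration only: spell out the two evaluations implied by n_inputs=2,
# splitting the length-2 arguments and broadcasting the others.
import numpy as np

def f(x, y, z=3, w=np.zeros((3, 1)), **kwargs):
    return (sum(x) + y) * z * w.max()

x = [2, 4, 6, 8]        # passed whole to every call
w = np.ones((8, 1))     # passed whole to every call
ys = [2, 2]             # split: one element per call
zs = np.array([1, 2])   # split: one element per call

results = [f(x, ys[k], z=zs[k], w=w) for k in range(2)]
print(results)          # two results, one per call: 22.0 and 44.0
```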
Advanced Use
Allocate custom client object and recycle it for several computations
(use slurm_cluster_setup on non-ESI HPC infrastructure or local_cluster_setup
when working on your local machine)
```python
import numpy as np
from acme import ParallelMap, esi_cluster_setup

def f(x, y, z=3, w=np.zeros((3, 1)), **kwargs):
    return (sum(x) + y) * z * w.max()

def g(x, y, z=3, w=np.zeros((3, 1)), **kwargs):
    return (max(x) + y) * z * w.sum()

n_workers = 200
client = esi_cluster_setup(partition="8GBXS", n_workers=n_workers)

x = [2, 4, 6, 8]
z = range(n_workers)
w = np.ones((8, 1))

pmap = ParallelMap(f, x, np.random.rand(n_workers), z=z, w=w, n_inputs=n_workers)
with pmap as p:
    p.compute()

pmap = ParallelMap(g, x, np.random.rand(n_workers), z=z, w=w, n_inputs=n_workers)
with pmap as p:
    p.compute()
```
For more information see Reuse Worker Clients
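Once the last computation has finished, the client can be shut down to release the allocated SLURM workers. A minimal sketch, assuming the object returned by esi_cluster_setup behaves like a regular dask distributed.Client:

```python
# Assumption: `client` is (or wraps) a dask distributed.Client, so closing it
# tears down the scheduler connection and frees the allocated workers.
client.close()
```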
Handling Results
Load Results From Files
By default, results are saved to disk in HDF5 format and can be accessed using
the results_container attribute of ParallelMap:
```python
def f(x, y, z=3):
    return (x + y) * z

with ParallelMap(f, [2, 4, 6, 8], 4) as pmap:
    filenames = pmap.compute()
```
Example loading code:
```python
import h5py
import numpy as np

out = np.zeros((4,))

with h5py.File(pmap.results_container, "r") as h5f:
    for k, key in enumerate(h5f.keys()):
        out[k] = h5f[key]["result_0"][()]
```
See also Where Are My Results?
Collect Results in Single HDF5 Dataset
If possible, results can be slotted into a single HDF5 dataset using the
result_shape keyword (None denotes the dimension for stacking results):
```python
def f(x, y, z=3):
    return (x + y) * z

with ParallelMap(f, [2, 4, 6, 8], 4, result_shape=(None,)) as pmap:
    pmap.compute()
```
Example loading code:
```python
import h5py

with h5py.File(pmap.results_container, "r") as h5f:
    out = h5f["result_0"][()]  # returns a NumPy array of shape (4,)
```
Datasets support "unlimited" dimensions that do not have to be set a priori
(use np.inf in result_shape to denote a dimension of arbitrary size)
```python
# Assume only the channel count but not the number of samples is known
nChannels = 10
nSamples = 1234
mock_data = np.random.rand(nChannels, nSamples)
np.save("mock_data.npy", mock_data)

def mock_processing(val):
    data = np.load("mock_data.npy")
    return val * data

with ParallelMap(mock_processing, [2, 4, 6, 8], result_shape=(None, nChannels, np.inf)) as pmap:
    pmap.compute()

with h5py.File(pmap.results_container, "r") as h5f:
    out = h5f["result_0"][()]  # returns a NumPy array of shape (4, nChannels, nSamples)
```
More examples can be found in Collect Results in Single Dataset
Collect Results in Local Memory
This is possible but not recommended, since all results then have to fit into the local machine's memory.
```python
def f(x, y, z=3):
    return (x + y) * z

with ParallelMap(f, [2, 4, 6, 8], 4, write_worker_results=False) as pmap:
    result = pmap.compute()  # returns a 4-element list
```
Alternatively, create an in-memory NumPy array
```python
with ParallelMap(f, [2, 4, 6, 8], 4, write_worker_results=False, result_shape=(None,)) as pmap:
    result = pmap.compute()  # returns a NumPy array of shape (4,)
```
Debugging
Use the debug keyword to perform all function calls in the local thread of
the active Python interpreter
```python
def f(x, y, z=3):
    return (x + y) * z

with ParallelMap(f, [2, 4, 6, 8], 4, z=None) as pmap:
    results = pmap.compute(debug=True)
```
This way, tools like pdb or IPython's %debug magic can be used.
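For example, here is a minimal post-mortem sketch (assuming that with debug=True the TypeError raised by f when z is None propagates to the calling interpreter):

```python
import pdb

from acme import ParallelMap

def f(x, y, z=3):
    return (x + y) * z

try:
    # z=None makes every call fail with a TypeError; debug=True keeps the
    # calls in the local thread, so the exception surfaces right here.
    with ParallelMap(f, [2, 4, 6, 8], 4, z=None) as pmap:
        pmap.compute(debug=True)
except TypeError:
    pdb.post_mortem()  # inspect the failing call of f interactively
```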
More information can be found in the FAQ.
Documentation and Contact
To report bugs or ask questions, please use our GitHub issue tracker. More usage details and background information are available in our online documentation.
Resources
- ACME Presentation at deRSE23 - Conference for Research Software Engineering in Germany
- ACME Demo presented at the 4th annual Data Scientist Community Meeting
- ACME Tutorials
- ACME FAQ
Owner
- Name: Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society
- Login: esi-neuroscience
- Kind: organization
- Location: Frankfurt, Germany
- Website: http://www.esi-frankfurt.de
- Repositories: 9
- Profile: https://github.com/esi-neuroscience
Citation (CITATION.cff)
```yaml
authors:
  - affiliation: "Ernst Strüngmann Institute for Neuroscience in Cooperation with Max Planck Society"
    family-names: Fuertinger
    given-names: Stefan
    orcid: https://orcid.org/0000-0002-8118-036X
  - affiliation: "Ernst Strüngmann Institute for Neuroscience in Cooperation with Max Planck Society"
    family-names: Shapcott
    given-names: Katharine
    orcid: https://orcid.org/0000-0001-8618-5779
  - affiliation: "Ernst Strüngmann Institute for Neuroscience in Cooperation with Max Planck Society"
    family-names: Schmiedt
    given-names: Joscha Tapani
    orcid: https://orcid.org/0000-0001-6233-1866
cff-version: 1.1.0
date-released: '2025-06-03'
keywords:
  - high-performance computing
  - parallel computing
license: BSD-3-Clause
message: >-
  If you use this software, please cite it based on metadata found in this
  file. ACME provides functionality to execute Python functions concurrently
  on HPC clusters.
repository-code: https://github.com/esi-neuroscience/acme
title: 'ACME: Asynchronous Computing Made ESI'
version: '2025.6'
```
GitHub Events
Total
- Create event: 2
- Release event: 2
- Issues event: 5
- Watch event: 1
- Issue comment event: 2
- Push event: 71
- Pull request event: 5
Last Year
- Create event: 2
- Release event: 2
- Issues event: 5
- Watch event: 1
- Issue comment event: 2
- Push event: 71
- Pull request event: 5
Committers
Last synced: almost 2 years ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Stefan Fuertinger | s****r@e****e | 346 |
| timnaher | t****r@g****m | 5 |
| Katharine Shapcott | k****t@g****m | 4 |
| KatharineShapcott | 6****t | 3 |
| Stefan Fuertinger | p****y | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 4 months ago
All Time
- Total issues: 27
- Total pull requests: 41
- Average time to close issues: 2 months
- Average time to close pull requests: about 13 hours
- Total issue authors: 5
- Total pull request authors: 3
- Average comments per issue: 2.81
- Average comments per pull request: 0.29
- Merged pull requests: 41
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 1
- Pull requests: 6
- Average time to close issues: 27 days
- Average time to close pull requests: 11 minutes
- Issue authors: 1
- Pull request authors: 1
- Average comments per issue: 2.0
- Average comments per pull request: 0.0
- Merged pull requests: 6
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- KatharineShapcott (14)
- pantaray (10)
- timnaher (1)
- hummuscience (1)
- atlaie (1)
Pull Request Authors
- pantaray (36)
- KatharineShapcott (4)
- timnaher (1)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 1
- Total downloads (PyPI, last month): 186
- Total dependent packages: 1
- Total dependent repositories: 1
- Total versions: 21
- Total maintainers: 1
pypi.org: esi-acme
Asynchronous Computing Made ESI
- Homepage: https://esi-acme.readthedocs.io/en/latest/
- Documentation: https://esi-acme.readthedocs.io/en/latest/
- License: BSD-3
- Latest release: 2025.6 (published 7 months ago)
Rankings
Maintainers (1)
Dependencies
- actions/checkout v2 composite
- actions/setup-python v2 composite