mcsm-benchs: Benchmarking methods for multi-component signal processing (published in JOSS, 2025)
Science Score: 100.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ✓ DOI references: found 4 DOI reference(s) in README and JOSS metadata
- ✓ Academic publication links: links to arxiv.org, zenodo.org
- ✓ Committers with academic emails: 5 of 9 committers (55.6%) from academic institutions
- ○ Institutional organization owner
- ✓ JOSS paper metadata: published in Journal of Open Source Software
Repository
A toolbox that provides a common framework for benchmarks of multi-component signal processing methods.
Basic Info
Statistics
- Stars: 3
- Watchers: 2
- Forks: 3
- Open Issues: 0
- Releases: 2
Metadata Files
README.md
mcsm-benchs: A Toolbox for Benchmarking Multi-Component Signal Analysis Methods
A public, open-source, Python-based toolbox for benchmarking multi-component signal analysis methods, implemented either in Python or MATLAB/Octave.
This toolbox provides a common framework that allows researcher-independent comparisons between methods and favors reproducible research.
Create your own collaborative benchmarks using mcsm-benchs and this GitHub template.
Collaborative benchmarks allow other researchers to add new methods to your benchmark via a pull-request.
This is as easy as creating a new .py file with a Python class that wraps a call to your method (it doesn't matter if it is coded in Python, MATLAB or Octave... we welcome all!).
Template files are available for this too. Let's make collaborative science easy :).
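For illustration only, here is a minimal sketch of what such a wrapper file could look like, assuming a plain Python class with a `method()` entry point; the class name, attributes, and signature below are placeholders, and the authoritative interface is defined in the template files.

```python
# new_method_example.py -- hypothetical wrapper for a contributed method.
# Everything here is a placeholder: check the template files for the
# exact base class and naming conventions expected by mcsm-benchs.

class NewMethod:
    def __init__(self):
        self.id = 'my_new_method'  # Name under which results are reported.
        self.task = 'denoising'    # Task this method competes in.

    def method(self, signal, *args, **kwargs):
        # Forward the benchmark's input signal to your own code here
        # (which may in turn call MATLAB or Octave).
        return signal
```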
The GitHub workflows provided in the template can automatically publish a summary report of the benchmarks saved in your repository, as well as create interactive online plots and give access to .csv files with the data.
> [!TIP]
> Questions or difficulties using mcsm-benchs? Please consider opening an Issue so that we can help you and improve our software :white_check_mark:.

> [!TIP]
> :construction: Do you want to contribute to mcsm-benchs? Please check our contribution guidelines :white_check_mark:.
Installation using pip

```bash
pip install mcsm-benchs
```

For installation in development mode using Poetry, check the instructions in the documentation.
Documentation
Quick-start
Creating a new benchmark
The following simple example shows how to create a new benchmark for comparing your methods.
We set `task='denoising'`, meaning that all methods are compared in terms of how well they reconstruct the original signal from its noisy observation.
Check out examples with other tasks and performance functions in the documentation of mcsm-benchs.
```python
from mcsm_benchs.Benchmark import Benchmark
from mcsm_benchs.SignalBank import SignalBank

# 1. Import (or define) the methods to be compared.
from my_methods import method_1, method_2

# 2. Create a dictionary of the methods to test.
my_methods = {
    'Method 1': method_1,
    'Method 2': method_2,
}

# 3. Create a dictionary of signals:
N = 1024  # Length of the signals
sbank = SignalBank(N,)
s1 = sbank.signal_exp_chirp()
s2 = sbank.signal_linear_chirp()
my_signals = {
    'Signal_1': s1,
    'Signal_2': s2,
}

# 4. Set the benchmark parameters:
benchmark = Benchmark(
    task='denoising',
    N=N,
    repetitions=100,
    SNRin=[0, 10, 20],  # SNR in dB.
    methods=my_methods,
    signals=my_signals,
    verbosity=0,
)

# 5. Launch the benchmark and save to file.
benchmark.run()                            # Run the benchmark.
benchmark.save_to_file('saved_benchmark')  # Give a filename and save to file.
```
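For completeness, the `my_methods` module imported above is user code: for the denoising task, each method is a callable that receives a noisy signal and returns an estimate of the clean signal of the same length. A minimal sketch follows; the module name and the thresholding logic are purely illustrative, not part of mcsm-benchs.

```python
# my_methods.py -- hypothetical module with two toy denoisers.
import numpy as np

def method_1(noisy_signal, *args, **kwargs):
    # Trivial baseline: return the observation unchanged.
    return noisy_signal

def method_2(noisy_signal, *args, **kwargs):
    # Naive Fourier-domain hard thresholding (illustrative only).
    spectrum = np.fft.fft(noisy_signal)
    spectrum[np.abs(spectrum) < np.median(np.abs(spectrum))] = 0.0
    return np.real(np.fft.ifft(spectrum))
```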
Processing and publishing benchmark results
```python
from mcsm_benchs.Benchmark import Benchmark
from mcsm_benchs.ResultsInterpreter import ResultsInterpreter

# 1. Load a benchmark from a file.
benchmark = Benchmark.load('path/to/file/saved_benchmark')

# 2. Create the interpreter.
interpreter = ResultsInterpreter(benchmark)

# 3. Get .csv files.
interpreter.get_csv_files(path='path/to/csv/files')

# 4. Generate report and interactive figures.
interpreter.save_report(path='path/to/report', bars=False)

# 5. Or generate interactive plots with plotly.
from plotly.offline import iplot
figs = interpreter.get_summary_plotlys(bars=True)
for fig in figs:
    iplot(fig)
```
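The figures returned by `get_summary_plotlys()` are ordinary plotly figure objects, so outside of the automated workflow you can also export them to standalone HTML with plotly's standard API, for example:

```python
# Write each summary figure to a self-contained HTML file that can be
# hosted anywhere; `figs` is the list produced in the snippet above.
for k, fig in enumerate(figs):
    fig.write_html(f'summary_figure_{k}.html', include_plotlyjs='cdn')
```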
If you use the GitHub template for collaborative benchmarks, your results are automatically published once you enable GitHub Pages in the repository configuration.
Additionally, other researchers will be able to interact with your results, download .csv files with all the benchmark data, and even add their own methods to your benchmark via a pull request.
Related works
More
:pushpin: We use oct2py to run Octave-based methods in Python (a minimal sketch follows after this list).
:pushpin: We use matlabengine to run MATLAB-based methods in Python.
:pushpin: We use plotly to create online, interactive plots.
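As an example of the Octave route, an Octave function can be exposed to the benchmark as a regular Python callable through oct2py. A minimal sketch, assuming a hypothetical `my_method.m` on the Octave path:

```python
from oct2py import Oct2Py

octave = Oct2Py()  # Start one Octave session and reuse it across calls.

def octave_method(noisy_signal, *args, **kwargs):
    # feval calls the Octave function my_method.m (hypothetical) with the
    # noisy signal and returns its first output, the denoised estimate.
    return octave.feval('my_method', noisy_signal)
```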
Owner
- Name: J.M.
- Login: jmiramont
- Kind: user
- Repositories: 3
- Profile: https://github.com/jmiramont
JOSS Publication
mcsm-benchs: Benchmarking methods for multi-component signal processing
Authors
Juan M. Miramont, Rémi Bardenet, Pierre Chainais, François Auger
Affiliations:
- Université de Lille, CNRS, Centrale Lille, UMR 9189 Centre de Recherche en Informatique, Signal et Automatique de Lille (CRIStAL), Lille, France; Nantes Université, Institut de Recherche en Énergie Électrique de Nantes Atlantique (IREENA, UR 4642), Saint-Nazaire, France
- Université de Lille, CNRS, Centrale Lille, UMR 9189 Centre de Recherche en Informatique, Signal et Automatique de Lille (CRIStAL), Lille, France
Tags
Multi-component signals, Signal processing, Time-frequency representations
Citation (CITATION.cff)
```yaml
cff-version: 0.1.3
message: "If you use this software, please cite it as below."
authors:
  - family-names: "Juan M."
    given-names: "Miramont"
    orcid: "https://orcid.org/0000-0002-3847-7811"
  - family-names: "Remi"
    given-names: "Bardenet"
    orcid: "https://orcid.org/0000-0002-1094-9493"
  - family-names: "Pierre"
    given-names: "Chainais"
    orcid: "https://orcid.org/0000-0003-4377-7584"
  - family-names: "François"
    given-names: "Auger"
    orcid: "https://orcid.org/0000-0001-9158-1784"
title: "mcsm-benchs: A Toolbox for Benchmarking Multi-Component Signal Analysis Methods"
version: 0.1.3
doi: 10.5281/zenodo.17122672
date-released: 2025-09-15
url: "https://github.com/jmiramont/mcsm-benchs"
```
GitHub Events
Total
- Create event: 3
- Issues event: 3
- Release event: 1
- Watch event: 2
- Delete event: 1
- Issue comment event: 2
- Member event: 1
- Push event: 119
- Pull request event: 5
- Fork event: 3
Last Year
- Create event: 3
- Issues event: 3
- Release event: 1
- Watch event: 2
- Delete event: 1
- Issue comment event: 2
- Member event: 1
- Push event: 119
- Pull request event: 5
- Fork event: 3
Committers
Last synced: 3 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| jmiramont | j****t@i****r | 82 |
| juan | j****t@g****m | 61 |
| J.M | 6****t@u****m | 36 |
| jmiramont | j****t@u****r | 29 |
| jmiramont | j****t@u****r | 8 |
| Thomas S. Binns | t****s@o****m | 2 |
| pilavciy | y****i@g****r | 2 |
| Daniel S. Katz | d****z@i****g | 1 |
| Marcel Stimberg | m****g@s****r | 1 |
Issues and Pull Requests
Last synced: 4 months ago
All Time
- Total issues: 2
- Total pull requests: 5
- Average time to close issues: 2 months
- Average time to close pull requests: 2 months
- Total issue authors: 2
- Total pull request authors: 4
- Average comments per issue: 0.5
- Average comments per pull request: 0.0
- Merged pull requests: 1
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 2
- Pull requests: 4
- Average time to close issues: 2 months
- Average time to close pull requests: N/A
- Issue authors: 2
- Pull request authors: 3
- Average comments per issue: 0.5
- Average comments per pull request: 0.0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- tsbinns (1)
- nmy2103 (1)
Pull Request Authors
- tsbinns (2)
- mstimberg (1)
- Y2P (1)
- danielskatz (1)
Packages
- Total packages: 1
- Total downloads: 143 last month (PyPI)
- Total dependent packages: 0
- Total dependent repositories: 0
- Total versions: 4
- Total maintainers: 1
pypi.org: mcsm-benchs
mcsm-benchs: A benchmarking toolbox for Multi-Component Signal Methods.
- Homepage: https://jmiramont.github.io/mcsm-benchs/
- Documentation: https://mcsm-benchs.readthedocs.io/
- License: GPL-3.0
- Latest release: 0.1.3 (published 4 months ago)
Maintainers (1)
Dependencies
- GitHub Actions workflow: actions/checkout v2 (composite), actions/setup-python v2 (composite), s0/git-publish-subdir-action develop (composite)
- Python dependencies (118 in total), direct dependencies including:
  - ipykernel ^6.6.1
  - matlabengine ^9.13.6
  - matplotlib ^3.5.1
  - numpy ^1.22.0
  - pandas ^1.4.4
  - plotly ^5.10.0
  - pytest ^7.0.1
  - python >=3.9,<3.10
  - scipy ^1.7.3
  - seaborn ^0.12.0
  - tabulate ^0.8.9
- GitHub Actions workflow: actions/checkout v2 (composite), actions/setup-python v2 (composite), snok/install-poetry v1 (composite)