https://github.com/digital-c-fiber/spikesortingpipeline

Jupyter notebook to analyze and sort spikes based on their morphology, recorded via microneurography


Science Score: 49.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 2 DOI reference(s) in README
  • Academic publication links
    Links to: pubmed.ncbi, ncbi.nlm.nih.gov
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (13.5%) to scientific vocabulary
Last synced: 4 months ago

Repository

Jupyter notebook to analyze and sort spikes based on their morphology, recorded via microneurography

Basic Info
  • Host: GitHub
  • Owner: Digital-C-Fiber
  • License: MIT
  • Language: Jupyter Notebook
  • Default Branch: main
  • Size: 72.4 MB
Statistics
  • Stars: 0
  • Watchers: 2
  • Forks: 0
  • Open Issues: 0
  • Releases: 2
Created over 1 year ago · Last pushed 7 months ago
Metadata Files
Readme License

README.md

Spike Sorting Pipeline

This repository provides a reproducible pipeline for preprocessing, sorting, and analyzing spikes recorded via microneurography, based on different feature sets (amplitude and width, features from the SS-SPDF method of Caro-Martín et al., 2018, and the raw waveform), using Snakemake and modular Python scripts. It also includes a Jupyter notebook for evaluating and visualizing classification results (e.g., heatmaps of accuracy).

The pipeline supports proprietary data formats such as Dapsys as well as the processed data format HDF5/NIX, converts them to a unified NIX format, and executes a full analysis including spike extraction (timestamps must be provided), feature set extraction, template computation, and classification evaluation.
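Since spike timestamps are provided as input rather than detected, spike extraction conceptually amounts to cutting fixed-length windows out of the continuous recording around each timestamp. A minimal, illustrative sketch of that idea (the function name and window lengths are assumptions, not the repository's code):

```python
import numpy as np

def extract_spike_windows(signal, timestamps, fs, pre_ms=1.0, post_ms=2.0):
    """Cut fixed-length windows around externally provided spike times.

    signal     : 1-D continuous recording
    timestamps : spike times in seconds (provided, not detected here)
    fs         : sampling rate in Hz
    """
    pre = int(pre_ms * 1e-3 * fs)
    post = int(post_ms * 1e-3 * fs)
    windows = []
    for t in timestamps:
        i = int(round(t * fs))
        # skip spikes whose window would run past the recording edges
        if i - pre >= 0 and i + post <= len(signal):
            windows.append(signal[i - pre:i + post])
    return np.asarray(windows)
```

Windows extracted this way are the raw material for the feature sets and templates described below.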

Spike Tracking via the marking method

During the experiment, the marking method is applied: a special electrical stimulation protocol that creates spike tracks (vertical alignments of fiber responses). These tracks can be extracted and analyzed post hoc.

In our workflow, we use two different tracking algorithms for microneurography data to identify and track spikes evoked by background stimuli:

- Dapsys (proprietary, www.dapsys.net), based on Turnquist et al., 2016
- SpikeSpy (open-source): https://github.com/Microneurography/SpikeSpy

The extracted spike times and track labels are essential inputs for running our supervised spike sorting pipeline. The setup instructions are provided below.


Setup Instructions

1. Clone the repository

```bash
git clone https://github.com/Digital-C-Fiber/SpikeSortingPipeline.git
cd SpikeSortingPipeline
```

2. Create the Conda environment

This pipeline uses conda and snakemake with Python 3.11. We provide an environment file.

```bash
conda env create -f environment.yml
conda activate Snakemake311
```

If you haven't already installed Snakemake:

```bash
conda install -c conda-forge snakemake
```


Snakemake Directory Overview

```
Snakefile                      # Main Snakemake workflow
config.yaml                    # Configuration for dataset paths and parameters
environment.yml                # Conda environment file
scripts/                       # Core processing scripts
    read_in_data.py
    preprocessing.py
    feature_extraction.py
    create_nix.py
    templates_and_filtering.py
    compute_template_errors.py
    classification.py
    clustering.py
    collect_scores.py
    test_clusters.py
datasets_test/
    testset_1.dps              # Example Dapsys file
    nix/                       # Output folder for NIX files
```
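In a Snakemake workflow like this, each processing script is wired to its inputs and outputs through rules in the Snakefile. A minimal sketch of how two such stages could be chained (rule names and paths are hypothetical, not copied from the repository's Snakefile):

```python
# Hypothetical Snakefile fragment: convert raw Dapsys data to NIX,
# then extract features from the resulting NIX file.
rule create_nix:
    input:
        raw="datasets_test/testset_1.dps"
    output:
        nix="datasets_test/nix/testset_1.nix"
    script:
        "scripts/create_nix.py"

rule extract_features:
    input:
        nix="datasets_test/nix/testset_1.nix"
    output:
        "results/features_testset_1.csv"
    script:
        "scripts/feature_extraction.py"
```

Snakemake resolves the dependency between the two rules automatically, because the output of `create_nix` matches the input of `extract_features`.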


Configuration Format

Datasets and processing parameters are defined in config.yaml. Example:

```yaml
datasets:
  Testset_1:
    path: "datasets_test/testset_1.dps"
    path_nix: "datasets_test/nix/testsets_1.nix"
    name: "testset_1"
    path_dapsys: "NI Puls Stimulator/Continuous Recording"
    flags:
      use_bristol_processing: false
    time1: 200
    time2: 922
```

Explanation:

- path: Raw data file (e.g. Dapsys)
- path_nix: Target output path for the .nix file
- path_dapsys: Root path inside the Dapsys hierarchy; usually NI Puls Stimulator/Continuous Recording or NI Pulse Stimulator/Continuous Recording. For h5/nix files just write ""
- use_bristol_processing: Adjusts preprocessing method for Bristol data
- time1, time2: Time window for analysis
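When adding new dataset entries, a small sanity check can catch missing keys or an inverted time window before Snakemake dispatches any jobs. A hedged sketch, assuming the config has already been parsed into a dict (the `validate_dataset` helper is hypothetical, not part of the repository):

```python
# Required keys per dataset entry, mirroring the config.yaml example above
REQUIRED_KEYS = {"path", "path_nix", "name", "path_dapsys", "flags", "time1", "time2"}

def validate_dataset(name, entry):
    """Check one dataset entry from config.yaml before running the pipeline."""
    missing = REQUIRED_KEYS - entry.keys()
    if missing:
        raise ValueError(f"dataset {name!r} is missing keys: {sorted(missing)}")
    if entry["time1"] >= entry["time2"]:
        raise ValueError(f"dataset {name!r}: time1 must be before time2")
    return True
```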


Run the Pipeline

```bash
snakemake --cores 8
```

This will:

- Read raw data
- Create data frames
- Generate .nix files
- Apply preprocessing (align spikes, compute derivatives, and compute templates)
- Extract features
- Perform clustering/classification
- Output performance metrics for the 6 implemented feature sets
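The template-computation and classification steps above can be illustrated in a few lines: a template is the average of the aligned spike windows sharing a track label, and a simple supervised assignment picks the template with the smallest error. This is a conceptual sketch under those assumptions, not the repository's implementation:

```python
import numpy as np

def compute_templates(windows, labels):
    """Average aligned spike windows per track label to obtain templates."""
    return {l: windows[labels == l].mean(axis=0) for l in np.unique(labels)}

def nearest_template(window, templates):
    """Assign a spike to the label whose template has the smallest squared error."""
    return min(templates, key=lambda l: np.sum((window - templates[l]) ** 2))
```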


Results Visualization

Use the provided notebook to plot and explore your results:

```bash
jupyter notebook visualize_results.ipynb
```

This notebook includes:

- Heatmaps of classification accuracy across feature sets
- Metric summaries (e.g. precision, recall)
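An accuracy heatmap of this kind is essentially a long table of scores pivoted into a feature-set × dataset grid. A hedged sketch of that reshaping step (column names and score values are hypothetical placeholders, not results from the pipeline); the resulting frame could be passed to seaborn.heatmap inside the notebook:

```python
import pandas as pd

# Hypothetical per-dataset accuracy scores for two feature sets
scores = pd.DataFrame({
    "dataset": ["testset_1", "testset_1"],
    "feature_set": ["amplitude_width", "raw_waveform"],
    "accuracy": [0.87, 0.92],
})

# Pivot long-format scores into a feature_set x dataset grid for plotting
heatmap_data = scores.pivot(index="feature_set", columns="dataset", values="accuracy")
```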


Contact

If you have any questions, issues, or suggestions, feel free to reach out:

Alina Troglio · Email: alina.troglio@rwth-aachen.de


How to Cite

If you use this pipeline in your work, please cite our preprint:

Supervised Spike Sorting Feasibility of Noisy Single-Electrode Extracellular Recordings: Systematic Study of Human C-Nociceptors recorded via Microneurography

DOI

Owner

  • Name: Digital-C-Fiber
  • Login: Digital-C-Fiber
  • Kind: organization

GitHub Events

Total
  • Release event: 2
  • Push event: 11
  • Create event: 2
Last Year
  • Release event: 2
  • Push event: 11
  • Create event: 2

Dependencies

requirements.txt pypi
  • Pillow ==9.5.0
  • PyQt5 ==5.15.9
  • PyQt5-Qt5 ==5.15.2
  • PyQt5-sip ==12.12.1
  • PyQt6 ==6.5.0
  • PyQt6-Qt6 ==6.5.0
  • PyQt6-sip ==13.5.1
  • PySide2 ==5.15.2.1
  • PySide6 ==6.5.0
  • PySide6-Addons ==6.5.0
  • PySide6-Essentials ==6.5.0
  • PyWavelets ==1.5.0
  • Pygments ==2.15.1
  • asttokens ==2.2.1
  • backcall ==0.2.0
  • colorama ==0.4.6
  • comm ==0.1.3
  • contourpy ==1.0.7
  • cycler ==0.11.0
  • debugpy ==1.6.7
  • decorator ==5.1.1
  • et-xmlfile ==1.1.0
  • executing ==1.2.0
  • fonttools ==4.39.3
  • h5py ==3.8.0
  • ipykernel ==6.22.0
  • ipython ==8.13.1
  • ipywidgets ==8.1.2
  • jedi ==0.18.2
  • joblib ==1.2.0
  • jupyter_client ==8.2.0
  • jupyter_core ==5.3.0
  • jupyterlab_widgets ==3.0.10
  • kiwisolver ==1.4.4
  • llvmlite ==0.40.0
  • matplotlib ==3.7.1
  • matplotlib-inline ==0.1.6
  • neo ==0.11.0
  • nest-asyncio ==1.5.6
  • nixio ==1.5.3
  • numba ==0.57.0
  • numpy ==1.24.3
  • openpyxl ==3.1.2
  • packaging ==23.1
  • pandas ==2.0.0
  • parso ==0.8.3
  • patsy ==0.5.3
  • pickleshare ==0.7.5
  • platformdirs ==3.5.0
  • prompt-toolkit ==3.0.38
  • psutil ==5.9.5
  • pure-eval ==0.2.2
  • pydapsys ==1.0b1
  • pyparsing ==3.0.9
  • python-dateutil ==2.8.2
  • pytz ==2023.3
  • pywin32 ==306
  • pyzmq ==25.0.2
  • quantities ==0.13.0
  • scikit-learn ==1.2.2
  • scipy ==1.10.1
  • seaborn ==0.12.2
  • shiboken2 ==5.15.2.1
  • shiboken6 ==6.5.0
  • six ==1.16.0
  • stack-data ==0.6.2
  • statsmodels ==0.14.0
  • threadpoolctl ==3.1.0
  • tornado ==6.3.1
  • tqdm ==4.65.0
  • traitlets ==5.9.0
  • tzdata ==2023.3
  • wcwidth ==0.2.6
  • widgetsnbextension ==4.0.10