Science Score: 49.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ✓ DOI references: Found 10 DOI reference(s) in README
- ✓ Academic publication links: Links to zenodo.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (14.7%) to scientific vocabulary
Keywords
Repository
Analysis code for the `mpib_sp_eeg` experiment
Basic Info
- Host: GitHub
- Owner: sappelhoff
- License: mit
- Language: Jupyter Notebook
- Default Branch: main
- Homepage: https://github.com/sappelhoff/sp_code
- Size: 37.3 MB
Statistics
- Stars: 1
- Watchers: 2
- Forks: 1
- Open Issues: 0
- Releases: 3
Topics
Metadata Files
README.md
Code for the mpib_sp_eeg dataset
The code in this repository was used to analyze the data from the mpib_sp_eeg dataset
(see "Download the data" below).
Below, we describe how to set up the environment necessary to reproduce the results.
Finally, there is a section describing each file contained in this repository.
Further information
Paper
The paper is available (open access) in Cerebral Cortex.
- Cerebral Cortex: https://doi.org/10.1093/cercor/bhac062
Preprint
A preprint is available on bioRxiv.
- bioRxiv: https://doi.org/10.1101/2021.06.03.446960
Experiment code
The code that was run to collect data from the human study participants is available on GitHub, and is archived on Zenodo.
- GitHub: https://github.com/sappelhoff/sp_experiment/
- Zenodo: https://doi.org/10.5281/zenodo.3354368
Data
The data is available on GIN (see "Download the data" below).
- GIN
  - repository: https://gin.g-node.org/sappelhoff/mpibspeeg
  - archive: https://doi.org/10.12751/g-node.dtyh14
Analysis code
The analysis code in this repository is also archived on Zenodo.
- Zenodo: https://doi.org/10.5281/zenodo.5929222
License
The analysis code is made available under the MIT license, see the LICENSE file.
Setup
Install a Python environment
The code was written and tested on Linux (Ubuntu 18.04) using Python 3.8 installed via the conda package manager (version 4.10.1).
It is RECOMMENDED to run this code on a machine with sufficient RAM, and preferably around 50 cores. Otherwise, the analyses may take a long time to run.
To prepare a similar environment, please follow these steps:
- Make sure you have an up-to-date version of `conda` available from your command line (we recommend downloading miniconda: https://docs.conda.io/en/latest/miniconda.html)
- Create a new conda environment using the `environment.yml` file that you find in this repository. For that, from the command line, run: `conda env create -f ./environment.yml`, where the `./` in `./environment.yml` should be replaced with the path to the `environment.yml` file from this repository.
- That will create a new conda environment called `sp`, which you can activate using `conda activate sp` (assuming that you did not have an environment with that name before).
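For convenience, here are those steps as a shell sketch (assuming miniconda is already installed and that you run this from the directory containing `environment.yml`):

```shell
# create the conda environment defined in environment.yml
conda env create -f ./environment.yml
# activate the newly created environment
conda activate sp
```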
All documentation below assumes that you have set up and activated this Python environment.
Download the data
The data is available here: https://gin.g-node.org/sappelhoff/mpibspeeg/
Please follow the download instructions listed there. Note that if you followed the steps outlined above, you will already have a working version of datalad, which is necessary to download the data.
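For orientation, downloading via datalad typically looks like the sketch below; please defer to the instructions on the GIN page for the authoritative commands:

```shell
# clone the dataset repository from GIN
datalad clone https://gin.g-node.org/sappelhoff/mpibspeeg
cd mpibspeeg
# fetch the actual file contents
datalad get .
```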
Copy the code
Place all code from this repository in the /code directory within the mpib_sp_eeg
dataset (including .gitignore, but except for environment.yml, which is already there).
Then, navigate to the mpib_sp_eeg dataset and run the two commands shown below, in order
to create one derivatives folder per subject, with the annotation derivatives already
stored within that folder (see the README in /annotation_derivatives for more information).
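From the root of the mpib_sp_eeg dataset:

```shell
# unlock the dataset so that its files can be modified (this might take a while)
datalad unlock .
# create one derivatives folder per subject, pre-filled with the annotation derivatives
cp -r ./derivatives/annotation_derivatives/sub* ./derivatives
```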
Configure your path
To run the analysis code on your system, you need to configure the path where the mpib_sp_eeg
dataset is stored. For that, go to code/utils.py and find and extend the lines that are
shown in the example below:
```Python
# Find these lines in utils.py:
# Adjust this path to where the bids directory is stored
home = os.path.expanduser("~")
if "stefanappelhoff" in home:
    BIDSROOT = os.path.join("/", "home", "stefanappelhoff", "Desktop", "spdata")
elif "appelhoff" in home:
    BIDSROOT = os.path.join("/", "home", "appelhoff", "appelhoff", "spdata")
elif "example" in home:
    BIDSROOT = os.path.join("/", "home", "example", "mpibsp_eeg")
```
In the code block above, we added a hypothetical user "example" together with their hypothetical path to the data. Please adjust this for your own setup.
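For instance, a hypothetical user `jdoe` with the data stored under `/data/mpib_sp_eeg` could add another branch like the following (both the username and the path are placeholders):

```Python
import os

# hypothetical example: in utils.py this would be an additional elif branch
# in the existing chain; replace "jdoe" and the path with your own details
home = os.path.expanduser("~")
if "jdoe" in home:
    BIDSROOT = os.path.join("/", "data", "mpib_sp_eeg")
```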
Convert .ipynb notebooks to .py scripts using .tpl template
Running the code is sometimes more convenient in notebook format, and at other times in
script format. Using nbconvert, we can do both. To convert the notebooks to script
format, run: `bash ipynb2py.sh`
That makes use of the simplepython.tpl conversion template, which makes sure that no
possibly breaking "magic" syntax is included in the Python scripts.
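For reference, the conversion performed by the script is along the lines of the following call (the exact flags are an assumption; see ipynb2py.sh for the actual invocation and the list of notebooks):

```shell
# convert one notebook to a plain .py script, using the template
# that strips potentially breaking notebook "magic" syntax
jupyter nbconvert --to python --template simplepython.tpl rsa_analysis_9x9.ipynb
```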
Other configurations
The following configurations are optional:
- For using Jupyter Notebook extensions, run: `jupyter contrib nbextension install --user`
- For getting reasonable git diffs on notebooks, run: `nbdime config-git --enable --global`
Produce initial derivative files
Run the following commands in succession:
- `python 02_load_and_concatenate.py`
- `python 03_run_ica.py`
- `jupyter nbconvert --ClearOutputPreprocessor.enabled=True --inplace analysis_behavior.ipynb`
- `jupyter nbconvert --to html --ExecutePreprocessor.timeout=None --execute analysis_behavior.ipynb`
- `python 04_epoching.py`
or all at once as a background process using:
```shell
nohup sh -c "python 02_load_and_concatenate.py && \
python 03_run_ica.py && \
jupyter nbconvert --ClearOutputPreprocessor.enabled=True --inplace analysis_behavior.ipynb && \
jupyter nbconvert --to html --ExecutePreprocessor.timeout=None --execute analysis_behavior.ipynb && \
python 04_epoching.py" &
```
In that case, you can check the nohup.out file produced by that command for any logs.
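While the background process runs, you can follow those logs with standard shell tooling, for example:

```shell
# stream new log lines as they are written
tail -f nohup.out
```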
Produce all remaining results
Run the following commands in succession, and inspect the resulting analysis_behavior.html
(from before) and analysis_erp.html files for the results:
- `jupyter nbconvert --ClearOutputPreprocessor.enabled=True --inplace analysis_erp.ipynb`
- `jupyter nbconvert --to html --ExecutePreprocessor.timeout=None --execute analysis_erp.ipynb`
Then run the following command:
`nohup bash run_rsa_cluster_plot.sh &`
The results of that command are saved in the derivatives/rsa_9x9 directory.
They are admittedly a bit scattered, so a short guide to finding the statistics reported
in the paper is provided at the end of this section.
Then, run:
- `jupyter nbconvert --ClearOutputPreprocessor.enabled=True --inplace analysis_neurometrics.ipynb`
- `jupyter nbconvert --to html --ExecutePreprocessor.timeout=None --execute analysis_neurometrics.ipynb`
and inspect analysis_neurometrics.html for the results.
To produce the figures, run:
- `jupyter nbconvert --ClearOutputPreprocessor.enabled=True --inplace publication_plots.ipynb`
- `jupyter nbconvert --to html --ExecutePreprocessor.timeout=None --execute publication_plots.ipynb`
and inspect the publication_plots/ directory for the plots, and publication_plots.html for
some statistical results on the neurometrics analyses.
Note: To produce a complete Figure 1 instead of the separate files `fig1a.pdf` and `fig1bcd.pdf`, you need
to run `publication_plots/fig1.sh`; but make sure that all required software is installed before
running the script (open the script in a text editor to see the requirements).
To find the results for the RSA (after running run_rsa_cluster_plot.sh), use these hints:
- There are three folders in `derivatives/rsa_9x9`. Open the one that contains `half-both` in its name.
- Within that folder, there is a folder starting with `clusterperm_results_` and ending with a timestamp. We will refer to that folder as the `clusterperm_results_*` folder below.
- The statistics reported in the paper on RSA can be found as follows:
  - start and stop of the overall (collapsed over conditions) numberline cluster, as well as its p-value: `perm_and_2x2_outputs/orthnumberline_length/average_onumberline_clusters.json`
  - p-values of numberline clusters by conditions: `clusterperm_results_*/model_orthnumberline_stat-length_thresh-0.05_pvals.txt`, and the associated `clusterperm_results_*/model_orthnumberline_stat-length_thresh-0.05_plot.png`
  - time window of interaction cluster: `publication_plots.html` (in the outputs)
  - t-tests on the significant interaction cluster: `perm_and_2x2_outputs/orthnumberline_length/posthocs_interaction-0-both.html`, `perm_and_2x2_outputs/orthnumberline_length/posthocs_interaction-0-first_half.html`, and `perm_and_2x2_outputs/orthnumberline_length/posthocs_interaction-0-second_half.html`
  - start and stop of the overall (collapsed over conditions) extremity cluster, as well as its p-value: `perm_and_2x2_outputs/orthextremity_length/average_oextremity_clusters.json`
  - p-values of extremity clusters by conditions: `clusterperm_results_*/model_orthextremity_stat-length_thresh-0.05_pvals.txt`, and the associated `clusterperm_results_*/model-orthextremity_stat-length_thresh-0.05_plot.png`
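As a small illustration, the cluster `.json` files listed above can be inspected with a few lines of Python (the structure of the JSON content is not documented here; it depends on what clusterperm.py writes):

```Python
import json

# path as listed above; adjust the prefix to your derivatives/rsa_9x9 results folder
fname = "perm_and_2x2_outputs/orthnumberline_length/average_onumberline_clusters.json"
with open(fname) as fin:
    clusters = json.load(fin)

# pretty-print the cluster information (start, stop, p-value)
print(json.dumps(clusters, indent=2))
```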
Explanations for the different files
"other" files
- `.gitignore`: So that temporary files, caches, or log files are not committed to the version control history.
- `.zenodo.json`: Metadata for the code archive on Zenodo.
- `environment.yml`: For use with the `conda` package manager to set up a Python environment for running the code.
- `simplepython.tpl`: A template file for `nbconvert` that is used to convert Jupyter Notebooks to Python scripts.
- `ipynb2py.sh`: A convenience shell script (bash) that converts all notebooks of interest to Python scripts.
Utility functions and configurations
- `utils.py`: Contains miscellaneous functions and configurations that are imported throughout the other scripts/notebooks.
Preprocessing code
- `01_raw_inspection.ipynb`: Used for visually screening each raw file, and marking bad temporal segments and bad channels. This script was used to produce the `derivatives/annotation_derivatives` in the `mpib_sp_eeg` dataset (`*_annotations.txt` and `*_badchannels.txt` files per subject).
- `02_load_and_concatenate.ipynb`: Used to (1) load all three raw EEG files per subject in the original BrainVision format, (2) modify the event triggers to remain identifiable after concatenation of all files, (3) concatenate the files and save them in the MNE-Python native `.fif` format: one raw `.fif` file per subject. This needs the outputs from `01_raw_inspection.ipynb`.
- `03_run_ica.ipynb`: Used to run ICA on the data, and perform filtering, interpolation, and re-referencing to obtain clean data. This script can be run in "interactive" mode (from the notebook) to screen ICA components and potentially mark them for rejection. This produces the `*__concat_eeg-excluded-ica-comps.txt` files in `derivatives/annotation_derivatives` in the `mpib_sp_eeg` dataset. If those derivatives are already present, the script can also be run non-interactively to just preprocess the data based on the previous screening.
- `04_epoching.py`: Used to epoch the data and tag each epoch with the conditions it belongs to.
Analysis code
- `analysis_behavior.ipynb`: Analysis of behavioral data.
- `analysis_erp.ipynb`: Analysis of univariate EEG/ERP data.
- `rsa_analysis_9x9.ipynb`: Run RSA and plot results based on different parameters.
- `rsa_analysis_script.py`: Allows running `rsa_analysis_9x9.py` (converted from `rsa_analysis_9x9.ipynb`) with several parameter settings.
- `analysis_neurometrics.ipynb`: Analysis of multivariate EEG data (RSA): neurometrics.
- `clusterperm.py`: Run cluster-based permutation analyses on RSA analysis results (produced by `rsa_analysis_9x9.ipynb`).
- `clusterperm_script.py`: Allows running `clusterperm.py` for several RSA analysis results one after another (good to run overnight).
- `rsa_plotting.ipynb`: Prepare plots and statistics from RSA analysis results and their cluster-based permutation analysis results.
- `rsa_plotting_script.py`: Allows running `rsa_plotting.py` (converted from `rsa_plotting.ipynb`) for several RSA analysis results one after another.
- `run_rsa_cluster_plot.sh`: A convenience bash script: simply set up `rsa_analysis_script.py`, `clusterperm_script.py`, and `rsa_plotting_script.py`, and then run `nohup bash run_rsa_cluster_plot.sh &` overnight to let it produce all results.
Plotting
- `publication_plots/README`: Basic information about the `publication_plots/` directory.
- `publication_plots.ipynb`: To produce the plots used in the paper.
- `publication_plots/fig1a.odg`: A "LibreOffice Draw" file for panel a in Figure 1.
- `publication_plots/fig1a.pdf`: A PDF export from the `publication_plots/fig1a.odg` file.
- `publication_plots/fig1-tikz.tex`: A XeTeX file to stitch together panel a and the remaining panels for Figure 1.
- `publication_plots/fig1-tikz_to_raster.sh`: A shell script to convert the Figure 1 PDF to PNG and TIF formats.
Owner
- Name: Stefan Appelhoff
- Login: sappelhoff
- Kind: user
- Location: Germany
- Company: @MPIB
- Website: https://www.stefanappelhoff.com
- Twitter: stefanappelhoff
- Repositories: 18
- Profile: https://github.com/sappelhoff
GitHub Events
Dependencies
- datalad *
- mne <0.23
- mne_bids <0.8
- pingouin <0.6
- rpy2 <3.5