https://github.com/robbisg/sekupy

Detergent for Multivariate Analysis Pipelines of Neuroimaging data in Python

Science Score: 13.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.9%) to scientific vocabulary

Keywords

fmri-analysis machine-learning multivariate-analysis neuroimaging python
Last synced: 6 months ago

Repository

Detergent for Multivariate Analysis Pipelines of Neuroimaging data in Python

Basic Info
Statistics
  • Stars: 1
  • Watchers: 2
  • Forks: 0
  • Open Issues: 52
  • Releases: 0
Topics
fmri-analysis machine-learning multivariate-analysis neuroimaging python
Created over 7 years ago · Last pushed about 1 year ago
Metadata Files
Readme · License

README.md

sekupy

Badges: example workflow · codecov · Documentation Status · Project Status: WIP (initial development is in progress, but there has not yet been a stable, usable release suitable for the public) · CodeFactor

sekupy is a Python package created for deterging your (dirty) (and) (multivariate) neuroimaging analyses. The package was designed for decoding analyses, but it also includes basic univariate analyses.

It includes utilities to vary sets of analysis parameters without struggling with for and if statements.
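
As a rough illustration of that idea, here is a minimal sketch in plain Python (not sekupy's own API, and the option names are made up for the example): each key lists the values to try, and the cartesian product yields one configuration per combination, with no nested for or if blocks.

```python
from itertools import product

# Hypothetical option grid: each key lists the values to try for that parameter.
options = {
    'sample_slicer__evidence': [[1], [2], [3]],
    'estimator__clf__C': [0.1, 1, 10],
}

# One configuration dictionary per combination, with no nested for/if logic.
configurations = [dict(zip(options, values))
                  for values in product(*options.values())]

for config in configurations:
    print(config)  # each of these could then update a default configuration
```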

It deterges your results by saving them in a safe manner, keeping BIDS in mind.
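
Purely as an illustration of what a BIDS-ish layout can look like (this sketches the general BIDS-derivatives naming convention, not necessarily sekupy's exact output structure):

```python
import os

# Hypothetical derivatives path for a decoding result, using BIDS-style
# key-value entities in the filename; sekupy's actual layout may differ.
path = os.path.join(
    "derivatives", "sekupy-roi_decoding",   # pipeline-level folder
    "sub-01",                                # subject-level folder
    "sub-01_task-images_desc-accuracy_stats.tsv",
)
print(path)
```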

sekupy is the deterged version of pyitab.

Documentation

The documentation can be found at https://sekupy.readthedocs.io/.

Install

You can install it with: `pip install sekupy`

Example

The main idea is to use a dictionary to configure all parameters of your analysis, feed the configuration into an AnalysisPipeline object, call fit to obtain the results, and then save to store them in a BIDS-ish way.

For example, if we want to perform a RoiDecoding analysis using some preprocessing steps, we will have a script like this (this is not a complete example):

```python
from sekupy.analysis.configurator import AnalysisConfigurator
from sekupy.analysis.pipeline import AnalysisPipeline
from sekupy.analysis.decoding.roi_decoding import RoiDecoding

from sklearn.feature_selection import SelectKBest
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut

_default_config = {
    # Here we specify that we have to transform the dataset labels,
    # then select samples and then balance data
    'prepro': ['target_transformer', 'sample_slicer', 'balancer'],

    # Here we set which attribute to choose (the dataset is a pymvpa dataset)
    'target_transformer__attr': "image_type",
    # Here we select samples with an image_type equal to I or O and evidence equal to 1
    'sample_slicer__attr': {'image_type': ["I", "O"], 'evidence': [1]},
    # Then we say that we want to balance image_type at the subject level
    'balancer__attr': 'subject',

    # We set up the estimator in a sklearn way
    'estimator': [
        ('fsel', SelectKBest(k=50)),
        ('clf', SVC(C=1, kernel='linear'))],
    'estimator__clf__C': 1,
    'estimator__clf__kernel': 'linear',

    # Then the cross-validation object (also sklearn)
    'cv': LeaveOneGroupOut,

    'scores': ['accuracy'],

    # Then the analysis
    'analysis': RoiDecoding,
    'analysis__n_jobs': -1,
    'analysis__permutation': 0,
    'analysis__verbose': 0,

    # Here we say that we want to use the regions with values 1 to 5 of the image+type mask
    'kwargs__roi_values': [('image+type', [1]), ('image+type', [2]), ('image+type', [3]),
                           ('image+type', [4]), ('image+type', [5])],

    # We want to use subject for our cross-validation
    'kwargs__cv_attr': 'subject'
}

configuration = AnalysisConfigurator(**_default_config, kind='configuration')
kwargs = configuration._get_kwargs()

# ds is a previously loaded (pymvpa) dataset
a = AnalysisPipeline(configuration, name="roi_decoding_across_full").fit(ds, **kwargs)
a.save()
```

Surf the code, starting from the classes used here!
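
As a side note, the double-underscore keys in the configuration (for example `estimator__clf__C`) appear to mirror scikit-learn's nested-parameter naming, which you can see directly on a plain sklearn Pipeline:

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest
from sklearn.svm import SVC

# Step name and parameter name are joined by a double underscore,
# just like the 'estimator__clf__C' key used above.
pipe = Pipeline([('fsel', SelectKBest(k=50)), ('clf', SVC())])
pipe.set_params(clf__C=1, clf__kernel='linear')
print(pipe.get_params()['clf__C'])  # -> 1
```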

Owner

  • Name: robbisg
  • Login: robbisg
  • Kind: user
  • Location: San Benedetto del Tronto
  • Company: University "G. D'Annunzio" Chieti-Pescara

Machine Learning / Deep Learning / Neuroimaging / Python / React / Kubernetes / Slurm / Flask

GitHub Events

Total
  • Push event: 2
Last Year
  • Push event: 2

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 7 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 1
  • Total maintainers: 1
pypi.org: sekupy

Detergent for your dirty neuroimaging pipelines

  • Homepage: https://github.com/robbisg/sekupy
  • Documentation: https://sekupy.readthedocs.io/
  • License: License for sekupy ================== New BSD License Copyright (c) The sekupy developers. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: a. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. b. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. c. Neither the name of the sekupy developers nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
  • Latest release: 0.0.1
    published over 1 year ago
  • Versions: 1
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 7 last month
Rankings
Dependent packages count: 10.6%
Average: 35.1%
Dependent repos count: 59.5%
Maintainers (1)
Last synced: 6 months ago

Dependencies

.github/workflows/test.yaml actions
  • actions/cache v2 composite
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
  • codecov/codecov-action v1 composite
Dockerfile docker
  • ubuntu bionic build
requirements-doc.txt pypi
  • coverage *
  • cython *
  • flake8 *
  • furo *
  • h5py *
  • imbalanced-learn *
  • joblib *
  • lxml *
  • matplotlib *
  • mkl *
  • mne *
  • nilearn *
  • numpy *
  • numpydoc *
  • pandas *
  • pillow *
  • pybids *
  • rinohtype *
  • scikit-learn *
  • seaborn *
  • sphinx *
  • sphinx-gallery *
  • statsmodels *
  • tqdm *
  • xlrd *
requirements-min.txt pypi
  • h5py *
  • imbalanced-learn *
  • joblib *
  • matplotlib *
  • mne *
  • nilearn *
  • numpy >=1.18.5
  • pandas *
  • pybids *
  • pymvpa2 *
  • scikit-learn *
  • scipy *
  • seaborn *
  • statsmodels *
  • tqdm *
  • xlrd *
requirements.txt pypi
  • bokeh *
  • codecov *
  • coverage *
  • dask *
  • dask-jobqueue *
  • dask-kubernetes *
  • distributed *
  • h5py *
  • hmmlearn *
  • imbalanced-learn *
  • joblib *
  • matplotlib *
  • mne *
  • mne-hcp *
  • nibabel *
  • nilearn *
  • nistats *
  • numpy >=1.18.5
  • pandas *
  • pybids *
  • pymvpa2 *
  • pytest >=3.6.0
  • pytest-cov *
  • scikit-learn *
  • scipy ==1.10.0
  • seaborn *
  • statsmodels *
  • tqdm *
  • xlrd *
pyproject.toml pypi
  • h5py *
  • imbalanced-learn *
  • lazy_loader >=0.3
  • matplotlib >=3.5.0
  • mne *
  • nibabel *
  • numpy >=1.21.2
  • pandas *
  • pymatreader *
  • pymvpa2 *
  • scikit-learn *
  • scipy >=1.7.1
  • seaborn *
  • tqdm *
requirements-legacy.txt pypi
  • bids-validator ==1.9.9
  • bokeh *
  • codecov *
  • coverage *
  • distributed *
  • h5py *
  • hmmlearn *
  • imbalanced-learn ==0.8.1
  • joblib *
  • matplotlib *
  • mne *
  • mne-hcp *
  • nibabel *
  • nilearn *
  • nistats *
  • numpy *
  • pandas *
  • pybids *
  • pybids ==0.15.6
  • pymvpa2 *
  • pytest *
  • pytest-cov *
  • scikit-learn *
  • scipy *
  • seaborn *
  • statsmodels *
  • tqdm *
  • xlrd *
environment.yaml conda
  • cython
  • cytoolz
  • matplotlib
  • nomkl
  • numpy
  • pip
  • python 3.7.*
  • python-blosc
  • scipy 1.2.*
  • tk