adolescent_normative_modeling_ct_2024

Normative Modeling of Adolescent Data: Python code and data files for a PNAS manuscript.

https://github.com/nevacorr/adolescent_normative_modeling_ct_2024

Science Score: 57.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 2 DOI reference(s) in README
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.1%) to scientific vocabulary
Last synced: 6 months ago

Repository


Basic Info
  • Host: GitHub
  • Owner: nevacorr
  • Language: Python
  • Default Branch: master
  • Homepage:
  • Size: 188 KB
Statistics
  • Stars: 0
  • Watchers: 2
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created over 1 year ago · Last pushed about 1 year ago
Metadata Files
Readme Citation

README.md

Normative Modeling of Adolescent Cortical Thickness

This project implements Bayesian linear regression normative modeling according to the procedure outlined by Rutherford et al. in Nature Protocols 2022 (https://doi.org/10.1038/s41596-022-00696-5). Here the modeling is applied to adolescent cortical thickness data collected at two time points (before and after the COVID-19 pandemic lockdowns) by Patricia Kuhl's laboratory at the University of Washington. This project creates models based on pre-COVID data and applies these to the post-COVID data.

Installing dependencies

To install the required software, please execute:

pip install -r requirements.txt

Input data

AdolCortThickdata.csv contains the data used in the analysis.

visit1subjectsusedtocreatenormativemodeltrainset_cortthick.txt contains the list of subjects whose pre-COVID data were used for model training across all programs.

visit2allsubjectsusedintestset_cortthick.txt contains the list of subjects whose post-COVID data were used to evaluate the effects of the COVID pandemic lockdowns across all programs.

visit1eulernumbers.csv contains the Euler numbers for the left and right hemispheres for each study subject at the pre-COVID timepoint.

visit2eulernumbers.csv contains the Euler numbers for the left and right hemispheres for each study subject at the post-COVID lockdown timepoint.
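The subject-list files and the data CSV together define the train/test split. The sketch below shows the general pattern using only the standard library; the inline file contents and the `participant_id` column name are invented for illustration and may not match the actual files:

```python
import csv
import io

# Stand-ins for the repository's data files (contents invented;
# the real column names may differ).
train_ids_txt = "sub-001\nsub-002\n"
test_ids_txt = "sub-003\n"
data_csv = (
    "participant_id,visit,cortthick_mean\n"
    "sub-001,1,2.91\n"
    "sub-002,1,2.83\n"
    "sub-003,2,2.70\n"
)

# One subject ID per line, as the .txt filenames suggest.
train_ids = set(train_ids_txt.split())
test_ids = set(test_ids_txt.split())

rows = list(csv.DictReader(io.StringIO(data_csv)))
train_rows = [r for r in rows if r["participant_id"] in train_ids]
test_rows = [r for r in rows if r["participant_id"] in test_ids]

print(len(train_rows), len(test_rows))  # 2 1
```

Keeping the split in version-controlled text files, as this repository does, guarantees that all four analysis scripts operate on identical cohorts.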

Running the analysis

You can reproduce the results by running the following scripts in order:

  1. NormativeModelGenz_Time1.py : run this file to generate the normative models for the pre-COVID data. This program saves the models to disk.

  2. ApplyNormativeModeltoGenz_Time2.py : run to apply the models to the post-COVID data. It utilizes the models produced by NormativeModelGenz_Time1.py.

  3. CalculateAvgBrainAgeAfterAveragingCorticalThicknesses.py : run to compute the average acceleration in cortical thickness observed in the post-COVID data. This code does not utilize the models generated by NormativeModelGenz_Time1.py or any output from ApplyNormativeModeltoGenz_Time2.py. However, it does use the same train (pre-COVID) and test (post-COVID) subject cohorts that are utilized by those two programs.

  4. CalculateEffectSizeandCIusingZscore.py : run this to compute effect sizes and confidence intervals for effect sizes.

All other Python files support the main scripts listed above. In these files, the phrases "time 1", "visit1", "training", or "train" refer to the pre-COVID data. The phrases "time 2", "visit 2", or "test" refer to the post-COVID data, with one exception: within the NormativeModelGenz_Time1.py file, "test" sometimes refers to a validation set that is a subset of the training data. Comments within that file provide clarification.
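Step 4 computes effect sizes and confidence intervals from the deviation z-scores. A common way to do this, sketched below with invented z-scores, is a one-sample Cohen's d against the normative mean of zero with a percentile bootstrap CI; the actual script may use a different estimator or CI construction:

```python
import random
import statistics

# Toy post-COVID deviation z-scores (invented for illustration).
z = [-1.2, -0.8, -1.5, -0.3, -0.9, -1.1, -0.6, -1.4, -0.7, -1.0]

def cohens_d(values):
    """One-sample Cohen's d against the normative mean of zero."""
    return statistics.mean(values) / statistics.stdev(values)

# Percentile bootstrap 95% CI for the effect size.
rng = random.Random(0)
boot = sorted(
    cohens_d([rng.choice(z) for _ in z]) for _ in range(2000)
)
lo, hi = boot[int(0.025 * 2000)], boot[int(0.975 * 2000)]

print(f"d = {cohens_d(z):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Because the z-scores are already standardized against the pre-COVID norm, a systematically negative mean z in the post-COVID cohort directly quantifies accelerated cortical thinning.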

Alternate analysis: Separate Models for Males and Females

An alternate analysis allows for interactions with sex by creating separate normative models for males and females. You can reproduce its results by running the following scripts, located in the AlternateAnalysis folder:

  1. NormativeModelCreateandApplyGenzMF_Separate : run this file to generate the normative models from the pre-COVID data and apply them to the post-COVID data. This also computes the average acceleration in cortical thickness observed in the post-COVID data.

  2. CalculateEffectSizeandCIusingZscore_MFseparate.py : run this to compute effect sizes and confidence intervals for effect sizes.

These scripts use functions contained in the other Python files located in that folder, plus some of the files in the main repository folder.

Owner

  • Login: nevacorr
  • Kind: user

Citation (CITATION.cff)

cff-version: 1.2.0
title: Normative Modeling of Adolescent Cortical Thickness
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Neva M.
    name-particle: 
    family-names: Corrigan
    email: nevao@uw.edu
    affiliation: University of Washington
identifiers:
  - type: url
    value: >-
      https://github.com/nevacorr/Adolescent_Normative_Modeling_CT_2024
date-released: '2024-08-14'

GitHub Events

Total
  • Push event: 1
  • Pull request event: 2
  • Create event: 1
Last Year
  • Push event: 1
  • Pull request event: 2
  • Create event: 1

Issues and Pull Requests


All Time
  • Total issues: 0
  • Total pull requests: 1
  • Average time to close issues: N/A
  • Average time to close pull requests: 3 days
  • Total issue authors: 0
  • Total pull request authors: 1
  • Average comments per issue: 0
  • Average comments per pull request: 0.0
  • Merged pull requests: 1
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 1
  • Average time to close issues: N/A
  • Average time to close pull requests: 3 days
  • Issue authors: 0
  • Pull request authors: 1
  • Average comments per issue: 0
  • Average comments per pull request: 0.0
  • Merged pull requests: 1
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
  • arokem (1)

Dependencies

requirements.txt pypi
  • Babel ==2.14.0
  • Jinja2 ==3.1.3
  • MarkupSafe ==2.1.5
  • Pygments ==2.17.2
  • Sphinx ==7.2.6
  • alabaster ==0.7.16
  • arviz ==0.13.0
  • asttokens ==2.4.1
  • bspline ==0.1.1
  • cachetools ==5.3.2
  • certifi ==2024.2.2
  • cftime ==1.6.3
  • charset-normalizer ==3.3.2
  • cloudpickle ==3.0.0
  • cons ==0.4.6
  • contourpy ==1.2.0
  • cycler ==0.12.1
  • decorator ==5.1.1
  • docutils ==0.20.1
  • etuples ==0.3.9
  • exceptiongroup ==1.2.0
  • executing ==2.0.1
  • fastprogress ==1.0.3
  • filelock ==3.13.1
  • fonttools ==4.49.0
  • fsspec ==2024.2.0
  • idna ==3.6
  • imagesize ==1.4.1
  • ipython ==8.22.1
  • jedi ==0.19.1
  • joblib ==1.3.2
  • kiwisolver ==1.4.5
  • logical-unification ==0.4.6
  • matplotlib ==3.8.3
  • matplotlib-inline ==0.1.6
  • miniKanren ==1.0.3
  • mpmath ==1.3.0
  • multipledispatch ==1.0.0
  • netCDF4 ==1.6.5
  • networkx ==3.2.1
  • nibabel ==5.2.0
  • numpy ==1.26.4
  • packaging ==23.2
  • pandas ==2.2.0
  • parso ==0.8.3
  • patsy ==0.5.6
  • pcntoolkit ==0.29.post1
  • pexpect ==4.9.0
  • pillow ==10.2.0
  • prompt-toolkit ==3.0.43
  • ptyprocess ==0.7.0
  • pure-eval ==0.2.2
  • pyarrow ==15.0.0
  • pymc ==5.10.4
  • pyparsing ==3.1.1
  • pytensor ==2.18.6
  • python-dateutil ==2.8.2
  • pytz ==2024.1
  • requests ==2.31.0
  • scikit-learn ==1.4.1.post1
  • scipy ==1.12.0
  • seaborn ==0.13.2
  • six ==1.16.0
  • snowballstemmer ==2.2.0
  • sphinx-tabs ==3.4.5
  • sphinxcontrib-applehelp ==1.0.8
  • sphinxcontrib-devhelp ==1.0.6
  • sphinxcontrib-htmlhelp ==2.0.5
  • sphinxcontrib-jsmath ==1.0.1
  • sphinxcontrib-qthelp ==1.0.7
  • sphinxcontrib-serializinghtml ==1.1.10
  • stack-data ==0.6.3
  • statsmodels ==0.14.1
  • sympy ==1.12
  • threadpoolctl ==3.3.0
  • toolz ==0.12.1
  • torch ==2.2.1
  • traitlets ==5.14.1
  • typing_extensions ==4.9.0
  • tzdata ==2024.1
  • urllib3 ==2.2.1
  • wcwidth ==0.2.13
  • xarray ==2024.2.0
  • xarray-einstats ==0.7.0