automatedfreesurfer
Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: Found CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (15.0%) to scientific vocabulary
Keywords
Repository
Basic Info
Statistics
- Stars: 10
- Watchers: 3
- Forks: 2
- Open Issues: 0
- Releases: 0
Topics
Metadata Files
README.md
AutomatedFreeSurfer
Description
These scripts are tailored to process T1 images in a BIDS-formatted dataset using FreeSurfer's recon-all and the hippocampal and amygdala subfields segmentation stream, with minimal (really, minimal) user input. They automate the creation of a quality control montage of the FreeSurfer output for each subject and session, and the extraction of the metrics into a CSV file. The suite can also run the longitudinal stream when more than one session per subject is present (again with minimal user input).
The pipeline is designed for simplicity: users interact through a series of prompts to navigate specific processes. Apart from setting some paths, no additional programming is required; the pipeline takes care of it. Ensure that all scripts are downloaded from this repository and saved in your BIDS folder alongside your subjects' directories.
Update 2025
Added FastSurfer options for cross-sectional processing.
Table of Contents
- Author Information
- Prerequisites
- Instructions
- Scripts Overview
- Step-by-Step Guide
- Troubleshooting
- Contact
Citation
- Ferreira-Atuesta, C., Terziev, R., Marr, H., & Galovic, M. (2023). AutomatedFreeSurfer (Version 1.0) [Computer software]. https://github.com/cfatuesta/AutomatedFreeSurfer/tree/main
Prerequisites
BIDS Dataset: Your dataset must be organized according to the BIDS specification, with a correctly named main directory and subdirectories. Here's an example of how it should look:
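The layout below is a minimal illustration following the BIDS specification; the subject and session labels are placeholders, and your dataset may contain more subjects, sessions, and modalities.

```
BIDS_dataset/
├── sub-001/
│   ├── ses-01/
│   │   └── anat/
│   │       ├── sub-001_ses-01_T1w.nii.gz
│   │       └── sub-001_ses-01_T1w.json
│   └── ses-02/
│       └── anat/
│           ├── sub-001_ses-02_T1w.nii.gz
│           └── sub-001_ses-02_T1w.json
└── sub-002/
    └── ses-01/
        └── anat/
            ├── sub-002_ses-01_T1w.nii.gz
            └── sub-002_ses-01_T1w.json
```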
NIFTI Files: The images should be in NIfTI format (.nii or .nii.gz), accompanied by their corresponding JSON sidecar files. These files are typically generated by DICOM-to-NIfTI converters such as `dcm2niix`. Note: use `dcm2niix`, not `dcm2nii`.
Check if dcm2niix exists:
```bash
which dcm2niix
```
If it doesn't exist, you can install it from its GitHub repository:
- dcm2niix GitHub repository
- FreeSurfer: FreeSurfer must be installed, and the necessary scripts (`main_script.sh`, `process_csv.py`, `merge_csv.py`, and `process_longitudinal.py`) should reside in your `SUBJECTS_DIR` path (refer to the Enigma folder in the provided image).
Check if FreeSurfer is installed:
```bash
which recon-all
```
- Freeview & ImageMagick: The tools `freeview` (from FreeSurfer) and `magick` (from ImageMagick) should be readily available.
Check for freeview:
```bash
which freeview
```
Check for magick:
```bash
which magick
```
If ImageMagick isn't installed, run:
```bash
brew install imagemagick
```
Python3: Ensure Python3 is installed on your system. You can check the version using:
```bash
python --version
```
If you don't have Python3:
```bash
brew install pyenv
pyenv install 3.10.10
pyenv global 3.10.10
```
Homebrew: Several of the installations utilize Homebrew. If you don't have Homebrew installed:
```bash
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```
By following these prerequisites, you ensure a smooth setup and avoid potential issues during execution. Make sure to follow the checks and installation steps accordingly.
Instructions
Starting the Main Script
- Initialization: Open your terminal or command prompt. Navigate to the directory containing your scripts and data. Execute `main_script.sh` or `main_script_parallelized.sh` by typing `bash main_script.sh` or `bash main_script_parallelized.sh` and pressing Enter.
- Dataset Validation: The script first validates the organization and naming conventions of the dataset. If inconsistencies are detected, you can immediately exit the script by pressing `Ctrl+C`. If everything appears in order, you can proceed by pressing Enter.
Enter Paths
- Directory Check: The script checks if the environment variables `SUBJECTS_DIR` and `FREESURFER_HOME` are set.
  - If they are not set, you will be prompted to provide the paths manually:
    - `SUBJECTS_DIR`: the directory where your subjects' data is stored.
    - `FREESURFER_HOME`: the installation directory of FreeSurfer.
  - If they are set, the paths will be displayed for confirmation. You can choose to continue with these paths or provide alternate ones.
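If you prefer to set these variables yourself before launching the script, a typical FreeSurfer setup looks like the sketch below; the paths are examples only and should be adjusted to your installation.

```bash
# Check whether the variables are already set
echo "$FREESURFER_HOME"
echo "$SUBJECTS_DIR"

# If not, export them before running the script (example paths; adjust to your system)
export FREESURFER_HOME=/Applications/freesurfer/7.4.1
source "$FREESURFER_HOME/SetUpFreeSurfer.sh"
export SUBJECTS_DIR=/path/to/your/BIDS/directory
```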
Processing T1 Images
Metadata Extraction: All detected T1 images are processed to extract metadata from their associated .JSON headers. This metadata is then consolidated into a CSV file named "T1s_metadata.csv" in the working directory.
Processing Check: The script searches for T1 images that haven't been processed by FreeSurfer. This is done by reading the `recon-all.log` associated with each image (a minimal sketch of this kind of check appears at the end of this section).
- If the log states "finished without error", that image is considered already processed and is skipped.
- If there is no log, or if the log states "finished with errors", the image is selected for further processing.
Processing Confirmation: After identifying all unprocessed images, you will be asked whether you'd like to continue processing them using the cross-sectional pipeline. Type `yes` to proceed.

❗️ Important Note: If you have T1 images already processed by FreeSurfer but stored in different directories, you must move them to their respective subject directories in the main dataset. Ensure you also relocate the `log` folder and its contents, as the scripts rely on these logs for decision-making. See the image below to recreate the appropriate subfolders, and keep the same structure and naming as shown (except for the subjects' IDs).
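The sketch below gives a rough idea of the log-based check described above; the exact paths and logic inside the scripts may differ.

```bash
# Sketch only: inspect each subject's recon-all.log and report its status.
for log in "$SUBJECTS_DIR"/sub-*/scripts/recon-all.log; do
    [ -f "$log" ] || continue
    subj=$(basename "$(dirname "$(dirname "$log")")")
    if grep -q "finished without error" "$log"; then
        echo "$subj: already processed, skipping"
    else
        echo "$subj: needs (re)processing"
    fi
done
```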
Generating Montages
- Montage Creation: For T1 images lacking visual montages, the script will offer to generate them. Confirm by typing `yes`.
- Montage Types: Two distinct montages will be created for each subject/session:
  - 3D Montage: Showcasing three-dimensional aspects of the imaging.
  - 2D Montage: Providing two-dimensional slices for detailed examination.
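As a rough idea of what montage generation involves: `freeview` can capture screenshots non-interactively, and ImageMagick can tile them into a single image. The commands and options below are illustrative only, not necessarily what the script uses.

```bash
# Illustrative only: capture a coronal screenshot with freeview and tile
# several screenshots into one montage with ImageMagick.
SUBJ=sub-001
freeview -v "$SUBJECTS_DIR/$SUBJ/mri/T1.mgz" \
         -f "$SUBJECTS_DIR/$SUBJ/surf/lh.pial" "$SUBJECTS_DIR/$SUBJ/surf/rh.pial" \
         -viewport coronal -ss "${SUBJ}_coronal.png"
magick montage "${SUBJ}"_*.png -tile 3x1 -geometry +2+2 "${SUBJ}_montage.png"
```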
Tables Creation
- Data Compilation: The script organizes and tabulates the cross-sectional and, if present, longitudinal data. This structured data is then saved for future analysis and referencing.
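FreeSurfer itself ships helpers for this kind of tabulation; whether or not the scripts call them internally, they illustrate the idea (subject IDs below are placeholders).

```bash
# Example of FreeSurfer's own table-building tools (the pipeline may do this differently)
asegstats2table --subjects sub-001_ses-01 sub-001_ses-02 \
                --meas volume --tablefile aseg_volumes.tsv
aparcstats2table --subjects sub-001_ses-01 sub-001_ses-02 \
                 --hemi lh --meas thickness --tablefile lh_aparc_thickness.tsv
```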
Longitudinal Stream
- Session Check: The script evaluates if subjects have more than one processed session.
- Pipeline Activation: If multi-session criteria are met, the script activates Freesurfer's longitudinal pipeline, systematically creating new directories for every output stage, namely: base, time 1, and time 2.
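For reference, FreeSurfer's longitudinal stream builds a within-subject template (the "base") and then reprocesses each time point against it. In plain `recon-all` terms, these steps are roughly as follows (subject and session IDs are placeholders; the script wraps them for you):

```bash
# Cross-sectional runs for each time point (normally already done by the main script)
recon-all -s sub-001_ses-01 -all
recon-all -s sub-001_ses-02 -all

# Within-subject template ("base")
recon-all -base sub-001_base -tp sub-001_ses-01 -tp sub-001_ses-02 -all

# Longitudinal runs of each time point against the base
recon-all -long sub-001_ses-01 sub-001_base -all
recon-all -long sub-001_ses-02 sub-001_base -all
```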
Scripts Details
Main Script (Parallelized)
`main_script_parallelized.sh`: This script functions similarly to `main_script.sh` but is optimized for parallel processing. It enables faster processing of multiple images by distributing tasks across available computational resources. Especially handy for large datasets, this script reduces processing time without compromising the quality of the output.
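One common shell pattern for this kind of parallelism is sketched below; it only illustrates the idea and is not necessarily how `main_script_parallelized.sh` distributes its work.

```bash
# Illustration only: run up to 4 recon-all jobs at once with xargs.
find "$SUBJECTS_DIR" -maxdepth 1 -type d -name 'sub-*' -exec basename {} \; |
  xargs -n 1 -P 4 -I {} recon-all -s {} -all
```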
Main Script (Regular)
`main_script.sh`: This script, as described above, processes the T1 images sequentially. It is more straightforward and is suitable for datasets of moderate size or when computational resources are limited.
The following scripts are used by both `main_script.sh` and `main_script_parallelized.sh`:
`process_csv.py`: Adjusts CSV files for better readability, extracts relevant data, and determines the pipeline type.
`merge_csv.py`: Intended to merge multiple CSV files (its current functionality is similar to `extract_metadata.py`).
`extract_metadata.py`: Fetches metadata from the JSON files associated with the neuroimaging data and compiles it into a CSV.
`process_longitudinal.py`: Manages the processing of longitudinal MRI data, including setting up symbolic links for different sessions.
Remember to adjust your script's execution based on the desired mode: parallelized or regular. Depending on the size and complexity of your dataset and available computational resources, choose the script that aligns best with your needs.
New to programming? Here's a step-by-step guide on how to run these scripts
- Setup: Download the scripts `main_script.sh` (or `main_script_parallelized.sh`), `process_csv.py`, `merge_csv.py`, `extract_metadata.py`, and `process_longitudinal.py` from this repository. Move these scripts into your primary BIDS dataset directory, where your subjects' folders are located.
- Open the terminal (or command prompt).
- Navigate to your BIDS directory using `cd path/to/your/BIDS/directory`.
- Type `bash main_script.sh` or `bash main_script_parallelized.sh` and hit Enter.
- Follow the on-screen instructions and provide inputs as prompted.
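In terminal terms, a typical run therefore boils down to the two commands below (the path is a placeholder for your own dataset):

```bash
cd /path/to/your/BIDS/directory    # your BIDS dataset, with the scripts copied in
bash main_script.sh                # or: bash main_script_parallelized.sh
```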
Output
- Quality Control Montage: For each subject and session, a visual quality check is provided through montages generated from FreeSurfer's outputs.
- FreeSurfer Metrics: Detailed metrics obtained from FreeSurfer processing are saved as a CSV file.
- Longitudinal Data Stream: For subjects with more than one session, additional longitudinal data is processed and stored.
- T1 Metadata: Metadata associated with each unique T1 image is extracted and compiled into a dedicated CSV file.
Best Practices
- Script Names: Ensure that you keep the original names of all the required scripts within your `SUBJECTS_DIR` path.
- Path Accuracy: Always verify that the paths you've set or entered are correct before executing the scripts. Incorrect paths can trigger errors.
- Dataset Organization: Your dataset should be in order and correctly named. Always ensure it's structured according to the BIDS specification.
Troubleshooting
- Command Issues: If the `freeview` or `magick` commands are unrecognized, refer to the error message's instructions to troubleshoot.
- Path-Related Errors: If you input paths that either don't exist or aren't directories, you'll receive an error message and the script will terminate. Always double-check your paths before retrying.
Contact
For any issues or inquiries about the scripts, reach out to Carolina Ferreira-Atuesta at cfatuesta@gmail.com.
Owner
- Name: FreeSurfer
- Login: freesurfer
- Kind: organization
- Location: Boston, USA
- Website: http://freesurfer.net
- Repositories: 3
- Profile: https://github.com/freesurfer
Massachusetts General Hospital / Harvard Medical School
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: "Ferreira-Atuesta"
    given-names: "Carolina"
    orcid: "https://orcid.org/0000-0003-0853-7163"
  - family-names: "Terziev"
    given-names: "Robert"
  - family-names: "Marr"
    given-names: "Harry"
  - family-names: "Galovic"
    given-names: "Marian"
title: "AutomatedFreeSurfer"
version: 1.0
date-released: 2023-10-20
url: "https://github.com/cfatuesta/AutomatedFreeSurfer/tree/main"
```
GitHub Events
Total
- Watch event: 6
- Push event: 2
Last Year
- Watch event: 6
- Push event: 2