Recent Releases of halfpipe
halfpipe - 1.2.3
New features and enhancements
- Add motion scrubbing (#769)
- Add `group-level` command (#413, #416, #468, #477, #484, #496, #523, #543, #555, #556, #557, #558, #562, #566, #569, #574, #587, #589, #602, #604, #605, #612, #619, #621, #625, #635)
- Also remove empty directories when `--keep none` is enabled to reduce inode usage (#283)
- Add additional related images for the quality check (#295)
- Add import option to quality check (#328)
- Update documentation (#680, #748)
- Compatibility with COINSTAC (#322, #467)
- Additional unit tests (#338, #340, #384, #403, #421)
- Improve performance (#366, #646, #650, #651)
- Add `sigmasquareds` output for task-based features to allow Cohen's d calculation on the first level (#378)
- Better error messages for invalid metadata (#527)
- Add more comments to code (#665)
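To illustrate how the new `sigmasquareds` output could be used, here is a hedged sketch of a first-level Cohen's d calculation (the `cohens_d` helper is hypothetical and operates on exported maps; it is not HALFpipe's own code):

```python
import numpy as np


def cohens_d(cope: np.ndarray, sigmasquareds: np.ndarray) -> np.ndarray:
    # Effect size per voxel: the contrast estimate divided by the
    # residual standard deviation (square root of the sigmasquareds map).
    return cope / np.sqrt(sigmasquareds)
```

With a contrast estimate of 2.0 and a residual variance of 4.0, this yields an effect size of 1.0 for that voxel.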
Bug fixes
- Fix handling of not a number values in stats (#319)
- Fix handling of additional errors in stats (#345)
- Fix handling of data with unknown BIDS tags (#244, #226, #337)
- Fix loading conditions from MAT files (#345)
- Fix reset of the `global_settings` in the `spec.json` file when adding a new feature (#345)
- Fix detecting unrealistic slice timing values (#342, #347)
- Fix handling of pre-computed fieldmaps (#348)
- Fix filtering out incomplete PEPOLAR field map sets (#370)
- Fix loading numerical condition names for TSV files (#370)
- Fix user interface crash when there are too many distinct tag values to fit on screen (#400, #493)
- Fix selecting `Skip` in the band pass filter setting (#498, #500)
- Fix handling of underscores in subject IDs (#499)
- Fix quality check inclusion decisions when aggregating multiple scans before group statistics (#490, #501)
- Fix handling of dummy scans to actually remove volumes from outputs (#506)
- Fix loading condition files in FSL 3-column format that do not have the extension `.txt` (#543)
- Fix handling of empty tag values (#549, #557)
- Fix accidental modification of database when searching for condition files (#571)
- Fix detecting the orientation for unusual input files (#694, #706)
- Fix backspace key in user interface (#722)
- Fix spacing in user interface (#777)
Maintenance
- Deduplicate code (#392, #411, #417)
- Fix code style (#451)
- Upgrade to Python 3.11 (#486)
- Use self-hosted runner for continuous integration (#554, #622)
- Add more type annotations (#599, #666)
- Use `conda` to install all dependencies (#701, #702, #741)
- Update `ants` version (#691)
- Update build script for creating `singularity` containers (#766)
With many thanks to @lalalavi, @F-Tomas, @dominikgoeller and @jstaph for contributions
Published by HippocampusGirl 12 months ago
halfpipe - 1.2.2
Bug fixes
- Fix issue with BOLD to T1w registration (#230, #238, #239)
- Also detect `exclude.json` files that are placed in the `reports/` folder (#228)
- Improve error message when the FreeSurfer license file is missing (#231)
- Fix a rare calculation error for `fd_mean` and related image quality metrics (#237, #241)
- Fix various warning messages (#247)
- Fix performance issue when collecting inputs for group statistics ()
- Fix a user interface issue where the option `Start over after models` was missing (#259, #260)
- Fix an issue where `sub-` prefixes were not recognized correctly when filtering inputs for group statistics (#264)
- Fix an issue when writing mixed data type columns to the text files in the `reports/` folder (#274)
- Fix warnings for missing quality check information (#276)
- Fix errors when aggregating subjects with different numbers of scans during group statistics (#280)
- Fix error when fMRIPrep skips a BOLD file (#285)
Maintenance
- Bump `indexed_gzip` (#240)
- Bump `nipype` after bug fix (#255)
- Bump `fmriprep` after bug fix (#262)
- Upgrade to Python 3.10, clean up code and add more unit tests (#269)
- Make continuous integration tests run faster (#282, #284)
- Add type checking and linting to continuous integration (#285)
Published by HippocampusGirl almost 4 years ago
halfpipe - 1.2.1
Bug fixes
- Fix issues that occurred after re-scaling `fd_perc` to be percent (#217)
- Catch error when `NaN` values occur within the linear algebra code (#215)
- Reduce memory usage when running large workflows by only loading the chunks that will be necessary for the current process (#216)
- Improve memory usage prediction for cluster submission scripts (#219)
- Update metadata module with better log messages (#220)
Published by HippocampusGirl over 4 years ago
halfpipe - 1.2.0
New features and enhancements
- Improve the assignment of field maps to functional scans, print warnings when detecting an incomplete field map or when a complete field map is not recognized by fMRIPrep (#115 and #192)
- Remove conditions that have no events from the task-based model. This is important for designs where the conditions depend on subject performance (#90)
- Output additional images during group mode. Voxel-wise descriptive statistics (#142), typical subject-level variance (#148)
- Divide outputs into subfolders to make navigating the files easier
- Output metadata to sidecar files, including resolution, field-of-view and field map type (#154 and #181)
- Add an option to skip dummy/non-steady-state scans and modify event onsets accordingly (#167, #176, #182 and #187)
- Improve performance during workflow creation (#192)
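The removal of event-less conditions described above can be sketched as follows (a minimal sketch with a hypothetical `drop_empty_conditions` helper; the real logic lives in HALFpipe's task-based model code):

```python
def drop_empty_conditions(onsets: dict) -> dict:
    # A condition without any events would contribute an all-zero
    # regressor to the GLM design matrix, so it is removed entirely.
    # This matters for designs where conditions depend on performance,
    # e.g. an "error" condition for a subject who made no errors.
    return {name: times for name, times in onsets.items() if times}
```

For example, `drop_empty_conditions({"go": [1.0, 9.5], "error": []})` keeps only the `go` condition.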
Bug fixes
- Update `fMRIPrep` to fix normalization bug (#51)
- Improve memory usage prediction. Fixes `BrokenProcessPool` and `Killed: 137` errors (#125, #156 and #157)
- Refactor `Dockerfile` to correctly re-build `matplotlib` caches (#107)
- Fix assignment of event files to functional scans. Make sure that the assignment is consistent between what is shown in the user interface and during workflow creation. Add unit tests (#139)
- Fix crashes for datasets deviating from the `BIDS` specification and remove misleading warnings for incompatible and hidden files
- Fix `AssertionError` crash when no group model is specified
- Rephrase user interface for loading `.mat` event files. Do not say that the time unit (seconds or scans) is missing, which was confusing
- Fix various crashes when running on a cluster
- Fix user interface crash when no categorical variables are defined in a spreadsheet
- Fix loading subject-level results during group model. Get rid of `LoadResult` nodes; instead use a subclass of `Node` (#137)
- Use slower but more robust least-squares solve for group statistics (#141)
- Fix performance issue in the `t2z_convert` procedure during group statistics (#143, #144 and #145)
- Remove output from heterogeneity group statistics that was causing performance issues (#146)
- Fix confusing `EOFError` message on exit by gracefully stopping child processes before exit (#130 and #160)
- Fix running FreeSurfer with the `run_reconall` option (#87)
- Add error message when running on an unsupported file system such as `FAT` (#102)
- Fix confusing error message when no features are specified (#147)
- Re-scale `fd_perc` output to percent (#186)
- Reduce user interface memory usage (#191)
- Fix automated testing hanging on the logging worker (#192)
Maintenance
- Update Python to version 3.8
- Update `templateflow`, `pybids`, `nibabel`
- Pin `dipy` version due to incompatibility with `nipype`
- Pin `indexed_gzip` version due to incompatibility of newer version with some files (#85)
- Add new Singularity container build workflow (#97 and #138)
- Improve documentation to suggest running Singularity with `--containall` instead of `--no-home --cleanenv`
- Refactor code to use `defaultdict` to increase readability
- Add more type hints
- Rename main branch from `master` to `main`
- Add `pre-commit` and `pip-tools` to better manage dependencies
- Install as many dependencies as possible via `conda` and the rest via `pip` (#164)
- Refactor workflow code to allow handling of surface-based functional images (#161)
- In-progress refactor of the `model` package into a `schema` package. Use `dataclasses` for better integration with type checkers (#173, #174 and #178)
Published by HippocampusGirl over 4 years ago
halfpipe - 1.1.1
Enhancements
- Add user interface checks for slice timing so that errors in configuration can be detected before running
- Reduce memory usage
Bug fixes
- Fix using curly brackets in a tag regex, for example `/data/{subject:[0-9]{5}}.nii.gz`
- Fix disabling the high pass filter for task-based feature extraction
- Fix performance issue with importing large BIDS datasets that contain field maps
- Fix matplotlib error (#107)
- Fix performance issue for large datasets (#105)
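The curly-bracket tag patterns mentioned above, such as `/data/{subject:[0-9]{5}}.nii.gz`, can be turned into regular expressions roughly like this (a sketch with a hypothetical `tags_to_regex` helper, not HALFpipe's actual parser; it allows one level of nested braces so quantifiers like `{5}` survive):

```python
import re

# A tag looks like {name:regex}; inside the tag's regex, one level of
# nested braces (e.g. the quantifier in [0-9]{5}) is permitted.
TAG = re.compile(r"\{(\w+):((?:[^{}]|\{[^{}]*\})*)\}")


def tags_to_regex(pattern):
    # Replace each {name:regex} tag with a named capture group and
    # escape the literal text between tags.
    out, pos = "", 0
    for m in TAG.finditer(pattern):
        out += re.escape(pattern[pos:m.start()])
        out += f"(?P<{m.group(1)}>{m.group(2)})"
        pos = m.end()
    return re.compile(out + re.escape(pattern[pos:]))
```

For example, `tags_to_regex("/data/{subject:[0-9]{5}}.nii.gz")` matches `/data/12345.nii.gz` and captures `subject = "12345"`.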
Published by HippocampusGirl almost 5 years ago
halfpipe - 1.1.0
With many thanks to @jstaph for contributions
New features and enhancements
- Create high-performance computing cluster submission scripts for Torque/PBS and SGE cluster as well (#71)
- Calculate additional statistics such as heterogeneity (https://doi.org/fzx69f) and a test that data is missing-completely-at-random via logistic regression (#67)
- Always enable ICA-AROMA even when its outputs are not required for feature extraction so that its report image is always available for quality assessment (#75)
- Support loading presets or plugins that may make it easier to do harmonized analyses across many sites (#8)
- Support adding derivatives of the HRF to task-based GLM design matrices
- Support detecting the amount of available memory when running as a cluster job, or when running as a container with a memory limit such as when using Docker on Mac
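One plausible way to implement the memory detection from the last bullet (a sketch assuming a Linux host with a cgroup v1 layout, as Docker on Mac used at the time; cgroup v2 exposes `memory.max` instead, and this is not necessarily HALFpipe's exact logic):

```python
import os
from pathlib import Path


def available_memory_gb() -> float:
    # Honor a container/cgroup memory limit if one is set, otherwise
    # fall back to the total physical memory of the machine.
    limit = float("inf")
    cgroup = Path("/sys/fs/cgroup/memory/memory.limit_in_bytes")  # cgroup v1
    if cgroup.is_file():
        value = int(cgroup.read_text())
        if value < 2**60:  # absurdly large values mean "no limit set"
            limit = value
    total = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    return min(limit, total) / 2**30
```

A cluster job would additionally consult the scheduler's own limit (e.g. an environment variable set by SGE or Torque) rather than physical memory.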
Maintenance
- Add type hints to code. This allows a type checker like `pyright` to suggest possible error sources ahead of time, making programming more efficient
- Add `openpyxl` and `xlsxwriter` dependencies to support reading/writing Excel XLSX files
- Update `numpy`, `scipy` and `nilearn` versions
- Add additional automated tests
Bug fixes
- Fix importing slice timing information from a file after going back to the prompt via undo (#55)
- Fix a warning when loading task event timings from a MAT-file, where `NiftiheaderLoader` tried to load metadata for it like it would for a NIfTI file (#56)
- Fix `numpy` array broadcasting error when loading data from 3D NIfTI files that have been somehow marked as being four-dimensional
- Fix misunderstanding of the output value `resels` of FSL's `smoothest` command. The value refers to the size of a resel, not the number of them in the image. The helper function `_critical_z` now takes this into account (nipy/nipype#3316)
- Fix naming of output files in the `derivatives/halfpipe` and `grouplevel` folders so that capitalization is consistent with original IDs and names (#57)
- Fix the summary display after `BIDS` import to show the number of "subjects" and not the number of "subs"
- Fix outputting source files for the quality check web app (#62)
- Fix assigning field maps to specific functional images, which is done by a mapping between field map tags and functional image tags. The mapping is automatically inferred for BIDS datasets and manually specified otherwise (#66)
- Force re-calculation of `nipype` workflows after a `HALFpipe` update so that changes from the new version are applied in existing working directories as well
- Do not fail task-based feature extraction if no events are available for a particular condition for a particular subject (#58)
- Force using a recent version of the `indexed_gzip` dependency to avoid an error (#85)
- Improve loading delimited data in the `loadspreadsheet` function
- Fix slice timing calculation in user interface
Published by HippocampusGirl almost 5 years ago
halfpipe - 1.1.0rc1
- Performance improvements for large datasets
- Improve running on SGE and Torque/PBS clusters
Bug fixes
- Do not fail even when events are missing for a condition for a participant
- Check missing-completely-at-random assumption via logistic regression at group level
Published by HippocampusGirl almost 5 years ago
halfpipe - 1.0.0 Beta 6
Enhancements
- Run group models with listwise deletion so that missing brain coverage in one subject does not lead to a missing voxel in the group statistic. This is not possible to do with FSL `flameo`, but we still wanted to use the FLAME algorithm (Woolrich et al. 2004). As such, I re-implemented the algorithm to adaptively adjust the design matrix depending on brain coverage.
- Add automated testing. Any future code changes need to pass all automated tests before they can be uploaded to the master branch (and thus be available for download). The tests take around two hours to complete and include a full run of Halfpipe for one subject.
- Increase run speed by running all tasks in parallel as opposed to only most. Previously, the code would run all tasks related to copying and organizing data on the main thread. This is a convention introduced by `nipype`. It is based on the assumption that the main thread may run on the head node of a cluster and submit all tasks as jobs to the cluster. To prevent quick tasks from clogging the cluster queue, they are run on the head node. However, as we do not use `nipype` that way, we can improve performance by getting rid of this behavior.
- Improve debug output to include variable names when an error occurs.
- Improve `--watchdog` option to include memory usage information.
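The adaptive design matrix idea from the first bullet can be sketched with plain least squares (a hypothetical `fit_voxelwise` helper; the actual implementation re-implements FLAME rather than OLS, this only shows how rows are dropped per voxel):

```python
import numpy as np


def fit_voxelwise(y: np.ndarray, design: np.ndarray) -> np.ndarray:
    """Per voxel, drop subjects without brain coverage (NaN) and solve an
    ordinary least-squares fit on the remaining rows of the design.
    y: (n_subjects, n_voxels), design: (n_subjects, n_regressors)."""
    n_regressors = design.shape[1]
    betas = np.full((n_regressors, y.shape[1]), np.nan)
    for voxel in range(y.shape[1]):
        keep = np.isfinite(y[:, voxel])
        if keep.sum() > n_regressors:  # need more subjects than regressors
            betas[:, voxel] = np.linalg.lstsq(
                design[keep], y[keep, voxel], rcond=None
            )[0]
    return betas
```

With an intercept-only design and one subject missing at a voxel, the estimate is simply the mean of the covered subjects instead of a missing value.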
Maintenance
- Bump `pybids`, `fmriprep`, `smriprep`, `niworkflows`, `nipype` and `templateflow` versions.
Bug fixes
- Fix design matrix specification with numeric subject names and leading zeros.
- Fix design matrix specification of F-contrasts.
- Fix selecting subjects by group for numeric group names.
- Fix an error with seed connectivity when excluding a seed due to missing brain coverage (#19).
- Force output file names to be BIDS compatible and improve their naming.
- Stop `fmriprep` from creating a `work` folder in the Halfpipe working directory.
Published by HippocampusGirl about 5 years ago
halfpipe - 1.0.0 Beta 5
Enhancements
- Implement continuous integration that runs automated tests of any changes in code. This means that, if implemented correctly, bugs that are fixed once can be covered by these tests so that they are not accidentally introduced again further down the line. This approach is called regression testing.
- Add codecov plugin to monitor the percentage of code that is covered by automated tests. Halfpipe is currently at 2%, which is very low, but this will improve over time as we write more testing code.
- Improve granularity of the `--keep` automatic intermediate file deletion so that more files are deleted, and add automated tests to verify the correctness of file deletion decisions.
- Add `--nipype-resource-monitor` command line option to monitor memory usage of the workflow and thus diagnose memory issues.
- Re-implement logging code to run in a separate process, reducing the burden on the main process. This works by passing a Python `multiprocessing.Queue` to all nipype worker processes, so that all workers put log messages into the queue using a `logging.handlers.QueueHandler`. I then implemented a listener that reads from this queue and routes the log messages to the appropriate log files and the terminal standard output. I first implemented the listener with `threading`. Threading is a simple way to circumvent I/O delays slowing down the main code. With threading, the Python interpreter switches between the logging and main threads regularly. As a result, when the logging thread waits for the operating system to write to disk or to acquire a file lock, the main thread can do work in the meantime, and vice versa. Very much unexpectedly, this code led to segmentation faults in Python. To better diagnose these errors, I refactored the logging thread into a separate process, because I thought there might be some kind of problem with threading. Through this work, I discovered that I was using a different `multiprocessing` context for instantiating the logging queue and the nipype workers, which caused the segmentation faults. Even though it is now unnecessary, I decided to keep the refactored code with logging in a separate process, because there are no downsides and I had already put the work in.
- Re-phrase some logging messages for improved clarity.
- Refactor command line argument parser and dispatch code to a separate module to increase code clarity and readability.
- Refactor spreadsheet loading code to new parse module.
- Print warnings when encountering invalid NIfTI file headers.
- Avoid unnecessary re-runs of preprocessing steps by naming workflows using hashes instead of counts. This way adding/removing features and settings from the spec.json can be more efficient if intermediate results are kept.
- Refactor `--watchdog` code.
- Refactor workflow code to use the new `collectboldfiles` function to decide which functional images to pre-process and which to exclude from processing. The `collectboldfiles` function implements new rules to resolve duplicate files. If multiple functional images with the same tags are found, for example identical subject name, task and run number, only one will be included. Ideally, users would delete such duplicate files before running Halfpipe, but we also do not want Halfpipe to fail in these cases. Two heuristic rules are used: 1) Use the longer functional image. Usually, the shorter image will be a scan that was aborted due to technical issues and had to be repeated. 2) If both images have the same number of volumes, the one with the alphabetically last file name will be used.
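The queue-based logging pattern described in the enhancements above can be sketched in a few lines (a minimal standalone example, not HALFpipe's actual code; it uses the `fork` start method for brevity, the important point being that queue and workers share one context):

```python
import logging
import logging.handlers
import multiprocessing as mp


def worker(queue):
    # Workers do no log file I/O themselves; they only enqueue records.
    root = logging.getLogger()
    root.handlers = [logging.handlers.QueueHandler(queue)]
    root.setLevel(logging.INFO)
    root.info("hello from worker")


# Create the queue and the worker process from the same multiprocessing
# context; mixing contexts was the cause of the segmentation faults
# described above.
ctx = mp.get_context("fork")
queue = ctx.Queue()
proc = ctx.Process(target=worker, args=(queue,))
proc.start()
record = queue.get(timeout=10)
proc.join()

# A listener process would loop over the queue and route each record
# to the appropriate handlers; here we handle the single record once.
logging.StreamHandler().handle(record)
```

The standard library also offers `logging.handlers.QueueListener`, which implements the listener side of this pattern.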
Maintenance
- Apply pylint code style rules.
- Refactor automated tests to use pytest fixtures.
Bug fixes
- Log all warning messages but reduce the severity level of warnings that are known to be benign.
- Fix custom interfaces `MaskCoverage`, `MergeMask`, and others based on the `Transformer` class to not discard the NIfTI header when outputting the transformed images
- Fix execution stalling when the logger is unable to acquire a lock on the log file. Use the `flufl.lock` package for hard link-based file locking, which is more robust on distributed file systems and NFS. Add a fallback to regular `fcntl`-based locking if that fails, and another fallback to circumvent log file locking entirely, so that logs will always be written out no matter what (#10).
- Fix accidentally passing T1w images to fmriprep that don’t have corresponding functional images.
- Fix merging multiple exclude.json files when quality control is done collaboratively.
- Fix displaying a warning for README and dataset_description.json files in BIDS datasets.
- Fix parsing phase encoding direction from user interface to not only parse the axis but also the direction. Before, there was no difference between selecting anterior-to-posterior and posterior-to-anterior, which is incorrect.
- Fix loading repetition time coded in milliseconds or microseconds from NIfTI files (#13).
- Fix error when trying to load repetition time from 3D NIfTI file (#12).
- Fix spreadsheet loading with UTF-16 file encoding (#3).
- Fix how missing values are displayed in the user interface when checking metadata.
- Fix unnecessary inconsistent setting warnings in the user interface.
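The locking fallback chain described in the first bug fix above could look roughly like this (a hypothetical `RobustLock` class for illustration; the real code differs in its details):

```python
import fcntl


class RobustLock:
    """Fallback chain: prefer hard link-based locking via flufl.lock,
    fall back to fcntl, and finally to no locking at all, so that a log
    message is never lost to a locking failure."""

    def __init__(self, path: str):
        self.path = path + ".lock"
        self.mode = None
        self.handle = None

    def acquire(self) -> str:
        try:
            from flufl.lock import Lock  # robust on NFS; may be missing
            self.handle = Lock(self.path)
            self.handle.lock()
            self.mode = "flufl"
        except Exception:
            try:
                self.handle = open(self.path, "w")
                fcntl.flock(self.handle, fcntl.LOCK_EX)
                self.mode = "fcntl"
            except OSError:
                self.mode = "none"  # last resort: write without a lock
        return self.mode

    def release(self) -> None:
        if self.mode == "flufl":
            self.handle.unlock()
        elif self.mode == "fcntl":
            fcntl.flock(self.handle, fcntl.LOCK_UN)
            self.handle.close()
```

Whichever mode is reached, the caller can always proceed to write the log message afterwards.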
Published by HippocampusGirl over 5 years ago
halfpipe - 1.0.0 Beta 4
Thank you all so much for beta-testing and providing detailed feedback for Beta 3. For Beta 4, we have fixed the following issues and have made some minor improvements:
- ENH: Add adaptive memory requirement for the submit script generated by `--use-cluster`
- ENH: Output the proportion of seeds and atlas regions that is covered by the brain mask to the sidecar JSON file as key `Coverage`
- ENH: Add option to exclude seeds and atlas regions that do not meet a user-specified `Coverage` threshold
- ENH: More detailed display of missing metadata in user interface
- ENH: More robust handling of NIfTI headers
- MAINT: Update `fmriprep` to latest release 20.2.0
- MAINT: Update `setup.cfg` with latest `pandas`, `smriprep`, `mriqc` and `niworkflows`
- MAINT: Update `Dockerfile` and `Singularity` recipes to use the latest version of `fmriprep`
- FIX: Fix an error that occurred when first level design matrices are sometimes passed to the higher level model code alongside the actual statistics
- FIX: Missing sidecar JSON file for atlas-based connectivity features
- FIX: Allow reading of spreadsheets that contain byte-order marks (#3)
- FIX: Incorrect file name for execgraphs file was generated for the submit script generated by `--use-cluster`
- FIX: Misleading warning for inconsistencies between NIfTI header `slice_duration` and repetition time
- FIX: Ignore additional misleading warnings
- FIX: Incorrect regular expression to select aCompCor columns from confounds
- FIX: Detect all exclude.json files in workdir
- FIX: Replace existing derivatives if nipype outputs have been overwritten
Published by HippocampusGirl over 5 years ago
halfpipe - 1.0.0 Beta 3
For Beta 3, we are fixing the bugs you reported and improving parts of the user interface. For more information, please see the detailed changelog below.
- ENH: Implement listwise deletion for missing values in linear model via the new filter type `missing`
- ENH: Allow the per-variable specification of missing value strategy for linear models, either listwise deletion (default) or mean substitution
- ENH: Add validators for metadata
- ENH: Allow slice timing to be specified by selecting the slice order from a menu
- ENH: Add option `Add another feature` when using a working directory with an existing `spec.json`
- ENH: Add minimum region coverage option for atlas-based connectivity
- MAINT: Update `setup.cfg` with latest `nipype`, `fmriprep`, `smriprep` and `niworkflows` versions
- FIX: Do not crash when `MergeColumns` `row_index` is empty
- FIX: Remove invalid fields from result in `AggregateResult` dicts
- FIX: Show slice timing option for BIDS datasets
- FIX: Correctly store manually specified slice timing in the `spec.json` for BIDS datasets
- FIX: Build `nitime` dependency from source to avoid build error
- FIX: Do not crash when confounds contain `n/a` values in `init_confounds_regression_wf`
- FIX: Adapt code to new `fmriprep` and `niworkflows` versions
- FIX: Correct capitalization in fixed effects aggregate model names
- FIX: Do not show group model option for atlas-based connectivity features
- FIX: Rename output files so that `contrast` from task-based features becomes `taskcontrast` to avoid conflict with the contrast names in group-level models
- FIX: Catch input file errors in report viewer so that it doesn’t crash
- FIX: Improve naming of group level design matrix TSV files
Published by HippocampusGirl over 5 years ago