Running XCP-D
Warning
XCP-D may not run correctly on M1 chips.
Execution and Input Formats
The XCP-D workflow takes fMRIPrep, NiBabies, and HCP outputs in the form of BIDS derivatives. In these examples, we use an fMRIPrep output directory.
The outputs must include at least anatomical and functional derivatives, with at least one preprocessed BOLD image. Additionally, each of these should be in a directory that can be parsed by the BIDS online validator (even if the dataset is not fully BIDS valid - we do not require BIDS-valid directories). The directories must also include a valid dataset_description.json.
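For reference, a minimal dataset_description.json for a derivatives dataset might look like the following (the field values are illustrative; see the BIDS specification for the required fields):

```json
{
  "Name": "fMRIPrep outputs",
  "BIDSVersion": "1.6.0",
  "DatasetType": "derivative",
  "GeneratedBy": [
    {
      "Name": "fMRIPrep",
      "Version": "20.2.7"
    }
  ]
}
```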
The exact command to run XCP-D depends on the installation method and the data to be processed. We start with the bare-metal Manually Prepared Environment (Python 3.8+) installation, as its command line is simpler. For example, XCP-D can be executed from the command line to process fMRIPrep outputs with the following structure:
xcp_d <fmriprep_dir> <output_dir> --cifti --despike --head_radius 40 -w /wkdir --smoothing 6
However, we strongly recommend using Container Technologies. Here, the command-line will be composed of a preamble to configure the container execution, followed by the XCP-D command-line options as if you were running it on a bare-metal installation.
Command-Line Arguments
xcp_d postprocessing workflow of fMRI data
usage: xcp_d [-h] [--version]
[--participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]]
[-t TASK_ID] [--bids-filter-file FILE] [-m] [-s]
[--nthreads NTHREADS] [--omp-nthreads OMP_NTHREADS]
[--mem_gb MEM_GB] [--use-plugin USE_PLUGIN] [-v]
[--input-type {fmriprep,dcan,hcp,nibabies}]
[--smoothing SMOOTHING] [--despike]
[-p {27P,36P,24P,acompcor,aroma,acompcor_gsr,aroma_gsr,custom}]
[-c CUSTOM_CONFOUNDS] [--min_coverage MIN_COVERAGE]
[--min_time MIN_TIME] [--dummy-scans {auto,INT}]
[--disable-bandpass-filter] [--lower-bpf LOWER_BPF]
[--upper-bpf UPPER_BPF] [--bpf-order BPF_ORDER]
[--motion-filter-type {lp,notch}] [--band-stop-min BPM]
[--band-stop-max BPM] [--motion-filter-order MOTION_FILTER_ORDER]
[-r HEAD_RADIUS] [-f FD_THRESH] [-w WORK_DIR] [--clean-workdir]
[--resource-monitor] [--notrack] [--warp-surfaces-native2std]
[--dcan-qc]
fmri_dir output_dir {participant}
Positional Arguments
- fmri_dir
The root folder of fMRI preprocessing derivatives. For example, ‘/path/to/dset/derivatives/fmriprep’.
- output_dir
The output path for xcp_d. This should not include the ‘xcp_d’ folder. For example, ‘/path/to/dset/derivatives’.
- analysis_level
Possible choices: participant
The analysis level for xcp_d. Must be specified as ‘participant’.
Named Arguments
- --version
show program’s version number and exit
Options for filtering BIDS queries
- --participant_label, --participant-label
A space-delimited list of participant identifiers, or a single identifier. The ‘sub-’ prefix can be removed.
- -t, --task-id, --task_id
The name of a specific task to postprocess. By default, all tasks will be postprocessed. If you want to postprocess more than one task, but not all of them, you can either run XCP-D with the --task-id parameter separately for each task, or use --bids-filter-file to specify the tasks to postprocess.
- --bids-filter-file
A JSON file defining BIDS input filters using PyBIDS.
- -m, --combineruns
After denoising, concatenate each derivative from each task across runs.
Default: False
Options for cifti processing
- -s, --cifti
Postprocess CIFTI inputs instead of NIfTIs. A preprocessing pipeline with CIFTI derivatives is required for this flag to work. This flag is enabled by default for the ‘hcp’ and ‘dcan’ input types.
Default: False
Options for resource management
- --nthreads
Maximum number of threads across all processes.
Default: 2
- --omp-nthreads, --omp_nthreads
Maximum number of threads per process.
Default: 1
- --mem_gb, --mem-gb
Upper bound memory limit for xcp_d processes.
- --use-plugin, --use_plugin
Nipype plugin configuration file. For more information, see https://nipype.readthedocs.io/en/0.11.0/users/plugins.html.
- -v, --verbose
Increases log verbosity for each occurrence. Debug level is ‘-vvv’.
Default: 0
Input flags
- --input-type, --input_type
Possible choices: fmriprep, dcan, hcp, nibabies
The pipeline used to generate the preprocessed derivatives. The default pipeline is ‘fmriprep’. The ‘dcan’, ‘hcp’, and ‘nibabies’ pipelines are also supported. ‘nibabies’ assumes the same structure as ‘fmriprep’.
Default: “fmriprep”
Postprocessing parameters
- --smoothing
FWHM, in millimeters, of the Gaussian smoothing kernel to apply to the denoised BOLD data. This may be set to 0.
Default: 6
- --despike
Despike the BOLD data before postprocessing.
Default: False
- -p, --nuisance-regressors, --nuisance_regressors
Possible choices: 27P, 36P, 24P, acompcor, aroma, acompcor_gsr, aroma_gsr, custom
Nuisance parameters to be selected. Descriptions of each of the options are included in xcp_d’s documentation.
Default: “36P”
- -c, --custom_confounds, --custom-confounds
Custom confounds to be added to the nuisance regressors. Must be a folder containing confounds files, from which the file whose name matches the preprocessed BOLD data’s confounds file will be selected.
- --min_coverage, --min-coverage
Coverage threshold to apply to parcels in each atlas. Any parcels with lower coverage than the threshold will be replaced with NaNs. Must be a value between zero and one, indicating the proportion of the parcel.
Default: 0.5
- --min_time, --min-time
Post-scrubbing threshold to apply to individual runs in the dataset. This threshold determines the minimum amount of time, in seconds, needed to post-process a given run, once high-motion outlier volumes are removed. This will have no impact if scrubbing is disabled (i.e., if the FD threshold is zero or negative). This parameter can be disabled by providing a zero or a negative value.
Default: 100
- --dummy-scans, --dummy_scans
Number of volumes to remove from the beginning of each run. If set to ‘auto’, xcp_d will extract non-steady-state volume indices from the preprocessing derivatives’ confounds file.
Default: 0
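As a sketch of what the ‘auto’ behavior relies on: fMRIPrep marks non-steady-state volumes in its confounds file with one-hot non_steady_state_outlierXX columns. The helper below is hypothetical (it is not XCP-D’s code), but it illustrates how such columns can be counted:

```python
import pandas as pd

def detect_dummy_scans(confounds_df):
    """Count non-steady-state volumes from fMRIPrep-style one-hot columns."""
    nss_cols = [c for c in confounds_df.columns
                if c.startswith("non_steady_state_outlier")]
    if not nss_cols:
        return 0
    # Each column flags a single volume; the union gives the dummy-scan count.
    flagged = confounds_df[nss_cols].sum(axis=1) > 0
    return int(flagged.sum())

# Toy confounds table: volumes 0 and 1 are non-steady-state.
confounds = pd.DataFrame({
    "trans_x": [0.0, 0.1, 0.0, 0.02],
    "non_steady_state_outlier00": [1, 0, 0, 0],
    "non_steady_state_outlier01": [0, 1, 0, 0],
})
print(detect_dummy_scans(confounds))  # → 2
```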
Filtering parameters
- --disable-bandpass-filter, --disable_bandpass_filter
Disable bandpass filtering. If bandpass filtering is disabled, then ALFF derivatives will not be calculated.
Default: True (bandpass filtering is enabled by default; pass this flag to disable it)
- --lower-bpf, --lower_bpf
Lower cut-off frequency (Hz) for the Butterworth bandpass filter to be applied to the denoised BOLD data. Set to 0.0 or negative to disable high-pass filtering. See Satterthwaite et al. (2013).
Default: 0.01
- --upper-bpf, --upper_bpf
Upper cut-off frequency (Hz) for the Butterworth bandpass filter to be applied to the denoised BOLD data. Set to 0.0 or negative to disable low-pass filtering. See Satterthwaite et al. (2013).
Default: 0.08
- --bpf-order, --bpf_order
Number of filter coefficients for the Butterworth bandpass filter.
Default: 2
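To make these three parameters concrete, here is a minimal sketch of a Butterworth bandpass filter applied to a toy time series with SciPy. The TR is a hypothetical value, and this is an illustration, not XCP-D’s implementation:

```python
import numpy as np
from scipy.signal import butter, filtfilt

TR = 2.0                   # repetition time in seconds (hypothetical)
fs = 1.0 / TR              # sampling frequency in Hz
lower, upper = 0.01, 0.08  # the documented default cut-offs (Hz)
order = 2                  # the documented default filter order

# butter() takes critical frequencies as fractions of the Nyquist frequency.
nyquist = fs / 2.0
b, a = butter(order, [lower / nyquist, upper / nyquist], btype="band")

rng = np.random.default_rng(0)
bold = rng.standard_normal(200)   # toy BOLD time series
filtered = filtfilt(b, a, bold)   # zero-phase (forward-backward) filtering
```

Because the passband (0.01-0.08 Hz) is narrow relative to the sampling rate, most of the variance in the white-noise input is removed.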
- --motion-filter-type, --motion_filter_type
Possible choices: lp, notch
Type of filter to use for removing respiratory artifact from motion regressors. If not set, no filter will be applied.
If the filter type is set to “notch”, then both --band-stop-min and --band-stop-max must be defined. If the filter type is set to “lp”, then only --band-stop-min must be defined.
- --band-stop-min, --band_stop_min
Lower frequency for the motion parameter filter, in breaths-per-minute (bpm). Motion filtering is only performed if --motion-filter-type is not None. If used with the “lp” --motion-filter-type, this parameter essentially corresponds to a low-pass filter (the maximum allowed frequency in the filtered data). This parameter is used in conjunction with --motion-filter-order and --band-stop-max.
When --motion-filter-type is set to “lp” (low-pass filter), another commonly-used value for this parameter is 6 BPM (equivalent to 0.1 Hertz), based on Gratton et al. (2020).
- --band-stop-max, --band_stop_max
Upper frequency for the band-stop motion filter, in breaths-per-minute (bpm). Motion filtering is only performed if --motion-filter-type is not None. This parameter is only used if --motion-filter-type is set to “notch”. This parameter is used in conjunction with --motion-filter-order and --band-stop-min.
- --motion-filter-order, --motion_filter_order
Number of filter coefficients for the motion parameter filter.
Default: 4
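Since the motion-filter frequencies are specified in breaths-per-minute, they must be divided by 60 to obtain Hz. The sketch below illustrates that conversion and one way to build a notch filter with SciPy; the values and the filter construction are illustrative, not XCP-D’s actual code:

```python
from scipy.signal import iirnotch  # a "lp" filter would use butter() instead

band_stop_min = 12.0  # breaths-per-minute (example value)
band_stop_max = 18.0

# The motion-filter frequencies are given in breaths-per-minute;
# divide by 60 to convert to Hz.
low_hz = band_stop_min / 60.0   # 0.2 Hz
high_hz = band_stop_max / 60.0  # 0.3 Hz

TR = 0.8
fs = 1.0 / TR  # sampling frequency of the motion regressors

# One sketch of a notch filter: center it on the stop band and set its
# width from the min/max frequencies.
center = (low_hz + high_hz) / 2.0
bandwidth = high_hz - low_hz
quality_factor = center / bandwidth
b, a = iirnotch(center, quality_factor, fs=fs)  # second-order IIR notch
```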
Censoring and scrubbing options
- -r, --head_radius, --head-radius
Head radius used to calculate framewise displacement, in mm. The default value is 50 mm, which is recommended for adults. For infants, we recommend a value of 35 mm. A value of ‘auto’ is also supported, in which case the brain radius is estimated from the preprocessed brain mask by treating the mask as a sphere.
Default: 50
- -f, --fd-thresh, --fd_thresh
Framewise displacement threshold for censoring. Any volumes with an FD value greater than the threshold will be removed from the denoised BOLD data. A threshold of <=0 will disable censoring completely.
Default: 0.3
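To make the head-radius and FD-threshold parameters concrete, here is a toy sketch of a common definition of framewise displacement: the sum of absolute backward differences of the six motion parameters, with rotations converted to millimeters via arc length on a sphere of the given head radius. This is an illustration, not XCP-D’s implementation:

```python
import numpy as np

def framewise_displacement(motion_params, head_radius=50.0):
    """FD as the sum of absolute volume-to-volume motion changes.

    motion_params: (n_volumes, 6) array with 3 translations (mm)
    followed by 3 rotations (radians).
    """
    deriv = np.diff(motion_params, axis=0)
    deriv[:, 3:] *= head_radius  # rotations (rad) → displacement (mm)
    fd = np.abs(deriv).sum(axis=1)
    return np.concatenate([[0.0], fd])  # FD of the first volume is 0

motion = np.zeros((5, 6))
motion[2, 0] = 0.2   # 0.2 mm translation at volume 2
motion[3, 3] = 0.01  # 0.01 rad rotation at volume 3
fd = framewise_displacement(motion, head_radius=50.0)
# FD per volume: [0, 0, 0.2, 0.7, 0.5]

censored = fd > 0.3  # volumes exceeding the default 0.3 mm threshold
```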
Other options
- -w, --work_dir, --work-dir
Path to working directory, where intermediate results should be stored.
Default: working_dir
- --clean-workdir, --clean_workdir
Clears working directory of contents. Use of this flag is not recommended when running concurrent processes of xcp_d.
Default: False
- --resource-monitor, --resource_monitor
Enable Nipype’s resource monitoring to keep track of memory and CPU usage.
Default: False
- --notrack
Opt out of sending tracking information.
Default: False
Experimental options
- --warp-surfaces-native2std, --warp_surfaces_native2std
If used, a workflow will be run to warp native-space (fsnative) reconstructed cortical surfaces (surf.gii files) produced by FreeSurfer into standard (fsLR) space. These surface files are primarily used for visual quality assessment. By default, this workflow is disabled.
IMPORTANT: This parameter can only be used if the --cifti flag is also enabled.
Default: False
- --dcan-qc, --dcan_qc
Run DCAN QC.
Default: False
Filtering Inputs with BIDS Filter Files
XCP-D allows users to choose which preprocessed files will be post-processed with the --bids-filter-file parameter. This argument must point to a JSON file containing filters that will be fed into PyBIDS.
The keys in this JSON file are unique to XCP-D. They are our internal terms for the different inputs that will be selected from the preprocessed dataset.
"bold" determines which preprocessed BOLD files will be chosen. You can set a number of entities here, including "session", "task", "space", "resolution", and "density". We recommend NOT setting the datatype, suffix, or file extension in the filter file.
Warning
We do not recommend applying additional filters to any of the following fields. We have documented them here for edge cases where they might be useful, but the only field that most users should filter is "bold".
- "t1w" selects a native T1w-space, preprocessed T1w file.
- "t2w" selects a native T1w-space, preprocessed T2w file.
- "anat_dseg" selects a native T1w-space segmentation file. This file is primarily used for figures.
- "anat_brainmask" selects a native T1w-space brain mask.
- "anat_to_template_xfm" selects a transform from T1w (or T2w, if no T1w image is available) space to standard space. The standard space that will be used depends on the "bold" files that are selected.
- "template_to_anat_xfm" selects a transform from standard space to T1w/T2w space. Again, the standard space is determined based on other files.
Example bids-filter-file
In this example file, we only run XCP-D on resting-state preprocessed BOLD runs from session “01”.
{
"bold": {
"session": ["01"],
"task": ["rest"]
}
}
Running XCP-D via Docker containers
If you are running XCP-D locally, we recommend Docker. See Container Technologies for installation instructions.
In order to run Docker smoothly, it is best to prevent permissions issues associated with the root file system. Running Docker as a regular user on the host ensures that files written during container execution are owned by that user.
A Docker container can be created using the following command:
# The freesurfer bind mount is necessary for fMRIPrep versions <22.0.2
docker run --rm -it \
-v /dset/derivatives/fmriprep:/fmriprep:ro \
-v /tmp/wkdir:/work:rw \
-v /dset/derivatives:/out:rw \
-v /dset/derivatives/freesurfer:/freesurfer:ro \
pennlinc/xcp_d:latest \
/fmriprep /out participant \
--cifti --despike --head_radius 40 -w /work --smoothing 6
Running XCP-D via Singularity containers
If you are computing on an HPC, we recommend using Singularity. See Container Technologies for installation instructions.
Warning
XCP-D (and perhaps other Docker-based Singularity images) may not work with Singularity <=2.4. We strongly recommend using Singularity 3+. For more information, see this xcp_d issue and this Singularity issue.
If the data to be postprocessed are accessible from the HPC, you are ready to run XCP-D:
singularity run --cleanenv xcp_d.simg \
path/to/data/fmri_dir \
path/to/output/dir \
participant \
--participant-label label
Relevant aspects of the $HOME directory within the container
By default, Singularity will bind the user’s $HOME directory on the host into the /home/$USER directory (or equivalent) in the container. Most of the time, it will also redefine the $HOME environment variable and update it to point to the corresponding mount point in /home/$USER. However, these defaults can be overwritten in your system. It is recommended that you check your settings with your system’s administrator.
If your Singularity installation allows it, you can work around the $HOME specification by combining the bind mounts argument (-B) with the home overwrite argument (--home) as follows:
singularity run -B $HOME:/home/xcp \
--home /home/xcp \
--cleanenv xcp_d.simg \
<xcp_d arguments>
Therefore, once a user specifies the container options and the image to be run, the command line options are the same as the bare-metal installation.
Custom Confounds
XCP-D can include custom confounds in its denoising. Here, you can supply your confounds, and optionally add these to a confound strategy already supported in XCP-D.
To add custom confounds to your workflow, use the --custom-confounds parameter, and provide a folder containing the custom confounds files for all of the subjects, sessions, and tasks you plan to post-process.
The individual confounds files should be tab-delimited, with one column for each regressor, and one row for each volume in the data being denoised.
Signal Confounds for Non-Aggressive Denoising
Let’s say you have some nuisance regressors that are not necessarily orthogonal to some associated regressors that are ostensibly noise. For example, if you ran tedana on multi-echo data, you would have a series of “rejected” (noise) and “accepted” (signal) ICA components. Because tedana uses a spatial ICA, these components’ time series are not necessarily independent, and there can be shared variance between them. If you want to properly denoise your data using the noise components, you need to perform “non-aggressive” denoising so that variance from the signal components is not removed as well. In non-aggressive denoising, you fit a GLM using both the noise and signal regressors, then reconstruct the predicted data using just the noise regressors, and finally remove that predicted data from the real data.
For more information about different types of denoising, see tedana’s documentation, this NeuroStars topic, and/or Pruim et al. (2015).
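The non-aggressive procedure described above can be sketched with NumPy on toy data. This is an illustration of the math only, not XCP-D’s implementation:

```python
import numpy as np

rng = np.random.default_rng(42)
n_vols = 100
noise = rng.standard_normal((n_vols, 3))   # "rejected" component time series
signal = rng.standard_normal((n_vols, 2))  # "accepted" component time series
data = signal @ rng.standard_normal((2, 1)) + noise @ rng.standard_normal((3, 1))

# Non-aggressive denoising: fit the GLM with BOTH signal and noise regressors...
design = np.column_stack([signal, noise])
betas, *_ = np.linalg.lstsq(design, data, rcond=None)

# ...but reconstruct and remove only the part predicted by the noise regressors.
noise_betas = betas[signal.shape[1]:]
denoised = data - noise @ noise_betas

# Aggressive denoising, for contrast, regresses the noise out directly,
# which also removes any signal variance shared with the noise regressors.
aggressive = data - noise @ np.linalg.lstsq(noise, data, rcond=None)[0]
```

Because the signal and noise regressors are not orthogonal in a finite sample, the two approaches give different results.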
So how do we implement this in XCP-D?
In order to define regressors that should be treated as signal, and thus use non-aggressive denoising instead of the default aggressive denoising, you should include those regressors in your custom confounds file, with column names starting with signal__ (lower-case “signal”, followed by two underscores).
Important
XCP-D will automatically perform non-aggressive denoising with any nuisance-regressor option that uses AROMA regressors (e.g., aroma or aroma_gsr).
Task Regression
If you want to regress task-related signals out of your data, you can use the custom confounds option to do it.
Here we document how to include task effects as confounds.
Tip
The basic approach to task regression is to convolve your task regressors with an HRF, then save those regressors to a custom confounds file.
Warning
This method is still under development.
We recommend using a tool like Nilearn to generate convolved regressors from BIDS events files. See this example.
import numpy as np
import pandas as pd

from nilearn.glm.first_level import make_first_level_design_matrix
N_VOLUMES = 200
TR = 0.8
frame_times = np.arange(N_VOLUMES) * TR
events_df = pd.read_table("sub-X_ses-Y_task-Z_run-01_events.tsv")
task_confounds = make_first_level_design_matrix(
frame_times,
events_df,
drift_model=None,
add_regs=None,
hrf_model="spm",
)
# The design matrix will include a constant column, which we should drop
task_confounds = task_confounds.drop(columns="constant")
# Assuming that the fMRIPrep confounds file is named
# "sub-X_ses-Y_task-Z_run-01_desc-confounds_timeseries.tsv",
# we will name the custom confounds file the same thing, in a separate folder.
task_confounds.to_csv(
"/my/project/directory/custom_confounds/sub-X_ses-Y_task-Z_run-01_desc-confounds_timeseries.tsv",
sep="\t",
index=False,
)
Then, when you run XCP-D, you can use the flag --custom_confounds /my/project/directory/custom_confounds.
Command Line XCP-D with Custom Confounds
Last, supply the directory to XCP-D with the --custom_confounds option. --custom_confounds should point to the directory where this file exists, rather than to the file itself; XCP-D will identify the correct file based on the filename, which should match the name of the preprocessed BOLD data’s associated confounds file. You can simultaneously perform additional confound regression by including, for example, --nuisance-regressors 36P in the call.
singularity run --cleanenv -B /my/project/directory:/mnt xcpabcd_latest.simg \
/mnt/input/fmriprep \
/mnt/output/directory \
participant \
--participant_label X \
--task-id Z \
--nuisance-regressors 36P \
--custom_confounds /mnt/custom_confounds
Custom Parcellations
While XCP-D comes with many built-in parcellations, we understand that many users will want to use custom parcellations. If you use the --cifti option, you can use the Human Connectome Project’s wb_command to generate the time series:
wb_command \
-cifti-parcellate \
{SUB}_ses-{SESSION}_task-{TASK}_run-{RUN}_space-fsLR_den-91k_desc-residual_bold.dtseries.nii \
your_parcels.dlabel.nii \
{SUB}_ses-{SESSION}_task-{TASK}_run-{RUN}_space-fsLR_den-91k_desc-residual_bold.ptseries.nii
After this, if you wish to generate a connectivity matrix:
wb_command \
-cifti-correlation \
{SUB}_ses-{SESSION}_task-{TASK}_run-{RUN}_space-fsLR_den-91k_desc-residual_bold.ptseries.nii \
{SUB}_ses-{SESSION}_task-{TASK}_run-{RUN}_space-fsLR_den-91k_desc-residual_bold.pconn.nii
More information can be found at the HCP documentation.
If you use the default NIfTI processing pipeline, you can use Nilearn’s NiftiLabelsMasker class to extract parcellated time series.
Advanced Applications
XCP-D can be used in conjunction with other tools, such as tedana
and phys2denoise
.
We have attempted to document these applications with working code in
PennLINC/xcp_d-examples.
If there is an application you think would be useful to document, please open an issue in that
repository.
Preprocessing Requirements for XCP-D
XCP-D is designed to ingest data from a variety of different preprocessing pipelines. However, each supported pipeline must be explicitly supported within XCP-D in order for the workflow to select the correct files.
Additionally, XCP-D may require files that are only created with specific settings in the preprocessing pipelines.
fMRIPrep/Nibabies
In order to work on fMRIPrep or NiBabies derivatives, XCP-D needs derivatives in one of a few template spaces, including “MNI152NLin6Asym”, “MNI152NLin2009cAsym”, “MNIInfant”, and “fsLR”. We may add support for additional templates in the future, but currently you must have at least one of these among your output spaces. XCP-D does not have any specific requirements for the resolution of volumetric derivatives, but we do require that fsLR-space CIFTIs be output in 91k density.
Troubleshooting
Logs and crashfiles are written to the <output dir>/xcp_d/sub-<participant_label>/log directory. Information on how to customize and understand these files can be found on the nipype debugging page.
Support and communication
All bugs, concerns and enhancement requests for this software can be submitted here: https://github.com/PennLINC/xcp_d/issues.
If you have a question about using XCP-D, please create a new topic on NeuroStars with the “Software Support” category and the “xcp_d” tag. The XCP-D developers follow NeuroStars, and will be able to answer your question there.