Running XCP-D

Warning

XCP-D may not run correctly on M1 chips.

Execution and Input Formats

The XCP-D workflow takes fMRIPrep, NiBabies, and HCP outputs in the form of BIDS derivatives. In these examples, we use an fMRIPrep output directory.

The outputs are required to include at least anatomical and functional outputs, with at least one preprocessed BOLD image. Additionally, each of these should be in a directory that can be parsed by the BIDS online validator (even if it is not fully BIDS-valid; we do not require BIDS-valid directories). The directories must also include a valid dataset_description.json.
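
For reference, a minimal derivative dataset_description.json can be created with a few lines of Python. The field values below are illustrative placeholders following the BIDS-derivatives convention, not requirements specific to xcp_d:

```python
import json
from pathlib import Path

# Write a minimal dataset_description.json into a derivatives folder so
# that BIDS tools can parse it. The field values are illustrative.
deriv_dir = Path("derivatives/fmriprep")
deriv_dir.mkdir(parents=True, exist_ok=True)

description = {
    "Name": "fMRIPrep outputs",            # any descriptive name
    "BIDSVersion": "1.4.0",
    "DatasetType": "derivative",
    "GeneratedBy": [{"Name": "fMRIPrep"}],
}
with open(deriv_dir / "dataset_description.json", "w") as f:
    json.dump(description, f, indent=4)
```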

The exact command to run xcp_d depends on the installation method and the data that need to be processed. We start with the bare-metal Manually Prepared Environment (Python 3.8+) installation, as its command line is simpler. For example, xcp_d can be executed on the command line, processing fMRIPrep outputs, using the following command-line structure:

xcp_d <fmriprep_dir> <output_dir> participant --cifti --despike --head_radius 40 -w /wkdir --smoothing 6

However, we strongly recommend using Container Technologies. Here, the command-line will be composed of a preamble to configure the container execution, followed by the xcp_d command-line options as if you were running it on a bare-metal installation.

Command-Line Arguments

xcp_d postprocessing workflow of fMRI data

usage: xcp_d [-h] [--version]
             [--participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]]
             [-t TASK_ID] [--bids-filter-file FILE] [-m] [-s]
             [--nthreads NTHREADS] [--omp-nthreads OMP_NTHREADS]
             [--mem_gb MEM_GB] [--use-plugin USE_PLUGIN] [-v]
             [--input-type {fmriprep,dcan,hcp,nibabies}]
             [--smoothing SMOOTHING] [--despike]
             [-p {27P,36P,24P,acompcor,aroma,acompcor_gsr,aroma_gsr,custom}]
             [-c CUSTOM_CONFOUNDS] [-d DUMMYTIME | --dummy-scans {auto,INT}]
             [--disable-bandpass-filter] [--lower-bpf LOWER_BPF]
             [--upper-bpf UPPER_BPF] [--bpf-order BPF_ORDER]
             [--motion-filter-type {lp,notch}] [--band-stop-min BPM]
             [--band-stop-max BPM] [--motion-filter-order MOTION_FILTER_ORDER]
             [-r HEAD_RADIUS] [-f FD_THRESH] [-w WORK_DIR] [--clean-workdir]
             [--resource-monitor] [--notrack] [--warp-surfaces-native2std]
             [--dcan-qc]
             fmri_dir output_dir analysis_level

Positional Arguments

fmri_dir

the root folder of a preprocessed fMRI output.

output_dir

the output path for xcp_d

analysis_level

the analysis level for xcp_d, must be specified as “participant”.

Named Arguments

--version

show program’s version number and exit

Options for filtering BIDS queries

--participant_label, --participant-label

a space delimited list of participant identifiers or a single identifier (the sub- prefix can be removed)

-t, --task-id

select a specific task for postprocessing

--bids-filter-file

A JSON file defining BIDS input filters using PyBIDS.

-m, --combineruns

this option combines all runs into one file

Default: False

Options for cifti processing

-s, --cifti

postprocess cifti instead of nifti; this is set by default for dcan and hcp input types

Default: False

Options for resource management

--nthreads

maximum number of threads across all processes

Default: 2

--omp-nthreads

maximum number of threads per-process

Default: 1

--mem_gb, --mem-gb

upper bound memory limit for xcp_d processes

--use-plugin

Nipype plugin configuration file. For more information, see https://nipype.readthedocs.io/en/0.11.0/users/plugins.html

-v, --verbose

increases log verbosity for each occurrence; debug level is -vvv

Default: 0

Input flags

--input-type

Possible choices: fmriprep, dcan, hcp, nibabies

The pipeline used to generate the preprocessed derivatives. The default pipeline is ‘fmriprep’. The ‘dcan’, ‘hcp’, and ‘nibabies’ pipelines are also supported. ‘nibabies’ assumes the same structure as ‘fmriprep’.

Default: “fmriprep”

Parameters for postprocessing

--smoothing

smooth the postprocessed output with a Gaussian kernel of the given FWHM, in mm

Default: 6

--despike

despike the nifti/cifti before postprocessing

Default: False

-p, --nuisance-regressors

Possible choices: 27P, 36P, 24P, acompcor, aroma, acompcor_gsr, aroma_gsr, custom

Nuisance parameters to be selected. See Ciric et al. (2017).

Default: “36P”

-c, --custom_confounds

Custom confound to be added to nuisance regressors. Must be a folder containing confounds files, in which case the file with the name matching the fMRIPrep confounds file will be selected.

-d, --dummytime

Number of seconds to remove from the beginning of each run. This value will be rounded up to the nearest TR. This parameter is deprecated and will be removed in version 0.4.0. Please use --dummy-scans.

Default: 0

--dummy-scans

Number of volumes to remove from the beginning of each run. If set to ‘auto’, xcp_d will extract non-steady-state volume indices from the preprocessing derivatives’ confounds file.

Default: 0
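
For context, fMRIPrep marks non-steady-state volumes with one-hot non_steady_state_outlier* columns in its confounds file. A minimal sketch (using a made-up confounds table) of how such columns can be counted to determine the number of volumes to drop:

```python
import pandas as pd

# Illustrative sketch: count the non_steady_state_outlier* columns in an
# fMRIPrep-style confounds table to find how many initial volumes to drop.
# The values below are made up; with real data you would read the
# desc-confounds TSV with pd.read_table.
confounds = pd.DataFrame(
    {
        "non_steady_state_outlier00": [1, 0, 0, 0],
        "non_steady_state_outlier01": [0, 1, 0, 0],
        "trans_x": [0.0, 0.01, 0.02, 0.01],
    }
)
nss_cols = [c for c in confounds.columns if c.startswith("non_steady_state_outlier")]
n_dummy_scans = len(nss_cols)
print(n_dummy_scans)  # here: 2 volumes would be removed
```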

Filtering parameters and default values

--disable-bandpass-filter, --disable_bandpass_filter

Disable bandpass filtering. If bandpass filtering is disabled, then ALFF derivatives will not be calculated.

Default: True (bandpass filtering is applied unless this flag is used)

--lower-bpf

lower cut-off frequency (Hz) for the butterworth bandpass filter

Default: 0.009

--upper-bpf

upper cut-off frequency (Hz) for the butterworth bandpass filter

Default: 0.08

--bpf-order

number of filter coefficients for butterworth bandpass filter

Default: 2
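
As an illustration, a Butterworth bandpass filter with these default settings (0.009-0.08 Hz, order 2) can be sketched with SciPy. This uses synthetic data, and the TR value is an assumption for the example, not an xcp_d default:

```python
import numpy as np
from scipy.signal import butter, filtfilt

TR = 0.8  # repetition time in seconds (assumed for this example)
fs = 1.0 / TR  # sampling frequency in Hz

# Second-order Butterworth bandpass with the documented default cutoffs
b, a = butter(2, [0.009, 0.08], btype="bandpass", fs=fs)

rng = np.random.default_rng(0)
bold = rng.standard_normal(300)  # one voxel's synthetic time series
filtered = filtfilt(b, a, bold)  # zero-phase (forward-backward) filtering
print(filtered.shape)  # (300,)
```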

--motion-filter-type

Possible choices: lp, notch

Type of band-stop filter to use for removing respiratory artifact from motion regressors. If not set, no filter will be applied.

If the filter type is set to “notch”, then both band-stop-min and band-stop-max must be defined. If the filter type is set to “lp”, then only band-stop-min must be defined.

--band-stop-min

Lower frequency for the band-stop motion filter, in breaths-per-minute (bpm). Motion filtering is only performed if motion-filter-type is not None. If used with the “lp” motion-filter-type, this parameter essentially corresponds to a low-pass filter (the maximum allowed frequency in the filtered data). This parameter is used in conjunction with motion-filter-order and band-stop-max.

Recommended values, based on participant age:

   Age Range (years)    Recommended Value (bpm)
   < 1                  30
   1 - 2                25
   2 - 6                20
   6 - 12               15
   12 - 18              12
   19 - 65              12
   65 - 80              12
   > 80                 10

When motion-filter-type is set to “lp” (low-pass filter), another commonly-used value for this parameter is 6 BPM (equivalent to 0.1 Hertz), based on Gratton et al. (2020).
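
Since these thresholds are specified in breaths-per-minute while filter cutoffs are conventionally given in Hertz, the conversion is a simple division by 60, as a quick sketch:

```python
# Convert breaths-per-minute (bpm) to Hertz: one breath per second is
# 60 bpm, so Hz = bpm / 60. E.g., the 6 bpm value mentioned above
# corresponds to 0.1 Hz.
def bpm_to_hz(bpm):
    """Convert a frequency in breaths-per-minute to Hertz."""
    return bpm / 60.0

print(bpm_to_hz(6))   # 0.1
print(bpm_to_hz(12))  # 0.2
```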

--band-stop-max

Upper frequency for the band-stop motion filter, in breaths-per-minute (bpm). Motion filtering is only performed if motion-filter-type is not None. This parameter is only used if motion-filter-type is set to “notch”. This parameter is used in conjunction with motion-filter-order and band-stop-min.

Recommended values, based on participant age:

   Age Range (years)    Recommended Value (bpm)
   < 1                  60
   1 - 2                50
   2 - 6                35
   6 - 12               25
   12 - 18              20
   19 - 65              18
   65 - 80              28
   > 80                 30

--motion-filter-order

number of filter coefficients for the band-stop filter

Default: 4

Censoring and scrubbing options

-r, --head_radius

head radius for computing framewise displacement; the default is 50 mm, and 35 mm is recommended for infants

Default: 50

-f, --fd-thresh

framewise displacement threshold for censoring, default is 0.2mm

Default: 0.2
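
For reference, the standard framewise-displacement formula of Power et al. (2012), which these options refer to, sums the absolute volume-to-volume changes in the six motion parameters, converting rotations to arc length on a sphere of the given head radius. A NumPy sketch with made-up motion parameters:

```python
import numpy as np

def framewise_displacement(motion_params, head_radius=50.0):
    """Power et al. (2012) FD.

    motion_params: (n_volumes, 6) array with translations (x, y, z) in mm
    followed by rotations (x, y, z) in radians.
    """
    deltas = np.abs(np.diff(motion_params, axis=0))
    deltas[:, 3:] *= head_radius  # rotations (radians) -> mm of arc length
    fd = deltas.sum(axis=1)
    return np.concatenate([[0.0], fd])  # FD of the first volume is 0

# Toy example: a single 0.3 mm translation at volume 2
motion = np.zeros((5, 6))
motion[2, 0] = 0.3
fd = framewise_displacement(motion)
print(fd)  # volumes 2 and 3 each have an FD of 0.3
```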

Other options

-w, --work_dir

path where intermediate results should be stored

Default: working_dir

--clean-workdir

Clears working directory of contents. Use of this flag is not recommended when running concurrent processes of xcp_d.

Default: False

--resource-monitor

enable Nipype’s resource monitoring to keep track of memory and CPU usage

Default: False

--notrack

Opt-out of sending tracking information

Default: False

Experimental options

--warp-surfaces-native2std

If used, a workflow will be run to warp native-space (fsnative) reconstructed cortical surfaces (surf.gii files) produced by Freesurfer into standard (fsLR) space. These surface files are primarily used for visual quality assessment. By default, this workflow is disabled.

The surface files that are generated by the workflow:

<source_entities>_space-fsLR_den-32k_hemi-<L|R>_pial.surf.gii
   The gray matter / pial matter border.

<source_entities>_space-fsLR_den-32k_hemi-<L|R>_smoothwm.surf.gii
   The smoothed gray matter / white matter border for the cortex.

<source_entities>_space-fsLR_den-32k_hemi-<L|R>_midthickness.surf.gii
   The midpoints between the wm and pial surfaces. This is derived from the FreeSurfer graymid (mris_expand with distance=0.5 applied to the WM surfs).

<source_entities>_space-fsLR_den-32k_hemi-<L|R>_inflated.surf.gii
   An inflation of the midthickness surface (useful for visualization). This file is only created if the input type is “hcp” or “dcan”.

<source_entities>_space-fsLR_den-32k_hemi-<L|R>_desc-hcp_midthickness.surf.gii
   The midpoints between the wm and pial surfaces. This is created by averaging the coordinates from the wm and pial surfaces.

<source_entities>_space-fsLR_den-32k_hemi-<L|R>_desc-hcp_inflated.surf.gii
   An inflation of the midthickness surface (useful for visualization). This is derived from the HCP midthickness file. This file is only created if the input type is “fmriprep” or “nibabies”.

<source_entities>_space-fsLR_den-32k_hemi-<L|R>_desc-hcp_vinflated.surf.gii
   A very-inflated midthickness surface (also for visualization). This is derived from the HCP midthickness file. This file is only created if the input type is “fmriprep” or “nibabies”.

Default: False

--dcan-qc, --dcan_qc

Run DCAN QC, including executive summary generation.

Default: False

See https://xcp-d.readthedocs.io/en/latest/generalworkflow.html

Filtering Inputs with BIDS Filter Files

xcp_d allows users to choose which preprocessed files will be post-processed with the --bids-filter-file parameter. This argument must point to a JSON file, containing filters that will be fed into PyBIDS.

The keys in this JSON file are unique to xcp_d. They are our internal terms for different inputs that will be selected from the preprocessed dataset.

"bold" determines which preprocessed BOLD files will be chosen. You can set a number of entities here, including “session”, “task”, “space”, “resolution”, and “density”. We recommend NOT setting the datatype, suffix, or file extension in the filter file.

Warning

We do not recommend applying additional filters to any of the following fields. We have documented them here, for edge cases where they might be useful, but the only field that most users should filter is "bold".

"t1w" selects a native T1w-space, preprocessed T1w file.

"t1w_seg" selects a native T1w-space segmentation file. This file is primarily used for figures.

"t1w_mask" selects a native T1w-space brain mask.

"t1w_to_template_xform" selects a transform from T1w space to standard space. The standard space that will be used depends on the "bold" files that are selected.

"template_to_t1w_xform" selects a transform from standard space to T1w space. Again, the standard space is determined based on other files.

Example bids-filter-file

In this example file, we only run xcp_d on resting-state preprocessed BOLD runs from session “01”.

{
   "bold": {
      "session": ["01"],
      "task": "rest"
   }
}

Running xcp_d via Docker containers

If you are running xcp_d locally, we recommend Docker. See Container Technologies for installation instructions.

In order to run Docker smoothly, it is best to prevent permissions issues associated with the root file system. Running Docker as a non-root user on the host ensures that files written during the container execution are owned by that user.

A Docker container can be created using the following command:

# The freesurfer mount is necessary for fMRIPrep versions <22.0.2
docker run --rm -it \
   -v /dset/derivatives/fmriprep:/fmriprep:ro \
   -v /tmp/wkdir:/work:rw \
   -v /dset/derivatives:/out:rw \
   -v /dset/derivatives/freesurfer:/freesurfer:ro \
   pennlinc/xcp_d:latest \
   /fmriprep /out participant \
   --cifti --despike --head_radius 40 -w /work --smoothing 6

Running xcp_d via Singularity containers

If you are computing on an HPC, we recommend using Singularity. See Container Technologies for installation instructions.

If the data to be preprocessed is also on the HPC or a personal computer, you are ready to run xcp_d.

singularity run --cleanenv xcp_d.simg \
    path/to/data/fmri_dir \
    path/to/output/dir \
    participant \
    --participant-label label

Relevant aspects of the $HOME directory within the container

By default, Singularity will bind the user’s $HOME directory on the host into the /home/$USER directory (or equivalent) in the container. Most of the time, it will also redefine the $HOME environment variable and update it to point to the corresponding mount point in /home/$USER. However, these defaults can be overwritten in your system. It is recommended that you check your settings with your system’s administrator. If your Singularity installation allows it, you can work around the $HOME specification, combining the bind mounts argument (-B) with the home overwrite argument (--home) as follows:

singularity run -B $HOME:/home/xcp \
    --home /home/xcp \
    --cleanenv xcp_d.simg \
    <xcp_d arguments>

Therefore, once a user specifies the container options and the image to be run, the remaining command-line options are the same as for a bare-metal installation.

Custom Confounds

XCP-D can include custom confounds in its denoising. Here, you can supply your confounds, and optionally add these to a confound strategy already supported in XCP-D.

To add custom confounds to your workflow, use the --custom_confounds parameter and provide a folder containing the custom confounds files for all of the subjects, sessions, and tasks you plan to post-process.

The individual confounds files should be tab-delimited, with one column for each regressor, and one row for each volume in the data being denoised.
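
A minimal sketch of writing such a file with pandas (the regressor names, values, and filename are illustrative):

```python
import numpy as np
import pandas as pd

# One column per regressor, one row per volume, tab-delimited.
n_volumes = 200
rng = np.random.default_rng(0)
custom = pd.DataFrame(
    {
        "my_regressor_1": rng.standard_normal(n_volumes),
        "my_regressor_2": rng.standard_normal(n_volumes),
    }
)
custom.to_csv("custom_confounds.tsv", sep="\t", index=False)
```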

Signal Confounds for Non-Aggressive Denoising

Let’s say you have some nuisance regressors that are not necessarily orthogonal to some associated regressors that are ostensibly noise. For example, if you ran tedana on multi-echo data, you would have a series of “rejected” (noise) and “accepted” (signal) ICA components. Because tedana uses a spatial ICA, these components’ time series are not necessarily independent, and there can be shared variance between them. If you want to properly denoise your data using the noise components, you need to perform “non-aggressive” denoising so that variance from the signal components is not removed as well. In non-aggressive denoising, you fit a GLM using both the noise and signal regressors, then reconstruct the predicted data using just the noise regressors, and finally remove that predicted data from the real data.
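
The procedure described above can be sketched with NumPy using synthetic data. This is an illustration of the general technique, not xcp_d's own implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vols, n_voxels = 200, 10
data = rng.standard_normal((n_vols, n_voxels))
noise_regs = rng.standard_normal((n_vols, 3))   # "rejected" components
signal_regs = rng.standard_normal((n_vols, 2))  # "accepted" components

# Fit a single GLM containing both noise and signal regressors
design = np.column_stack([noise_regs, signal_regs])
betas, *_ = np.linalg.lstsq(design, data, rcond=None)

# Reconstruct the noise-explained part using only the noise betas ...
noise_fit = noise_regs @ betas[: noise_regs.shape[1]]
# ... and remove it, leaving signal-component variance untouched.
denoised = data - noise_fit
print(denoised.shape)  # (200, 10)
```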

For more information about different types of denoising, see tedana’s documentation, this NeuroStars topic, and/or Pruim et al. (2015).

So how do we implement this in xcp_d? In order to define regressors that should be treated as signal, and thus use non-aggressive denoising instead of the default aggressive denoising, you should include those regressors in your custom confounds file, with column names starting with signal__ (lower-case “signal”, followed by two underscores).
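
As an illustration, a custom confounds file mixing signal and noise columns might look like this (the component names and filename are made up):

```python
import pandas as pd

# Columns whose names start with "signal__" are treated as signal;
# the rest are treated as noise.
confounds = pd.DataFrame(
    {
        "signal__ica_comp01": [0.1, 0.2, 0.3],  # treated as signal
        "ica_comp02": [0.3, 0.1, 0.2],          # treated as noise
    }
)
confounds.to_csv("sub-X_task-Z_desc-confounds_timeseries.tsv", sep="\t", index=False)

signal_cols = [c for c in confounds.columns if c.startswith("signal__")]
print(signal_cols)  # ['signal__ica_comp01']
```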

Important

xcp_d will automatically perform non-aggressive denoising with any nuisance-regressor option that uses AROMA regressors (e.g., aroma or aroma_gsr).

Task Regression

If you want to regress task-related signals out of your data, you can use the custom confounds option to do it.

Here we document how to include task effects as confounds.

Tip

The basic approach to task regression is to convolve your task regressors with an HRF, then save those regressors to a custom confounds file.

Warning

This method is still under development.

We recommend using a tool like Nilearn to generate convolved regressors from BIDS events files. See this example.

import numpy as np
import pandas as pd
from nilearn.glm.first_level import make_first_level_design_matrix

N_VOLUMES = 200
TR = 0.8
frame_times = np.arange(N_VOLUMES) * TR
events_df = pd.read_table("sub-X_ses-Y_task-Z_run-01_events.tsv")

task_confounds = make_first_level_design_matrix(
   frame_times,
   events_df,
   drift_model=None,
   add_regs=None,
   hrf_model="spm",
)

# The design matrix will include a constant column, which we should drop
task_confounds = task_confounds.drop(columns="constant")

# Assuming that the fMRIPrep confounds file is named
# "sub-X_ses-Y_task-Z_run-01_desc-confounds_timeseries.tsv",
# we will name the custom confounds file the same thing, in a separate folder.
task_confounds.to_csv(
   "/my/project/directory/custom_confounds/sub-X_ses-Y_task-Z_run-01_desc-confounds_timeseries.tsv",
   sep="\t",
   index=False,
)

Then, when you run XCP-D, you can use the flag --custom_confounds /my/project/directory/custom_confounds.

Command Line XCP-D with Custom Confounds

Last, supply the file to xcp_d with the --custom_confounds option. --custom_confounds should point to the directory where this file exists, rather than to the file itself; xcp_d will identify the correct file based on the filename, which should match the name of the preprocessed BOLD data’s associated confounds file. You can simultaneously perform additional confound regression by including, for example, --nuisance-regressors 36P in the call.

singularity run --cleanenv -B /my/project/directory:/mnt xcpabcd_latest.simg \
   /mnt/input/fmriprep \
   /mnt/output/directory \
   participant \
   --participant_label X \
   --task-id Z \
   --nuisance-regressors 36P \
   --custom_confounds /mnt/custom_confounds

Custom Parcellations

While XCP-D comes with many built-in parcellations, we understand that many users will want to use custom parcellations. If you use the --cifti option, you can use the Human Connectome Project’s wb_command to generate the time series:

wb_command \
   -cifti-parcellate \
   {SUB}_ses-{SESSION}_task-{TASK}_run-{RUN}_space-fsLR_den-91k_desc-residual_bold.dtseries.nii \
   your_parcels.dlabel.nii \
   {SUB}_ses-{SESSION}_task-{TASK}_run-{RUN}_space-fsLR_den-91k_desc-residual_bold.ptseries.nii

After this, if you wish to compute a connectivity matrix:

wb_command \
   -cifti-correlation \
   {SUB}_ses-{SESSION}_task-{TASK}_run-{RUN}_space-fsLR_den-91k_desc-residual_bold.ptseries.nii \
   {SUB}_ses-{SESSION}_task-{TASK}_run-{RUN}_space-fsLR_den-91k_desc-residual_bold.pconn.nii

More information can be found at the HCP documentation.

If you use the default NIfTI processing pipeline, you can use Nilearn’s NiftiLabelsMasker.

Troubleshooting

Logs and crashfiles are output to the <output dir>/xcp_d/sub-<participant_label>/log directory. Information on how to customize and understand these files can be found on the Nipype debugging page.

Support and communication. The documentation of this project is found here: https://xcp-d.readthedocs.io/.

All bugs, concerns and enhancement requests for this software can be submitted here: https://github.com/PennLINC/xcp_d/issues.

If you have a question about using xcp_d, please create a new topic on NeuroStars with the “xcp_d” tag. The xcp_d developers follow NeuroStars, and will be able to answer your question there.