SPM8 Release Notes: Batch Interface
These Release Notes summarize changes made in the latest release of the SPM software, SPM8. This is a major update, containing theoretical, algorithmic, structural and interface enhancements over previous versions. Some of the changes described below were already available in the most recent updates of SPM5, while most of them have been introduced in this new release. Although we have tried hard to produce high quality software, in a project of this size and complexity there are certainly some remaining bugs. Please assist us by reporting them to the SPM manager <[email protected]>. We would like to thank everyone who has provided feedback on the beta version.
Batch interface
SPM8 incorporates new batch machinery, matlabbatch, derived from the SPM5 batch system. The main new feature is the handling of dependencies: if you have several jobs that you want to execute, where the input to one job depends on the output of another, you can specify that dependency explicitly in the interface. It then becomes straightforward to apply the same operations to several datasets (for example, re-using the same batch for multiple subject analyses, as sketched below). This batch system is compatible with the previous (SPM5) job structure.
Matlabbatch: http://sourceforge.net/projects/matlabbatch
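For scripting, a saved batch can be re-used across subjects by filling in the subject-specific inputs at run time. The sketch below is illustrative only: the job file name, data layout and filename filter are placeholders.

    % Minimal sketch: re-run one saved batch job for several subjects.
    % The job file name, directories and filter below are placeholders.
    spm('defaults', 'fmri');
    spm_jobman('initcfg');
    subjects = {'sub01', 'sub02', 'sub03'};
    for s = 1:numel(subjects)
        scans  = cellstr(spm_select('FPList', fullfile('data', subjects{s}), '^f.*\.nii$'));
        inputs = {scans};                               % one input per open field in the saved job
        spm_jobman('run', 'my_batch_job.m', inputs{:});
    end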
File formats
SPM8 attempts to follow the recommendations of the Data Format Working Group (DFWG: http://nifti.nimh.nih.gov/dfwg/), which aims to propose solutions to the problem of multiple data formats used in fMRI research. NIfTI-1 is the image file format used in SPM8 (see http://nifti.nimh.nih.gov/). GIfTI-1 is the geometry file format used to store a variety of surface-based data (see http://www.nitrc.org/projects/gifti/). Functions to read/write files in these formats are implemented in the Matlab classes @nifti and @gifti.
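Both classes can be used directly from the Matlab command line; the filenames below are placeholders.

    % Illustrative use of the @nifti and @gifti classes (filenames are placeholders).
    N   = nifti('beta_0001.nii');    % memory-mapped NIfTI-1 image
    dim = N.dat.dim;                 % image dimensions
    M   = N.mat;                     % voxel-to-world affine transform
    Y   = N.dat(:,:,:);              % read the volume into memory
    G   = gifti('cortex.surf.gii');  % GIfTI-1 surface (vertices, faces, ...)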
New Segmentation
This toolbox is an extension of the default unified segmentation. The algorithm is essentially the same as that described in the Unified Segmentation paper (Ashburner and Friston, 2005), except for:
- a different treatment of the mixing proportions,
- the use of an improved registration model,
- the ability to use multi-spectral data,
- an extended set of tissue probability maps, which allows a different treatment of voxels outside the brain,
- a more robust initial affine registration.
Some of the options in the toolbox may not yet work, and it has not yet been seamlessly integrated into the rest of the SPM software. Also, the extended tissue probability maps may need further refinement. The current versions were crudely generated (by JA) using data that was kindly provided by Cynthia Jongen of the Imaging Sciences Institute at Utrecht, NL. This toolbox can be accessed from the Batch Editor in menu SPM > Tools > New Segment.
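From a script, the same module can be filled in and run via spm_jobman. The field path below is the one the SPM8 Batch Editor generates for New Segment; if in doubt, build the job in the GUI and save it as a script to confirm the exact fields. The image name is a placeholder and all other options keep their defaults.

    % Hedged sketch: run New Segment on one structural image from a script.
    matlabbatch = {};
    matlabbatch{1}.spm.tools.preproc8.channel.vols = {'T1.nii,1'};   % placeholder filename
    spm_jobman('run', matlabbatch);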
J. Ashburner and K.J. Friston. Unified segmentation. NeuroImage, 26:839-851, 2005.
DARTEL
This toolbox is based around the "A Fast Diffeomorphic Image Registration Algorithm" paper (Ashburner, 2007). The idea is to register images by computing a flow field, which can then be exponentiated to generate both forward and backward deformations.
Processing begins with the import step. This involves taking the parameter files produced by the segmentation and writing out rigidly transformed versions of the tissue class images, such that they are in as close alignment as possible with the tissue probability maps.
The next step is the registration itself. This involves the simultaneous registration of e.g. GM with GM, WM with WM and 1-(GM+WM) with 1-(GM+WM) (when needed, the 1-(GM+WM) class is generated implicitly, so there is no need to include this class yourself). This procedure begins by creating a mean of all the images, which is used as an initial template. Deformations from this template to each of the individual images are computed, and the template is then re-generated by applying the inverses of the deformations to the images and averaging. This procedure is repeated a number of times. Finally, warped versions of the images (or other images that are in alignment with them) can be generated.
This toolbox is not yet seamlessly integrated into the SPM package. Eventually, the plan is to use many of the ideas here as the default strategy for spatial normalisation. The toolbox may change with future updates. There will also be a number of other (as yet unspecified) extensions, which may include a variable velocity version. Note that the Fast Diffeomorphism paper only describes a sum of squares objective function. The multinomial objective function is an extension, based on a more appropriate model for aligning binary data to a template (Ashburner & Friston, 2009).
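The idea of exponentiating a flow field can be illustrated with a toy 1-D scaling-and-squaring scheme; this sketch is purely illustrative and is not the DARTEL implementation.

    % Toy illustration: scale the velocity (flow) field down, then repeatedly
    % compose the small deformation with itself to 'exponentiate' it.
    N   = 6;                          % number of squaring (composition) steps
    x   = linspace(0, 1, 64)';        % 1-D voxel grid
    u   = 0.05 * sin(2*pi*x);         % toy velocity field
    phi = x + u / 2^N;                % small deformation: identity plus scaled flow
    for k = 1:N
        phi = interp1(x, phi, phi, 'linear', 'extrap');   % phi <- phi composed with phi
    end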
J. Ashburner. A Fast Diffeomorphic Image Registration Algorithm. NeuroImage, 38(1):95-113, 2007. J. Ashburner and K.J. Friston. Computing average shaped tissue probability templates. NeuroImage, 45(2):333-341, 2009.
model has a non-linear BOLD function and includes a new parameter per region. This parameter, ε, represents the region-specific ratio of intra- and extravascular signals and makes DCM more robust for applications to data acquired at higher field strengths.
K.E. Stephan, N. Weiskopf, P.M. Drysdale, P.A. Robinson, K.J. Friston. Comparing hemodynamic models with DCM. NeuroImage, 38(3):387-401, 2007.
by examining each model's log evidence, where the greatest evidence gives the winning model. At the second level, however, making inferences across the population requires a random-effects treatment that is not sensitive to outliers and accounts for heterogeneity across the population of subjects studied. To this end, a group BMS procedure has been implemented using a Bayesian approach that provides a probability density on the models themselves. This function uses a novel hierarchical model which specifies a Dirichlet distribution that, in turn, defines the parameters of a multinomial distribution. By sampling from this distribution for each subject, we obtain a posterior Dirichlet distribution that specifies the conditional density of the model probabilities. SPM returns the expected multinomial parameters for the models under test, which allows users to rank the models from most to least likely at the population level.
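In practice this is done with spm_BMS, which takes a subjects-by-models matrix of log-evidences. The sketch below uses made-up numbers, and the output arguments (Dirichlet parameters, expected model probabilities and exceedance probabilities) are as we recall them; check the function help for the definitive interface.

    % Hedged sketch: group-level Bayesian model selection.
    lme = [ -210 -205 ;                  % log-evidence, subject 1, models 1 and 2 (made up)
            -198 -201 ;
            -223 -219 ];
    [alpha, exp_r, xp] = spm_BMS(lme);   % Dirichlet parameters, expected model
                                         % probabilities, exceedance probabilities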
K.E. Stephan, W.D. Penny, J. Daunizeau, R. Moran and K.J. Friston. Bayesian Model Selection for Group Studies. In press.
options available in Bayesian 1st-Level, under Analysis space, are now: Volume, Slices or Clusters. In addition, the user can then choose how these volumes are divided into smaller blocks, which is necessary for computational reasons (cf. spm_spm, where a slice is also divided into blocks). These blocks can be either slices (by selecting Slices) or 3D segments (Subvolumes), whose extent is computed using a graph partitioning algorithm. The latter option means that the spatial prior is truly 3D, instead of 2D spatial priors stacked one on another. Two additional spatial precision matrices have been included: the unweighted graph-Laplacian (UGL) and the weighted graph-Laplacian (WGL). The GMRF and LORETA priors are functions of the UGL, i.e. normalized and squared respectively. The WGL empirical prior uses the ordinary least squares estimate of the regression coefficients to inform the precision matrix, which has the advantage of preserving edges of activations. Explicit spatial basis priors (Flandin & Penny, 2007) will be included and generalized in the near future, including eigenvectors of the graph-Laplacian (see Harrison et al., 2007-2008). In general these can be global (i.e. the spatial extent of each basis covers the whole graph), local, or multiscale. The benefit here is the flexibility to explore natural bases that provide a sparse representation of neuronal responses.
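For intuition, the unweighted graph-Laplacian is simply the degree matrix minus the adjacency matrix of the voxel neighbourhood graph; the toy sketch below builds it for a chain of four voxels.

    % Toy unweighted graph-Laplacian (UGL) for a 4-voxel chain.
    A = diag(ones(3,1), 1);  A = A + A';   % adjacency: each voxel linked to its neighbours
    D = diag(sum(A, 2));                   % degree matrix
    L = D - A;                             % UGL; the priors above use normalised or squared forms of L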
G. Flandin & W.D. Penny. Bayesian fMRI data analysis with sparse spatial basis function priors. NeuroImage, 34:1108-1125, 2007. L. Harrison, W.D. Penny, J. Daunizeau, and K.J. Friston. Diffusion-based spatial priors for functional magnetic resonance images. NeuroImage, 41(2):408-423, 2008.
vector and the canonical variates based on (maximally) correlated mixtures of the explanatory variables and data. CVA uses the generalised eigenvalue solution to the treatment and residual sums of squares and products of a general linear model. The eigenvalues (i.e., canonical values), after transformation, have a chi-squared distribution and allow one to test the null hypothesis that the mapping is D or more dimensional. This inference is shown as a bar plot of p-values. The first p-value is formally identical to that obtained using Wilks' Lambda and tests for the significance of any mapping. This routine uses the current contrast to define the subspace of interest and treats the remaining design as uninteresting. Conventional results for the canonical values are used after the data (and design matrix) have been whitened, using the appropriate ReML estimate of non-sphericity. CVA can be used for decoding because the model employed by CVA does not care about the direction of the mapping (hence canonical correlation analysis). However, one cannot test for mappings between nonlinear mixtures of regional activity and some experimental variable (this is what MVB was introduced for).
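The generalised eigenvalue step can be illustrated on synthetic data; this is a conceptual sketch, not the SPM routine itself.

    % Conceptual sketch of the generalised eigenvalue problem behind CVA.
    Y = randn(20, 4);                      % data: 20 scans, 4 voxels
    X = kron(eye(2), ones(10, 1));         % design: two conditions, 10 scans each
    B = X \ Y;                             % least-squares parameter estimates
    R = (Y - X*B)' * (Y - X*B);            % residual sum of squares and products
    T = (X*B)' * (X*B);                    % treatment (hypothesis) SSP
    [V, D]  = eig(T, R);                   % canonical vectors and values
    [cv, i] = sort(diag(D), 'descend');    % canonical values, largest first
    V = V(:, i);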
K.J. Friston, C.D. Frith, R.S. Frackowiak and R. Turner. Characterizing dynamic brain responses with fMRI: a multivariate approach. NeuroImage, 2(2):166-172, 1995. K.J. Friston, K.M. Stephan, J.D. Heather, C.D. Frith, A.A. Ioannides, L.C. Liu, M.D. Rugg, J. Vieth, H. Keber, K. Hunter, R.S. Frackowiak. A multivariate analysis of evoked responses in EEG and MEG data. NeuroImage, 3(3 Pt 1):167-174, 1996.
increase the estimated smoothness (i.e. smoothness was previously underestimated) and the RESEL count will decrease. All other things being equal, larger FWHM smoothness results in increased voxel-level corrected significance; larger FWHM decreases uncorrected cluster-level significance, but the smaller RESEL count may counter this effect in terms of corrected significance.
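A quick worked example shows why a larger estimated FWHM lowers the RESEL count (RESELs are roughly the number of voxels in the search volume divided by the product of the FWHM in voxels):

    % Worked illustration: RESELs ~ nvox / prod(FWHM in voxels).
    nvox    = 100000;                 % voxels in the search volume
    resels3 = nvox / prod([3 3 3])    % ~3704 RESELs at 3-voxel FWHM
    resels4 = nvox / prod([4 4 4])    % ~1563 RESELs at 4-voxel FWHM (larger FWHM, fewer RESELs)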
K.J. Worsley. An unbiased estimator for the roughness of a multivariate Gaussian random field. Technical Report, Department of Mathematics and Statistics, McGill University, 1996. S.J. Kiebel, J.B. Poline, K.J. Friston, A.P. Holmes and K.J. Worsley. Robust smoothness estimation in Statistical Parametric Maps using standardized residuals from the General Linear Model. NeuroImage, 10:756-766, 1999. S. Hayasaka, K. Phan, I. Liberzon, K.J. Worsley, T.E. Nichols. Nonstationary cluster-size inference with random field and permutation methods. NeuroImage, 22:676-687, 2004.
models have their advantages; they are often appropriate summaries of evoked responses or helpful first approximations. In SPM8, we have implemented a variational Bayesian algorithm that enables the fast Bayesian inversion of dipole models. The approach allows for specification of priors on all the model parameters. The posterior distributions can be used to form Bayesian confidence intervals for interesting parameters, like dipole locations. Furthermore, competing models (e.g., models with different numbers of dipoles) can be compared using their evidence or marginal likelihood. At the time of release, only EEG data are supported in VB-ECD.
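For instance, two competing dipole models can be compared directly from their approximate log-evidences (free energies); the numbers below are made up.

    % Illustrative comparison of two dipole models by (made-up) log-evidences.
    F1 = -1520.4;  F2 = -1516.1;          % free-energy approximations to the log-evidence
    logBF = F2 - F1;                      % log Bayes factor in favour of model 2
    p_m2  = 1 / (1 + exp(-logBF))         % posterior probability of model 2 (equal priors)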
S.J. Kiebel, J. Daunizeau, C. Phillips, and K.J. Friston. Variational Bayesian inversion of the equivalent current dipole model in EEG/MEG. NeuroImage, 39(2):728-741, 2008.
(inhibitory/excitatory) and direction of extrinsic (between-source) cortical connections, and also includes meaningful physiological parameters of within-source activity, e.g. post-synaptic receptor density and time constants. Under linearity and stationarity assumptions, the biophysical parameters of this model prescribe the cross-spectral density of responses measured directly (e.g., local field potentials) or indirectly through some lead-field (e.g., M/EEG data). Inversion of the ensuing DCM provides conditional probabilities on the synaptic parameters of intrinsic and extrinsic connections in the underlying neuronal network. Thus inferences can be made about synaptic physiology, as well as about changes induced by pharmacological or behavioural manipulations.
R. Moran, K.E. Stephan, T. Seidenbecher, H.-C. Pape, R. Dolan and K.J. Friston. Dynamic Causal Models of steady-state responses. NeuroImage. 44:796-811, 2009.
Induced responses
DCM for induced responses aims to model coupling within and between frequencies that are associated with linear and non-linear mechanisms respectively. This is a further extension of DCM for ERP/ERF to cover spectral dynamics as measured with the electroencephalogram (EEG) or the magnetoencephalogram (MEG). The model parameters encode the frequency response to exogenous input and coupling among sources and different frequencies. One key aspect of the model is that it differentiates between linear and nonlinear coupling, which correspond to within- and between-frequency coupling respectively. Furthermore, a bilinear form for the state equations can be used to model the modulation of connectivity by experimental manipulations.
C.C. Chen, S.J. Kiebel, K.J. Friston. Dynamic causal modelling of induced responses. NeuroImage, 41(4):1293-1312, 2008.
the nature of the models in more depth and how they are specified, integrated and used. Many of the figures produced are in the peer-reviewed articles associated with each demonstration. Although MEG/EEG signals are highly variable, systematic changes in distinct frequency bands are commonly encountered. These frequency-specific changes represent robust neural correlates of cognitive or perceptual processes (for example, alpha rhythms emerge on closing the eyes). However, their functional significance remains a matter of debate. Some of the mechanisms that generate these signals are known at the cellular level and rest on a balance of excitatory and inhibitory interactions within and between populations of neurons. The kinetics of the ensuing population dynamics determine the frequency of oscillations. In these demonstrations we extend the classical nonlinear lumped-parameter model of alpha rhythms, initially developed by Lopes da Silva and colleagues, to generate more complex dynamics and consider conduction-based models.
R. Moran, S.J. Kiebel, N. Rombach, W.T. O'Connor, K.J. Murphy, R.B. Reilly, and K.J. Friston. Bayesian estimation of synaptic physiology from the spectral responses of neural masses. NeuroImage, 42(1):272-284, 2008.
DEM toolbox
Dynamic expectation maximisation (DEM) is a variational treatment of hierarchical, nonlinear dynamic or static models. It uses a fixed-form Laplace assumption to approximate the conditional, variational or ensemble density of unknown states and parameters. This is an approximation to the density that would obtain from Variational Filtering (VF) in generalized coordinates of motion. The first demonstration with VF uses a simple convolution model and allows one to compare DEM and VF. We also demonstrate the inversion of increasingly complicated models, ranging from a simple General Linear Model to a Lorenz attractor. It is anticipated that the reader will examine the routines called to fully understand the nature of the scheme.
DEM presents a variational treatment of dynamic models that furnishes time-dependent conditional densities on the trajectory of a system's states and the time-independent densities of its parameters. These are obtained by maximising a variational action with respect to conditional densities, under a fixed-form assumption. The action or path-integral of free-energy represents a lower bound on the model's log-evidence required for model selection and averaging. This approach rests on formulating the optimisation dynamically, in generalised coordinates of motion. The resulting scheme can be used for online Bayesian inversion of nonlinear dynamic causal models and is shown to outperform existing approaches, such as Kalman and particle filtering. Furthermore, it provides for dual and triple inferences on a system's states, parameters and hyperparameters using exactly the same principles.
DEM can be regarded as the fixed-form homologue of variational filtering (which is covered in the demonstrations): variational filtering represents a simple Bayesian filtering scheme, using variational calculus, for inference on the hidden states of dynamic systems. Variational filtering is a stochastic scheme that propagates particles over a changing variational energy landscape, such that their sample density approximates the conditional density of hidden states and inputs. Again, the key innovation on which variational filtering rests is a formulation in generalised coordinates of motion. This renders the scheme much simpler and more versatile than existing approaches, such as those based on particle filtering. We demonstrate variational filtering using simulated and real data from hemodynamic systems studied in neuroimaging and provide comparative evaluations using particle filtering and the fixed-form homologue of variational filtering, namely dynamic expectation maximisation.
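"Generalised coordinates of motion" simply means representing a state together with its higher temporal derivatives; the toy sketch below embeds a signal numerically in this way (purely illustrative).

    % Toy illustration of generalised coordinates: a signal together with
    % numerical estimates of its first and second temporal derivatives.
    dt = 0.1;
    t  = (0:dt:10)';
    x  = sin(t);
    xg = [x, gradient(x, dt), gradient(gradient(x, dt), dt)];   % [x, x', x'']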
K.J. Friston. Variational filtering. NeuroImage, 41(3):747-766, 2008. K.J. Friston, N. Trujillo-Bareto, and J. Daunizeau. DEM: A variational treatment of dynamic systems. NeuroImage, 41(3):849-885, 2008.
Mixture toolbox
This toolbox implements Bayesian Clustering based on Bayesian Gaussian Mixture models. The algorithm (spm_mix) will cluster multidimensional data and report on the optimal number of clusters. The toolbox also contains code for a Robust General Linear Model (spm_rglm), where the error processes comprise a two-component univariate mixture model. There is no user interface but there are many demo files.
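The demo files give the definitive interface; a call of the following form is typical, where the two-cluster data are synthetic and the second argument (maximum number of components) is an assumption of this sketch.

    % Hedged sketch: cluster 1-D synthetic data with spm_mix.
    y   = [randn(100, 1); 5 + randn(100, 1)];   % two well-separated Gaussians
    mix = spm_mix(y, 3);                        % assumed call: data, maximum number of clusters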
U. Noppeney, W.D. Penny, C.J. Price, G. Flandin, and K.J. Friston. Identification of degenerate neuronal systems based on intersubject variability. NeuroImage, 30:885-890, 2006. W. Penny, J. Kilner and F. Blankenburg. Robust Bayesian General Linear Models. NeuroImage, 36(3):661-671, 2007.
Spectral toolbox
This toolbox implements routines based on univariate (spm_ar) and multivariate autoregressive modelling (spm_mar), including time and frequency domain Granger-causality analysis, coherence and power spectral analysis. The routines allow you to estimate the optimal number of time lags in the AR/MAR models. There is also a routine for robust autoregressive modelling (spm_rar) in which the error process is a two-component mixture model (to run this routine you will need the mixture toolbox on your search path). There is no user interface but there are many demo files.
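Again, the demo files give the definitive interfaces; a typical univariate call might look like the following, where the data and model order are illustrative and the exact argument list is an assumption.

    % Hedged sketch: Bayesian autoregressive modelling of a synthetic series.
    T  = 200;
    x  = filter(1, [1 -0.8], randn(T, 1));    % synthetic AR(1)-like time series
    ar = spm_ar(x, 4);                        % assumed call: data, model order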
W.D. Penny and S.J. Roberts. Bayesian Multivariate Autoregressive Models with structured priors. IEE Proceedings on Vision, Image and Signal Processing, 149(1):33-41, 2002.