Abstract
Anatomical MRI templates of the brain are essential to group-level analyses and image processing pipelines, as they provide a reference space for spatial normalisation. While it has become common for studies to acquire multimodal MRI data, many templates are still limited to a single modality, usually either scalar- or tensor-based. Aligning each modality in isolation does not take full advantage of the available complementary information, such as the strong contrast between tissue types in structural images, or the axonal organisation of the white matter in diffusion tensor images. Most existing strategies for multimodal template construction either do not use all modalities of interest to inform the template construction process, or do not use them in a unified framework. Here, we present multimodal, cross-sectional templates constructed from UK Biobank data: the OMM-1 template, and age-dependent templates for each year of life between 45 and 81. All templates are fully unbiased, representing the average shape of the populations they were constructed from, and internally consistent, as the template construction process was jointly informed by T1, T2-FLAIR and DTI data. The OMM-1 template was constructed with a multi-resolution, iterative approach using 240 individuals in the 50-55 year age range. The age-dependent templates were estimated using a Gaussian Process, which describes the change in average brain shape with age in 37,330 individuals. All templates show excellent contrast and alignment within and between modalities. The global brain shape and size are not preconditioned on existing templates, although maximal possible compatibility with MNI-152 space was maintained through rigid alignment.
We showed benefits in registration accuracy across two datasets (UK Biobank and HCP) when using the OMM-1 as the registration template compared with FSL’s MNI-152 template, and found that the use of age-dependent templates further improved accuracy to a small but detectable extent. All templates are publicly available and can be used as a new reference space for uni- or multimodal spatial alignment.