Abstract
One of the major challenges in systems neuroscience is to identify brain networks and unravel their significance for brain function – this has led to the concept of the ‘connectome’. Connectomes are currently extensively studied in large-scale international efforts at multiple scales, and follow different definitions with respect to their connections as well as their elements.

Perhaps the most promising avenue for defining the elements of connectomes originates from the notion that individual brain areas maintain distinct (long-range) connection profiles. These connectivity patterns determine the areas’ functional properties and also allow for their anatomical delineation and mapping. This rationale has motivated the concept of connectivity-based cortex parcellation.

In the past ten years, non-invasive mapping of human brain connectivity has led to immense advances in the development of parcellation techniques and their applications. Unfortunately, many of these approaches primarily aim for confirmation of well-known, existing architectonic maps and, to that end, unsuitably incorporate prior knowledge and frequently build on circular argumentation. Often, current approaches also tend to disregard the specific apertures of connectivity measurements, as well as the anatomical specificities of cortical areas, such as spatial compactness, regional heterogeneity, inter-subject variability, the multi-scaling nature of connectivity information, and potential hierarchical organisation. From a methodological perspective, however, a useful framework that regards all of these aspects in an unbiased way is technically demanding.

In this commentary, we first outline the concept of connectivity-based cortex parcellation and discuss its prospects and limitations, in particular with respect to structural connectivity. To improve reliability and efficiency, we then strongly advocate for connectivity-based cortex parcellation as a modelling approach; that is, an approximation of the data based on (model) parameter inference. As such, a parcellation algorithm can be formally tested for robustness – the precision of its predictions can be quantified and statistics about potential generalization of the results can be derived. Such a framework also allows the question of model constraints to be reformulated in terms of hypothesis testing through model selection and offers a formative way to integrate anatomical knowledge in terms of prior distributions.
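To make the advocated modelling view concrete, the following is a minimal, purely illustrative sketch (not the method proposed in the commentary) of parcellation by parameter inference with model selection: connectivity fingerprints of cortical seed points are clustered with a Gaussian mixture model, and the number of parcels is chosen by a model-selection criterion (here BIC). The synthetic data, the candidate parcel numbers, and the use of scikit-learn are assumptions made only for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical input: one connectivity "fingerprint" per cortical seed point
# (rows = seed voxels/vertices, columns = connection strengths to target regions).
rng = np.random.default_rng(0)
n_seeds, n_targets = 500, 60
X = rng.random((n_seeds, n_targets))  # stand-in for real tractography-derived profiles

# Model-based parcellation: fit mixtures with different numbers of parcels and
# let a model-selection criterion arbitrate between them instead of fixing the
# parcel count a priori.
fits = []
for k in range(2, 11):
    gmm = GaussianMixture(n_components=k, covariance_type="diag", random_state=0)
    gmm.fit(X)
    fits.append((gmm.bic(X), k, gmm))

# Lower BIC = better trade-off between goodness of fit and model complexity.
best_bic, best_k, best_model = min(fits, key=lambda t: t[0])
labels = best_model.predict(X)  # parcel assignment for each seed point
print(f"Selected {best_k} parcels (BIC = {best_bic:.1f})")
```

In this framing, alternative parcel counts (or differently constrained models) become competing hypotheses, and anatomical knowledge could in principle enter as prior distributions over the model parameters rather than as a hard-coded reference map.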