Abstract

Connectomics generates comprehensive maps of brain networks, represented as nodes and their pairwise connections. The functional roles of nodes are defined by their direct and indirect connectivity with the rest of the network. However, the network context is not directly accessible at the level of individual nodes. Similar problems in language processing have been addressed with algorithms such as word2vec that create embeddings of words and their relations in a meaningful low-dimensional vector space. Here we apply this approach to create embedded vector representations of brain networks, or connectome embeddings (CE). CE can characterize correspondence relations among brain regions and can be used to infer links that are missing from the original structural diffusion imaging data, e.g., inter-hemispheric homotopic connections. Moreover, we construct predictive deep models of functional and structural connectivity, and simulate network-wide lesion effects using the face processing system as our application domain. We suggest that CE offers a novel approach to revealing relations between connectome structure and function.
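
The pipeline the abstract describes can be sketched with a word2vec-style node-embedding recipe: random walks over the weighted structural network are treated as "sentences" whose "words" are brain regions, and a skip-gram model learns one low-dimensional vector per region. The sketch below is a minimal illustration under stated assumptions (the input file name, the first-order walk generator, and all hyperparameters are placeholders, and gensim >= 4 is assumed), not the authors' exact implementation.

    import numpy as np
    from gensim.models import Word2Vec

    def random_walks(adj, walks_per_node=10, walk_len=40, seed=None):
        """First-order random walks on a weighted adjacency matrix.

        Transition probabilities are proportional to connection weight; each
        walk is returned as a list of node-id strings so it can be passed to
        Word2Vec as a 'sentence'. This is a simplified stand-in for the biased
        second-order walks used by node2vec-style methods.
        """
        rng = np.random.default_rng(seed)
        n = adj.shape[0]
        walks = []
        for _ in range(walks_per_node):
            for start in range(n):
                walk = [start]
                for _ in range(walk_len - 1):
                    w = adj[walk[-1]]
                    if w.sum() == 0:              # isolated node: stop the walk early
                        break
                    walk.append(rng.choice(n, p=w / w.sum()))
                walks.append([str(v) for v in walk])
        return walks

    # sc: symmetric, non-negative structural connectivity matrix (regions x regions);
    # the file name is a hypothetical placeholder.
    sc = np.load("structural_connectivity.npy")
    walks = random_walks(sc, walks_per_node=10, walk_len=40, seed=0)

    # Skip-gram embedding of the walk corpus: one vector per brain region.
    model = Word2Vec(walks, vector_size=30, window=5, min_count=0, sg=1, epochs=5)
    embedding = np.vstack([model.wv[str(i)] for i in range(sc.shape[0])])

Once trained, linear operations on these vectors (differences, sums, cosine similarities) are what the inter-hemispheric analogies benchmark described in the Highlights below relies on.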

Highlights

  • Connectomics generates comprehensive maps of brain networks, represented as nodes and their pairwise connections

  • To test whether connectome embeddings (CE) vector representations are consonant with known attributes of brain topology/topography and can be interpreted and manipulated using linear operations, we formulated a brain-specific benchmark, namely an inter-hemispheric analogies test

  • To design a benchmark for testing and tuning connectome embeddings, we postulated that the relation between each pair of regions in one hemisphere should be analogous to the same pairwise relation in the other hemisphere (a minimal illustrative sketch of such an analogy query follows this list)
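
To make the "linear operations" idea concrete, an inter-hemispheric analogy can be scored exactly like a word2vec analogy ("king - man + woman ≈ queen"): the vector offset between two regions in one hemisphere is added to the homologous region in the other hemisphere, and the result should land nearest the remaining homologue. The sketch below is illustrative only; the embedding matrix, the region_index mapping, and the face-system region names in the usage comment are hypothetical placeholders, not the authors' code.

    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def analogy_rank(emb, region_index, a, b, c, d):
        """Rank of the true answer d for the analogy a : b :: c : d.

        emb          : (n_regions, dim) array of connectome-embedding vectors
        region_index : dict mapping region names to row indices in emb
        The query vector is emb[b] - emb[a] + emb[c]; a rank of 1 means the
        inter-hemispheric analogy is solved exactly.
        """
        query = emb[region_index[b]] - emb[region_index[a]] + emb[region_index[c]]
        scores = {name: cosine(query, emb[i])
                  for name, i in region_index.items() if name not in (a, b, c)}
        ranking = sorted(scores, key=scores.get, reverse=True)
        return ranking.index(d) + 1

    # Hypothetical usage with left/right face-system regions:
    # rank = analogy_rank(embedding, region_index, "lFFA", "lOFA", "rFFA", "rOFA")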

Introduction

Connectomics generates comprehensive maps of brain networks, represented as nodes and their pairwise connections. Assessing distances among connectivity profiles and subsequent dimension reduction (e.g., through PCA or multidimensional scaling) can reveal pairwise similarities [6], but this approach does not capture other relations such as homologies or higher-order regularities. Outside of connectomics, another field focused on mapping relationships between elements is natural language processing, where words may be represented, or embedded, in a low-dimensional distributed vector space [7, 8]. The resulting latent node representations capture neighborhood similarity and community membership in a continuous vector space with a relatively small number of dimensions [4]. These low-dimensional embeddings are useful for subsequent machine learning applications directed at uncovering structural relations and similarities. We suggest that CE provides a general approach for modeling connectome data that has many potential applications, including development, individual differences, and clinical/translational studies.
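
One such downstream machine learning application is predicting functional connectivity from the structural embeddings. A minimal sketch of that idea follows, assuming the embedding matrix produced in the earlier sketch; the file names, the Hadamard edge features, and the small scikit-learn multilayer perceptron are stand-ins for the deeper predictive models described in the paper.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    # Hypothetical inputs: CE vectors (e.g., from the earlier sketch) and an
    # empirical functional connectivity matrix for the same parcellation.
    embedding = np.load("connectome_embedding.npy")        # (n_regions, dim)
    fc = np.load("functional_connectivity.npy")            # (n_regions, n_regions)
    n = embedding.shape[0]

    # One sample per region pair: the Hadamard product of the two node vectors
    # is the edge feature, the empirical FC value is the regression target.
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    X = np.array([embedding[i] * embedding[j] for i, j in pairs])
    y = np.array([fc[i, j] for i, j in pairs])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    # Small multilayer perceptron mapping structural edge features to FC strength.
    mlp = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
    mlp.fit(X_tr, y_tr)
    print("held-out R^2:", mlp.score(X_te, y_te))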
