Abstract
Recent decades have witnessed an increasing number of large to very large imaging studies, prominently in the field of neurodegenerative diseases. The datasets collected during these studies form essential resources for research aimed at new biomarkers. Collecting, hosting, managing, processing, and reviewing these datasets is typically achieved through a local neuroinformatics infrastructure. In particular for organizations with their own imaging equipment, setting up such a system remains a hard task, and relying on cloud-based solutions, albeit promising, is not always possible. This paper proposes a practical model guided by core principles including user involvement, a lightweight footprint, modularity, reusability, and facilitated data sharing. The model is based on the experience of an 8-year-old research center managing cohort research programs on Alzheimer’s disease. It gave rise to an ecosystem of tools aimed at improving quality control through seamless automatic processes, combined with a variety of code libraries, command-line tools, graphical user interfaces, and instant-messaging applets. The present ecosystem was shaped around XNAT and is composed of independently reusable modules that are freely available on GitLab/GitHub. This paradigm is scalable to the general community of researchers working with large neuroimaging datasets.
Highlights
Neuroimaging has taken a central role in research on Alzheimer’s disease (AD), as it has in neuroscience in general
The capacity to evaluate the results of any workflow and the capacity to identify and navigate through those results in a larger repository are tightly coupled. This is especially relevant for the workflows used in neuroimaging studies, which typically combine high complexity and heterogeneity on the one hand with, on the other, a high degree of expertise required to assess their outputs
A few previous projects have built onto XNAT (Gee et al., 2010; Harrigan et al., 2016; Job et al., 2017), often leveraging its RESTful Application Program Interface (API) (Schwartz et al., 2012; Gutman et al., 2014) to extend its standard features and present new ones (see the sketch after these highlights). Such an approach stands out by its light footprint, relying on XNAT’s core features without touching its codebase, to the mutual benefit of maintainability, dependability, portability, and usability. In line with this approach, the present paper describes a collection of lightweight solutions which together form an adaptive modular ecosystem focused on user experience and neuroimaging data quality control (QC)
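To make the REST-based approach concrete, the following is a minimal sketch in Python (using the requests library) of how an external script can query an XNAT server without touching its codebase. It is an illustration, not the paper's actual tooling: the server URL and credentials are placeholders, and the /data/projects endpoint used here simply lists the projects hosted on the instance.

    import requests

    XNAT_URL = "https://xnat.example.org"  # placeholder server URL

    with requests.Session() as session:
        session.auth = ("username", "password")  # placeholder credentials
        # XNAT exposes its contents through REST endpoints such as /data/projects;
        # requesting JSON output keeps the response easy to parse.
        response = session.get(f"{XNAT_URL}/data/projects",
                               params={"format": "json"})
        response.raise_for_status()
        # XNAT wraps query results in a ResultSet/Result JSON structure.
        for project in response.json()["ResultSet"]["Result"]:
            print(project["ID"], project["name"])

Because such scripts rely only on documented REST endpoints rather than on XNAT internals, they remain portable across XNAT upgrades, which is the maintainability benefit noted above.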
Summary
Neuroimaging has taken a central role in research on Alzheimer’s disease (AD), as it has in neuroscience in general. The capacity to evaluate the results of any workflow and the capacity to identify and navigate through those results in a larger repository are tightly coupled. This is especially relevant for the workflows used in neuroimaging studies, which typically combine high complexity and heterogeneity (e.g., in the number of files and in the nature/structure of the data) on the one hand with, on the other, a high degree of expertise required to assess their outputs. Notwithstanding the preceding, existing frameworks may still fail to address the immediate, down-to-earth needs of small to average-sized research groups, especially those dealing with self-acquired imaging data. Implementing these frameworks or adapting them locally requires strong IT skills and a specialized labor force, making them technically out of reach for many groups with insufficient human and/or computational resources, or without connections to large consortia. Either, if poorly executed, may have a strong negative impact on reproducibility.