Using normative models pre-trained on cross-sectional data to evaluate intra-individual longitudinal changes in neuroimaging data

Longitudinal neuroimaging studies offer valuable insight into the intricate dynamics of brain development, ageing, and disease progression over time. However, prevailing analytical approaches rooted in our understanding of population variation are primarily tailored for cross-sectional studies. To fully harness the potential of longitudinal neuroimaging data, we have to develop and refine methodologies adapted to longitudinal designs, considering the complex interplay between population variation and individual dynamics.

We build on the normative modelling framework, which enables the evaluation of an individual's position relative to a population standard. We extend this framework to evaluate an individual's longitudinal change against the longitudinal change reflected by the (population) standard dynamics. Thus, we exploit existing normative models pre-trained on over 58,000 individuals and adapt the framework so that they can also be used in the evaluation of longitudinal studies. Specifically, we introduce a quantitative metric termed the "z-diff" score, which serves as an indicator of an individual's temporal change relative to a population standard. Notably, our framework offers advantages such as flexibility in dataset size and ease of implementation.

To illustrate our approach, we applied it to a longitudinal dataset of 98 patients diagnosed with early-stage schizophrenia who underwent MRI examinations shortly after diagnosis and one year later. Compared to cross-sectional analyses, which showed global thinning of grey matter at the first visit, our method revealed a significant normalisation of grey matter thickness in the frontal lobe over time. Furthermore, this result was not observed with more traditional methods of longitudinal analysis, suggesting that our approach is more sensitive to temporal changes.

Overall, our framework presents a flexible and effective methodology for analysing longitudinal neuroimaging data, providing insights into disease progression that would otherwise be missed by more traditional approaches.
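The abstract does not give the exact definition of the z-diff score, so the following is only an illustrative sketch of the general idea: score each visit against a pre-trained cross-sectional normative model (predicted mean and predictive standard deviation), then compare the change in deviation scores between visits. The rescaling by √2 below assumes independent visits; the actual method presumably accounts for within-subject correlation and the model's uncertainty structure.

```python
import numpy as np

def normative_z(y, mu, sigma):
    """Deviation (z-score) of an observation y from the normative
    prediction mu, with predictive standard deviation sigma."""
    return (y - mu) / sigma

def z_diff(y1, y2, mu1, mu2, sigma1, sigma2):
    """Illustrative change score (NOT the paper's exact formula):
    difference of the two visits' z-scores, rescaled to unit variance
    under an independence assumption between visits."""
    z1 = normative_z(y1, mu1, sigma1)
    z2 = normative_z(y2, mu2, sigma2)
    return (z2 - z1) / np.sqrt(2.0)

# Toy example: cortical thickness (mm) at two visits vs. hypothetical
# normative predictions for this subject's age/sex at each visit.
score = z_diff(y1=2.40, y2=2.48, mu1=2.50, mu2=2.47, sigma1=0.10, sigma2=0.10)
```

A positive score here would indicate that the individual moved upward relative to the normative trajectory between visits (e.g. the normalisation of thickness reported above), which can then be tested for significance across a patient group.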

Uncertainty-Adjusted Recommendation via Matrix Factorization With Weighted Losses.

In a recommender systems (RSs) dataset, observed ratings are subject to unequal amounts of noise. Some users might be consistently more conscientious in choosing the ratings they provide for the content they consume. Some items may be very divisive and elicit highly noisy reviews. In this article, we propose a nuclear-norm-based matrix factorization method that relies on side information in the form of an estimate of the uncertainty of each rating. A rating with higher uncertainty is considered more likely to be erroneous or subject to large amounts of noise, and therefore more likely to mislead the model. Our uncertainty estimate is used as a weighting factor in the loss we optimize. To maintain the favorable scaling and theoretical guarantees that come with nuclear norm regularization even in this weighted context, we introduce an adjusted version of the trace norm regularizer which takes the weights into account. This regularization strategy is inspired by the weighted trace norm, which was introduced to tackle nonuniform sampling regimes in matrix completion. Our method exhibits state-of-the-art performance on both synthetic and real-world datasets across various performance measures, confirming that the auxiliary uncertainty information is exploited successfully.
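The abstract does not specify the optimization details, so the following is only a minimal sketch of the core idea of a weighted-loss factorization: each observed rating's squared error is scaled by a per-rating weight (e.g. an inverse-uncertainty estimate), so high-uncertainty ratings pull less on the model. For a runnable toy we replace the paper's adjusted trace-norm regularizer with the standard factored surrogate (penalizing ||U||²_F + ||V||²_F, an upper bound on twice the nuclear norm of UV^T); all names and hyperparameters below are illustrative.

```python
import numpy as np

def weighted_mf(R, W, mask, rank=2, lam=0.02, lr=0.02, iters=4000, seed=0):
    """Gradient descent on a weighted matrix-factorization loss:
        0.5 * sum_{(i,j) observed} W_ij * (U V^T - R)_ij^2
        + (lam / 2) * (||U||_F^2 + ||V||_F^2)
    W_ij is a per-rating confidence weight (low weight = high uncertainty);
    mask is 1 on observed entries, 0 elsewhere."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(iters):
        E = mask * W * (U @ V.T - R)  # weighted residual on observed entries
        gU = E @ V + lam * U
        gV = E.T @ U + lam * V
        U -= lr * gU
        V -= lr * gV
    return U, V

# Toy example: a small, fully observed, rank-1 ratings matrix in which we
# have low confidence in one rating and downweight it accordingly.
R = np.outer(np.array([1.0, 0.8, 0.6]), np.array([1.0, 0.5, 0.9]))
W = np.ones_like(R)
W[0, 2] = 0.2                 # high-uncertainty rating gets a small weight
mask = np.ones_like(R)
U, V = weighted_mf(R, W, mask)
```

The design point the toy illustrates: the weights enter only through the data-fit term, so a noisy rating still contributes some signal but cannot dominate the fit; the paper's contribution of reweighting the regularizer itself is needed to keep nuclear-norm guarantees, which this factored surrogate does not attempt to reproduce.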