Abstract

This paper presents an adaptive-structure self-organizing finite mixture network for quantification of magnetic resonance (MR) brain image sequences. We present a justification for the use of the standard finite normal mixture model for MR images and formulate image quantification as a distribution learning problem. The finite mixture network parameters are updated so that the relative entropy between the true and estimated distributions is minimized. The new learning scheme achieves flexible classifier boundaries by forming "winner-takes-in" probability splits of the data, allowing each datum to contribute simultaneously to multiple regions. Hence, the result is unbiased and satisfies the asymptotic optimality properties of maximum likelihood. To achieve a fully automatic quantification procedure that can adapt to different slices in the MR image sequence, we utilize an information-theoretic criterion that we introduced recently, the minimum conditional bias/variance (MCBV) criterion. The MCBV criterion allows us to determine a suitable number of mixture components to represent the characteristics of each image in the sequence. We present examples showing that the new method yields more efficient and accurate performance than expectation-maximization, K-means, and competitive learning procedures.
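
For orientation, the core quantities referred to above can be written compactly as follows; the notation (mixing weights $\pi_k$, component parameters $\mu_k, \sigma_k^2$, and posterior weights $z_k$) is our own shorthand for the standard finite-mixture formulation and is not a reproduction of the paper's equations. The finite normal mixture fitted to the pixel-intensity distribution is

\[ f(x \mid \theta) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}\!\left(x \mid \mu_k, \sigma_k^2\right), \qquad \sum_{k=1}^{K} \pi_k = 1, \]

and the learning objective is the relative entropy (Kullback-Leibler divergence) between the true distribution $p$ and the model,

\[ D(p \,\|\, f) = \int p(x) \, \log \frac{p(x)}{f(x \mid \theta)} \, dx, \]

whose minimization is equivalent, up to a term independent of $\theta$, to maximizing the expected log-likelihood. In this reading, a "winner-takes-in" split assigns each datum a soft membership in every component,

\[ z_k(x) = \frac{\pi_k \, \mathcal{N}\!\left(x \mid \mu_k, \sigma_k^2\right)}{f(x \mid \theta)}, \]

so that a pixel contributes to all regions in proportion to its posterior probability rather than to a single winning component.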
