Abstract

A probabilistic neural network structure is designed for estimating the parameters of a standard finite normal mixture (SFNM) model in medical image analysis. The network employs an unsupervised learning scheme based on a unification of the Bayesian and least-relative-entropy principles; its Bayes and maximum-likelihood neurons adaptively update the local fuzzy membership variables in the classification space, allowing flexible boundary shapes. The optimal network size, and hence the number of regions in the SFNM model, is determined by several information-theoretic criteria, whose performances are compared on images with different stochastic characterizations. A Lloyd-Max quantizer is used to improve the initialization of this self-learning procedure. The technique is tested on both simulated and real medical images and is shown to be an efficient learning scheme.
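The pipeline sketched in the abstract can be illustrated in simplified form: a Lloyd-Max quantizer initializes the component parameters, EM-style updates estimate the SFNM parameters (a stand-in for the paper's Bayes/maximum-likelihood neuron updates), and information-theoretic criteria such as AIC and MDL select the number of regions. This is a minimal 1-D sketch under those assumptions, not the paper's actual network; all function names are hypothetical.

```python
import numpy as np

def lloyd_max_init(x, k, iters=20):
    # Lloyd-Max quantizer on 1-D intensities: alternate nearest-level
    # assignment with centroid updates (hypothetical initialization helper).
    levels = np.linspace(x.min(), x.max(), k)
    for _ in range(iters):
        idx = np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)
        for j in range(k):
            if np.any(idx == j):
                levels[j] = x[idx == j].mean()
    return levels, idx

def em_sfnm(x, k, iters=100):
    # EM estimation of a standard finite normal mixture (SFNM):
    # mixing weights pi, means mu, variances var.
    levels, idx = lloyd_max_init(x, k)
    pi = np.array([(idx == j).mean() for j in range(k)])
    mu = levels.copy()
    var = np.full(k, x.var() / k + 1e-6)
    for _ in range(iters):
        # E-step: posterior memberships (the "fuzzy" label variables)
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2.0 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the memberships
        nk = resp.sum(axis=0) + 1e-12   # guard against empty components
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    log_lik = np.log(dens.sum(axis=1)).sum()
    return pi, mu, var, log_lik

def pick_order(x, kmax=3):
    # Model-order selection via MDL (AIC computed alongside), one common
    # pair of the information-theoretic criteria the abstract refers to.
    best = None
    for k in range(1, kmax + 1):
        _, _, _, ll = em_sfnm(x, k)
        p = 3 * k - 1                            # free parameters
        aic = -2.0 * ll + 2.0 * p
        mdl = -ll + 0.5 * p * np.log(len(x))
        if best is None or mdl < best[1]:
            best = (k, mdl, aic)
    return best[0]
```

For well-separated intensity distributions, MDL typically recovers the true number of mixture components, mirroring the region-number selection described above.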
