Abstract

We have developed a semi-automatic method for multi-modality image segmentation that uses machine learning to reduce manual processing time while preserving human guidance. Rather than relying on heuristics, the method incorporates human oversight and expert training from images into logistic regression models. These models estimate, from the multi-modal image intensities, the probability of each voxel's tissue class assignment as well as the probability of a tissue boundary occurring between neighboring voxels. The regression models supply the parameters of a Conditional Random Field (CRF) framework whose energy function combines these regional and boundary probabilistic terms. Using this CRF, a max-flow/min-cut algorithm automatically segments the remaining slices of the 3D image set, with the option of additional user input. We apply this approach to segment visible tumors in multi-modal medical volumetric images.
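
The abstract does not state the energy function explicitly; the following is a minimal sketch of one plausible form, assuming a standard pairwise CRF formulation in which the symbols P_r (regional term), P_b (boundary term), λ (weighting parameter), and N (voxel neighborhood) are introduced here for illustration only:

\[
E(\mathbf{x}) \;=\; \sum_{i} -\log P_r\!\left(x_i \mid \mathbf{I}_i\right) \;+\; \lambda \sum_{(i,j)\in\mathcal{N}} -\log P_b\!\left(i,j \mid \mathbf{I}_i,\mathbf{I}_j\right)\,\big[x_i \neq x_j\big]
\]

Here x_i is the tissue label of voxel i, \mathbf{I}_i its multi-modal intensity vector, and [\cdot] the indicator function; under such a formulation, minimizing E with a max-flow/min-cut algorithm yields the segmentation.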
