Abstract

Multispectral optoacoustic tomography (MSOT) is a fast-developing imaging modality that combines the high contrast of optical tissue excitation with the high resolution and penetration depth of ultrasound detection. Since light is subject to absorption and scattering as it travels through tissue, adequate knowledge of the spatial fluence distribution is required to ensure the quantification accuracy of MSOT. To reduce the systematic errors in spectral recovery caused by fluence and to provide a visually more homogeneous image, fluence correction is commonly performed on reconstructed images using one of the state-of-the-art methods. These methods require, as input, information on the illumination geometry (known a priori from the system design) as well as a spatial reference of the object, in the form of either a binary map (assuming uniform optical properties) or, in the more complex scenario of multiple regions with different optical properties, a label map. Such a map is commonly generated by manual segmentation, delineating the outer border of the mouse body or the major organs present in the slice, a procedure that is time-consuming, inefficient, and prone to operator errors. Here we evaluate methods for semi- and fully-automatic segmentation of hybrid optoacoustic and ultrasound images and characterize their performance using quantitative metrics for medical image segmentation, evaluated against ground truth obtained by manual segmentation.
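As an illustrative sketch of the kind of quantitative metrics typically used to compare an automatic segmentation against a manually delineated ground-truth mask, the snippet below computes the Dice coefficient and the Jaccard (intersection-over-union) index for binary masks. The function names and the use of NumPy are our own assumptions for illustration; the abstract does not specify which metrics or implementation the authors used.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary masks (nonzero = foreground)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Two empty masks are treated as a perfect match.
    return 2.0 * intersection / total if total > 0 else 1.0

def jaccard_index(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union (Jaccard) between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0
    return float(np.logical_and(pred, truth).sum()) / union
```

For example, a predicted mask covering two pixels that overlaps a one-pixel ground truth in a single pixel yields a Dice score of 2/3 and a Jaccard index of 1/2; identical masks score 1.0 on both metrics.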
