Abstract

Interactive segmentation models have recently achieved remarkable success on biomedical images. However, these models rely on accurate, high-quality interaction information provided by users; otherwise, segmentation performance degrades severely. The problem is more acute for multi-target biomedical images, i.e., images that contain multiple targets of interest, where it is extremely challenging for users to consistently provide high-quality interactions. In this paper, we propose a novel two-stage segmentation model with robust interaction points for biomedical images. In the first stage, we derive robust interaction points from the user's initial interaction points using a deep reinforcement learning (DRL) model. Specifically, we build a reinforcement learning environment in which agents simulate the movement of interaction points, yielding improved interaction points (clue points) that benefit segmentation. In the second stage, a convolutional neural network (CNN) model performs segmentation by combining the clue points with the biomedical image. We validate our approach on five public biomedical image datasets. The experimental results show that the proposed approach outperforms several state-of-the-art (SOTA) methods across multiple metrics.
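As a rough illustration of the two-stage pipeline the abstract describes, the sketch below pairs a toy point-refinement environment (stage 1) with a small two-channel CNN (stage 2). Everything here is an assumption for illustration rather than the authors' implementation: the names `PointRefineEnv` and `ClueSegNet`, the distance-based reward shaping, and the greedy one-step lookahead that stands in for a trained DRL policy.

```python
import numpy as np
import torch
import torch.nn as nn


class PointRefineEnv:
    """Toy stage-1 environment: an agent nudges a user click one pixel per step.

    Reward shaping (assumed, not from the paper): change in negative distance
    to the nearest foreground pixel of the target mask.
    """
    MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

    def __init__(self, image, target_mask, start_point):
        self.image = image                   # (H, W) biomedical image
        self.mask = target_mask              # (H, W) binary target mask
        self.point = np.array(start_point)   # noisy initial click (row, col)

    def _score(self, p):
        # Negative distance to the nearest target pixel (toy shaping term).
        fg = np.argwhere(self.mask > 0)
        return -np.min(np.linalg.norm(fg - p, axis=1))

    def step(self, action):
        before = self._score(self.point)
        h, w = self.mask.shape
        self.point = np.clip(self.point + self.MOVES[action], 0, [h - 1, w - 1])
        reward = self._score(self.point) - before
        done = bool(self.mask[tuple(self.point)] > 0)  # point landed on target
        return self.point.copy(), reward, done


class ClueSegNet(nn.Module):
    """Toy stage-2 CNN: image + clue-point heatmap in, per-pixel logits out."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, image, clue_heatmap):
        x = torch.stack([image, clue_heatmap], dim=1)  # (B, 2, H, W)
        return self.net(x)


if __name__ == "__main__":
    img = np.zeros((32, 32), dtype=np.float32)
    mask = np.zeros((32, 32), dtype=np.float32)
    mask[10:20, 10:20] = 1.0                  # one target of interest
    env = PointRefineEnv(img, mask, start_point=(2, 2))

    for _ in range(64):                       # refinement budget
        # Greedy one-step lookahead stands in for the trained DRL policy.
        best = max(range(4), key=lambda a: env._score(
            np.clip(env.point + env.MOVES[a], 0, 31)))
        _, _, done = env.step(best)
        if done:
            break

    heat = torch.zeros(1, 32, 32)
    heat[0, env.point[0], env.point[1]] = 1.0  # encode clue point as heatmap
    logits = ClueSegNet()(torch.from_numpy(img)[None], heat)
    print("clue point:", env.point, "logits shape:", tuple(logits.shape))
```

In the paper's setting, a trained DRL agent would choose the moves and the authors' segmentation network would replace the toy CNN; the sketch only fixes the interface between the two stages, with refined clue points passed to the segmenter as an extra input channel.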
