Abstract
Objective. Automated segmentation of targets in ultrasound (US) images during US-guided liver surgery has the potential to help physicians rapidly locate critical structures such as blood vessels and lesions. However, this remains a challenging task, primarily because of US image-quality issues such as blurred edges and low contrast. In addition, studies specifically targeting liver segmentation are relatively scarce, possibly because deep abdominal organs are difficult to study under US. In this paper, we propose a network named BAG-Net to address these challenges and to accurately segment liver targets with varying morphologies, including lesions and blood vessels. Approach. BAG-Net combines a boundary detection module with a position module to locate the target, and multiple attention-guided modules with a deep supervision strategy to refine segmentation of the target area. Main Results. Our method was compared with other approaches and demonstrated superior performance on two liver US datasets. Specifically, it achieved 93.9% precision, 91.2% recall, 92.4% Dice coefficient, and 86.2% IoU for liver-tumor segmentation. We also evaluated the network's ability to segment tumors on the breast US dataset (BUSI), where it likewise achieved excellent results. Significance. The proposed method was validated to effectively segment liver targets with diverse morphologies, highlighting suspicious areas to help clinicians identify lesions and other characteristics. In the clinic, the method is anticipated to improve surgical efficiency during US-guided surgery.
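For reference, the Dice coefficient and IoU reported above have standard definitions for binary segmentation masks. The sketch below shows how these two metrics are conventionally computed; it is an illustration of the metric definitions only, not the authors' evaluation code, and the example masks are hypothetical.

```python
import numpy as np

def dice_iou(pred, target):
    """Compute Dice coefficient and IoU for two binary masks.

    Dice = 2|P ∩ T| / (|P| + |T|),  IoU = |P ∩ T| / |P ∪ T|.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum())
    iou = inter / np.logical_or(pred, target).sum()
    return float(dice), float(iou)

# Toy 2x3 masks (hypothetical): 2 overlapping pixels, 4 in the union.
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
dice, iou = dice_iou(pred, target)  # dice ≈ 0.667, iou = 0.5
```

Note that the two metrics are related by Dice = 2·IoU / (1 + IoU), which is why the paper's 92.4% Dice and 86.2% IoU values are mutually consistent.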