Abstract
With the wide application of computer-aided diagnosis in cervical cancer screening, cell segmentation has become a necessary step in determining the progression of cervical cancer. Traditional methods alleviate the dilemma caused by the shortage of medical resources to some extent, but their low segmentation accuracy on abnormal cells and complex processing pipelines prevent fully automatic diagnosis. Deep learning methods, in contrast, can automatically extract image features with high accuracy and small error, making artificial intelligence increasingly popular in computer-aided diagnosis. However, many of these models are unsuited to clinical practice because their complexity introduces a large number of redundant network parameters. To address these problems, this study proposes a lightweight feature attention network (LFANet) that extracts differentially abundant feature information from objects at various resolutions. The model can accurately segment both the nucleus and cytoplasm regions in cervical images. Specifically, a lightweight feature extraction module, combining depth-wise separable convolution, residual connections and an attention mechanism, is designed as the encoder to extract rich features from input images. In addition, a feature layer attention module is added to precisely recover pixel locations; it employs global high-level information as a guide for the low-level features, capturing dependencies among feature channels. Finally, LFANet is evaluated on four independent datasets. The experimental results demonstrate that, compared with other advanced methods, the proposed network achieves state-of-the-art performance at low computational complexity.
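To make the two building blocks named in the abstract concrete, the following is a minimal PyTorch sketch of (a) an encoder block combining depth-wise separable convolution, a residual connection and channel attention, and (b) a feature layer attention module in which a global high-level descriptor gates low-level features. The class names, channel sizes, and the SE-style form of the attention are hypothetical illustrations, not the paper's exact architecture.

```python
# Minimal sketch under stated assumptions; layer names and the exact
# attention form (squeeze-and-excitation here) are hypothetical.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """SE-style channel attention (an assumed form of the attention mechanism)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global average pooling
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)                              # re-weight channels


class LightweightBlock(nn.Module):
    """Encoder block: depth-wise separable conv + residual + channel attention."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch),  # depth-wise
            nn.Conv2d(in_ch, out_ch, 1),                          # point-wise
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            ChannelAttention(out_ch),
        )
        # 1x1 projection so the residual matches the output channel count
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        return self.body(x) + self.skip(x)                 # residual connection


class FeatureLayerAttention(nn.Module):
    """Decoder fusion: global high-level context guides low-level features."""
    def __init__(self, low_ch: int, high_ch: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                       # global high-level descriptor
            nn.Conv2d(high_ch, low_ch, 1),
            nn.Sigmoid(),
        )

    def forward(self, low, high):
        return low * self.gate(high)                       # channel-wise guidance


# Usage: shapes only, to show the blocks compose
block = LightweightBlock(32, 64)
y = block(torch.randn(1, 32, 128, 128))                    # -> (1, 64, 128, 128)
```

Depth-wise separable convolution factorizes a standard convolution into a per-channel spatial filter followed by a 1x1 point-wise mix, which is the usual route to the low parameter count the abstract claims; the residual path and channel gating are the standard ways to realize the stated residual connection and attention mechanism.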