Abstract
In the production of monoclonal antibody drugs, hybridoma cell screening is usually performed manually by visual inspection. This traditional screening method has clear limitations, including low efficiency and subjective bias. Furthermore, most existing deep learning-based image segmentation methods perform poorly on hybridoma cells because of their varied shapes and uneven spatial distribution. In this paper, we propose a deep hybridoma cell image segmentation method based on a residual and attention U-Net (RA-UNet). First, the feature maps of the five modules in the network encoder are fused across scales in a feature pyramid fashion and concatenated into the network decoder, enriching the semantic level of the decoder feature maps. Second, a dual attention module based on global and channel attention mechanisms is presented. The global attention mechanism (a non-local neural network) is connected to the network decoder to enlarge the receptive field of the feature maps and provide richer contextual information. The channel attention mechanism, SENet (the squeeze-and-excitation network), is then connected after the non-local attention mechanism, so that important features are enhanced and secondary features are suppressed by learning per-channel weights, improving cell segmentation performance and accuracy. Finally, the focal loss function is used to guide the network to learn hard-to-classify cell categories. We evaluate the proposed RA-UNet on a newly established hybridoma cell image dataset. Experimental results show that the proposed method is reliable and improves hybridoma cell segmentation compared with state-of-the-art networks such as FCN, UNet, and UNet++, with the RA-UNet model reaching 0.8937, 0.9926, 0.9512, and 0.9007 in terms of the Dice coefficient, pixel accuracy (PA), mean pixel accuracy (MPA), and mean intersection over union (MIoU), respectively.
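To make two of the standard building blocks named above concrete, the following is a minimal PyTorch sketch of a squeeze-and-excitation (SE) channel attention block and a binary focal loss. It illustrates the generic techniques only, not the authors' exact implementation; the class and function names, the reduction ratio, and the alpha/gamma values are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention (generic sketch)."""

    def __init__(self, channels: int, reduction: int = 16):  # reduction ratio is an assumption
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squeeze: global average pooling over the spatial dimensions -> (N, C)
        s = x.mean(dim=(2, 3))
        # Excitation: two fully connected layers produce per-channel weights in (0, 1)
        w = torch.sigmoid(self.fc2(F.relu(self.fc1(s))))
        # Scale: reweight channels so informative features are emphasised
        return x * w.view(x.size(0), -1, 1, 1)


def binary_focal_loss(logits: torch.Tensor, targets: torch.Tensor,
                      alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    """Binary focal loss: down-weights easy pixels so training focuses on hard ones."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)  # probability assigned to the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
```

In a decoder such as the one described in the abstract, an SE-style block would typically be applied to a feature map after the global attention stage, and the focal loss would replace or complement cross-entropy on the predicted segmentation mask.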