In recent years, UNET (Ronneberger et al. 2015) and its derivative models have been widely used in medical image segmentation, owing to their relatively simple structure and strong segmentation performance. However, because these models do not capture the overall characteristics of the target, segmentation of small targets often produces discrete noise points, which degrades model accuracy and limits practical application. We propose UoloNet, a multi-task medical image analysis model that adds a YOLO-based (Redmon et al. 2016; Shafiee et al. 2017) object detection branch to UNET. Joint learning of the two tasks, semantic segmentation and object detection, improves the model's grasp of the overall characteristics of the target. At the inference stage, fusing the outputs of the object detection and semantic segmentation branches effectively removes discrete noise points from the segmentation results and improves segmentation accuracy. In future work, we will address the problem of excessive convergence between the object detection and semantic segmentation tasks. The model uses the CIOU loss (Zheng et al. 2020) instead of the IOU loss in YOLO, which further improves the model's overall accuracy. The effectiveness of the proposed model is verified both on the MRI dataset SEHPI, which we release, and on the public dataset LITS (Christ 2017).
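
To make the inference-time fusion concrete, the following is a minimal sketch of one way to combine the two branches: segmentation pixels are kept only if they fall inside at least one detected bounding box, which suppresses isolated noise points. The helper name `filter_mask_with_boxes` and the (x1, y1, x2, y2) box format are illustrative assumptions, not the exact implementation.

```python
import numpy as np

def filter_mask_with_boxes(mask, boxes):
    """Keep only mask pixels that lie inside at least one detected box.

    mask  : (H, W) binary segmentation output.
    boxes : iterable of (x1, y1, x2, y2) detections in pixel coordinates.
    """
    keep = np.zeros_like(mask, dtype=bool)
    for x1, y1, x2, y2 in boxes:
        keep[y1:y2, x1:x2] = True       # region covered by this detection
    return np.where(keep, mask, 0)      # zero out pixels outside every box


# Example: two isolated noise pixels outside the detected box are removed.
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:5, 2:5] = 1      # target predicted by the segmentation branch
mask[0, 7] = 1          # discrete noise point
mask[7, 0] = 1          # discrete noise point
clean = filter_mask_with_boxes(mask, [(1, 1, 6, 6)])
assert clean[0, 7] == 0 and clean[7, 0] == 0 and clean[2:5, 2:5].all()
```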
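
For reference, the CIOU loss of Zheng et al. (2020) augments the IoU term with a center-distance penalty and an aspect-ratio consistency term:

\[
\mathcal{L}_{\mathrm{CIoU}} = 1 - \mathrm{IoU} + \frac{\rho^{2}(\mathbf{b}, \mathbf{b}^{gt})}{c^{2}} + \alpha v,
\qquad
v = \frac{4}{\pi^{2}}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^{2},
\qquad
\alpha = \frac{v}{(1 - \mathrm{IoU}) + v},
\]

where \(\mathbf{b}\) and \(\mathbf{b}^{gt}\) are the centers of the predicted and ground-truth boxes, \(\rho(\cdot)\) is the Euclidean distance, \(c\) is the diagonal length of the smallest box enclosing both, and \(w, h\) (resp. \(w^{gt}, h^{gt}\)) are the predicted (resp. ground-truth) box width and height.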