Abstract

Quality control/assessment of ultrasound (US) images is an essential step in clinical diagnosis. This process is usually performed manually and therefore suffers from several drawbacks, including dependence on the operator's experience, extensive labor, and high inter- and intra-observer variation. Automatic quality assessment of US images is therefore highly desirable. The fetal US cardiac four-chamber plane (CFP) is one of the most commonly used cardiac views and has been employed in the diagnosis of heart anomalies since the early 1980s. In this paper, we propose a generic deep learning framework for automatic quality control of fetal US CFPs. The proposed framework consists of three networks: (1) a basic CNN (B-CNN), which coarsely classifies four-chamber views from the raw data; (2) a deeper CNN (D-CNN), which assesses the gain and zoom of the target images in a multi-task learning manner; and (3) the aggregated residual visual block net (ARVBNet), which detects the key anatomical structures on a plane. Based on the outputs of the three networks, an overall quantitative score is computed for each CFP, achieving fully automatic quality control. Experiments on a fetal US dataset showed that the proposed method achieved a mean average precision (mAP) of up to 93.52% at a fast speed of 101 frames per second (FPS). To demonstrate its adaptability and generalization capacity, the proposed detection network (ARVBNet) was also validated on the PASCAL VOC dataset, obtaining a mAP of up to 81.2% with an input size of approximately 300 × 300.
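The sketch below illustrates how the three-stage pipeline described above could be wired together: a coarse view classifier, a multi-task gain/zoom network, and a detector whose structure count feeds an aggregated quality score. It is a minimal illustration in PyTorch, assuming hypothetical placeholder architectures (`BasicCNN`, `DeeperCNN`), a stand-in detection count in place of ARVBNet, and an invented weighting scheme in `quality_score`; the paper defines its own networks and scoring rule.

```python
# Minimal sketch of the three-stage CFP quality-control pipeline (assumptions noted above).
import torch
import torch.nn as nn


class BasicCNN(nn.Module):
    """B-CNN stand-in: coarse classification of four-chamber views (placeholder architecture)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(16, 2)  # four-chamber view vs. other

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))


class DeeperCNN(nn.Module):
    """D-CNN stand-in: multi-task heads for gain and zoom assessment (placeholder architecture)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.gain_head = nn.Linear(32, 2)   # proper / improper gain
        self.zoom_head = nn.Linear(32, 2)   # proper / improper zoom

    def forward(self, x):
        feat = self.backbone(x).flatten(1)
        return self.gain_head(feat), self.zoom_head(feat)


def quality_score(p_view, p_gain, p_zoom, n_structures, n_expected=4,
                  weights=(0.4, 0.2, 0.2, 0.2)):
    """Aggregate the three networks' outputs into a single score in [0, 1].

    The weights and the expected structure count are hypothetical examples,
    not the scheme used in the paper.
    """
    w_view, w_gain, w_zoom, w_det = weights
    return (w_view * p_view + w_gain * p_gain + w_zoom * p_zoom
            + w_det * min(n_structures / n_expected, 1.0))


if __name__ == "__main__":
    frame = torch.randn(1, 1, 300, 300)              # one grayscale US frame
    p_view = BasicCNN()(frame).softmax(1)[0, 1].item()
    gain_logits, zoom_logits = DeeperCNN()(frame)
    p_gain = gain_logits.softmax(1)[0, 1].item()
    p_zoom = zoom_logits.softmax(1)[0, 1].item()
    detected_structures = 3                          # stand-in for ARVBNet detections
    score = quality_score(p_view, p_gain, p_zoom, detected_structures)
    print(f"overall quality score: {score:.3f}")
```

The aggregation step is the key design point: each network contributes an independent assessment (correct view, proper gain, proper zoom, visible anatomy), so the final score remains interpretable even when one component fails.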
