Abstract

Objective: Automated medical image analysis solutions should closely mimic complete human actions to be useful in clinical practice. More often, however, an automated image analysis solution represents only part of a human task, which restricts its practical utility. In the case of ultrasound-based fetal biometry, an automated solution should ideally recognize key fetal structures in free-hand video guidance, select a standard plane from the video stream, and perform biometry. A complete automated solution should automate all three sub-actions.

Methods: This paper considers how to automate the complete human action of first-trimester biometry measurement from real-world freehand ultrasound. In the proposed hybrid Convolutional Neural Network (CNN) architecture, a classification-regression-based guidance model detects and tracks fetal anatomical structures (using visual cues) in the ultrasound video. Several high-quality standard planes containing the mid-sagittal view of the fetus are sampled at multiple timestamps (using a custom-designed confident-frame detector), based on the estimated probability values associated with the predicted anatomical structures that define the biometry plane. Automated semantic segmentation is performed on the selected frames to extract fetal anatomical landmarks. The crown-rump length (CRL) estimate is calculated as the mean CRL across these frames.

Results: Our fully automated method shows a high correlation with expert clinical CRL measurement (Pearson ρ = 0.92, R² = 0.84) and a low mean absolute error of 0.834 weeks for fetal age estimation on a test set of 42 videos.

Conclusion: The novel standard-plane detection algorithm employs a quality-detection mechanism defined by clinical standards, ensuring precise biometric measurements.
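The confident-frame sampling and CRL-averaging steps described in the Methods can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the probability threshold, and the top-k selection rule are all assumptions made for the example.

```python
from statistics import mean

def select_confident_frames(frame_probs, threshold=0.9, top_k=5):
    # Hypothetical confident-frame detector: keep frames whose predicted
    # standard-plane probability exceeds a threshold, then return the
    # indices of the top-k most confident frames (highest probability first).
    candidates = [i for i, p in enumerate(frame_probs) if p >= threshold]
    candidates.sort(key=lambda i: frame_probs[i], reverse=True)
    return candidates[:top_k]

def mean_crl(crl_per_frame, frame_indices):
    # Final CRL estimate: mean of the per-frame CRL measurements
    # taken on the selected confident frames.
    return float(mean(crl_per_frame[i] for i in frame_indices))
```

For example, given per-frame plane probabilities `[0.2, 0.95, 0.97, 0.5, 0.93]` and `top_k=2`, the detector would select frames 2 and 1, and the final CRL would be the mean of those two frames' measurements.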

