The aim of this work was to develop and compare methods that automatically estimate regional ultrasound image quality in echocardiography, separately from view correctness. Three methods for estimating image quality were developed: (i) a classic pixel-based metric: the generalized contrast-to-noise ratio (gCNR), computed between myocardial segments (region of interest) and the left-ventricle lumen (background), both extracted by a U-Net segmentation model; (ii) local image coherence: the average local coherence estimated by a U-Net model that predicts pixel-wise image coherence from B-mode ultrasound images; (iii) a deep convolutional network: an end-to-end deep-learning model that directly predicts the quality of each region in the image. These methods were evaluated against manual regional quality annotations provided by three experienced cardiologists. The results indicated poor performance of the gCNR metric, with a Spearman correlation to the annotations of ρ = 0.24. The end-to-end learning model obtained the best result, ρ = 0.69, comparable to the inter-observer correlation of ρ = 0.63. Finally, the coherence-based method, with ρ = 0.58, outperformed the classic pixel-based metric and was more generic than the end-to-end approach. In summary, the deep convolutional network provided the most accurate regional quality prediction, while the coherence-based method offered a more generalizable solution; gCNR showed limited effectiveness in this study. The image quality prediction tool is available as an open-source Python library at https://github.com/GillesVanDeVyver/arqee.
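To illustrate the first method, the gCNR of a region of interest against a background is one minus the overlap between the intensity histograms of the two regions, so it is 0 for identical distributions and approaches 1 for fully separated ones. The sketch below is a minimal, generic implementation on synthetic pixel samples; it is not the paper's code, and the region names and parameter values are illustrative assumptions, not taken from the study.

```python
import numpy as np

def gcnr(roi, background, bins=256):
    """Generalized contrast-to-noise ratio: 1 minus the overlap
    of the normalized intensity histograms of ROI and background."""
    lo = min(roi.min(), background.min())
    hi = max(roi.max(), background.max())
    p_roi, _ = np.histogram(roi, bins=bins, range=(lo, hi))
    p_bg, _ = np.histogram(background, bins=bins, range=(lo, hi))
    p_roi = p_roi / p_roi.sum()   # normalize to probability distributions
    p_bg = p_bg / p_bg.sum()
    return 1.0 - np.minimum(p_roi, p_bg).sum()

rng = np.random.default_rng(0)
# Hypothetical pixel samples standing in for a myocardial segment (ROI)
# and the left-ventricle lumen (background); real values would come from
# a segmentation model applied to a B-mode image.
myocardium = rng.normal(120, 20, 10_000)  # brighter tissue speckle
lumen = rng.normal(30, 10, 10_000)        # darker blood pool
print(f"{gcnr(myocardium, lumen):.3f}")   # close to 1: well-separated regions
```

In practice, as in the study, the ROI would be each myocardial segment from the segmentation model and the score would be computed per region, yielding a regional quality map.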