Abstract

Ultrasound (US) is a medical imaging modality widely used for diagnosis, monitoring, and guidance of surgical procedures. However, accurate interpretation of US images is a challenging task. Recently, portable 2D US devices enhanced with artificial intelligence (AI) methods to identify specific organs in real time have been spreading rapidly worldwide. Nevertheless, the number of available methods that work effectively on such devices is still limited. In this work, we evaluate the performance of the U-Net architecture for segmenting the kidney in 2D US images. To accomplish this task, we studied the possibility of using multiple sliced images extracted from 3D US volumes to build a large, variable, and multi-view dataset of 2D images. The proposed methodology was tested with a dataset of 66 3D US volumes, divided into 51 for training, 5 for validation, and 10 for testing. From these volumes, 3792 2D sliced images were extracted. Two experiments were conducted: (i) using the entire database (WWKD); and (ii) using only images where the kidney area exceeds 500 mm² (500KD). As a proof of concept, the potential of our strategy was also tested on real 2D images (acquired with 2D probes). An average error of 2.88 ± 2.63 mm was registered on the testing dataset. Moreover, satisfactory results were obtained in our initial proof of concept using pure 2D images. In short, this preliminary study demonstrated the potential interest of the proposed method for clinical practice. Further studies are required to evaluate its real performance.

Clinical Relevance- In this work, a deep learning methodology to segment the kidney in 2D US images is presented. It may be a relevant feature to include in recent portable US ecosystems, easing image interpretation and, consequently, clinical analysis.
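The dataset-construction strategy described in the abstract (slicing 3D US volumes into 2D images, optionally keeping only slices whose kidney area exceeds 500 mm², as in the 500KD experiment) can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: it assumes NumPy arrays, a known in-plane pixel spacing, and co-registered binary kidney masks; all function and variable names are hypothetical.

```python
import numpy as np

def extract_slices(volume, mask, spacing_mm, min_area_mm2=None):
    """Slice a 3D US volume into 2D image/label pairs along the first axis.

    volume, mask : 3D arrays of shape (slices, H, W); mask is a binary
                   kidney segmentation aligned with the volume.
    spacing_mm   : (row, col) in-plane pixel spacing in millimetres.
    min_area_mm2 : if set, keep only slices whose kidney area is larger
                   than this threshold (e.g. 500 for a 500KD-style subset);
                   if None, keep every slice (WWKD-style subset).
    """
    pixel_area = spacing_mm[0] * spacing_mm[1]  # mm^2 covered by one pixel
    images, labels = [], []
    for img, lab in zip(volume, mask):
        area_mm2 = lab.sum() * pixel_area  # kidney area in this slice
        if min_area_mm2 is None or area_mm2 > min_area_mm2:
            images.append(img)
            labels.append(lab)
    return np.array(images), np.array(labels)
```

With a threshold of 500 the function would discard slices that graze the kidney and contain little or no organ tissue, while calling it without a threshold keeps the whole volume, mirroring the two experimental subsets.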
