Abstract
To determine the feasibility of using a deep learning (DL) algorithm to assess the quality of focused assessment with sonography in trauma (FAST) exams. Our dataset consists of 441 FAST exams, classified as good-quality or poor-quality, comprising 3161 videos. We first used convolutional neural networks (CNNs), pretrained on the ImageNet dataset and fine-tuned on the FAST dataset. Second, we trained a CNN autoencoder to compress FAST images at a 20:1 compression ratio. The compressed codes were input to a two-layer classifier network. To train the networks, each video was labeled with the quality of the exam, and the frames were labeled with the quality of the video. For inference, a video was classified as poor-quality if at least half of its frames were classified as poor-quality by the network, and an exam was classified as poor-quality if at least half of its videos were classified as poor-quality. The results with the encoder-classifier networks were much better than the transfer learning results with CNNs, primarily because the ImageNet dataset is not a good match for the ultrasound quality assessment problem. The DL models produced video sensitivities and specificities of 99% and 98% on held-out test sets. Using an autoencoder to compress FAST images is a very effective way to obtain features that can be used to predict exam quality, and these features are more suitable than those obtained from CNNs pretrained on ImageNet.
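The two-stage inference rule described above (frame predictions aggregated to a video label, video labels aggregated to an exam label) can be sketched as follows. This is a minimal illustration, not the authors' code; the function names are hypothetical, and it assumes binary per-frame predictions where 1 denotes poor-quality.

```python
def classify_video(frame_preds):
    """A video is poor-quality if at least half of its frames are
    predicted poor-quality (1 = poor-quality, 0 = good-quality)."""
    return int(sum(frame_preds) >= 0.5 * len(frame_preds))

def classify_exam(videos):
    """An exam is poor-quality if at least half of its videos are
    classified poor-quality by the rule above."""
    video_labels = [classify_video(v) for v in videos]
    return int(sum(video_labels) >= 0.5 * len(video_labels))

# Example: an exam with three videos of per-frame network predictions.
exam = [
    [1, 1, 0, 0],  # 2/4 frames poor -> video poor-quality
    [0, 0, 0, 1],  # 1/4 frames poor -> video good-quality
    [1, 1, 1, 0],  # 3/4 frames poor -> video poor-quality
]
print(classify_exam(exam))  # 2/3 videos poor -> exam poor-quality (1)
```

A simple majority threshold like this makes the video- and exam-level decisions interpretable and requires no additional trained parameters beyond the frame classifier.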