Abstract

When assessing the severity of COVID-19 from lung ultrasound (LUS) frames, both anatomical phenomena (e.g., the pleural line, the presence of consolidations) and sonographic artifacts, such as A-lines and B-lines, are of importance. While ultrasound devices aim to provide an accurate visualization of the anatomy, the orientation of the sonographic artifacts differs between probe types. This difference poses a challenge in designing a unified deep artificial neural network capable of handling all probe types. In this work we improve upon Roy et al. (2020): we train a simple deep neural network to assess the severity of COVID-19 from LUS data. To address the challenge of handling both linear and convex probes in a unified manner, we employ two strategies. First, we augment the input frames of convex probes with a "rectified" version in which A-lines and B-lines assume a horizontal/vertical aspect close to that achieved with linear probes. Second, we explicitly inform the network of the presence of important anatomical features and artifacts: we use a known Radon-based method to detect the pleural line and B-lines, and feed the detected lines as inputs to the network. Preliminary experiments yielded F1 = 68.7%, compared to F1 = 65.1% reported by Roy et al.
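
To make the first strategy concrete, the sketch below (an illustration under stated assumptions, not the authors' exact implementation) rectifies a convex-probe frame by inverse scan conversion: each (range, angle) sample of the output grid is pulled from the corresponding Cartesian position in the fan image, so arcs at constant depth (the pleural line, A-lines) become horizontal rows and rays emanating from the apex (B-lines) become vertical columns. The apex position, radial extent, and opening angle are hypothetical parameters that would in practice come from probe metadata or be estimated from the fan boundary.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def rectify_convex_frame(frame, apex_xy, r_min, r_max, half_angle,
                         out_shape=(256, 256)):
    """Map a convex-probe (fan-shaped) LUS frame onto a rectangular
    range-by-angle grid, so A-lines become horizontal and B-lines vertical.

    frame      : 2-D grayscale image containing the fan.
    apex_xy    : (x, y) pixel position of the fan apex (assumed known).
    r_min/max  : radial distance (pixels) of the top/bottom of the fan.
    half_angle : half the fan opening angle, in radians.
    """
    n_r, n_theta = out_shape
    radii = np.linspace(r_min, r_max, n_r)
    angles = np.linspace(-half_angle, half_angle, n_theta)
    r, th = np.meshgrid(radii, angles, indexing="ij")
    # Polar -> Cartesian sampling coordinates in the source image
    # (beam axis pointing down, angle measured from the vertical).
    xs = apex_xy[0] + r * np.sin(th)
    ys = apex_xy[1] + r * np.cos(th)
    # Bilinear interpolation at the sub-pixel sampling positions.
    return map_coordinates(frame, [ys, xs], order=1, mode="constant")
```

The rectified frame can then be stacked with the original as an extra input channel, which is one plausible way to realize the augmentation described above.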
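The second strategy can be sketched in a similarly hedged way: bright straight lines concentrate energy into isolated peaks of the Radon transform, so in a rectified frame the pleural line and B-lines appear as peaks near projection angles of 90 and 0 degrees, respectively. The function, angle windows, and peak thresholds below are illustrative assumptions, not the specific Radon-based detector used in the paper.

```python
import numpy as np
from scipy.signal import find_peaks
from skimage.transform import radon

def detect_lines_radon(rect_frame):
    """Locate the pleural line and candidate B-lines in a rectified frame.

    In the rectified (range x angle) image the pleural line is roughly
    horizontal and B-lines roughly vertical, so they show up as peaks of
    the Radon transform near projection angles of 90 and 0 degrees.
    """
    img = rect_frame.astype(float)
    img -= img.mean()                                # suppress the DC term
    sinogram = radon(img, theta=np.arange(180.0), circle=False)

    # Pleural line: strongest response among near-horizontal projections.
    horiz = sinogram[:, 85:96]
    pleural_offset = np.unravel_index(horiz.argmax(), horiz.shape)[0]

    # B-lines: isolated peaks among near-vertical projections. A real
    # detector would tune the distance/height thresholds per dataset.
    vert = sinogram[:, :6].max(axis=1)
    b_line_offsets, _ = find_peaks(vert, distance=10,
                                   height=vert.mean() + 2 * vert.std())
    # Offsets live in centered projection coordinates and still need to
    # be mapped back to image rows (pleural line) / columns (B-lines).
    return pleural_offset, b_line_offsets
```

The detected lines, rendered as binary masks, could then be concatenated to the network input, one plausible reading of "feeding the detected lines as inputs to the network".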
