Since the outbreak of COVID-19, efforts have been made towards semi-quantitative analysis of lung ultrasound (LUS) data to assess the patient's condition. Several methods have been proposed in this regard, with a focus on frame-level analysis, which was then used to assess the condition at the video and prognostic levels. However, no extensive work has been done to analyze lung conditions directly at the video level. This study proposes a novel method for video-level scoring based on compressing LUS video data into a single image and automatically classifying it to assess the patient's condition. The method applies maximum, mean, and minimum intensity projections to the LUS video data over time. This preserves hyper- and hypo-echoic regions while compressing the video into at most three images. The resulting images are then classified using a convolutional neural network (CNN). Finally, the worst score predicted among the images is assigned to the corresponding video. The results show that this compression technique achieves promising agreement at the prognostic level (81.62%), while the video-level agreement remains comparable with the state of the art (46.19%). In conclusion, the proposed method lays the foundation for LUS video compression, shifting from frame-level to direct video-level analysis of LUS data.
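The temporal intensity projections described above can be illustrated with a minimal sketch, assuming the LUS video is available as a NumPy array of shape (frames, height, width); the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def compress_lus_video(video: np.ndarray) -> np.ndarray:
    """Collapse an LUS video along the time axis into three images:
    the maximum, mean, and minimum intensity projections."""
    max_ip = video.max(axis=0)    # retains hyper-echoic (bright) regions
    mean_ip = video.mean(axis=0)  # summarizes overall echo intensity
    min_ip = video.min(axis=0)    # retains hypo-echoic (dark) regions
    return np.stack([max_ip, mean_ip, min_ip])  # shape: (3, height, width)

# Each projection image would then be scored by a CNN, and the worst
# (most severe) of the three predicted scores assigned to the video.
```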