Motion mode (M-mode) echocardiography is essential for measuring cardiac dimensions and ejection fraction. However, current diagnosis is time-consuming and subject to variance in accuracy across observers. This work builds an automatic scheme, driven by carefully designed and trained deep learning, to address these issues. Specifically, we propose RAMEM, an automatic scheme for real-time M-mode echocardiography, which makes three contributions: 1) we provide MEIS, the first dataset of M-mode echocardiograms, to enable consistent results and support the development of automatic schemes. Detecting objects accurately in echocardiograms requires a large receptive field that covers the long-range diastole-to-systole cycle, yet the limited receptive field of typical convolutional neural network (CNN) backbones and the risk of information loss in non-local block (NL) equipped CNNs both threaten this accuracy requirement. Therefore, we 2) propose panel attention, embedded in the updated UPANets V2 convolutional backbone, within a real-time instance segmentation (RIS) scheme to boost large-object detection performance; and 3) introduce AMEM, an efficient algorithm for automatic M-mode echocardiography measurement, to enable automatic diagnosis. Experimental results show that RAMEM surpasses existing RIS schemes (CNNs with NL or Transformer backbones) on PASCAL 2012 SBD and exceeds human performance on MEIS.
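For context, the non-local block referenced above is the generic self-attention operation applied to CNN feature maps, in which every spatial position attends to every other position and thus gains a global receptive field in a single step. The following is a minimal PyTorch sketch of that standard formulation, not the panel attention proposed in RAMEM; the module name, channel-reduction factor, and example shapes are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalBlock(nn.Module):
    # Generic non-local (self-attention) block over a CNN feature map.
    # Every spatial position attends to every other position, so the
    # effective receptive field spans the whole frame in one operation.
    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        inner = channels // reduction          # channel reduction (assumed factor)
        self.theta = nn.Conv2d(channels, inner, kernel_size=1)  # query projection
        self.phi = nn.Conv2d(channels, inner, kernel_size=1)    # key projection
        self.g = nn.Conv2d(channels, inner, kernel_size=1)      # value projection
        self.out = nn.Conv2d(inner, channels, kernel_size=1)    # restore channel count

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.phi(x).flatten(2)                      # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)        # (B, HW, C')
        attn = F.softmax(q @ k, dim=-1)                 # (B, HW, HW) pairwise affinities
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                          # residual connection

# Example: a hypothetical feature map from one backbone stage
feat = torch.randn(1, 64, 32, 32)
print(NonLocalBlock(64)(feat).shape)                    # torch.Size([1, 64, 32, 32])

The HW-by-HW affinity matrix is what supplies the global receptive field that a plain convolutional backbone lacks, while the channel reduction and flattening inside the block are one commonly cited source of the information-loss risk the abstract mentions.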