Breast cancer is the most prevalent cancer among women worldwide. B-mode ultrasound (US) is essential for early detection, offering high sensitivity and specificity without radiation exposure. This study introduces a semi-automatic method to streamline breast US report generation, aiming to reduce the burden on radiologists. Our method synthesizes comprehensive breast US reports by combining information extracted from radiologists' annotations during routine screenings with the analysis results from deep learning algorithms applied to multimodal US images. Key modules in our method include image classification using visual features (ICVF), type classification via deep learning (TCDL), and automatic report structuring and compilation (ARSC). Experiments showed that the proposed method reduced the average report generation time to 3.8 min, compared with the manual process, even when using relatively low-spec hardware. Generated reports matched the ground-truth reports for suspicious masses without a single failure on our evaluation datasets. Additionally, the deep-learning-based algorithm, using DenseNet-121 as its core model, achieved an overall accuracy of 0.865, precision of 0.868, recall of 0.847, F1-score of 0.856, and area under the receiver operating characteristic curve of 0.92 in classifying tissue stiffness in breast US shear-wave elastography (SWE-mode) images. These improvements not only streamline the report generation process but also allow radiologists to dedicate more time and attention to patient care, ultimately enhancing clinical outcomes and patient satisfaction.
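
The abstract states that the TCDL module classifies tissue stiffness in SWE-mode images with DenseNet-121 as its core model. The following is a minimal sketch of such a classifier, assuming a PyTorch implementation; the number of stiffness classes, the input resolution, and the pretrained weights are illustrative assumptions rather than details reported in the paper.

```python
# Hedged sketch of a DenseNet-121-based stiffness classifier for SWE-mode images.
# Assumptions (not from the paper): PyTorch/torchvision, two stiffness classes,
# ImageNet-pretrained backbone, 3-channel 224x224 inputs.
import torch
import torch.nn as nn
from torchvision import models

NUM_STIFFNESS_CLASSES = 2  # assumption: e.g., soft vs. stiff tissue


def build_swe_classifier(num_classes: int = NUM_STIFFNESS_CLASSES) -> nn.Module:
    """Return a DenseNet-121 backbone with a replaced classification head."""
    model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
    in_features = model.classifier.in_features
    model.classifier = nn.Linear(in_features, num_classes)
    return model


if __name__ == "__main__":
    model = build_swe_classifier()
    # SWE-mode images are color-mapped, so a 3-channel input is assumed here.
    dummy = torch.randn(1, 3, 224, 224)
    logits = model(dummy)
    print(logits.shape)  # torch.Size([1, 2])
```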