Abstract

Glioma grading during surgery can help clinical treatment planning and prognosis, but intraoperative pathological examination of frozen sections is limited by long processing times and complex procedures. Near-infrared fluorescence imaging offers an opportunity for fast and accurate real-time diagnosis. Recently, deep learning techniques have been actively explored for medical image analysis and disease diagnosis. However, limitations of near-infrared fluorescence images, including small scale, noise, and low resolution, increase the difficulty of training a satisfactory network. Multi-modal imaging can provide complementary information to boost model performance, but it is challenging to design a proper network while fully utilizing the information in multi-modal data. In this work, we propose a novel neural architecture search method, DLS-DARTS, to automatically search for network architectures to handle these issues. DLS-DARTS has two learnable stems for multi-modal low-level feature fusion and uses a modified perturbation-based derivation strategy to improve performance in terms of area under the curve (AUC) and accuracy. White light imaging and fluorescence imaging in the first near-infrared window (650-900 nm) and the second near-infrared window (1,000-1,700 nm) are applied to provide multi-modal information on glioma tissues. In experiments on 1,115 surgical glioma specimens, DLS-DARTS achieved an AUC of 0.843 and an accuracy of 0.634, outperforming manually designed convolutional neural networks including ResNet, PyramidNet, and EfficientNet, as well as a state-of-the-art neural architecture search method for multi-modal medical image classification. Our study demonstrates that DLS-DARTS has the potential to assist neurosurgeons during surgery, showing high promise for medical image analysis.
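To make the dual-stem idea concrete, the following minimal PyTorch sketch shows one plausible way two learnable stems could extract and fuse low-level features from white light and fluorescence inputs before a shared, searched backbone. The channel counts, two-channel fluorescence input (NIR-I plus NIR-II), concatenation-based fusion, and class names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DualLearnableStem(nn.Module):
    """Two learnable stems, one per modality, whose low-level features are
    fused before being passed to a shared (architecture-searched) backbone.
    All sizes and the fusion operator are illustrative assumptions."""

    def __init__(self, stem_channels: int = 32):
        super().__init__()
        # Stem for white light images (3-channel RGB)
        self.stem_wl = nn.Sequential(
            nn.Conv2d(3, stem_channels, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(stem_channels),
            nn.ReLU(inplace=True),
        )
        # Stem for fluorescence images (assumed 2 channels: NIR-I and NIR-II)
        self.stem_fl = nn.Sequential(
            nn.Conv2d(2, stem_channels, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(stem_channels),
            nn.ReLU(inplace=True),
        )
        # 1x1 convolution fuses the concatenated low-level features
        self.fuse = nn.Conv2d(2 * stem_channels, stem_channels, kernel_size=1, bias=False)

    def forward(self, white_light: torch.Tensor, fluorescence: torch.Tensor) -> torch.Tensor:
        f_wl = self.stem_wl(white_light)
        f_fl = self.stem_fl(fluorescence)
        return self.fuse(torch.cat([f_wl, f_fl], dim=1))


# Usage: the fused feature map would feed the DARTS-searched cell stack (not shown)
stems = DualLearnableStem(stem_channels=32)
wl = torch.randn(1, 3, 224, 224)   # white light image
fl = torch.randn(1, 2, 224, 224)   # NIR-I + NIR-II fluorescence channels
features = stems(wl, fl)           # shape: (1, 32, 112, 112)
```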
