Abstract

3D object detection and recognition are crucial tasks for many spatiotemporal processing applications, such as computer-aided diagnosis and autonomous driving. Although prevalent 3D Convolution Nets (ConvNets) have continued to improve accuracy and sensitivity, they require excessive computing resources. In this paper, we propose Leaky Integrate and Fire Networks (LIF-Nets) for 3D detection and recognition tasks. LIF-Nets have a rich inter-frame sensing capability, provided by their membrane potentials, and a low-power event-driven mechanism, which together make them excel at 3D processing while saving computational cost. We also develop ResLIF Blocks to solve the degradation problem of deep LIF-Nets, and employ a U-LIF structure to improve feature representation capability. We carry out experiments on the LUng Nodule Analysis 2016 (LUNA16) public dataset for automated chest CT analysis and find that LIF-Nets achieve 94.6% detection sensitivity at 8 false positives per scan and 94.14% classification accuracy, while the LIF detection net reduces multiplication operations by 65.45%, addition operations by 65.12%, and network parameters by 65.32%. The results show that LIF-Nets deliver extraordinary time-efficient and energy-saving performance while achieving comparable accuracy.
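The membrane-potential mechanism behind LIF-Nets can be illustrated with a minimal leaky integrate-and-fire neuron. This is a generic sketch, not the paper's exact formulation: the decay factor, firing threshold, and hard-reset rule are assumptions chosen for illustration.

```python
import numpy as np

def lif_forward(inputs, decay=0.9, threshold=1.0):
    """Simulate one leaky integrate-and-fire neuron over T time steps.

    inputs: array of shape (T,) holding the input current at each step.
    Returns the binary spike train and the membrane potential trace.
    (Illustrative only; decay/threshold/reset are assumed values.)
    """
    v = 0.0
    spikes, potentials = [], []
    for i in inputs:
        v = decay * v + i          # leaky integration of the input current
        if v >= threshold:         # fire once the potential crosses threshold
            spikes.append(1)
            v = 0.0                # hard reset after a spike
        else:
            spikes.append(0)
        potentials.append(v)
    return np.array(spikes), np.array(potentials)

spk, pot = lif_forward(np.array([0.6, 0.6, 0.6, 0.0, 0.9]))
```

Because the potential carries over between steps, the neuron integrates information across consecutive frames of a 3D volume, which is the inter-frame sensing capability the abstract refers to; the event-driven (spike-only) output is what saves multiplication operations.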

Highlights

  • Advanced computer-aided diagnosis systems (CADs) using deep learning to detect and recognize lung nodules have been developed in recent years, helping free radiologists from time-consuming work and reducing interobserver variability [1]

  • The LUng Nodule Analysis 2016 (LUNA16) dataset is used in this work; it includes 1186 nodule labels in 888 patient scans annotated by radiologists

  • This is the first work to apply Spiking Neural Networks (SNNs) to 3D object detection and 3D computer-aided diagnosis, achieving comparable accuracy while reducing computational cost by up to 65%


Introduction

Advanced computer-aided diagnosis systems (CADs) using deep learning to detect and recognize lung nodules have been developed in recent years, helping free radiologists from time-consuming work and reducing interobserver variability [1]. Solving automated CT analysis problems in the real world requires a sophisticated model with a vast number of parameters, resulting in substantial computation overhead and power consumption. The technical difficulty is compounded by the fact that nodules have variable sizes and shapes and appear similar to normal tissues. To extract better nodule-sensitive features from 3D CT images, state-of-the-art frameworks often utilize a 3D region proposal network (RPN) [10] for nodule screening [11]–[13], followed by a 3D classifier for false positive reduction [2], [14] or malignancy evaluation [15], [16].
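The two-stage pipeline described above (candidate screening, then false positive reduction) can be sketched schematically. The function names, patch size, and scoring rule below are illustrative assumptions standing in for the learned 3D RPN and 3D classifier, not the paper's implementation:

```python
import numpy as np

def propose_candidates(volume, stride=32, top_k=5):
    """Stand-in for a 3D RPN: score non-overlapping sub-volumes and keep
    the highest-scoring positions as nodule candidates (illustrative)."""
    d, h, w = volume.shape
    candidates = []
    for z in range(0, d - stride + 1, stride):
        for y in range(0, h - stride + 1, stride):
            for x in range(0, w - stride + 1, stride):
                patch = volume[z:z+stride, y:y+stride, x:x+stride]
                score = float(patch.mean())   # proxy for RPN objectness
                candidates.append(((z, y, x), score))
    return sorted(candidates, key=lambda c: -c[1])[:top_k]

def reduce_false_positives(candidates, threshold=0.01):
    """Stand-in for the 3D classifier: keep only candidates whose score
    exceeds a confidence threshold (illustrative)."""
    return [c for c in candidates if c[1] > threshold]

vol = np.zeros((64, 64, 64), dtype=np.float32)
vol[10:20, 10:20, 10:20] = 1.0                # synthetic "nodule" region
kept = reduce_false_positives(propose_candidates(vol))
```

In the real pipeline, both stages are learned 3D networks; the point of LIF-Nets is to replace these dense 3D ConvNet stages with spiking counterparts that process the same candidate volumes at a fraction of the arithmetic cost.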


