<span>Lung cancer is a leading cause of cancer deaths worldwide, with an estimated 2 million new cases and 1.76 million deaths each year. Early detection can improve survival, and CT is a precise imaging technique for diagnosing lung cancer. However, analyzing hundreds of 2D CT slices is challenging and can produce false alarms; 3D visualization of lung nodules can aid clinicians in detection and diagnosis. The MobileNet model integrates multi-view and multi-scale nodule features using depthwise separable convolutional layers, which factor a standard convolution into a depthwise convolution followed by a pointwise convolution to reduce computational cost. Compared with other state-of-the-art deep neural networks, this factorization allows MobileNet to achieve a much lower computational cost while maintaining a competitive degree of accuracy. Finally, the 3D pulmonary nodule models were generated using a ray-casting volume rendering approach. The proposed approach was evaluated on the LIDC dataset of 986 nodules. Experimental results show that MobileNet provides strong segmentation performance on the LIDC dataset, with an accuracy of 93.3%. The study demonstrates that MobileNet detects and segments lung nodules somewhat better than earlier methods. As a result, the proposed system provides automated 3D visualization of lung cancer tumors.</span>
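The cost saving from the depthwise/pointwise factorization described above can be sketched by counting parameters. This is a minimal illustration, not the paper's implementation; the channel counts and kernel size below are assumed values chosen only for the example.

```python
# Sketch: parameter count of a standard convolution versus a depthwise
# separable convolution (depthwise + pointwise), the factorization used
# in MobileNet-style layers. Sizes below are illustrative assumptions.

def standard_conv_params(c_in, c_out, k):
    # One k x k filter per (input channel, output channel) pair.
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise stage: one k x k filter per input channel.
    # Pointwise stage: a 1 x 1 convolution that mixes channels.
    return k * k * c_in + c_in * c_out

c_in, c_out, k = 64, 128, 3  # assumed example sizes
std = standard_conv_params(c_in, c_out, k)        # 73728 parameters
sep = depthwise_separable_params(c_in, c_out, k)  # 8768 parameters
print(std, sep, round(sep / std, 3))
```

For these example sizes the separable form needs roughly 12% of the parameters of the standard convolution, matching the well-known reduction factor of about 1/c_out + 1/k².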