ABSTRACT In recent years, skin cancer has become one of the most dangerous diseases observed worldwide, and early identification is essential to reduce mortality. Dermoscopic images enable effective identification and categorisation of skin cancer, but visual evaluation of dermoscopic images is a complex procedure. Deep learning (DL) is an efficient approach to skin cancer detection; however, segmenting the skin lesion and automatically localising it at an early stage remain complicated. In this paper, a novel Ladybug Beetle Optimization-Double Attention Based Multilevel 1-D CNN (LBO-DAM 1-D CNN) technique is proposed to detect and classify skin cancer. To improve the discriminability of skin lesion types, two attention modules are introduced. First, the Ultra-Lightweight Subspace Attention Module (ULSAM) classifies the feature maps into different stages to validate the frequency information across image samples, and its output is combined with hierarchical complementarity during classification to minimise data loss. Second, a modified multilayer perceptron attention module (MLPAM) extracts significant feature spaces for network learning, selects the most important information for skin cancer classification, and suppresses noise, unwanted data, and feature-space redundancy. The Ladybug Beetle Optimization (LBO) algorithm then yields the optimal classification solution by minimising the loss rate of the DAM 1-D CNN architecture. Experiments are conducted on three datasets: ISIC2020, HAM10000, and a melanoma detection dataset. The proposed method is compared with existing methods, namely IMFO-KELM, Mask RCNN, M-SVM, DCNN-9, and TL-CNN; on the ISIC2020 dataset these methods attained accuracies of 94.56%, 92.65%, 90.56%, 88.65%, and 95.5%, respectively, whereas the proposed method improved classification performance to 97.02%. The method is further validated in terms of accuracy, precision, sensitivity, and F1-score, achieving 97.03%, 97.05%, 97.58%, and 97.27%, respectively, over 500 training epochs.
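
As a rough illustration of the double-attention 1-D CNN described above, the following PyTorch sketch pairs a subspace (ULSAM-style) attention with an MLP (MLPAM-style) channel attention inside a small 1-D convolutional classifier. All module internals, layer sizes, and group counts are illustrative assumptions rather than the authors' exact architecture, and the LBO optimisation stage is omitted.

```python
# Minimal sketch of a double-attention 1-D CNN classifier.
# Layer sizes, group counts, and attention internals are assumptions,
# not the paper's exact LBO-DAM 1-D CNN architecture.
import torch
import torch.nn as nn

class SubspaceAttention1D(nn.Module):
    """ULSAM-style attention (simplified): split the channels into
    subspaces and re-weight each subspace with its own attention map."""
    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        # 1x1 conv scores each position within a subspace (shared here
        # across groups for brevity; ULSAM learns one map per subspace).
        self.score = nn.Conv1d(channels // groups, 1, kernel_size=1)

    def forward(self, x):                               # x: (B, C, L)
        out = []
        for chunk in torch.chunk(x, self.groups, dim=1):
            attn = torch.softmax(self.score(chunk), dim=-1)  # (B, 1, L)
            out.append(chunk * attn + chunk)            # residual re-weighting
        return torch.cat(out, dim=1)

class MLPAttention1D(nn.Module):
    """MLPAM-style channel attention (squeeze-and-excitation flavour):
    global pooling followed by a small MLP that gates each channel."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                               # x: (B, C, L)
        weights = self.mlp(x.mean(dim=-1))              # (B, C)
        return x * weights.unsqueeze(-1)

class DualAttention1DCNN(nn.Module):
    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, 5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, 3, padding=1), nn.ReLU(),
            SubspaceAttention1D(64, groups=4),          # subspace attention
            MLPAttention1D(64),                         # MLP channel attention
            nn.AdaptiveAvgPool1d(1))
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).squeeze(-1))

# Example: a batch of 8 flattened lesion-feature sequences of length 256.
logits = DualAttention1DCNN()(torch.randn(8, 1, 256))
print(logits.shape)  # torch.Size([8, 2])
```

In this sketch the two attention outputs are applied sequentially to the same feature maps; the paper's hierarchical complementarity and the LBO-driven minimisation of the classification loss would operate on top of such a backbone.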