Abstract

Technological advancements have driven significant progress and innovation across industries. However, they have also led to a rise in malware-based cyberattacks. Researchers have developed diverse techniques to detect and classify malware, including visualization-based approaches that convert suspicious files into color or grayscale images, eliminating the need for domain-specific expertise. Nonetheless, converting files to color or grayscale images can lose texture detail to noise, adversely affecting the performance of machine learning models. The aim of this study is to assess the texture features and noise contributions of the red, green, and blue channels of color images. We propose a novel method to improve model performance in terms of accuracy, precision, recall, F1-score, memory utilization, and computing cost during training and testing. The approach separates the color channels into individual red, green, and blue datasets and applies the Discrete Wavelet Transform (DWT) at various decomposition levels to reduce dimensionality and remove noise. The extracted texture features are then used as input to machine learning classifiers for training and prediction. Through comprehensive evaluation, we compare the performance of grayscale images with that of the red, green, and blue datasets. The results show that grayscale images lose critical textural artifacts and perform worse than the individual color channels. Notably, the extra-trees classifier yielded the best results, achieving an accuracy of 98.37%, precision of 98.64%, recall of 97.60%, and an F1-score of 98.04% on the red channel of the color dataset.
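The pipeline described above (split color channels, denoise and reduce dimensionality with a multi-level DWT, then train a tree-based classifier) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the wavelet choice (`haar`), the decomposition level, the image size, and the synthetic stand-in data are all assumptions made for the example, using the PyWavelets and scikit-learn libraries.

```python
import numpy as np
import pywt
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def channel_dwt_features(image, channel=0, level=2, wavelet="haar"):
    """Extract DWT approximation coefficients from one color channel.

    channel: 0 = red, 1 = green, 2 = blue.
    Keeping only the level-`level` approximation discards the
    high-frequency detail sub-bands (where noise concentrates) and
    shrinks the feature vector by a factor of 4 per level.
    """
    chan = image[:, :, channel].astype(float)
    coeffs = pywt.wavedec2(chan, wavelet=wavelet, level=level)
    return coeffs[0].ravel()  # coeffs[0] is the approximation sub-band

# Synthetic stand-in for two malware "families" rendered as 64x64 RGB
# images (real experiments would load the converted binary images).
rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):
    for _ in range(40):
        img = rng.normal(loc=label * 40, scale=10, size=(64, 64, 3))
        X.append(channel_dwt_features(np.clip(img, 0, 255)))  # red channel
        y.append(label)
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```

A level-2 Haar decomposition of a 64x64 channel leaves a 16x16 approximation, so each image is reduced from 4096 pixels to 256 features before classification; repeating the experiment with `channel=1` or `channel=2` would reproduce the per-channel comparison the study performs.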
