Abstract

Deep convolutional neural networks have shown remarkable performance in the image classification domain. However, Deep Learning (DL) models are vulnerable to the noise and redundant information embedded in high-dimensional raw input images, which leads to unstable and unreliable predictions. Autoencoders constitute an unsupervised dimensionality reduction technique proven to filter out noise and redundant information and to create robust and stable feature representations. In this work, to address the vulnerability of DL models, we propose a convolutional autoencoder topology that compresses the initial high-dimensional input images, filtering out their noise and redundant information, and then feeds this compressed output into convolutional neural networks. Our results demonstrate the effectiveness of the proposed approach, which yields a significant performance improvement over Deep Learning models trained on the initial raw images.
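
The following is a minimal sketch of the kind of convolutional autoencoder pipeline the abstract describes, written with tensorflow.keras; the layer counts, filter sizes, and the 32x32x3 input shape are illustrative assumptions rather than the paper's exact topology.

```python
# Minimal sketch of a convolutional autoencoder that compresses raw images
# and exposes the encoder so its compressed output can feed a CNN classifier.
# Layer counts, filter sizes, and the 32x32x3 input shape are illustrative
# assumptions, not the paper's exact topology.
from tensorflow.keras import layers, models

def build_autoencoder(input_shape=(32, 32, 3)):
    inp = layers.Input(shape=input_shape)
    # Encoder: progressively compress and filter the raw image.
    h = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
    h = layers.MaxPooling2D(2)(h)
    h = layers.Conv2D(16, 3, activation="relu", padding="same")(h)
    encoded = layers.MaxPooling2D(2)(h)          # compressed representation
    # Decoder: reconstruct the input from the compressed code.
    h = layers.Conv2D(16, 3, activation="relu", padding="same")(encoded)
    h = layers.UpSampling2D(2)(h)
    h = layers.Conv2D(32, 3, activation="relu", padding="same")(h)
    h = layers.UpSampling2D(2)(h)
    recon = layers.Conv2D(input_shape[-1], 3, activation="sigmoid",
                          padding="same")(h)

    autoencoder = models.Model(inp, recon)   # trained with a reconstruction loss
    encoder = models.Model(inp, encoded)     # reused as the preprocessing front end
    autoencoder.compile(optimizer="adam", loss="mse")
    return autoencoder, encoder
```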

Highlights

  • Convolutional neural networks (CNNs) have flourished considerably, mainly because of their notable performance in image classification and computer vision tasks [1,2]

  • The measurement of quality is based on the well-known and widely used evaluation metrics: Accuracy (Acc), Geometric Mean (GM), and the Area Under the Curve (AUC) [41] (see the computation sketch after this list)

  • Notice that the performance metrics GM and AUC present the information provided by a confusion matrix in compact form [42,43]; these two metrics are the appropriate ones for assessing whether a prediction model has overfitted the training data
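
As referenced above, here is a small sketch of the three metrics for the binary case; GM is taken as the square root of the product of sensitivity and specificity, while the paper may use a multi-class generalization. The function and variable names are illustrative.

```python
# Sketch of the three evaluation metrics (binary case) with scikit-learn.
# GM is computed as sqrt(sensitivity * specificity); the paper may use a
# multi-class generalization. Names below are illustrative.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

def evaluate(y_true, y_pred, y_score):
    acc = accuracy_score(y_true, y_pred)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    gm = np.sqrt(sensitivity * specificity)
    auc = roc_auc_score(y_true, y_score)   # y_score: positive-class probabilities
    return {"Acc": acc, "GM": gm, "AUC": auc}

# Example usage with toy labels and scores:
print(evaluate([0, 1, 1, 0], [0, 1, 0, 0], [0.2, 0.9, 0.4, 0.1]))
```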

Introduction

Convolutional neural networks (CNNs) have flourished considerably, mainly because of their notable performance in image classification and computer vision tasks [1,2]. In Machine Learning (ML) image classification tasks, high-dimensional data usually contain considerable redundant information and noise, which degrades reliable feature extraction [5]. Small pixel changes can cause the model to change its predictions, implying that it has not exploited the information in the training data and that it performs poorly and inefficiently [7]. Given these difficulties and constraints, applying a preprocessing step that attempts to reduce the noise in the image data while simultaneously reducing their dimensionality is considered essential for improving the performance of the DL model.
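
As a hedged illustration of this preprocessing step, the sketch below adds synthetic Gaussian noise to CIFAR-10 images, fits a small denoising autoencoder, and then trains a CNN classifier on the encoder's compressed output instead of the raw pixels. The dataset, noise level, epoch counts, and shallow topology are assumptions for illustration, not the paper's experimental setup.

```python
# Hedged sketch: denoise and compress with an autoencoder, then classify the
# compressed representation. Dataset (CIFAR-10), noise level, epochs, and
# layer sizes are illustrative assumptions, not the paper's setup.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
x_train = x_train[:5000].astype("float32") / 255.0   # small subset for speed
y_train = y_train[:5000]
x_noisy = np.clip(x_train + np.random.normal(0.0, 0.1, x_train.shape), 0.0, 1.0)

# Denoising objective: map noisy inputs back to the clean images.
inp = layers.Input(shape=(32, 32, 3))
h = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
h = layers.MaxPooling2D(2)(h)
encoded = layers.Conv2D(16, 3, activation="relu", padding="same")(h)  # compressed code
h = layers.UpSampling2D(2)(encoded)
recon = layers.Conv2D(3, 3, activation="sigmoid", padding="same")(h)

autoencoder = models.Model(inp, recon)
encoder = models.Model(inp, encoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_noisy, x_train, epochs=5, batch_size=128, verbose=0)

# Train the classifier on the compressed, noise-filtered representation.
z_train = encoder.predict(x_train, verbose=0)
clf = models.Sequential([
    layers.Input(shape=z_train.shape[1:]),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
clf.fit(z_train, y_train, epochs=5, batch_size=128, verbose=0)
```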
