Abstract

Efficient modeling of high-dimensional data requires extracting only relevant dimensions through feature learning. Unsupervised feature learning has gained tremendous attention due to its unbiased approach, lack of need for prior knowledge or expensive manual processing, and ability to handle exponential data growth. The deep Autoencoder (AE) is a state-of-the-art deep neural network for unsupervised feature learning, which learns embedded representations using a series of stacked layers. However, as the AE network gets deeper, these learned embedded representations can deteriorate due to vanishing gradients, leading to performance degradation. This article presents the ResNet Autoencoder (RAE) and its convolutional version (C-RAE) for unsupervised feature learning. The advantage of RAE and C-RAE is that they enable the user to add residual connections for increased network capacity without incurring the degradation cost that standard AEs suffer in unsupervised feature learning. While RAE and C-RAE inherit all the advantages of AEs, such as automated non-linear feature extraction and unsupervised learning, they also allow users to design larger networks without adverse effects on feature learning performance. We evaluated RAE and C-RAE by performing classification on the learned embedded representations, comparing them against AEs on the MNIST, Fashion MNIST, and CIFAR10 datasets. As the number of layers increased, C-RAE outperformed AE, showing significantly lower degradation of classification accuracy (less than 3%) compared to AE (33% to 65%). Further, C-RAE exhibited higher mean accuracy and lower accuracy variance than the standard AE. When comparing RAE and C-RAE with widely used feature learning methods (Convolutional AE, PCA, ICA, LLE, Factor Analysis, and SVD), C-RAE showed the highest accuracy.
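
As a rough illustration of the core idea, adding identity (residual) skip connections inside the encoder and decoder stacks of a convolutional autoencoder, the following PyTorch-style sketch may help. All layer sizes, block counts, and names are assumptions for illustration, not the authors' exact C-RAE architecture.

    # Hypothetical sketch of a convolutional residual autoencoder (C-RAE-style).
    # Channel widths, block counts, and names are illustrative assumptions.
    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """Two conv layers with an identity shortcut (ResNet-style residual unit)."""
        def __init__(self, channels):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.BatchNorm2d(channels),
            )

        def forward(self, x):
            return torch.relu(self.body(x) + x)  # residual (skip) connection

    class ConvResidualAE(nn.Module):
        """Encoder/decoder built from residual blocks; the bottleneck is the embedding."""
        def __init__(self, in_channels=1, base=32):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(in_channels, base, 3, stride=2, padding=1),   # downsample
                ResidualBlock(base),
                nn.Conv2d(base, 2 * base, 3, stride=2, padding=1),      # downsample
                ResidualBlock(2 * base),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(2 * base, base, 3, stride=2, padding=1, output_padding=1),
                ResidualBlock(base),
                nn.ConvTranspose2d(base, in_channels, 3, stride=2, padding=1, output_padding=1),
                nn.Sigmoid(),
            )

        def forward(self, x):
            z = self.encoder(x)        # embedded representation for downstream classification
            return self.decoder(z), z  # reconstruction + embedding

Training remains purely unsupervised in this sketch: the network minimizes reconstruction error (e.g., MSE between input and reconstruction), and the bottleneck activations z serve as the learned embedded representation passed to a downstream classifier.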

Highlights

  • Through a comprehensive comparison of widely used unsupervised dimensionality reduction methods in Table III, we demonstrated that the convolutional ResNet Autoencoder (C-RAE) outperforms widely used feature learning methods such as the standard AE, K Nearest Neighbor (KNN), Principal Component Analysis (PCA), Locally Linear Embedding (LLE), Independent Component Analysis (ICA), Factor Analysis, and Singular Value Decomposition (SVD) by 1% to 3% in classification accuracy (see the evaluation sketch after this list)

  • We introduced an unsupervised deep learning framework, consisting of the ResNet Autoencoder (RAE) and its convolutional version (C-RAE), that allows building deeper neural networks without sacrificing dimensionality reduction performance
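
The comparison protocol referenced in the highlights can be sketched as follows. This is a hypothetical example assuming scikit-learn, with load_digits standing in for the actual MNIST/Fashion MNIST/CIFAR10 experiments: each unsupervised method reduces the data to a low-dimensional representation, and a classifier trained on that representation provides the accuracy used for comparison.

    # Hypothetical evaluation sketch: unsupervised dimensionality reduction
    # followed by classification on the reduced features. Dataset, classifier,
    # and dimensionality are assumptions, not the paper's exact setup.
    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA, FastICA, FactorAnalysis, TruncatedSVD
    from sklearn.manifold import LocallyLinearEmbedding
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import accuracy_score

    X, y = load_digits(return_X_y=True)  # stand-in for MNIST-like data
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    reducers = {
        "PCA": PCA(n_components=16),
        "ICA": FastICA(n_components=16, random_state=0),
        "Factor Analysis": FactorAnalysis(n_components=16),
        "SVD": TruncatedSVD(n_components=16),
        "LLE": LocallyLinearEmbedding(n_components=16, n_neighbors=30),
    }

    for name, reducer in reducers.items():
        Z_tr = reducer.fit_transform(X_tr)   # unsupervised feature learning
        Z_te = reducer.transform(X_te)
        clf = KNeighborsClassifier(n_neighbors=5).fit(Z_tr, y_tr)
        print(name, accuracy_score(y_te, clf.predict(Z_te)))

In the paper's setting, the learned C-RAE embeddings would take the place of one of these reducers, with classification accuracy on the embeddings serving as the common yardstick.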

Introduction

In this era of industrial big data, a massive amount of data is available to the public through various industries such as intelligent transportation [1] [2], power grids [3], cloud computing [4], and finance [5]. The first subsection discusses widely used traditional unsupervised dimensionality reduction techniques, and the second discusses Autoencoder-based deep learning approaches for dimensionality reduction. Two types of dimensionality-reduction-based feature learning techniques exist, namely feature selection and feature transformation [23]. In feature selection, a subset of features from the original space is selected, whereas feature transformation (dimension reduction) generates an entirely new set of features. Both try to retain as much information in the data as possible while reducing the dimension. Such widely used dimension reduction techniques are discussed below.
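
The distinction between the two families can be illustrated with a minimal scikit-learn sketch (an assumed example, not the paper's code): unsupervised feature selection keeps a subset of the original columns, while feature transformation builds entirely new features from all of them. VarianceThreshold and PCA are used here purely as stand-ins for the two families.

    # Minimal sketch contrasting feature selection and feature transformation.
    import numpy as np
    from sklearn.feature_selection import VarianceThreshold  # unsupervised selection
    from sklearn.decomposition import PCA                     # unsupervised transformation

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 20))
    X[:, 5] *= 0.001                    # a nearly constant, uninformative dimension

    selector = VarianceThreshold(threshold=1e-3)
    X_sel = selector.fit_transform(X)   # drops low-variance columns; kept features are original ones
    print(X_sel.shape)                  # (500, 19)

    pca = PCA(n_components=5)
    X_new = pca.fit_transform(X)        # 5 new features, each a combination of all 20 originals
    print(X_new.shape)                  # (500, 5)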
