Abstract

Many Deep Learning approaches are based on variations of standard multi-layer feed-forward neural networks, also referred to as deep networks. The basic idea is that each hidden layer applies a transformation expected to make the data representation "somewhat more linearly separable" than the previous one, so that the final representation is as linearly separable as possible. However, determining the network parameters that realize these transformations is a crucial challenge. In this study, we propose a Deep Neural Network architecture (Hidden Classification Layer, HCL) which induces an error function involving the output values of all the network layers. The proposed architecture drives training toward solutions in which the data representations in the hidden layers exhibit a higher degree of linear separability between classes than those obtained with conventional methods. While similar approaches have been discussed in prior literature, this paper presents a new architecture with a novel error function and conducts an extensive experimental analysis. Furthermore, the architecture can be easily integrated into existing frameworks: it only requires adding densely connected layers and a straightforward adjustment to the loss function to account for the output of the added layers. The experiments focus on image classification tasks on four well-established datasets, using three widely recognized architectures from the literature as baselines. The findings show that the proposed approach consistently improves accuracy on the test sets in all considered cases.
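As described above, the HCL idea amounts to attaching a densely connected classification head to each hidden layer and letting every head contribute to the training loss. The sketch below is one plausible reading of that description in PyTorch; the class names, the per-head weighting scheme, and the use of a plain cross-entropy term for each head are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class HCLNet(nn.Module):
    """Feed-forward network with an auxiliary (hidden) classification layer
    attached to every hidden representation. Hypothetical sketch based on
    the abstract, not the authors' code."""

    def __init__(self, in_dim, hidden_dims, num_classes):
        super().__init__()
        self.blocks = nn.ModuleList()
        self.aux_heads = nn.ModuleList()
        prev = in_dim
        for h in hidden_dims:
            self.blocks.append(nn.Sequential(nn.Linear(prev, h), nn.ReLU()))
            # Hidden classification layer: a dense head on this hidden representation.
            self.aux_heads.append(nn.Linear(h, num_classes))
            prev = h
        self.out_head = nn.Linear(prev, num_classes)

    def forward(self, x):
        logits = []
        for block, head in zip(self.blocks, self.aux_heads):
            x = block(x)
            logits.append(head(x))       # per-layer class scores
        logits.append(self.out_head(x))  # final classifier
        return logits

def hcl_loss(logits_list, targets, weights=None):
    """Combined loss: a cross-entropy term for every layer's logits.
    The weighting of auxiliary heads versus the final head is an assumption."""
    if weights is None:
        weights = [1.0] * len(logits_list)
    return sum(w * nn.functional.cross_entropy(l, targets)
               for w, l in zip(weights, logits_list))

# Usage sketch: only the loss computation changes with respect to a standard network.
model = HCLNet(in_dim=784, hidden_dims=[256, 128], num_classes=10)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = hcl_loss(model(x), y)
loss.backward()
```

Because every hidden layer receives a direct classification signal, gradient pressure is applied to make each intermediate representation more linearly separable, which is the effect the abstract attributes to the proposed error function.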
