Abstract

Convolutional neural networks (CNNs) have powerful representation learning capabilities, automatically learning and extracting features directly from inputs. In classification applications, CNN models are typically composed of convolutional layers, pooling layers, and fully connected (FC) layer(s). In a chain-based deep neural network, the FC layers contain most of the parameters of the network, which affects memory occupancy and computational complexity. For many real-world problems, speeding up inference time is important because of its hardware design implications. To deal with this problem, we propose replacing the FC layers with a Hopfield neural network (HNN). The proposed architecture combines a CNN and an HNN: a pretrained CNN model is used for feature extraction, followed by an HNN, which acts as an associative memory that stores all the features created by the CNN. Then, to deal with the limited storage capacity of the HNN, the proposed work uses multiple HNNs. To optimize this step, a knapsack problem formulation is proposed, and a genetic algorithm (GA) is used to solve it. According to the results obtained on the Noisy MNIST dataset, our work outperformed the state-of-the-art algorithms.
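A minimal sketch of the core idea, in NumPy, assuming the CNN features have already been extracted and binarized to ±1 (the function names, dimensions, and the synchronous update used here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def hebbian_weights(patterns):
    """Store binary (+1/-1) patterns in a Hopfield weight matrix via the Hebbian rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # no self-connections
    return W / n

def recall(W, probe, n_iter=20):
    """Iterate the Hopfield update until the state stabilizes."""
    s = probe.copy()
    for _ in range(n_iter):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Illustrative usage: 'cnn_features' stands in for feature vectors produced by a
# pretrained CNN and binarized to +/-1; here they are random placeholders.
rng = np.random.default_rng(0)
cnn_features = np.sign(rng.standard_normal((5, 256)))   # 5 stored prototypes
W = hebbian_weights(cnn_features)

noisy_probe = cnn_features[2].copy()
flip = rng.choice(256, size=25, replace=False)           # corrupt ~10% of the bits
noisy_probe[flip] *= -1

restored = recall(W, noisy_probe)
predicted = int(np.argmax(cnn_features @ restored))      # nearest stored prototype
print(predicted)  # expected: 2
```

The recall step replaces the FC classifier: a (possibly corrupted) feature vector is driven toward the closest stored prototype, and the matching prototype gives the class.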

Highlights

  • In the last decade, convolutional neural networks (CNNs) have become the standard methods for pattern recognition and image analysis

  • A CNN architecture was combined with a Hopfield neural network (HNN) for pattern recognition

  • The aim of this proposal was to reduce the number of parameters of the CNN while increasing or at least keeping the same accuracy


Summary

Introduction

Convolutional neural networks (CNNs) have become the standard methods for pattern recognition and image analysis. One of the main problems regarding their use in industrial systems comes from the computation time and the use of memory resources. This is because classic CNN-based architectures have at least one fully connected (FC) layer, depending on the architecture's depth [16,17]. The storage capacity of the memory scheme remains a serious problem to be solved. To deal with this problem, many authors have proposed alternatives to the classical neural network learning rules, such as in [21,22,23], and these have provided valuable insights into the properties of attention heads in transformer architectures [24]. The use of an associative memory bank (the Hopfield neural network) allows the replacement of the fully connected layer and its large number of trainable weights (parameters) while preserving performance.
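To illustrate the capacity issue: a classical Hopfield network with N neurons reliably stores only about 0.138·N random patterns, so when the number of feature vectors exceeds this budget they must be split across several networks. The sketch below distributes patterns greedily under that limit; the paper instead formulates the allocation as a knapsack problem solved with a genetic algorithm, and the names here are illustrative assumptions only.

```python
import numpy as np

HOPFIELD_CAPACITY_RATIO = 0.138   # classical estimate of patterns per neuron

def distribute_patterns(patterns):
    """Greedily split patterns into groups that fit one Hopfield network each."""
    n_neurons = patterns.shape[1]
    budget = max(1, int(HOPFIELD_CAPACITY_RATIO * n_neurons))
    return [patterns[i:i + budget] for i in range(0, len(patterns), budget)]

# Example: 100 binary feature vectors of length 256 need 3 networks (35 + 35 + 30).
rng = np.random.default_rng(0)
features = np.sign(rng.standard_normal((100, 256)))
groups = distribute_patterns(features)
print(len(groups), [len(g) for g in groups])
```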

General Description of the Method
Recall of the Hopfield Neural Network
Knapsack Model for Pattern Recognition
Similarity Measures
Heuristic Approach
Pattern Distribution
Knapsack Selection
Results and Discussion
Defining Weights and Values of the Knapsack
Conclusions
