Abstract

Theories inspired by the structure and functioning of the human brain, such as Artificial Neural Networks (ANNs), have been applied to various problems in computer vision. The concepts of receptive and inhibitory fields have been successfully adopted to improve the capabilities of ANNs and are present in deep learning models, such as ConvNet and LIPNet, that have been used in many computer vision tasks. However, both shallow and deep ANN models require some expertise to define their architecture, as well as several of their parameters. A recently introduced ANN, called CANet, embeds the concepts of receptive fields, lateral inhibition, and autoassociative memory in a constructive algorithm that requires few parameters for its learning process. This paper presents CANet-2, a new constructive-pruning algorithm for CANet that requires even fewer parameters and can choose the number of neurons in the model's constructive layer on its own. We also analyze the model's behavior when activation functions from the ReLU family are used in its constructive layer. Experiments on facial expression recognition showed that the proposed constructive algorithm, combined with the SoftPlus activation function, improved CANet-2 relative to the original version.
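The abstract compares activation functions from the ReLU family in the constructive layer. As a reference point, the standard definitions of ReLU and SoftPlus can be sketched as follows; the specific variants and the layer configuration CANet-2 evaluates are not given in the abstract, so this is only an illustrative sketch:

```python
import numpy as np

def relu(x):
    # ReLU: max(0, x), zero for negative inputs.
    return np.maximum(0.0, x)

def softplus(x):
    # SoftPlus: ln(1 + e^x), a smooth approximation of ReLU.
    # np.logaddexp(0, x) computes ln(e^0 + e^x) in a numerically stable way.
    return np.logaddexp(0.0, x)

x = np.array([-2.0, 0.0, 2.0])
print(relu(x))      # [0. 0. 2.]
print(softplus(x))  # smooth, strictly positive everywhere
```

Unlike ReLU, SoftPlus is differentiable at zero and never saturates exactly to zero, which is one common motivation for using it in place of ReLU.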
