Abstract
Pattern recognition (recognizing a pattern from inputs) and recall (describing or predicting the inputs associated with a recognizable pattern) are essential for neural-symbolic processing and cognitive capacities. Without them, the brain cannot interact with the world, e.g., form internal representations or recall memories on which to perform logic and reasoning. Neural networks are efficient, biologically plausible algorithms that can perform large-scale recognition. However, most neural network models perform recognition but not recall: they are sub-symbolic. This symbolic-recall limitation makes it difficult to connect models of recognition with models of logic and to emulate fundamental brain functions. I introduce a completely symbolic neural network that is similar in function to standard feedforward neural networks but uses feedforward-feedback connections similar to those of auto-associative networks. Unlike auto-associative networks, however, the symmetrical feedback connections are inhibitory, not excitatory. It may initially seem counterintuitive that recognition can occur at all, since the top-down connections are self-inhibitory. This self-inhibitory configuration implements a gradient-descent mechanism that operates during recognition, not learning. The purpose of this gradient descent is not to learn weights (weights are still learned during a separate learning phase) but to find neuron activations. The advantage of this approach is that the weights can now be symbolic (representing prototypes of expected patterns), allowing recall within neural networks. Moreover, considering the combined costs of learning and recognition, this approach may be more efficient than feedforward recognition. I show that this model mathematically emulates the standard feedforward model equations in single-layer networks without hidden units. Comparisons involving more layers are planned as future work.
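The abstract does not give the model's equations, but the idea of recognition-time gradient descent over activations (rather than weights) can be illustrated with a minimal sketch. Assumptions not in the original: a squared reconstruction-error objective, the specific update rule, and the function names `recognize` and `recall` are all hypothetical choices for illustration; only the general scheme (fixed symbolic prototype weights, error-driven inhibitory feedback settling the activations, and recall as top-down reconstruction) comes from the text.

```python
import numpy as np

def recognize(x, W, lr=0.1, steps=500):
    """Hypothetical recognition dynamics: gradient descent on neuron
    activations y (weights W stay fixed), minimizing the reconstruction
    error 0.5 * ||x - W.T @ y||^2. The (x_hat - x) term plays the role
    of the inhibitory top-down feedback described in the abstract."""
    y = np.zeros(W.shape[0])          # one activation per stored prototype
    for _ in range(steps):
        x_hat = W.T @ y               # top-down reconstruction of the input
        y -= lr * (W @ (x_hat - x))   # error-driven (self-inhibitory) update
    return y

def recall(y, W):
    """Recall: predict the input pattern associated with activations y."""
    return W.T @ y

# Each row of W is a symbolic prototype of an expected pattern.
W = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
x = np.array([1.0, 0.0, 0.0, 0.0])    # input matching prototype 0
y = recognize(x, W)                   # settles near [1, 0]
x_recalled = recall(y, W)             # reconstructs the input pattern
```

Because the weights remain interpretable prototypes, the same matrix supports both recognition (inferring `y` from `x`) and recall (reconstructing `x` from `y`), which is the symbolic property the abstract emphasizes.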