Abstract

Biological vision holds great promise for the design of automated object recognition (AOR) systems. A particularly important unresolved issue is that of learning. This topic is also explored in Chapters (Choe) and (Perlovsky), where learning is related to actions (Choe) and to the knowledge instinct (Perlovsky). Reinforcement learning (RL) is a form of procedural learning routinely employed in biological vision, where it appears to be crucial for attentive decision making in a stochastic, dynamic environment. RL is a learning mechanism that requires no explicit teacher or training samples; instead, it learns from an external reinforcement signal. The idea of RL is related to the knowledge instinct explored in Chapter (Perlovsky), which provides internal motivation for matching models to sensor signals. The model in this chapter implements RL through neural networks in an adaptive critic design (ACD) framework for automatic recognition of objects. An ACD approximates neuro-dynamic programming using an action network and a critic network. Two ACDs, Heuristic Dynamic Programming (HDP) and Dual Heuristic Programming (DHP), are exploited in implementing the RL model. We explore the plausibility of RL for distortion-related object recognition inspired by principles of biological vision. We test and evaluate the two designs on simulated transformations as well as face authentication problems. Our simulations show promising results for both designs for transformation-invariant AOR.
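To make the HDP idea concrete, the following is a minimal sketch (not the chapter's actual model) of a Heuristic Dynamic Programming critic update: the critic estimates a scalar cost-to-go J(s) and is trained toward the Bellman target r + γ·J(s'). The linear critic, the discount factor, the learning rate, and the toy cost function are all illustrative assumptions.

```python
import numpy as np

GAMMA = 0.9   # discount factor (assumed for illustration)
LR = 0.05     # critic learning rate (assumed for illustration)

def critic(s, w):
    """HDP critic: scalar cost-to-go estimate J(s) = w . s (linear here)."""
    return w @ s

def hdp_critic_update(s, r, s_next, w):
    """One temporal-difference step of the HDP critic.

    Moves J(s) toward the Bellman target r + gamma * J(s_next); an
    action network (omitted in this sketch) would then be trained to
    minimize the critic's J.
    """
    target = r + GAMMA * critic(s_next, w)
    td_error = critic(s, w) - target
    return w - LR * td_error * s  # gradient step on 0.5 * td_error**2

# Toy episode: the state contracts toward the origin at each step,
# and the instantaneous cost is the distance from the origin.
w = np.zeros(2)
s0 = np.array([1.0, -1.0])
s = s0.copy()
for _ in range(200):
    s_next = 0.8 * s
    r = np.linalg.norm(s)  # hypothetical cost signal
    w = hdp_critic_update(s, r, s_next, w)
    # restart the episode once the state has decayed to near zero
    s = s_next if np.linalg.norm(s_next) > 1e-3 else s0.copy()
```

After training on this toy dynamics, the critic's estimate at the start state approaches the true discounted cost-to-go, |s0|/(1 − 0.8·0.9) ≈ 5.05. In DHP, by contrast, the critic would estimate the gradient ∂J/∂s rather than J itself.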
