Cognitive dynamic systems provide a broadly defined platform whereby engineering learns from cognitive neuroscience and, by the same token, cognitive neuroscience learns from engineering. The first part of the paper is of a tutorial nature, addressing recent advances in cognitive perception and cognitive control, which are duals of each other. The study of cognitive perception, viewed from the perspective of Bayesian inference, starts with sparse coding, well known in neuroscience. However, sparse coding can become ill-posed, particularly when the signal-to-noise ratio is low. In such situations, stability is a necessary requirement, which can be satisfied only if there is sufficient information in the observables. To satisfy this requirement, the sparse-coding algorithm is augmented with information filtering (i.e., a special case of Bayesian filtering). Accordingly, the performance of sparse coding is improved under the influence of perceptual attention. This improvement enables the cognitive perceptor to separate relevant from irrelevant information. Next, moving into cognitive control, viewed from the perspective of Bellman's dynamic programming, two ideas are exploited: the entropic state of the perceptor, and the definition of reward as an invertible function of two entropic states, namely, the current state and its immediate past value. The net result of building on these two ideas is a modified form of Bellman's dynamic programming and, therefore, a new reinforcement-learning algorithm, which not only outperforms traditional reinforcement-learning algorithms but also offers some highly desirable properties. Among them is a linear law of computational complexity, which is the best that can be achieved. The second part of the paper addresses two challenging problems: first, how to mediate between cognitive control and cognitive perception and, second, how to formulate a procedure for risk control.
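As a rough illustration of the entropic-state idea described above, the sketch below takes the reward to be the drop in the perceptor's entropic state between successive perception-action cycles and feeds it into a Bellman-style incremental value update that costs O(1) per step, hence O(N) over a trajectory of length N. The difference form of the reward, the learning-rate and discount constants, and the zero next-state value are illustrative assumptions, not the paper's exact formulation.

```python
def entropic_reward(h_now, h_prev):
    """Reward as a simple invertible function of two successive entropic
    states: here, the signed drop in entropy (an illustrative choice)."""
    return h_prev - h_now

def value_update(v, reward, alpha=0.1, gamma=0.9, v_next=0.0):
    """One incremental Bellman-style update. Constant work per step,
    so a trajectory of length N costs O(N): a linear law of complexity.
    v_next defaults to 0.0 as a simplification for this toy run."""
    return v + alpha * (reward + gamma * v_next - v)

# Toy run: the perceptor's entropic state shrinking over time yields
# positive rewards, so the value estimate grows.
entropies = [2.0, 1.5, 1.1, 0.9]
v = 0.0
for h_prev, h_now in zip(entropies, entropies[1:]):
    v = value_update(v, entropic_reward(h_now, h_prev))
```

Because the reward is a difference of two entropic states, knowing the reward and either state recovers the other, which is the invertibility the abstract refers to.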
The first problem is resolved by means of probabilistic reasoning, a branch of probability theory, which leads to the formulation of a probabilistic reasoning machine. With this mediation in place, the conditions for overall system stability are derived, thereby confirming the probabilistic reasoning machine as the overall system stabilizer. The second challenge, risk control, is by far the most challenging of them all: in the presence of an unexpected disturbance in the environment, risk is brought under control by mimicking the predict-and-preadapt function, which is considered to be the overarching function of the prefrontal cortex of the brain. To be specific, motor control is expanded by the inclusion of a new preadaptive control mechanism, which involves two different sets of actions: one set is made up of possible actions identified by the policy in the motor control; the other set involves a window of experiences (i.e., optimal actions) gained in the past. In a novel way, by exploiting these two sets, we end up with a preadaptive control mechanism in the form of a closed-loop feedback structure, which brings with it control (executive) attention.
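The two-set structure of the preadaptive mechanism can be sketched as follows. The selection rule shown, picking from the union of policy-proposed actions and a window of past optimal actions the one with the smallest predicted entropic state, is a hypothetical stand-in for the paper's mechanism; the gain-valued actions and the distance-based entropy predictor are invented purely for illustration.

```python
def preadaptive_select(policy_actions, past_optimal, predict_entropy):
    """Combine the policy's candidate actions with a window of past
    optimal actions, and pick the candidate whose predicted entropic
    state is smallest (hypothetical selection rule)."""
    candidates = list(policy_actions) + list(past_optimal)
    return min(candidates, key=predict_entropy)

# Toy example: actions are controller gains; the predicted entropic
# state is modeled (as an assumption) by distance from an ideal gain.
best = preadaptive_select(
    policy_actions=[0.5, 2.0],       # actions proposed by the current policy
    past_optimal=[0.9, 1.4],         # window of past optimal actions
    predict_entropy=lambda a: abs(a - 1.0),
)
```

Feeding the selected action back into the perceptor closes the loop, which is where the executive (control) attention of the abstract enters.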