Abstract

This paper presents a self-learning strategy for artificial cognitive control based on reinforcement learning, in particular an on-line version of the Q-learning algorithm. An architecture for artificial cognitive control was initially reported in [1], but it lacked an effective self-learning strategy for dealing with nonlinear and time-variant behavior. The architecture has two operating modes: an anticipation mode (i.e., inverse model control) and a single-loop mode. The main goal of the Q-learning algorithm is to cope with the intrinsic uncertainty, nonlinearities, and noisy behavior of processes at run-time. To validate the proposed method, experiments are carried out on the measurement and control of the microdrilling process, and a real-time application for controlling the drilling force is presented as a proof of concept. Reinforcement learning improves the performance of the artificial cognitive control system, yielding good transient responses and acceptable steady-state error. The Q-learning mechanism, built into a low-cost computing platform, demonstrates the suitability of the approach for implementation in an industrial setup.
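To illustrate the kind of on-line update the abstract refers to, the following is a minimal tabular Q-learning sketch. The state discretization (force-error bins), action set, reward, and parameter values are illustrative assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

# Hypothetical discretization: state = binned drilling-force error,
# action = discrete adjustment to the controller set-point.
N_STATES = 11
N_ACTIONS = 5
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

Q = np.zeros((N_STATES, N_ACTIONS))

def choose_action(state, rng):
    """Epsilon-greedy action selection."""
    if rng.random() < EPS:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[state]))

def update(state, action, reward, next_state):
    """One-step Q-learning update, applied on-line at each control cycle."""
    td_target = reward + GAMMA * np.max(Q[next_state])
    Q[state, action] += ALPHA * (td_target - Q[state, action])

# One on-line step with dummy values (assumed reward penalizes force error):
rng = np.random.default_rng(0)
s = 7                       # current discretized force-error bin
a = choose_action(s, rng)
r = -abs(s - 5) * 0.1       # hypothetical reward: distance from the zero-error bin
s_next = 6                  # observed next state after applying the action
update(s, a, r, s_next)
```

In this kind of scheme, the Q-table is updated at every control cycle from measured data, which is what allows the controller to adapt to nonlinear and time-variant process behavior without an explicit process model.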
