This contribution presents a semi-active control technique for the mitigation of structural vibrations. The control law is derived through repeated trial-and-error interaction between the control agent and a simulated environment. This experience-based training is the defining feature of reinforcement learning (RL); in particular, a specific modification of the Deep Q-Learning (DQN) approach is applied. The involved artificial neural network not only approximates the expected reward of the control (which defines the control action and quantifies its performance) but also keeps track of structural damage. This requires a dedicated architecture that makes the network damage-aware, and a dedicated training procedure in which the memory pool maintained for the RL stage of experience replay is populated not only with the observations, control actions, and rewards, but also with the momentary status of structural damage. Such an approach explicitly promotes the damage-awareness of the control agent. The proposed technique is tested and verified in a numerical example of a shear-type building model subjected to seismic excitation. A tuned mass damper (TMD) with a controllable level of viscous damping implements the semi-active actuation, and an optimally tuned classical TMD provides the reference response.
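The damage-augmented experience replay described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the class name, the transition fields, and the toy observation/damage values are all assumptions introduced here. The only point it demonstrates is that each stored transition carries the momentary damage status alongside the usual observation, action, and reward, so that sampled minibatches expose the agent to damage information during training.

```python
# Hedged sketch of a damage-augmented replay memory (names are illustrative,
# not taken from the paper). Each transition stores, besides the standard
# (observation, action, reward, next observation) tuple, the momentary
# structural damage state, so a DQN-style agent trained on sampled batches
# can become damage-aware.
import random
from collections import deque, namedtuple

# Hypothetical transition layout; the paper's exact fields may differ.
Transition = namedtuple(
    "Transition", ["obs", "action", "reward", "next_obs", "damage"]
)


class DamageAwareReplayMemory:
    """FIFO replay pool that also records the momentary damage status."""

    def __init__(self, capacity=10_000):
        self.pool = deque(maxlen=capacity)

    def push(self, obs, action, reward, next_obs, damage):
        self.pool.append(Transition(obs, action, reward, next_obs, damage))

    def sample(self, batch_size):
        # Uniform random minibatch, as in standard experience replay.
        return random.sample(self.pool, batch_size)


if __name__ == "__main__":
    memory = DamageAwareReplayMemory(capacity=100)
    # Toy transitions: observation = storey responses, action = discrete
    # damping level of the semi-active TMD, damage = illustrative
    # per-storey degradation measure (all values are made up).
    for step in range(20):
        memory.push(
            obs=[0.01 * step, 0.02 * step],
            action=step % 3,
            reward=-abs(0.01 * step),
            next_obs=[0.01 * (step + 1), 0.02 * (step + 1)],
            damage=[0.001 * step],
        )
    batch = memory.sample(4)
    print(len(batch), all(hasattr(t, "damage") for t in batch))
```

In a full DQN loop, the sampled `damage` field would be fed to the network as an additional input alongside the observation, which is the architectural choice the abstract refers to as making the network damage-aware.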