Abstract

A policy gradient-based optimization scheme is presented for active structural control, leveraging the concept of reinforcement learning (RL). The RL-based control algorithm is demonstrated in both proportional (P) state feedback and proportional-integral (PI) state-output feedback configurations, where the latter strategy is formulated on the basis of servomechanism theory. The optimal P and PI controller parameters are obtained during training through an efficient gradient descent-based optimization strategy. Embedding this gradient-based update within the RL framework accelerates learning, ensures effective dissipation of structural energy, and achieves suboptimal control. The proposed algorithms are validated through numerical experiments on two different structural systems: (i) a quarter-car model subjected to periodic road excitation, with a single actuator and formulated in continuous time, and (ii) an 8-story building model subjected to random seismic excitation, with multiple actuators and formulated in discrete time. Practical implementation concerns are investigated by considering perturbations in model parameters and input forces. The findings indicate that the RL-based P and PI controllers exhibit strong stabilizing performance and are applicable in both the analog and digital domains. Finally, it is affirmed that the proposed RL algorithms incur relatively low computational cost in real time, opening up broad prospects for application to large-scale complex structures.
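
To make the training idea concrete, the following is a minimal, self-contained sketch (not the authors' implementation) of gradient-descent tuning of a proportional state-feedback gain on a simplified quarter-car-like model. The system matrices, cost weights, disturbance, and learning rate are illustrative assumptions, and a central finite-difference gradient stands in for the analytic policy gradient derived in the paper.

import numpy as np

# Normalized 2-state mass-spring-damper surrogate for the quarter-car model,
# Euler-discretized; states are [displacement, velocity]. All matrices and
# weights below are illustrative assumptions, not values from the paper.
dt = 0.05
A = np.eye(2) + dt * np.array([[0.0, 1.0],
                               [-4.0, -0.4]])
B = dt * np.array([[0.0], [1.0]])

Q = np.diag([1.0, 0.1])   # state penalty (assumed)
R = np.array([[0.01]])    # control-effort penalty (assumed)

def rollout_cost(K, x0, w, T=400):
    # Quadratic cost of the closed loop u = -K x under disturbance w.
    x, J = x0.copy(), 0.0
    for t in range(T):
        u = -K @ x
        J += x @ Q @ x + u @ R @ u
        x = A @ x + (B @ u).ravel() + w[t]
    return float(J)

def grad_fd(K, x0, w, eps=1e-4):
    # Central finite differences as a simple stand-in for the paper's
    # analytic policy gradient.
    g = np.zeros_like(K)
    for i in range(K.shape[0]):
        for j in range(K.shape[1]):
            Kp, Km = K.copy(), K.copy()
            Kp[i, j] += eps
            Km[i, j] -= eps
            g[i, j] = (rollout_cost(Kp, x0, w) - rollout_cost(Km, x0, w)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
x0 = np.array([1.0, 0.0])                    # initial displacement
w = 0.01 * rng.standard_normal((400, 2))     # random excitation (assumed)

K = np.zeros((1, 2))                         # start from the uncontrolled system
lr = 5e-3                                    # learning rate (assumed)
for it in range(200):                        # gradient-descent training loop
    K = K - lr * grad_fd(K, x0, w)

print("learned P gain:", K, "closed-loop cost:", rollout_cost(K, x0, w))

The same training loop carries over to the PI case by augmenting the state vector with the integral of the measured output, consistent with the servomechanism formulation described above.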

