Abstract

Recently, it was demonstrated that the design synthesis of truss structures can be modeled as a Markov decision process (MDP) and solved using a tabular reinforcement learning method. In this setting, each state corresponds to a specific design configuration represented as a finite graph. However, when the structural design domain is relatively large, and depending on the constraints, the dimensionality of the state space becomes very large, rendering tabular reinforcement learning algorithms inefficient. Hence, in this study, the design synthesis MDP framework is significantly extended to solve structural design problems with large state spaces by integrating deep reinforcement learning (DRL) into the general MDP framework. This is beneficial because, with DRL, a deep neural network can be used to approximate the state-action value function, such that the network has far fewer parameters than the cardinality of the state space. This parameterization relies upon a problem-relevant set of features and a reward function. Thus, for this extended DRL design synthesis (DRLDS) framework, a compact set of features and a reward function are devised that are suitable for structural design problems in which structural configurations are represented as finite graphs. Through the application of seven different structural design synthesis examples, the DRLDS framework is demonstrated to be capable of adeptly learning optimal policies that synthesize high-performing, if not the highest-performing, design solutions more frequently. The DRLDS framework does this with fewer finite element model evaluations than the other methods considered, further demonstrating the effectiveness of the developed set of features and reward function.
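
The abstract does not detail the DRLDS network architecture, feature set, or reward function, so the following Python sketch is only a generic illustration of the core idea stated above: a small neural network that approximates the state-action value function for a design state represented as a finite graph, using far fewer parameters than the number of possible states. All names and choices here (graph_features, QNetwork, the three summary features, the action count) are illustrative assumptions, not the authors' actual formulation.

```python
# Minimal sketch of the general idea only: a small Q-network scoring actions
# for a graph-represented truss design state. The features, network size, and
# action set are placeholder assumptions, not the paper's DRLDS components.
import networkx as nx
import torch
import torch.nn as nn


def graph_features(design: nx.Graph) -> torch.Tensor:
    """Hypothetical compact features summarizing a truss design graph."""
    n = design.number_of_nodes()
    m = design.number_of_edges()
    avg_deg = 2.0 * m / max(n, 1)
    return torch.tensor([n, m, avg_deg], dtype=torch.float32)


class QNetwork(nn.Module):
    """Approximates Q(s, a) with far fewer parameters than the state space."""

    def __init__(self, n_state_features: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_state_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions),  # one Q-value per candidate action
        )

    def forward(self, state_features: torch.Tensor) -> torch.Tensor:
        return self.net(state_features)


# Usage: score all candidate design-modification actions for one state.
design = nx.cycle_graph(4)                # placeholder topology graph
q_net = QNetwork(n_state_features=3, n_actions=10)
q_values = q_net(graph_features(design))  # argmax gives the greedy action
```

In a full DRL loop, such a network would be trained against a problem-specific reward, for example one informed by finite element evaluations of candidate designs, which the paper develops but does not specify in the abstract.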
