Abstract

Engineering systems provide essential services to society, e.g., power generation and transportation. Their performance, however, is directly affected by their ability to cope with uncertainty, especially given the realities of climate change and pandemics. Standard design methods often fail to recognize uncertainty in early conceptual activities, leading to rigid systems that are vulnerable to change. Real options and flexibility in design are important paradigms for improving a system's ability to adapt and respond to unforeseen conditions. Existing approaches to analyzing flexibility, however, do not sufficiently leverage recent developments in machine learning that enable deeper exploration of the computational design space. There is untapped potential for new solutions that are not readily accessible using existing methods. Here, a novel approach to analyzing flexibility is proposed based on deep reinforcement learning (DRL). It explores available datasets systematically and considers a wider range of adaptability strategies. The methodology is evaluated on an example waste-to-energy (WTE) system. Low- and high-flexibility DRL models are compared against stochastically optimal inflexible and flexible solutions based on decision rules. The results show highly dynamic solutions, with the action space parametrized via an artificial neural network (ANN). They improve expected economic value by up to 69% compared with previous solutions. Combining information from action-space probability distributions with expert insights and risk tolerance helps make better decisions in real-world design and system operations. Out-of-sample testing shows that the policies are generalizable, but subject to tradeoffs between flexibility and inherent limitations of the learning process.
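To make concrete what an ANN-parametrized action space looks like in a DRL setting, the sketch below shows a minimal policy network for a capacity-expansion decision problem of the kind the WTE case study suggests. It is illustrative only: the PyTorch framework, the state features (waste inflow, installed capacity, time step), the action granularity, and the network sizes are assumptions for this sketch, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

class FlexibilityPolicy(nn.Module):
    """Maps a system state to a probability distribution over
    discrete expansion actions (0 = do nothing, 1..K-1 = expand)."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_actions),  # logits over the action space
        )

    def forward(self, state: torch.Tensor) -> Categorical:
        return Categorical(logits=self.net(state))

# Hypothetical state: [observed waste inflow, installed capacity, time step]
policy = FlexibilityPolicy(state_dim=3, n_actions=4)
state = torch.tensor([120.0, 100.0, 5.0])
dist = policy(state)               # action-space probability distribution
action = dist.sample()             # sampled expansion decision
log_prob = dist.log_prob(action)   # used by policy-gradient updates
```

Because the policy outputs a full probability distribution rather than a single action, a decision-maker can inspect the action probabilities and weigh them against expert insight and risk tolerance, consistent with the use of action-space distributions described in the abstract.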
