Abstract

LatticeNet is a hierarchical deep learning architecture developed for predicting pin powers and other parameters within single or multiple 2D pressurized water reactor assemblies. It has been shown to be effective at predicting distributions of reactor parameters, such as normalized pin powers, under changing thermal hydraulic conditions. However, deep learning models are prone to overfitting and to learning rules from the dataset that either do not apply to the real world or are not what the researchers intended. When developing deep learning architectures for reactor modeling and simulation applications, it is important to investigate how well these models perform on unseen data that lies outside their training distribution. In this work, we develop multiple thermal hydraulic datasets specifically tailored to provide challenging inference examples, and we evaluate existing trained LatticeNet models on these datasets to determine how well the models perform on classes of examples that are statistically unlikely, or impossible, to appear in the training data. We show that these models exhibit surprising generalization capabilities on data outside their training distribution, and moreover that the error on these examples is not entirely random but semi-continuous. We also show that at least some variants of LatticeNet are particularly vulnerable to adversarial inputs that cause them to produce non-physical answers, and we demonstrate a simple method for detecting these non-physical regions that requires no generation of new data.
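
To make the evaluation setup concrete, the sketch below shows one way such an out-of-distribution study might be wired up in PyTorch. The stand-in model, feature layout, sampling ranges, and the non-physicality check are all illustrative assumptions for this sketch; they are not the actual LatticeNet implementation, datasets, or the detection method described in the paper.

```python
# Hypothetical sketch: probing a trained pin-power surrogate with inputs
# outside its (assumed) training ranges. All names and values are stand-ins.
import numpy as np
import torch
import torch.nn as nn

N_PINS = 17 * 17          # assumed pin count for one 17x17 PWR assembly
N_TH_FEATURES = 3         # e.g. moderator density, fuel temperature, boron (assumed)

# Stand-in surrogate; the real LatticeNet is a hierarchical architecture.
model = nn.Sequential(
    nn.Linear(N_TH_FEATURES * N_PINS, 512), nn.ReLU(),
    nn.Linear(512, N_PINS),
)
model.eval()

rng = np.random.default_rng(0)
# In-distribution inputs: sampled inside the assumed (normalized) training range.
x_in = rng.uniform(0.0, 1.0, size=(64, N_TH_FEATURES * N_PINS))
# Out-of-distribution inputs: the same features pushed past the training range,
# mimicking the "challenging inference" datasets described in the abstract.
x_ood = rng.uniform(1.5, 2.5, size=(64, N_TH_FEATURES * N_PINS))

with torch.no_grad():
    p_in = model(torch.as_tensor(x_in, dtype=torch.float32))
    p_ood = model(torch.as_tensor(x_ood, dtype=torch.float32))

def nonphysical_fraction(powers: torch.Tensor) -> float:
    """Crude sanity check (illustrative only, not the paper's detection method):
    normalized pin powers should be positive and average to roughly 1.0."""
    bad = (powers <= 0).any(dim=1) | ((powers.mean(dim=1) - 1.0).abs() > 0.5)
    return bad.float().mean().item()

print("non-physical fraction, in-distribution: ", nonphysical_fraction(p_in))
print("non-physical fraction, out-of-distribution:", nonphysical_fraction(p_ood))
```

With a trained surrogate in place of the stand-in model, comparing these two fractions (and the per-pin error against a reference lattice physics solution, where one is available) is the kind of study the abstract describes.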
