Abstract

When simulation–optimization (S/O) is used for groundwater contamination source identification (GCSI), a surrogate model is often used in place of the simulation model to reduce the computational load and time incurred by repeatedly calling the simulation model. However, when the relationship between the simulation model's inputs and outputs is complex and the model is highly nonlinear, commonly used approaches to building a surrogate model may lose their advantage. This study applies and evaluates a deep learning method based on the long short-term memory (LSTM) network, which has great potential for characterizing the input–output relationships of complex nonlinear numerical simulations, to build a surrogate model of the simulation model. The accuracy of the surrogate model developed with the LSTM method was compared with that of commonly used surrogate models developed with the Kriging method, the radial basis function network method, and the kernel extreme learning machine method. The surrogate model with the highest accuracy was linked to the optimization model, which was then solved to identify the contamination source information. Results show that, compared with the other three methods, the LSTM method produced the surrogate model with the best accuracy and generalization performance; LSTM is therefore an effective method for building surrogate models. Linking the LSTM surrogate model to the optimization model and then solving the optimization model saved approximately 99% of the computational load and time otherwise required.
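To make the workflow the abstract describes concrete, the sketch below shows how an LSTM surrogate might be trained on input–output pairs sampled from a simulation model and then embedded in a source-identification search. It is a minimal sketch under stated assumptions: the framework (PyTorch), the class name `LSTMSurrogate`, all shapes and hyperparameters, and the gradient-based search are illustrative choices, since the paper does not specify its architecture, training setup, or optimization algorithm.

```python
# Minimal sketch of an LSTM surrogate for a groundwater transport simulator.
# All names, shapes, and hyperparameters are illustrative assumptions; the
# paper does not report its implementation details.
import torch
import torch.nn as nn

class LSTMSurrogate(nn.Module):
    """Maps a candidate source-release sequence to predicted well concentrations."""
    def __init__(self, n_features=4, hidden_size=64, n_wells=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_wells)

    def forward(self, x):
        # x: (batch, time_steps, n_features), e.g. release rate per step plus
        # static source coordinates repeated along the time axis.
        out, _ = self.lstm(x)
        return self.head(out)  # (batch, time_steps, n_wells)

# Train the surrogate on (input, output) pairs sampled by running the
# simulation model; random tensors stand in for those samples here.
model = LSTMSurrogate()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
X = torch.randn(400, 20, 4)   # stand-in: 400 sampled source scenarios
Y = torch.randn(400, 20, 5)   # stand-in: simulated well concentrations
for epoch in range(200):
    optimizer.zero_grad()
    loss_fn(model(X), Y).backward()
    optimizer.step()

# Coupling to the optimization model: search for source parameters whose
# surrogate-predicted concentrations best match field observations. A
# gradient-based search is shown only to illustrate the coupling; the paper
# does not state which optimizer it used.
obs = torch.randn(1, 20, 5)                        # stand-in for observations
theta = torch.randn(1, 20, 4, requires_grad=True)  # candidate source description
search = torch.optim.Adam([theta], lr=1e-2)
for step in range(500):
    search.zero_grad()
    loss_fn(model(theta), obs).backward()
    search.step()
```

Because each evaluation inside the search calls the cheap surrogate rather than the full numerical simulator, the search loop avoids thousands of simulator runs, which is the source of the roughly 99% saving in computational load and time reported above.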
