Abstract
Solving groundwater inverse problems is an important, if not essential, step in developing efficient groundwater remediation and management strategies. However, the application of groundwater inverse modeling to field problems has been limited by the lack of efficient and flexible algorithms and by immense computational requirements. We have developed an efficient optimization-based framework for solving three-dimensional groundwater inverse problems in a parallel computing environment. Our implementation is based on a hybrid genetic algorithm-local search (GA-LS) optimizer that drives a parallel finite-element groundwater simulator. The MPI (Message Passing Interface) communication library is employed to exploit data parallelism within the groundwater simulator and task parallelism within the optimizer. Data parallelism in the groundwater simulator is achieved through a domain decomposition strategy, and task parallelism in the optimizer is achieved through a dynamic self-scheduling algorithm. Two types of GA, (i) a binary/integer-encoded GA (BGA/IGA) and (ii) a real-encoded GA (RGA), and three local search approaches, (i) the Nelder-Mead simplex method (NMS), (ii) the Hooke and Jeeves pattern search method (HKJ), and (iii) Powell's method of conjugate directions (PWL), have been implemented. A flexible interface allows any combination of GA and LS approaches to be selected, as well as standalone application of each approach. In addition, our implementation permits multiple local searches to run simultaneously in parallel, starting from multiple initial guesses provided by the GA. Three types of groundwater inverse problems are tested: multiple biological activity zone identification, contaminant source identification (location and concentration), and multiple source release history reconstruction. The combination of parallel computing and efficient optimization algorithms has allowed us to test problems of a magnitude and complexity that have not been attempted before. It is emphasized that many of the problems tested required only a few hours of computation time on an IBM SP3 supercomputer using 256 processors; the same problems would have required over a month of computation time on a high-end PC. In almost all problems tested, solutions were achieved with over 90% accuracy, even in the presence of moderate noise in the observations. Collaborations are underway with the North Carolina Department of Environment and Natural Resources (NCDENR) to apply our methodology to field problems in which multiple source release history reconstructions are needed to identify responsible parties in several contamination incidents in North Carolina.
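
To illustrate the task-parallel layer mentioned above, the sketch below shows a generic MPI master-worker dynamic self-scheduling loop of the kind commonly used to farm GA fitness evaluations (here, individual forward groundwater-simulator runs) out to worker processors as they become idle. This is a minimal sketch under assumed conventions (master on rank 0, hypothetical evaluate_candidate placeholder); it is not the authors' implementation.

/*
 * Illustrative sketch only: master-worker dynamic self-scheduling in MPI.
 * Rank 0 hands candidate indices to workers one at a time and immediately
 * reassigns whichever worker reports back first, so faster processors
 * naturally take on more work. Names and tags are hypothetical.
 */
#include <mpi.h>
#include <stdio.h>

#define TAG_WORK 1
#define TAG_STOP 2

/* Hypothetical stand-in for one forward-model (fitness) evaluation. */
double evaluate_candidate(int id) { return (double)id * 0.5; }

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n_candidates = 100;          /* assumed GA population size */

    if (rank == 0) {                       /* master: schedule work dynamically */
        double fitness[100];
        int next = 0, active = 0, dummy = -1;

        /* Seed each worker with one candidate, or stop it if none remain. */
        for (int w = 1; w < size; ++w) {
            if (next < n_candidates) {
                MPI_Send(&next, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
                ++next; ++active;
            } else {
                MPI_Send(&dummy, 1, MPI_INT, w, TAG_STOP, MPI_COMM_WORLD);
            }
        }

        /* Collect results; reassign the now-idle worker right away. */
        while (active > 0) {
            double result[2];              /* [candidate id, fitness] */
            MPI_Status st;
            MPI_Recv(result, 2, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &st);
            fitness[(int)result[0]] = result[1];
            --active;
            if (next < n_candidates) {
                MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                         MPI_COMM_WORLD);
                ++next; ++active;
            } else {
                MPI_Send(&dummy, 1, MPI_INT, st.MPI_SOURCE, TAG_STOP,
                         MPI_COMM_WORLD);
            }
        }
        printf("evaluated %d candidates (first fitness = %g)\n",
               n_candidates, fitness[0]);
    } else {                               /* worker: evaluate until told to stop */
        while (1) {
            int id;
            MPI_Status st;
            MPI_Recv(&id, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP) break;
            double result[2] = { (double)id, evaluate_candidate(id) };
            MPI_Send(result, 2, MPI_DOUBLE, 0, TAG_WORK, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}

Because workers request new candidates only when they finish, this pattern balances load automatically when forward-model run times vary from candidate to candidate, which is one reason dynamic self-scheduling is attractive for simulation-based optimization.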