Abstract

Discrete fracture networks (DFN) are often used to model flow and transport in fractured porous media. The accurate resolution of flow and transport behavior on a large DFN involving thousands of fractures is computationally expensive. This makes uncertainty quantification studies of quantities of interest, such as travel time through the network, computationally intractable, since hundreds to thousands of runs of the DFN model are required to bound the uncertainty of the predictions. Prior works on the subject demonstrated that the complexity of a DFN could be reduced by considering a sub-network of it (often termed a “backbone” sub-network), whose flow and transport properties were then shown to be similar to those of the full network. The technique is tantamount to partitioning the complete set of fractures of a network into two disjoint sets, one of which is the backbone sub-network while the other is its complement. It is in this context that we present a system-reduction technique for DFNs using supervised machine learning via a Random Forest Classifier that selects a backbone sub-network from the full set of fractures. The in-sample errors (in terms of precision and recall scores) of the trained classifier are found to be very accurate indicators of the out-of-sample errors, indicating that the classifier generalizes well to test data. Moreover, this system-reduction technique yields sub-networks as small as 12% of the full DFN that still recover transport characteristics of the full network, such as the peak dosage and late-time tailing behavior. Most importantly, the sub-networks remain connected, and their size can be controlled by a single dimensionless parameter. Furthermore, the KL-divergence and KS-statistic of the breakthrough curves of the sub-networks with respect to the full network show physically realistic trends, in that both measures decrease monotonically as the size of the sub-networks increases.
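The classification step described above can be sketched as follows. This is not the authors' code: the per-fracture features (radius, transmissivity, graph degree) and the synthetic labels are hypothetical stand-ins for attributes a real DFN model would supply, and the sketch only illustrates how a Random Forest classifier's in-sample and out-of-sample precision/recall scores can be compared.

```python
# Sketch (assumed setup, not the paper's implementation): train a Random Forest
# to label fractures as backbone (1) or non-backbone (0), then compare
# in-sample vs out-of-sample precision and recall.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical per-fracture features: radius, transmissivity, graph degree.
X = rng.random((n, 3))
# Synthetic labels: pretend large, well-connected fractures carry most flow.
y = ((X[:, 0] + X[:, 2]) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Report precision/recall on training (in-sample) and test (out-of-sample) data.
for name, Xs, ys in [("train", X_tr, y_tr), ("test", X_te, y_te)]:
    pred = clf.predict(Xs)
    print(f"{name}: precision={precision_score(ys, pred):.2f} "
          f"recall={recall_score(ys, pred):.2f}")
```

The predicted backbone set is then the fractures labeled 1; connectivity of that sub-network would need to be enforced separately, as the abstract notes.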
The computational efficiency gained by this technique depends on the size of the sub-network, but large reductions in computational time can be expected for small sub-networks, yielding as much as 90% computational savings for sub-networks that are as small as 10-12% of the full network.
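The breakthrough-curve comparison mentioned above can be sketched as follows. This is an illustrative assumption, not the paper's code: the travel-time samples are synthetic lognormal draws standing in for particle arrival times from the full network and a sub-network, and the binning choices are arbitrary.

```python
# Sketch (assumed setup): quantify how closely a sub-network's breakthrough
# curve matches the full network's using the KS statistic and KL divergence.
import numpy as np
from scipy.stats import ks_2samp, entropy

rng = np.random.default_rng(1)
t_full = rng.lognormal(mean=1.0, sigma=0.5, size=5000)    # full-network travel times
t_sub = rng.lognormal(mean=1.05, sigma=0.55, size=5000)   # sub-network travel times

# KS statistic: maximum distance between the two empirical CDFs.
ks_stat = ks_2samp(t_full, t_sub).statistic

# KL divergence of the histogram-binned travel-time distributions.
bins = np.histogram_bin_edges(np.concatenate([t_full, t_sub]), bins=50)
p, _ = np.histogram(t_full, bins=bins, density=True)
q, _ = np.histogram(t_sub, bins=bins, density=True)
eps = 1e-12  # regularize empty bins so the divergence stays finite
kl = entropy(p + eps, q + eps)

print(f"KS statistic = {ks_stat:.3f}, KL divergence = {kl:.3f}")
```

Repeating this for sub-networks of increasing size would reproduce the monotone trend the abstract reports: both measures shrink as the sub-network approaches the full network.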
