Abstract

In the last few years, studies have demonstrated the potential of downhole geophysical logging for estimating ore grades in the metalliferous mining environment. Geophysical logs provide valuable, relatively inexpensive information that can be linked to various aspects of a mining operation, such as orebody modelling, mine design and planning, grade control and production. Although financially attractive, downhole geophysical measurements provide only indirect estimates of ore grades and require integration with traditional assay data, which are measured on a different support. A related issue is optimal data collection, based on the value of the information downhole logs generate and on other cost-related considerations.

This study describes the conditional simulation of ore grades in an application stressing the integration of diamond-drilling assay data and downhole geophysical logs. More specifically, conductivity logs and assay data are integrated to assess the variability of copper grades at the Kidd Creek base metal mine, Ontario, Canada. Using the conductivity logs in their raw form presents problems, since these data were collected every five centimetres downhole, whereas the copper assays were acquired from drill core intervals of up to 1.5 m in length. A 'representative' conductivity value is therefore needed for each copper assay interval. These representative values are obtained as composites generated by a generalised power averaging that aims to maximise the information extracted from the conductivity data.

Co-indicator sequential indicator simulation (with the Markov-Bayes shortcut) is presented as one way that conductivity logs can be integrated with the copper assays. The technique does this not only by accounting for the spatial correlation of the random variable of interest (the so-called 'hard' data), but also by providing a means of incorporating secondary or 'soft' information.
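The generalised power averaging used above to composite the 5 cm conductivity readings into assay-interval values can be sketched as follows. The exponent `p` and the readings are purely illustrative, not values from the study:

```python
import numpy as np

def power_average(values, p, weights=None):
    """Generalised power (Hölder) mean of positive values.
    p=1 gives the arithmetic mean, p -> 0 the geometric mean,
    and p=-1 the harmonic mean."""
    v = np.asarray(values, dtype=float)
    if weights is None:
        w = np.full(v.size, 1.0 / v.size)
    else:
        w = np.asarray(weights, dtype=float) / np.sum(weights)
    if abs(p) < 1e-12:
        # Limit p -> 0: the weighted geometric mean
        return float(np.exp(np.sum(w * np.log(v))))
    return float(np.sum(w * v**p) ** (1.0 / p))

# Composite thirty 5 cm conductivity readings (illustrative numbers)
# over one 1.5 m assay interval:
readings = np.array([120.0, 95.0, 410.0, 88.0, 130.0, 76.0] * 5)
composite = power_average(readings, p=0.5)
```

The choice of exponent controls how strongly extreme conductivity spikes influence the composite; the study's own exponent would be selected to maximise information extraction from the conductivity data.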
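The Markov-Bayes shortcut just mentioned derives the soft-data variogram and the hard-soft cross-variogram by rescaling the hard-indicator variogram with a calibration parameter. A minimal sketch, assuming a single calibration parameter `B` and a spherical hard-indicator model (both hypothetical; nugget and sill handling is simplified):

```python
import numpy as np

def markov_bayes_models(gamma_hard, B):
    """Given a hard-indicator variogram model gamma_I(h) and a
    Markov-Bayes calibration parameter B, return the implied
    cross-variogram and soft-indicator variogram models:
        gamma_IY(h) = B   * gamma_I(h)
        gamma_Y(h)  = B^2 * gamma_I(h)   for h > 0
    (simplified sketch; real usage calibrates B per cutoff)."""
    gamma_cross = lambda h: B * gamma_hard(h)
    gamma_soft = lambda h: B * B * gamma_hard(h)
    return gamma_cross, gamma_soft

def spherical(h, sill=0.25, a=40.0):
    """Spherical variogram model with illustrative sill and range."""
    h = np.minimum(np.abs(h), a)
    return sill * (1.5 * h / a - 0.5 * (h / a) ** 3)

gamma_iy, gamma_y = markov_bayes_models(spherical, B=0.7)
```

This is why the shortcut avoids inferring and modelling a separate cross-variogram for every indicator cutoff: only the hard-indicator variograms and the calibration parameters are required.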
Using a Bayesian formalism, all the data, whether hard or soft, are encoded as local prior distributions. These prior distributions are then updated to posterior distributions that take into account all nearby sample data. The modelling of these posterior distributions is achieved through cokriging. A simple Markov-type hypothesis is assumed, which allows the variogram of the soft data and the cross-variogram of the hard and soft data to be derived directly from the variogram of the hard data. This avoids the task of inferring and modelling a series of cross-variograms.

The simulation algorithm is then used to generate copper simulations based on different combinations of copper assays and conductivity logs. The various parameters required by the algorithm are discussed and their values derived from the experimental data. Noteworthy among these parameters are the set of indicator cutoffs and the specification of the indicator variograms at the very high cutoffs. The cutoffs are chosen so that the quantity of metal is adequately characterised, while for the high-cutoff indicator variograms an iterative calibration procedure is used to ensure convergence to the declustered copper assay statistics. Validation of the simulations suggests excellent performance of the technique, which is shown to be practical for medium to large simulation sizes.

A Bayesian data worth methodology that aids in deciding between alternative courses of action in the face of uncertainty is presented. It addresses issues such as the action to be taken given an existing set of information, and whether more information should be collected before an action is decided on. This general methodology is then demonstrated in two case studies using data from the Kidd Creek study area. In the first study, the methodology is used to find the most profitable location for a mining stope in the study area.
The study shows that, in this instance, the most profitable stope location depends in part on the sampling campaign followed, that is, on the type and quantity of information collected. Despite this data dependency, however, the study demonstrates that the most cost-effective way of locating the stope is through a sampling campaign consisting of a mixture of copper assays and conductivity logs. In the second case study, the data worth methodology is used to examine the efficacy of using conductivity logs to classify blocks in a mining stope. It shows that, even when sampling costs are taken into account, it is slightly more cost-effective to base the block classifications solely on copper assay information, despite the greater relative cost of collecting this type of information.
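The data worth logic can be illustrated with a small preposterior-style calculation over a set of simulated grades: compare the expected profit of the best action under current uncertainty with the value achievable if the grades were known exactly, which bounds what any further sampling can be worth. All figures below are illustrative and not taken from the Kidd Creek studies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated copper grades (%Cu) for one block, standing in for the
# conditional simulations; the lognormal parameters are illustrative.
grades = rng.lognormal(mean=0.3, sigma=0.5, size=500)

def profit_if_mined(g, value_per_pct=60.0, mining_cost=90.0):
    """Profit per tonne of mining a block of grade g (%Cu);
    the economic figures are hypothetical."""
    return value_per_pct * g - mining_cost

# Prior value: with no further data, take the action ('mine' or
# 'leave') with the larger expected profit over the simulations.
prior_value = max(float(np.mean(profit_if_mined(grades))), 0.0)

# Value under perfect information: decide per realisation, then
# average -- the expected value of perfect information (EVPI)
# bounds the worth of any additional sampling.
perfect_value = float(np.mean(np.maximum(profit_if_mined(grades), 0.0)))
evpi = perfect_value - prior_value

# Collect more data only if its expected value exceeds its cost.
sampling_cost = 5.0
worth_sampling = evpi > sampling_cost
```

In the case studies this comparison is carried out for competing sampling campaigns (assays only, logs only, or a mixture), each with its own cost, rather than against perfect information alone.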
