Fault core materials (fault breccia and fault gouge) exhibit heterogeneous particle size distributions due to many factors, including the type of cataclasis, the degree of weathering, and the scale and mechanism of the fault system. When studying particle size distributions in fault core materials, however, there is no clear standard for the sample size that should be used for testing and analysis. In this study, we present a method to establish the ideal sample size by statistically assessing a suite of laboratory tests on 451 fault-core samples from 21 locations in South Korea. These samples were divided into five classes according to grain size. Weight ratios of gravel, sand, and silt/clay were calculated from laboratory tests on each sample, and the means and standard deviations were then assessed via analysis of variance and multiple comparison analysis. The analysis of variance indicated that classes 1–5 differ from one another in at least one factor. Tukey's HSD and Duncan's LSR tests were then applied to identify groups of classes that are statistically similar to each other. In this manner, it was found that classes 1 and 2 could be grouped together (group A), as could classes 3, 4, and 5 (group B). The means and distribution ranges of the standard deviations within each group indicated that group B, rather than group A, contained the sample sizes that best represent the site. Furthermore, class 3 (which had the smallest sample weight among the classes in group B) was determined to be the representative elementary volume (REV). This result is consistent with the reference sample size recommended for soil particle size analysis by the American Society for Testing and Materials (ASTM).
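The statistical workflow the abstract describes (one-way analysis of variance across grain-size classes, followed by post-hoc multiple comparison to find statistically similar groups) can be sketched as below. This is a minimal illustration, not the study's analysis: the weight-ratio data, class means, and sample counts are hypothetical placeholders, and Duncan's LSR test is omitted because neither SciPy nor statsmodels provides an implementation of it.

```python
# Minimal sketch: one-way ANOVA over five grain-size classes, then
# Tukey's HSD for post-hoc pairwise grouping. All numbers are synthetic
# placeholders, not the study's measurements.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Hypothetical sand weight ratios (%) for five sample-size classes;
# classes 1-2 and 3-5 are given similar means to mimic the A/B grouping.
classes = {f"class_{i}": rng.normal(loc=mu, scale=5.0, size=30)
           for i, mu in zip(range(1, 6), [48, 47, 40, 39, 41])}

# One-way ANOVA: do the class means differ in at least one factor?
f_stat, p_value = f_oneway(*classes.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD: which pairs of classes are statistically indistinguishable?
values = np.concatenate(list(classes.values()))
labels = np.repeat(list(classes.keys()), [len(v) for v in classes.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```

On data of this shape, pairs whose Tukey test fails to reject the null (e.g., class_1 vs. class_2) fall into the same group, which is the basis for the group A / group B partition described above.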