Abstract
Objective evaluation (OE) methods provide quantitative insight into how well time-history data from computational models match data from physical systems. Two feature-specific techniques commonly used for this purpose are CORA and the ISO/TS 18571 standard. These ostensibly objective techniques have algorithmic differences that lead to discrepancies when interpreting their results. The objectives of this study were (1) to apply both techniques to a dataset from a computational model and compare the scores, and (2) to survey subject matter experts (SMEs) to determine which OE method agrees more consistently with SME interpretation. The GHBMC male human model was used in simulations of biomechanics experiments, producing 58 time-history curves. Because both techniques produce scores based on specific features of the signal comparison (phase, size, and shape), 174 pairwise comparisons were made. Statistical analysis revealed significant differences between the two OE methods for each component rating metric. Surveyed SMEs (n = 40) scored how well the computational traces matched the experiments on the three rating metrics. SME interpretation was found to agree statistically with the ISO shape and phase metrics but differed significantly from the ISO size rating; SME interpretation agreed with the CORA size rating. The findings suggest that, when possible, engineers should use a mixed approach to reporting objective ratings, combining the ISO shape and phase methods with the CORA size method. We recommend weighting the metrics from greatest to least as shape, phase, and size. Given the general levels of agreement observed and the sample size, the results require nuanced interpretation.
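The phase/size/shape decomposition described above can be illustrated with a minimal sketch. Note this is not the CORA or ISO/TS 18571 algorithm (both use more elaborate corridor- and correlation-based formulations); it is a hypothetical simplification showing how a signal-pair comparison can be split into three sub-ratings, each scaled so that 1 indicates a perfect match.

```python
# Hedged sketch: simplified shape/phase/size sub-ratings for a pair of
# time-history curves. NOT the CORA or ISO/TS 18571 algorithms; it only
# illustrates the kind of feature decomposition both methods perform.
import numpy as np

def sub_ratings(test, ref):
    test = np.asarray(test, dtype=float)
    ref = np.asarray(ref, dtype=float)
    n = len(ref)
    # Phase: lag that maximizes the full cross-correlation, expressed
    # as a fraction of the signal length (1.0 = perfectly in phase).
    xcorr = np.correlate(test, ref, mode="full")
    lag = int(np.argmax(xcorr)) - (n - 1)
    phase = 1.0 - abs(lag) / n
    # Shape: normalized cross-correlation after removing that lag.
    shifted = np.roll(test, -lag)
    denom = np.linalg.norm(shifted) * np.linalg.norm(ref)
    shape = float(np.dot(shifted, ref) / denom) if denom else 0.0
    # Size: ratio of overall signal magnitudes (smaller over larger).
    a_t, a_r = np.abs(test).sum(), np.abs(ref).sum()
    size = min(a_t, a_r) / max(a_t, a_r) if max(a_t, a_r) else 0.0
    return {"shape": shape, "phase": phase, "size": size}

t = np.linspace(0.0, 1.0, 200)
ref = np.sin(2 * np.pi * 3 * t)
scores = sub_ratings(ref, ref)  # identical curves score ~1 on all three
```

A real rating scheme would then combine such sub-ratings into a weighted overall score; the study's recommendation corresponds to assigning the largest weight to shape, then phase, then size.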