Abstract
Reliable uncertainty quantification (UQ) for machine learning (ML) regression tasks has become the focus of many studies in materials and chemical science. It is now well understood that average calibration is insufficient, and most studies therefore also test conditional calibration with respect to uncertainty, i.e. consistency, most often via so-called reliability diagrams. There is, however, another dimension beyond average calibration: conditional calibration with respect to input features, i.e. adaptivity. In practice, adaptivity is the main concern of the end users of an ML-UQ method, who expect reliable predictions and uncertainties at any point in feature space. This article shows that consistency and adaptivity are complementary validation targets, and that good consistency does not imply good adaptivity. Adapted validation methods are proposed and illustrated on a representative example.
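The distinction between the two conditional-calibration targets can be illustrated with a minimal sketch. The synthetic data, the binning scheme, and the z-score variance statistic below are illustrative assumptions, not the article's own methodology: errors are drawn so that the uncertainty model is calibrated on average and within bins of predicted uncertainty (consistency), yet systematically under- or over-confident along an input feature (poor adaptivity).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data (assumed, not from the article):
n = 10_000
x = rng.uniform(0, 1, n)        # input feature
uE = rng.uniform(0.5, 1.5, n)   # predicted uncertainties
# Errors scaled by a feature-dependent factor: under-confident for small x,
# over-confident for large x, but roughly calibrated on average.
errors = rng.normal(0.0, uE * (0.5 + x))

z = errors / uE  # z-scores; Var(z) close to 1 indicates calibration

def binned_zvar(values, z, nbins=5):
    """Variance of z-scores within quantile bins of `values`
    (each entry close to 1 means conditional calibration)."""
    edges = np.quantile(values, np.linspace(0, 1, nbins + 1))
    idx = np.clip(np.digitize(values, edges[1:-1]), 0, nbins - 1)
    return np.array([z[idx == b].var() for b in range(nbins)])

print("average calibration, Var(z):", z.var())
print("consistency (bins of uE):   ", binned_zvar(uE, z))
print("adaptivity  (bins of x):    ", binned_zvar(x, z))
```

Because the miscalibration depends on x but not on uE, the uncertainty-binned variances stay near 1 (good consistency) while the feature-binned variances range roughly from 0.4 to 2 (poor adaptivity), which is the complementarity the abstract describes.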