Abstract

In this study we developed a novel method for reducing the variability in the outputs of different artificial neural network (ANN) configurations that have the same overall performance, as measured by the area under their receiver operating characteristic (ROC) curves. This variability can lead to inaccuracies in the interpretation of results when the outputs are employed as classification predictors. We extended a method previously proposed for reducing the variability in the performance of a classifier across data sets from different institutions to the outputs of ANN configurations. Our approach is based on shaping the output histogram of every ANN configuration to resemble that of a baseline ANN configuration. We tested the effectiveness of the technique using synthetic data generated from two two-dimensional isotropic Gaussian distributions and 100 ANN configurations. The proposed output calibration technique significantly reduced the median standard deviation of the ANN outputs from 0.010 before calibration to 0.006 after calibration. The standard deviation of the sensitivity of the 100 ANN configurations at the same decision threshold decreased significantly from 0.005 before calibration to 0.003 after calibration. Similarly, the standard deviation of their specificity values decreased significantly from 0.016 before calibration to 0.006 after calibration.
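The histogram-shaping step described above can be sketched as quantile (rank) matching: each configuration's outputs are mapped, through their empirical ranks, onto the sorted output values of the baseline configuration. This is a minimal illustrative sketch assuming NumPy; the function name and the half-sample rank convention are our own, not taken from the paper.

```python
import numpy as np

def calibrate_to_baseline(outputs, baseline_outputs):
    """Reshape the histogram of `outputs` to resemble that of
    `baseline_outputs` via quantile (rank) matching.

    The rank order of `outputs` is preserved, so threshold-based
    decisions remain consistent; only the value distribution changes.
    """
    outputs = np.asarray(outputs, dtype=float)
    baseline = np.sort(np.asarray(baseline_outputs, dtype=float))

    # Empirical quantile of each output within its own configuration
    ranks = np.argsort(np.argsort(outputs))
    quantiles = (ranks + 0.5) / len(outputs)

    # Quantile grid of the baseline configuration's outputs
    grid = (np.arange(len(baseline)) + 0.5) / len(baseline)

    # Map each quantile onto the corresponding baseline output value
    return np.interp(quantiles, grid, baseline)
```

After calibration, all configurations share (approximately) the baseline output distribution, so a single decision threshold yields more consistent sensitivity and specificity across configurations.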

