Abstract
This paper proposes a way of providing transparent and interpretable results for ELM models by adding confidence intervals to the predicted outputs. In supervised learning, outputs are often random variables: they may depend on information that is unavailable, be corrupted by noise, or arise from a projection function that is itself stochastic. The probability distribution of the outputs is input-dependent, and the observed output values are samples from that distribution. ELM, however, predicts deterministic outputs. The proposed method addresses this problem by estimating predictive Confidence Intervals (CIs) at a confidence level α, such that random output values fall between these intervals with probability α. Assuming that the outputs are normally distributed, only a standard deviation is needed to compute the CI of a predicted output (the predicted output itself is the mean). Our method provides CIs for ELM predictions by estimating the standard deviation of the random output for a particular input sample. It shows good results on both a toy dataset and a real skin segmentation dataset, and compares well with existing Confidence-weighted ELM methods. On the toy dataset, the predicted CIs accurately represent the variable variance of the outputs. On the real dataset, CIs improve the precision of a classification task at the cost of recall.
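To make the construction concrete, the following minimal sketch (not the paper's implementation) shows how a two-sided interval at confidence level α can be formed from a predicted mean and an estimated standard deviation under the normality assumption stated above. The names gaussian_ci, y_pred, and sigma_pred are illustrative placeholders for the ELM prediction and the per-sample standard deviation estimate.

```python
# Sketch: Gaussian confidence interval from a predicted mean and an
# estimated standard deviation (assumes the output is N(y_pred, sigma_pred^2)).
import numpy as np
from scipy.stats import norm

def gaussian_ci(y_pred, sigma_pred, alpha=0.95):
    """Return lower/upper bounds that contain the random output
    with probability alpha under the normality assumption."""
    # Two-sided quantile, e.g. ~1.96 for alpha = 0.95.
    z = norm.ppf(0.5 + alpha / 2.0)
    y_pred = np.asarray(y_pred, dtype=float)
    sigma_pred = np.asarray(sigma_pred, dtype=float)
    return y_pred - z * sigma_pred, y_pred + z * sigma_pred

# Example: a predicted output of 0.8 with estimated std 0.1 at alpha = 0.95.
lower, upper = gaussian_ci(0.8, 0.1, alpha=0.95)
print(lower, upper)  # roughly 0.604 and 0.996
```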