Abstract

Deep neural networks have been applied to big data scenarios and have achieved good results. However, high accuracy alone does not earn trust in high-risk decision scenarios: in medicine, autonomous driving, and other fields, every wrong decision can have incalculable consequences. For an image recognition model trained on big data to gain trust, it must simultaneously provide interpretable decisions and predictable risk. This paper introduces uncertainty into a self-explanatory image recognition model to show that the model can both interpret its decisions and predict risk. This approach makes the model's decisions and explanations trustworthy and enables early warning and detailed analysis of risky decisions. In addition, this paper describes how uncertainty and explanations can be used to optimize the model, which significantly improves its value in high-risk scenarios. The scheme proposed in this paper addresses the trust crisis caused by black-box image recognition models.
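The abstract does not specify how uncertainty is attached to the classifier; the sketch below illustrates one common approach, Monte Carlo dropout, in which several stochastic forward passes yield a mean prediction and an entropy-based uncertainty score that can trigger an early warning. The model architecture, sample count, and warning threshold are illustrative assumptions, not the paper's method.

```python
# A minimal sketch (not the paper's implementation) of attaching uncertainty
# to an image classifier via Monte Carlo dropout. All sizes and thresholds
# below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallClassifier(nn.Module):
    """Toy CNN with dropout so that stochastic forward passes differ."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Dropout(p=0.5), nn.Linear(16 * 4 * 4, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))


@torch.no_grad()
def predict_with_uncertainty(model, x, n_samples: int = 20):
    """Run several stochastic passes; return mean probabilities and entropy."""
    model.train()  # keep dropout active at inference time
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy


if __name__ == "__main__":
    model = SmallClassifier()
    images = torch.randn(4, 3, 32, 32)   # stand-in for a real image batch
    mean_probs, entropy = predict_with_uncertainty(model, images)
    risky = entropy > 1.5                 # hypothetical warning threshold
    print("predicted classes:", mean_probs.argmax(dim=-1).tolist())
    print("uncertainty (entropy):", entropy.tolist())
    print("flag for human review:", risky.tolist())
```

In a high-risk deployment, predictions whose uncertainty exceeds the chosen threshold would be deferred to a human reviewer rather than acted on automatically, which is the "early warning" behavior the abstract describes.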
