Abstract
In this paper, we present a theoretical discussion of uncertainty in deep learning neural networks based on classical Rademacher complexity and Shannon entropy. First, we show that classical Rademacher complexity and Shannon entropy are closely related quantities by their definitions. Second, based on Shannon's mathematical theory of communication [3], we derive a criterion that ensures AI correctness and accuracy in classification problems. Finally, building on Peter Bartlett's work in [1], we present both a relaxed condition and a stricter condition that guarantee correctness and accuracy in AI classification. By stating the criterion in terms of Shannon entropy, it becomes easier to explore analogous criteria in terms of other complexity measures, such as the Vapnik-Chervonenkis dimension and Gaussian complexity, by exploiting the known relations among these measures studied in references such as [2]. A close-to-1/2 criterion on Shannon entropy is derived in this paper for the theoretical investigation of AI accuracy and correctness in classification problems.
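The abstract's derivation is not reproduced here, but the two quantities it relates can be illustrated concretely. The sketch below, a hypothetical example with an arbitrary function class and sample (none of which come from the paper), computes the binary Shannon entropy, which peaks at p = 1/2, and a Monte Carlo estimate of the empirical Rademacher complexity, the expected supremum of the sign-correlation of a function class with random ±1 labels:

```python
import math
import random

def binary_entropy(p):
    """Binary Shannon entropy H(p) in bits; maximized (= 1 bit) at p = 1/2."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def empirical_rademacher(sample, function_class, trials=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity
    R_hat = E_sigma[ sup_f (1/n) * sum_i sigma_i * f(x_i) ],
    where the sigma_i are independent uniform +/-1 signs."""
    rng = random.Random(seed)
    n = len(sample)
    total = 0.0
    for _ in range(trials):
        sigma = [rng.choice((-1, 1)) for _ in range(n)]
        total += max(
            sum(s * f(x) for s, x in zip(sigma, sample)) / n
            for f in function_class
        )
    return total / trials

# Illustrative only: two threshold classifiers on a small 1-D sample.
sample = [0.1, 0.4, 0.6, 0.9]
fclass = [lambda x: 1.0 if x > 0.5 else -1.0,
          lambda x: 1.0 if x > 0.2 else -1.0]

print(binary_entropy(0.5))                      # 1.0 bit at p = 1/2
print(empirical_rademacher(sample, fclass))     # small positive value
```

A richer function class drives the Rademacher estimate toward 1 (the class can fit any random labeling), mirroring the intuition that maximal-entropy, coin-flip behavior corresponds to a classifier that carries no information about the labels.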