Abstract
The phenomenon of stochastic separability was revealed and used in machine learning to correct errors of Artificial Intelligence (AI) systems and to analyze AI instabilities. In high-dimensional datasets, under broad assumptions, each point can be separated from the rest of the set by a simple and robust Fisher discriminant (i.e., it is Fisher separable). Errors, or clusters of errors, can thus be separated from the rest of the data. The ability to correct an AI system also opens up the possibility of an attack on it, and high dimensionality induces vulnerabilities caused by the same stochastic separability that holds the keys to understanding the fundamentals of robustness and adaptivity in high-dimensional data-driven AI. To manage errors and analyze vulnerabilities, stochastic separation theorems should evaluate the probability that a dataset will be Fisher separable in a given dimensionality and for a given class of distributions. Explicit and optimal estimates of these separation probabilities are required, and this problem is solved in the present work. General stochastic separation theorems with optimal probability estimates are obtained for important classes of distributions: log-concave distributions, their convex combinations, and product distributions. The standard i.i.d. assumption is significantly relaxed. These theorems and estimates can be used both for the correction of high-dimensional data-driven AI systems and for the analysis of their vulnerabilities. A third area of application is the emergence of memories in ensembles of neurons, the phenomena of grandmother cells and sparse coding in the brain, and an explanation of the unexpected effectiveness of small neural ensembles in the high-dimensional brain.
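The abstract's central notion can be checked numerically. Below is a minimal sketch (not the authors' code): a point x is taken to be Fisher-separable from a point y, after centering, when the inner product (x, y) does not exceed alpha * (x, x) for a threshold alpha in (0, 1). The sample size n = 1000, dimension d = 100, and alpha = 0.8 are illustrative assumptions; for the uniform product distribution used here, essentially all points come out Fisher separable.

import numpy as np

rng = np.random.default_rng(0)
n, d, alpha = 1000, 100, 0.8              # sample size, dimension, threshold (illustrative)

X = rng.uniform(-1.0, 1.0, size=(n, d))   # a simple product distribution on a cube
X -= X.mean(axis=0)                       # center the data cloud

G = X @ X.T                               # Gram matrix: G[i, j] = (x_i, x_j)
diag = np.diag(G)                         # squared norms (x_i, x_i)

# x_i is Fisher-separable from the rest if (x_i, x_j) <= alpha * (x_i, x_i) for all j != i;
# mask the diagonal so each point is compared only with the other points
Goff = G.copy()
np.fill_diagonal(Goff, -np.inf)
separable = Goff.max(axis=1) <= alpha * diag

print(f"Fraction of Fisher-separable points: {separable.mean():.3f}")

Increasing n while keeping d fixed eventually breaks separability, which is exactly the trade-off the separation probability estimates in the paper quantify.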
Highlights
Data mining in the post-classical world
Big data 'revolution' and the growth of the data dimension are commonplace
Tools of first choice are Principal Component Analysis (PCA) with retention of the major components, the correlation transformation that maps the dataset into its Gram matrix, or their combination (for a case study see Moczko et al, 2016); a sketch of such a pipeline follows these highlights
Other examples of post-classical phenomena are the exponentially large sets of quasiorthogonal random vectors already mentioned, and stochastic separation in exponentially large datasets: with high probability, any sample point is linearly separable from the other points, and this separation can be performed by the simple and explicit Fisher discriminant (Gorban et al, 2018; Gorban and Tyukin, 2017; Gorban et al, 2016b)
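As referenced in the highlights above, here is a hedged sketch of the combined preprocessing: PCA with retained major components, followed by the correlation transformation that turns the reduced dataset into its Gram matrix. This is an illustrative reconstruction, not the pipeline of Moczko et al (2016); the number of retained components is an assumption.

import numpy as np

def pca_gram(X: np.ndarray, n_components: int) -> np.ndarray:
    """Project X onto its leading principal components, then return the Gram matrix."""
    Xc = X - X.mean(axis=0)                        # center the features
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # principal directions
    Z = Xc @ Vt[:n_components].T                   # scores on the major components
    Z /= np.linalg.norm(Z, axis=1, keepdims=True)  # normalize rows so Gram entries
    return Z @ Z.T                                 # become sample-sample correlations

X = np.random.default_rng(1).normal(size=(200, 50))
K = pca_gram(X, n_components=10)                   # 200 x 200 Gram (correlation) matrix

Normalizing the rows before forming the Gram matrix makes each entry a cosine similarity between samples, which is one natural reading of the "correlation transformation" named above.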
Summary
Big data 'revolution' and the growth of the data dimension are commonplace, but some implications of this growth are not so well known. Other examples of post-classical phenomena are the exponentially large sets of quasiorthogonal (almost orthogonal) random vectors we have already mentioned and stochastic separation in exponentially large datasets: with high probability, any sample point is linearly separable from the other points, and this separation can be performed by the simple and explicit Fisher discriminant (Gorban et al, 2018; Gorban and Tyukin, 2017; Gorban et al, 2016b). This strengthens the statements (Barany & Furedi, 1988; Donoho, 2000; Donoho & Tanner, 2009) that random points are extreme points. Fundamental open questions are: 1. Are there quantitatively accurate estimates of the boundary between the "classical" and the "post-classical" cases?
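The quasiorthogonality mentioned in the summary is also easy to observe numerically. The following minimal sketch (illustrative parameters, not from the paper) draws 500 random unit vectors and reports the largest pairwise |cosine|; in high dimension it concentrates near zero, which is why exponentially many pairwise almost-orthogonal vectors can coexist.

import numpy as np

rng = np.random.default_rng(2)
for d in (10, 100, 1000, 10000):
    V = rng.normal(size=(500, d))
    V /= np.linalg.norm(V, axis=1, keepdims=True)  # unit vectors on the sphere
    C = np.abs(V @ V.T)                            # |cosine| between all pairs
    np.fill_diagonal(C, 0.0)                       # ignore self-comparisons
    print(f"d={d:5d}: max |cos(angle)| over pairs = {C.max():.3f}")

The maximal pairwise |cosine| shrinks roughly like sqrt(log n / d), so for large d the 500 random vectors are nearly orthogonal.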