Abstract

Statistical learning theory, built on random samples from a probability space, is currently regarded as the best available theory of statistical learning from small samples and has become a new research focus in machine learning after neural networks. However, the theory cannot handle small-sample statistical learning problems on set-valued probability spaces, which arise widely in the real world. This paper discusses statistical learning theory on a special kind of set-valued probability space. We first give the definition of random vectors on such a space, together with their distribution functions and expectations; we then define the expected risk functional, the empirical risk functional, and the consistency of the principle (method) of empirical risk minimization (ERM) on set-valued probability space. Finally, we state and prove the key theorem of learning theory on set-valued probability space, which lays the theoretical foundation for establishing statistical learning theory on set-valued probability space.
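For orientation, the classical (Vapnik) forms of these functionals on an ordinary probability space, which the paper adapts to the set-valued setting, can be sketched as follows; the notation below is our own illustration, not the paper's definitions. Given a loss $L(y, f(x,\alpha))$ over a parametric family $\{f(\cdot,\alpha)\}$ and an i.i.d. sample $(x_1,y_1),\dots,(x_\ell,y_\ell)$ drawn from $P(x,y)$,
\[
R(\alpha) = \int L\bigl(y, f(x,\alpha)\bigr)\, dP(x,y),
\qquad
R_{\mathrm{emp}}(\alpha) = \frac{1}{\ell} \sum_{i=1}^{\ell} L\bigl(y_i, f(x_i,\alpha)\bigr),
\]
and the ERM principle chooses $\alpha_\ell = \arg\min_{\alpha} R_{\mathrm{emp}}(\alpha)$. ERM is called consistent if
\[
R(\alpha_\ell) \xrightarrow[\ \ell\to\infty\ ]{P} \inf_{\alpha} R(\alpha)
\quad\text{and}\quad
R_{\mathrm{emp}}(\alpha_\ell) \xrightarrow[\ \ell\to\infty\ ]{P} \inf_{\alpha} R(\alpha).
\]
In the classical theory, the key theorem states that (nontrivial) consistency of ERM is equivalent to uniform one-sided convergence of the empirical risks to the expected risks over the function class; the theorem proved in this paper presumably establishes the analogous equivalence with the expectation and risks redefined on the set-valued probability space.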
