THE KEY THEOREM OF LEARNING THEORY ON SET-VALUED PROBABILITY SPACE
Statistical Learning Theory, built on random samples drawn from a probability space, is currently regarded as the best available theory of statistical learning from small samples and has become a new hot field in machine learning after neural networks. However, the theory cannot handle small-sample statistical learning problems on set-valued probability spaces, which exist widely in the real world. This paper discusses statistical learning theory on a special kind of set-valued probability space. First, we give the definitions of random vectors and of the distribution function and expectation of a random vector; we then define the expected risk functional, the empirical risk functional, and the consistency of the principle (method) of empirical risk minimization (ERM) on a set-valued probability space. Finally, we state and prove the key theorem of learning theory on a set-valued probability space, which lays the theoretical foundation for establishing statistical learning theory on set-valued probability spaces.
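For orientation, the classical key theorem that the paper generalizes can be sketched as follows. This is the standard statement on an ordinary probability space (following Vapnik's formulation), not the paper's set-valued version; the symbols $Q(z,\alpha)$, $\Lambda$, and $F(z)$ are the usual loss function, parameter set, and distribution function assumed here for illustration.

```latex
% Expected risk and empirical risk for a loss Q(z, alpha), alpha in Lambda:
\[
R(\alpha) = \int Q(z,\alpha)\, dF(z), \qquad
R_{\mathrm{emp}}(\alpha) = \frac{1}{\ell}\sum_{i=1}^{\ell} Q(z_i,\alpha).
\]
% Classical key theorem: the ERM principle is (nontrivially) consistent
% if and only if one-sided uniform convergence holds:
\[
\lim_{\ell\to\infty}
P\Bigl\{\, \sup_{\alpha\in\Lambda}\bigl(R(\alpha) - R_{\mathrm{emp}}(\alpha)\bigr) > \varepsilon \,\Bigr\} = 0
\quad \text{for every } \varepsilon > 0.
\]
```

The paper's contribution is to restate and prove this equivalence when the underlying measure is set-valued, with distances between sets measured by the Hausdorff metric.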
Keywords: set-valued probability; Hausdorff metric; the principle of empirical risk minimization; the key theorem
JI-QIANG CHEN MING-HU HA LI-FANG ZHENG
College of Mathematics and Computer Science, Hebei University, Baoding 071002, China
International conference
2007 International Conference on Machine Learning and Cybernetics (the 6th IEEE International Conference on Machine Learning and Cybernetics)
Hong Kong
English
2778-2783
2007-08-19 (date the record was first posted on the Wanfang platform; not necessarily the paper's publication date)