Multiple Kernel Learning Improved by MMD
When training and testing data are drawn from different distributions, classification performance degrades. This problem typically arises in sample selection bias or transfer learning scenarios. In this paper, we propose a novel multiple kernel learning framework improved by Maximum Mean Discrepancy (MMD) to address it. The new model not only exploits the capacity of kernel learning to construct a nonlinear hyperplane that maximizes the separation margin, but also simultaneously reduces the distribution discrepancy between training and testing data, as measured by MMD. The approach is formulated as a bi-objective optimization problem, which is then solved by an efficient algorithm based on gradient descent and quadratic programming. Extensive experiments on UCI and text datasets show that the proposed model outperforms the traditional multiple kernel learning model in sample selection bias and transfer learning scenarios.
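As background for the discrepancy term the abstract refers to, the following is a minimal sketch (not the paper's implementation) of the standard biased empirical estimate of squared MMD between two samples, using an RBF kernel; the `gamma` bandwidth and the sample sizes are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel matrix: k(a, b) = exp(-gamma * ||a - b||^2)
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    # Biased empirical estimate of squared MMD:
    # mean k(x, x') + mean k(y, y') - 2 * mean k(x, y)
    Kxx = rbf_kernel(X, X, gamma)
    Kyy = rbf_kernel(Y, Y, gamma)
    Kxy = rbf_kernel(X, Y, gamma)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))        # "training" sample
Y_same = rng.normal(0.0, 1.0, size=(200, 2))   # same distribution
Y_shift = rng.normal(3.0, 1.0, size=(200, 2))  # shifted distribution

print(mmd2(X, Y_same))   # small: distributions match
print(mmd2(X, Y_shift))  # large: distributions differ
```

A small MMD value indicates the two samples are likely drawn from the same distribution; the paper's framework minimizes such a term jointly with the margin objective.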
Keywords: Kernel Learning; Maximum Mean Discrepancy
Jiangtao Ren Zhou Liang Shaofeng Hu
School of Software, Sun Yat-sen University, China Department of Computer Science, Sun Yat-sen University, China
International conference
6th International Conference on Advanced Data Mining and Applications (ADMA 2010)
Chongqing, China
English
63-74
2010-11-19 (date first posted on the Wanfang platform; not the paper's publication date)