Feature Selection via Vectorizing Features Discriminative Information
Feature selection is a popular technique for reducing dimensionality. Commonly, features are evaluated with univariate scores according to their classification abilities, and the high-scoring ones are preferred and selected. However, this strategy has two flaws. First, feature complementarity is ignored: a subspace constructed from partially predominant but complementary features is suitable for the recognition task, yet such a feature subset cannot be selected by this strategy. Second, feature redundancy for classification cannot be measured accurately; this redundancy weakens the subset's discriminative performance, but it cannot be reduced by this strategy. In this paper, a new feature selection method is proposed. It assesses each feature's discriminative information for every class and vectorizes this information. Features are then represented by their corresponding discriminative-information vectors, and the most distinct ones are selected. Both feature complementarity and classification redundancy can be easily measured by comparing the differences between these new vectors. Experimental results on both low-dimensional and high-dimensional data confirm the new method's effectiveness.
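The idea in the abstract can be sketched in two steps: build a per-class discriminative vector for each feature, then select features whose vectors differ most from those already chosen. The sketch below is illustrative only; the paper's actual discriminative measure is not given in this record, so a one-vs-rest standardized mean difference stands in for it, and a greedy farthest-vector rule stands in for the selection criterion.

```python
import math

def discriminative_vectors(X, y):
    """For each feature, build a vector of per-class scores.

    X: list of samples (each a list of feature values); y: class labels.
    The score used here (one-vs-rest standardized mean difference) is an
    illustrative proxy, not the paper's exact discriminative measure.
    """
    classes = sorted(set(y))
    n_features = len(X[0])
    vectors = []
    for j in range(n_features):
        col = [row[j] for row in X]
        mu = sum(col) / len(col)
        sd = math.sqrt(sum((v - mu) ** 2 for v in col) / len(col)) + 1e-12
        vec = []
        for c in classes:
            ins = [v for v, lab in zip(col, y) if lab == c]
            outs = [v for v, lab in zip(col, y) if lab != c]
            # Per-class discriminative score for feature j.
            vec.append(abs(sum(ins) / len(ins) - sum(outs) / len(outs)) / sd)
        vectors.append(vec)
    return vectors

def select_distinct(vectors, n_select):
    """Greedily pick the most distinct discriminative vectors.

    Seed with the strongest feature overall, then repeatedly add the
    feature whose vector is farthest (Euclidean) from every selected
    one -- rewarding complementarity and penalizing redundancy.
    """
    selected = [max(range(len(vectors)), key=lambda j: sum(vectors[j]))]
    while len(selected) < n_select:
        rest = [j for j in range(len(vectors)) if j not in selected]
        selected.append(max(
            rest,
            key=lambda j: min(math.dist(vectors[j], vectors[s])
                              for s in selected),
        ))
    return selected
```

For example, on a toy dataset where feature 0 separates class 0 and feature 1 separates class 1, the two features get dissimilar vectors and both are retained, whereas a redundant copy of feature 0 would sit at distance zero from it and be skipped.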
Keywords: Feature selection; Discriminative information; Feature complementarity; Feature redundancy
Jun Wang Hengpeng Xu Jinmao Wei
College of Computer and Control Engineering,Nankai University,Tianjin 300350,China
International Conference
International Asia-Pacific Web Conference (the 18th International Asia-Pacific Web Conference)
Suzhou
English
493-505
2016-09-23 (date first posted on the Wanfang platform, which does not necessarily reflect the paper's publication date)