Conference Paper

Extensions of LDA by PCA Mixture Model and Class-wise Features

LDA (Linear Discriminant Analysis) is a data discrimination technique that seeks a transformation maximizing the ratio of the between-class scatter to the within-class scatter. While it has been applied successfully in several applications, it has two limitations, both related to underfitting. First, it fails to discriminate data with complex distributions, since the data in each class are assumed to follow a Gaussian distribution; second, it can lose class-wise information, since it produces only one transformation over the entire set of classes. We propose three extensions of LDA to overcome these problems. The first extension addresses the first problem by modeling the within-class scatter with a PCA mixture model, which can represent more complex distributions. The second extension addresses the second problem by applying a different transformation to each class in order to provide class-wise features. The third extension combines these two modifications by representing each class with a PCA mixture model and applying a different transformation to each mixture component. It is shown that all our proposed extensions of LDA outperform LDA in terms of classification error on handwritten digit recognition and alphabet recognition.
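For reference, the LDA criterion mentioned in the abstract can be written in its standard textbook form below; the notation ($S_B$, $S_W$, $\mathbf{m}_i$, etc.) is conventional and is not taken from the paper itself.

\[
S_B = \sum_{i=1}^{C} N_i\,(\mathbf{m}_i - \mathbf{m})(\mathbf{m}_i - \mathbf{m})^{\top},
\qquad
S_W = \sum_{i=1}^{C} \sum_{\mathbf{x}\in\mathcal{D}_i} (\mathbf{x} - \mathbf{m}_i)(\mathbf{x} - \mathbf{m}_i)^{\top},
\]
\[
W^{*} = \arg\max_{W}\;
\frac{\bigl|\,W^{\top} S_B\, W\,\bigr|}{\bigl|\,W^{\top} S_W\, W\,\bigr|},
\]

where $C$ is the number of classes, $N_i$ and $\mathbf{m}_i$ are the size and mean of class $i$ (with sample set $\mathcal{D}_i$), and $\mathbf{m}$ is the global mean. In these terms, the abstract's first extension replaces the single-Gaussian assumption behind $S_W$ with a PCA mixture model, while the second and third extensions learn a separate transformation $W$ per class or per mixture component instead of one global $W$.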

Hyun-Chul Kim, Daijin Kim, Sung Yang Bang

Department of Computer Science and Engineering, Pohang University of Science and Technology, San 31, Hyoja-Dong, Nam-Gu, Pohang, 790-784, Korea

International Conference

8th International Conference on Neural Information Processing (ICONIP 2001)

Shanghai

English

425-430

2001-11-14 (date the paper was first posted on the Wanfang platform; not necessarily its publication date)