
Online Incremental EM Training of GMM and its Application to Speech Processing Applications

The traditional Expectation-Maximization (EM) training of a Gaussian Mixture Model (GMM) is essentially a batch-mode procedure that requires multiple data samples of sufficient size to update the model parameters. This severely limits the deployment and adaptation of GMMs in many real-time online systems, where newly observed data samples are expected to be incorporated into the system as soon as they become available, which would otherwise require retraining the model. This paper presents a new online incremental EM training procedure for GMMs, which performs the EM training incrementally and can therefore adapt the GMM online, sample by sample. The proposed method is built on two kinds of EM algorithms for GMMs, namely Split-and-Merge EM and the traditional EM. Experiments on both synthetic data and a speech processing task demonstrate the advantages and efficiency of the new method.
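The abstract does not give the update equations, but a common way to realize sample-by-sample EM adaptation of a GMM is stepwise (incremental) EM, which interpolates running sufficient statistics with the statistics of each new sample. The sketch below is a minimal, hypothetical illustration of that idea for a one-dimensional GMM; the class name, the learning-rate schedule, and all parameter choices are assumptions, not the paper's actual procedure.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """Univariate Gaussian density, evaluated element-wise."""
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

class OnlineGMM:
    """Stepwise (incremental) EM for a 1-D GMM: one sample per update.

    Keeps running sufficient statistics (soft counts, first and second
    moments) per component and refreshes the parameters after each sample.
    This is an illustrative sketch, not the paper's algorithm.
    """

    def __init__(self, means, variances, weights):
        self.means = np.asarray(means, dtype=float)
        self.vars = np.asarray(variances, dtype=float)
        self.weights = np.asarray(weights, dtype=float)
        # Initialize running sufficient statistics from the starting model.
        self.s0 = self.weights.copy()                       # soft counts
        self.s1 = self.s0 * self.means                      # sum of x
        self.s2 = self.s0 * (self.vars + self.means ** 2)   # sum of x^2

    def update(self, x, eta):
        # E-step for the single sample: component responsibilities.
        p = self.weights * gaussian_pdf(x, self.means, self.vars)
        r = p / p.sum()
        # Stepwise interpolation of the sufficient statistics.
        self.s0 = (1.0 - eta) * self.s0 + eta * r
        self.s1 = (1.0 - eta) * self.s1 + eta * r * x
        self.s2 = (1.0 - eta) * self.s2 + eta * r * x ** 2
        # M-step: re-derive parameters from the running statistics.
        self.weights = self.s0 / self.s0.sum()
        self.means = self.s1 / self.s0
        self.vars = np.maximum(self.s2 / self.s0 - self.means ** 2, 1e-6)
```

Feeding a stream of samples with a decaying step size (e.g. `eta = (t + 2) ** -0.6`) lets the mixture track the data without ever storing a batch, which is the online property the abstract claims for its method.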

Keywords: EM, GMM, adaptation, unsupervised adaptation

Yongxin Zhang Lixian Chen Xin Ran

Qualcomm Incorporated R&D, 5575 Morehouse Dr., San Diego, CA 92121, U.S.A.
Department of Mathematics, California State University San Marcos, San Marcos, CA 92096, U.S.A.
Merchant Marine College, Shanghai Maritime University, Shanghai 200135, China

International conference

2010 IEEE 10th International Conference on Signal Processing (ICSP 2010)

Beijing

English

pp. 1309-1312

2010-08-24 (date the record was first posted on the Wanfang platform; this may differ from the paper's publication date)