Conference Topic

Speech Visualization Simulation Research for Deaf-mute

Deaf-mutes have a strong advantage in visual identification ability and visual memory for color. A new speech visualization method for the deaf-mute is proposed, which creates readable patterns by integrating different speech features into a single picture. First, a series of preprocessing steps is applied to the speech signal. Second, features are extracted: the three formant features are mapped to the principal color information, the pronunciation length is mapped to the width information, and the harmonic intensity is mapped to the length information; all features are then used as the inputs of a neural network, whose outputs are mapped to the texture information. The visualized speech was evaluated in a preliminary test. The results show that the method is highly robust: the correct-answer rate reaches 94.56% for vowels and consonants, 85.75% for two-bopomofo syllables, and 78.05% for three-bopomofo syllables.
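As a rough illustration of the feature-to-pattern mapping described in the abstract, the sketch below (not the authors' code) estimates the three formants via LPC root-finding and maps them, together with the pronunciation length and an RMS-energy proxy for harmonic intensity, to color, width, and length attributes. The function name, the use of librosa, and the normalization constants are assumptions for illustration only; the neural-network texture step is omitted.

# A minimal sketch, assuming a mono 16 kHz recording of one syllable and the
# librosa library; names and constants are illustrative, not the authors' implementation.
import numpy as np
import librosa

def visualize_syllable(y, sr=16000):
    """Map one syllable to the color/width/length attributes of its pattern."""
    # Formants F1-F3 estimated from the angles of LPC polynomial roots,
    # then crudely normalized into a color triple (the "principal color").
    a = librosa.lpc(y, order=12)
    roots = [r for r in np.roots(a) if np.imag(r) > 0]
    freqs = sorted(np.angle(r) * sr / (2 * np.pi) for r in roots)
    f1, f2, f3 = freqs[:3]
    color = (min(f1 / 1000.0, 1.0), min(f2 / 3000.0, 1.0), min(f3 / 4000.0, 1.0))

    # Pronunciation length (duration in seconds) -> pattern width.
    width = len(y) / sr

    # Harmonic intensity, approximated here by RMS energy -> pattern length.
    length = float(np.sqrt(np.mean(y ** 2)))

    # In the paper these features also feed a neural network whose outputs
    # determine the texture; that step is omitted in this sketch.
    return {"color": color, "width": width, "length": length}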

speech signal; integrating feature; neural network; visualization

WANG Jian; HAN Zhi-qiang; GAO Li; SHEN Sheng-bao; HAN Zhi-yan

College of Information Science & Engineering, Bohai University, Jinzhou, Liaoning, China; Power Workshop of Golmud Oil Refinery, PetroChina Qinghai Oilfield Company, Geermu, Qinghai, China; No.2 Workshop of Golmud Oil Refinery, PetroChina Qinghai Oilfield Company, Geermu, Qinghai, China; College of Information Science & Engineering, Bohai University

International Conference

The 2010 International Conference on Computer Application and System Modeling (ICCASM 2010)

Taiyuan

English

618-621

2010-10-22 (date first posted on the Wanfang platform; not the publication date of the paper)