Conference Paper

Real-world Auditory Perception Based on Auditory Feature Binding

When we deliberately listen to a target, its signal may be accompanied by competing signals or noise. An effective solution is to extract the target signal from background noise and interference based on either location attributes or source attributes. Location attributes typically involve the arrival angle and the distance between the listener and the sound source. Source attributes include characteristics specific to a signal, such as pitch, intensity, timbre, or statistical properties that differentiate signals. This paper describes a novel approach to analyzing and understanding real-world audition, which at the same time provides an effective approach to auditory scene analysis. This idea and approach is called auditory feature binding. It can be used in machine perception, giving a machine the ability to extract the attended target from competing signals or noise. Auditory elements and segments can be organized in a non-linear way, moving from primary audition to higher-level auditory perception. In addition, we will combine it with visual feature binding to realize cooperative management of audio-visual cross-modal signals, as emphasized in the National Science Fund of China project "Research on Synergic Learning Algorithm of Audition-Vision Cross-modal Coherence" (#60873139).
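The abstract's idea of extracting a target from competing signals by a source attribute can be illustrated with a minimal sketch. This is not the paper's method; it is a hypothetical example in which the target's pitch (220 Hz, an assumed value) is known, and the mixture's spectrum is masked around that pitch to recover the target from an interfering tone.

```python
import numpy as np

# Hypothetical illustration: separate a target from an interferer using a
# source attribute (pitch) by masking FFT bins near the target's frequency.
fs = 8000                                  # sample rate in Hz (assumed)
t = np.arange(fs) / fs                     # one second of samples
target = np.sin(2 * np.pi * 220 * t)       # target: 220 Hz tone
interferer = np.sin(2 * np.pi * 900 * t)   # competing signal: 900 Hz tone
mixture = target + interferer

# Keep only spectral energy within 50 Hz of the target's known pitch.
spectrum = np.fft.rfft(mixture)
freqs = np.fft.rfftfreq(len(mixture), d=1 / fs)
mask = np.abs(freqs - 220) <= 50
recovered = np.fft.irfft(spectrum * mask, n=len(mixture))

# The recovered waveform correlates strongly with the original target.
corr = np.corrcoef(recovered, target)[0, 1]
print(round(corr, 3))
```

In a real auditory scene the target's pitch varies over time and overlaps spectrally with interference, which is why the paper argues for binding multiple features (location and source attributes together) rather than relying on a single fixed filter.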

feature binding; real-world audition; auditory scene analysis; perception; cross-modal cognition

Guangping Zhuo, Xueli Yu

College of Computer and Software, Taiyuan University of Technology, 79 Yingze West Street, Taiyuan

International Conference

International Conference on Computational Aspects of Social Networks (CASoN 2010)

Taiyuan

English

351-354

2010-09-26 (date first posted on the Wanfang platform; not necessarily the paper's publication date)