Conference Paper

MULTIMODAL EMOTION ESTIMATION AND EMOTIONAL SYNTHESIS FOR AN INTERACTIVE VIRTUAL AGENT

  In this study, we create a 3D interactive virtual character based on multi-modal emotion recognition and rule-based emotional synthesis techniques. The agent estimates the user's emotional state by combining information from audio and facial expression with CART (Classification and Regression Trees) and boosting. For the agent's output module, the voice is generated from freely given text by a TTS (Text-to-Speech) system. The synchronous visual information of the agent, including facial expression, head motion, gesture and body animation, is generated by multi-modal mapping from a motion capture database. A high-level behavior markup language (hBML), which contains five keywords, is used to drive the animation of the virtual agent for emotional expression. Experiments show that this virtual character is considered natural and realistic in multimodal interaction environments.
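The abstract does not detail how CART and boosting are combined for emotion estimation. The following is a minimal sketch, assuming feature-level fusion of audio and facial features and AdaBoost over decision stumps (depth-1 CART trees); the feature names and the synthetic two-class ("happy" vs. "neutral") setup are hypothetical illustrations, not the authors' actual pipeline.

```python
import numpy as np

def train_adaboost_stumps(X, y, n_rounds=15):
    """AdaBoost with decision stumps (depth-1 CART trees). Labels y in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                     # sample weights
    stumps = []                                 # (feature, threshold, polarity, alpha)
    for _ in range(n_rounds):
        best, best_err = None, np.inf
        for j in range(d):                      # exhaustive stump search
            for t in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - t) > 0, 1, -1)
                    err = np.sum(w[pred != y])
                    if err < best_err:
                        best_err, best = err, (j, t, pol)
        err = max(best_err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # stump weight
        j, t, pol = best
        pred = np.where(pol * (X[:, j] - t) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)          # re-weight misclassified samples
        w /= w.sum()
        stumps.append((j, t, pol, alpha))
    return stumps

def predict_adaboost(stumps, X):
    """Weighted vote of all stumps."""
    score = np.zeros(X.shape[0])
    for j, t, pol, alpha in stumps:
        score += alpha * np.where(pol * (X[:, j] - t) > 0, 1, -1)
    return np.where(score >= 0, 1, -1)

# Synthetic demo: fuse (hypothetical) audio and facial features at feature level.
rng = np.random.default_rng(0)
n = 200
audio = rng.normal(size=(n, 2))    # e.g. pitch, energy (hypothetical)
face = rng.normal(size=(n, 2))     # e.g. mouth, brow measurements (hypothetical)
X = np.hstack([audio, face])       # feature-level multimodal fusion
y = np.where(audio[:, 0] + face[:, 0] > 0, 1, -1)  # "happy" vs "neutral"

stumps = train_adaboost_stumps(X, y, n_rounds=15)
acc = np.mean(predict_adaboost(stumps, X) == y)
```

Each boosting round re-weights the samples the previous stumps misclassified, so later stumps focus on the harder audio/face feature combinations; the final decision is a confidence-weighted vote.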

Keywords: Interactive virtual character; Multi-modal; Face animation; Body movements; CART; Boosting

Minghao Yang Jianhua Tao Hao Li Kaihui Mu

The National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, 100195, China

International Conference

2012 2nd IEEE International Conference on Cloud Computing and Intelligence Systems (IEEE CCIS 2012)

Hangzhou

English

Pages 239-244

2012-10-30 (date the record first went online on the Wanfang platform; not necessarily the paper's publication date)