A Vision-Based Real-Time Motion Synthesis System
This paper presents the design of a real-time, vision-based motion synthesis system. The system requires the user to wear markers of a specific color. On this basis, several novel algorithms are used for feature detection and for feature tracking under occlusion, estimating the velocity of missing features from prior, smoothness, and fitness terms. These algorithms ensure accurate, low-cost reconstruction of the 3D points in real time. The low-dimensional control signals derived from the user's marker points are first used to construct a series of local models. When constructing these local models, the motion capture data are preprocessed into a k-nearest-neighbor graph and stored in a KD-tree so that model building remains real-time. In the animation synthesis phase, locally weighted linear regression is used to synthesize the animation data closest to the current pose. Results show that the system can successfully synthesize three kinds of motion: running, walking, and jumping.
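The synthesis step described above (a k-nearest-neighbor lookup in a KD-tree over the mocap database, followed by locally weighted linear regression to predict the full pose from the low-dimensional control signal) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the Gaussian kernel weighting, the bandwidth, and the data shapes are assumptions; only the KD-tree neighbor lookup and the locally weighted linear fit correspond to techniques named in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_kdtree(control_signals):
    """Store the low-dimensional control signals of the mocap database in a KD-tree."""
    return cKDTree(control_signals)

def synthesize_pose(tree, control_signals, poses, query, k=30, bandwidth=0.1):
    """Locally weighted linear regression: predict a full pose for the current
    control signal from its k nearest neighbors in the mocap database.
    (k and bandwidth are assumed, not values from the paper.)"""
    dists, idx = tree.query(query, k=k)           # k-nearest-neighbor lookup
    X = control_signals[idx]                      # (k, d) neighbor control signals
    Y = poses[idx]                                # (k, D) corresponding full poses
    w = np.exp(-(dists / bandwidth) ** 2)         # Gaussian kernel weights (assumed)

    # Augment with a bias column and solve the weighted least-squares problem.
    Xa = np.hstack([X, np.ones((k, 1))])
    W = np.sqrt(w)[:, None]
    beta, *_ = np.linalg.lstsq(W * Xa, W * Y, rcond=None)  # local linear model

    q = np.append(query, 1.0)
    return q @ beta                               # synthesized full-body pose

# Example with synthetic data: 1000 mocap frames, 6-D control signal, 60-D pose.
rng = np.random.default_rng(0)
ctrl = rng.standard_normal((1000, 6))
pose = rng.standard_normal((1000, 60))
tree = build_kdtree(ctrl)
new_pose = synthesize_pose(tree, ctrl, pose, query=ctrl[0])
```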
feature tracking; velocity estimation; 3D point reconstruction; KD-tree; locally weighted linear regression
Xin Wang, Minqian Liu, Sheng Liu, Qing Ma
College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, China
International conference
Hangzhou
English
1126-1131
2012-10-28 (date first posted on the Wanfang platform; not the paper's publication date)