Conference Paper

Research on Spatial Transformation in Image Based on Deep Learning

  In the field of computer graphics, synthesizing a new view of a 3D object from a single-perspective image is an important problem. Part of the object is unobservable, since mapping 3D objects into image space results in partial occlusion or self-occlusion. Synthesis therefore requires inferring the spatial structure and pose of the object, and the uncertainty caused by occlusion is a central difficulty. In this paper, we address the problem with a convolutional neural network (CNN) trained on a dataset of images containing multiple chairs. First, building on related networks, we propose a novel multi-parallel, multi-level encoding-decoding network that transforms a single-perspective image together with angle semantic information into a synthesized new-perspective image in an end-to-end manner. Second, we construct a dataset and train the network on it. Finally, we show that the network achieves smoother edges and higher precision in image synthesis than state-of-the-art networks.
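The paper's multi-parallel, multi-level architecture is not detailed in this abstract. Purely as an illustration of the described data flow (single-view image plus angle semantic information in, new-view image out), the conditioning scheme can be sketched with a toy fully-connected encoder-decoder; all sizes, weights, and names below are hypothetical and the weights are untrained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 32x32 grayscale input, 16-dim latent, 8 discrete view angles.
IMG, LATENT, ANGLES = 32 * 32, 16, 8

# Randomly initialized matrices stand in for trained network parameters.
W_enc = rng.standard_normal((IMG, LATENT)) * 0.01           # encoder: image -> latent code
W_dec = rng.standard_normal((LATENT + ANGLES, IMG)) * 0.01  # decoder: latent + angle -> image

def synthesize_view(image, angle_idx):
    """Map a flattened single-view image and a target angle index to a new-view image."""
    z = np.tanh(image @ W_enc)                 # encode object appearance into a latent code
    angle = np.eye(ANGLES)[angle_idx]          # one-hot "angle semantic information"
    out = np.tanh(np.concatenate([z, angle]) @ W_dec)  # decode conditioned on the target angle
    return out

img = rng.random(IMG)
new_view = synthesize_view(img, angle_idx=3)
print(new_view.shape)  # (1024,)
```

A real implementation would replace the dense layers with the paper's convolutional encoding-decoding branches, but the end-to-end conditioning on angle semantics follows the same pattern.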

image space spatial transformation, semantic information, image synthesis

Peng Gao, Qingxuan Jia

Laboratory of Space Robot, Beijing University of Posts and Telecommunications, No.10 Xitucheng Road, Haidian District, Beijing, China

International Conference

2019 2nd International Conference on Mechanical, Electronic and Engineering Technology (MEET 2019)

Xi'an

English

471-480

2019-01-19 (date the paper first appeared on the Wanfang platform; not necessarily its publication date)