Conference Proceedings

Content-Aware Face Blending by Label Propagation

  Face blending is critical for various facial editing applications; its goal is to transfer the facial appearance of a reference image to a target image seamlessly. However, when there are significant illumination or color differences between the reference and the target, visual artifacts may be introduced into the result. To tackle this problem, we propose content-aware masks that adaptively adjust the facial lighting and the blended region to achieve seamless face blending. To generate content-aware masks with good visual consistency, we formulate mask generation as a label propagation process from a semi-supervised learning perspective, in which the intensities of the initialized masks are propagated to the whole masks based on the local visual similarity of the images. We then construct a content-aware face blending framework that consists of three stages. First, the facial regions of the reference and the target are aligned according to detected facial landmarks. Second, a facial quotient image and a binary mask are obtained as the initialized masks, and the content-aware masks for illumination and region adjustment are generated using the label propagation model with different guided features. Finally, we blend the reference into the target using the generated masks to produce the final result. Experimental results show the effectiveness and robustness of our method on different image-based facial rendering tasks.
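The propagation step described above can be sketched with a standard graph-based semi-supervised formulation (in the style of Zhou et al.'s label spreading): initialized mask intensities at "labeled" pixels are diffused to the rest of the mask through an affinity graph built from local visual features. The Gaussian affinity, the parameter values, and the closed-form solve below are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def propagate_labels(features, init_mask, labeled, alpha=0.9, sigma=0.5):
    """Propagate initial mask intensities over a feature-similarity graph.

    features  : (n, d) array, one local visual feature vector per pixel/node
    init_mask : (n,) array, initialized mask intensities (meaningful where labeled)
    labeled   : (n,) boolean array, True where the initial mask is trusted
    Returns a propagated (n,) mask (hypothetical sketch, not the paper's code).
    """
    n = features.shape[0]
    # Gaussian affinity between feature vectors; zero out self-affinity.
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetric normalization S = D^{-1/2} W D^{-1/2}.
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    # Seed vector: known mask intensities at labeled nodes, zero elsewhere.
    Y = np.where(labeled, init_mask, 0.0)
    # Closed form of the fixed point of F <- alpha * S @ F + (1 - alpha) * Y.
    F = np.linalg.solve(np.eye(n) - alpha * S, (1.0 - alpha) * Y)
    return F
```

For a toy 1-D "image" whose features vary smoothly, seeding only the two endpoints yields a mask that rises smoothly toward the high-intensity seed, which is the behavior the content-aware masks rely on for visual consistency.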

Image-based rendering; Label propagation; Face transfer

Lingyu Liang, Xinglin Zhang

South China University of Technology, Guangzhou, China

International conference

Chinese Conference on Pattern Recognition and Computer Vision (PRCV 2018)

Guangzhou

English

pp. 173-182

2018-11-23 (date the record first appeared on the Wanfang platform; not necessarily the paper's publication date)