Conference topic

Multilevel Residual Learning for Single Image Super Resolution

  Single image super-resolution (SISR) methods based on deep learning techniques, especially convolutional neural networks (CNNs) and residual learning, have achieved remarkable results compared with traditional methods. Most current work focuses on structural designs that increase the depth of the entire network and thereby improve model performance. However, it is also important to improve the efficiency of model parameters, especially when resources are limited. To improve model performance while the number of parameters is kept relatively small and fixed, we propose a novel multilevel residual learning pattern for SISR. The proposed method shows a stable performance improvement over the compared structures on several benchmark datasets with an equal number of model parameters. In addition, we empirically show that simply stacking more building blocks (e.g., various residual blocks) to deepen the network does not yield the expected performance gains, which may imply that different network depths reach their optimal performance with different building-block structures.
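The abstract describes residual learning applied at multiple levels through skip connections. As a rough illustration of that general idea only (not the authors' exact architecture), the sketch below nests identity skip connections at the block level, the group level, and over the whole feature path in a small SISR network. The module names (ResidualBlock, ResidualGroup, MultilevelSR), the feature width n_feats, and the upscaling factor scale are illustrative assumptions, written in PyTorch.

# Minimal sketch of multilevel residual learning for SISR (illustrative only).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """conv-ReLU-conv with an identity (local) skip connection."""
    def __init__(self, n_feats=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(n_feats, n_feats, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(n_feats, n_feats, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # local residual learning

class ResidualGroup(nn.Module):
    """Several residual blocks wrapped by an additional (mid-level) skip."""
    def __init__(self, n_feats=64, n_blocks=4):
        super().__init__()
        self.body = nn.Sequential(*[ResidualBlock(n_feats) for _ in range(n_blocks)])

    def forward(self, x):
        return x + self.body(x)  # mid-level residual learning

class MultilevelSR(nn.Module):
    """Shallow feature extraction -> residual groups -> sub-pixel upsampling."""
    def __init__(self, n_feats=64, n_groups=3, scale=2):
        super().__init__()
        self.head = nn.Conv2d(3, n_feats, 3, padding=1)
        self.body = nn.Sequential(*[ResidualGroup(n_feats) for _ in range(n_groups)])
        self.tail = nn.Sequential(
            nn.Conv2d(n_feats, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # rearrange channels into a scale-times larger image
        )

    def forward(self, lr):
        feats = self.head(lr)
        feats = feats + self.body(feats)  # global residual learning over features
        return self.tail(feats)

if __name__ == "__main__":
    sr = MultilevelSR()(torch.randn(1, 3, 32, 32))
    print(sr.shape)  # torch.Size([1, 3, 64, 64])

Because every level adds its input back to its output, each sub-network only has to learn a residual correction, which is the property the abstract exploits to use a fixed parameter budget more efficiently.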

Convolutional Neural Networks; Residual learning; Image Super Resolution; Skip connection

Xiaole Zhao, Hangfei Liu, Tao Zhang, Wei Bian, Xueming Zou

School of Life Science and Technology, University of Electronic Science and Technology of China (UESTC)

International conference

Chinese Conference on Pattern Recognition and Computer Vision (PRCV 2018)

Guangzhou

English

537-549

2018-11-23 (date the paper first went online on the Wanfang platform; not the paper's publication date)