Abstract: Previous work in single-image super-resolution aims either to maintain high fidelity relative to a target high-resolution image or to sacrifice PSNR to synthesize more photorealistic results. However, both types of approaches have trouble handling large upsampling factors (e.g., 8x): the results are either over-blurred or contain implausible generated content. To avoid these issues, we propose a multi-scale, progressive generative adversarial network (GAN) that is trained gradually to produce convincing results for these difficult cases. Also crucial is our proposed loss function, which allows for more freedom in the synthesis of high-frequency content without deviating drastically from the original low-resolution input. We compare favorably against state-of-the-art approaches for large upsampling factors on standard benchmarks, as evaluated both perceptually and numerically using the sliced Wasserstein distance. Our method is also the first GAN approach to demonstrate plausible results at 16x SR for images in the DIV2K dataset.
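The sliced Wasserstein distance mentioned above compares two empirical distributions by projecting them onto random 1-D directions and averaging the resulting 1-D Wasserstein distances, which reduce to differences of sorted projections. A minimal sketch of this idea (the function name, sample inputs, and number of projections are illustrative, not taken from the paper):

```python
import numpy as np

def sliced_wasserstein(a, b, n_proj=64, seed=0):
    """Approximate sliced Wasserstein-1 distance between two point sets.

    a, b: (n, d) arrays of feature vectors (e.g., image patch descriptors),
    assumed here to have equal sample counts for simplicity.
    """
    rng = np.random.default_rng(seed)
    d = a.shape[1]
    total = 0.0
    for _ in range(n_proj):
        # Draw a random unit direction on the d-dimensional sphere.
        v = rng.normal(size=d)
        v /= np.linalg.norm(v)
        # Project both sets onto the direction; in 1-D, the Wasserstein-1
        # distance is the mean absolute difference of sorted samples.
        pa = np.sort(a @ v)
        pb = np.sort(b @ v)
        total += np.mean(np.abs(pa - pb))
    return total / n_proj
```

In practice, evaluations of this kind are typically run on descriptors extracted from image patches at multiple scales, so that the distance reflects the statistics of local texture rather than raw pixel values.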
Date: Friday, June 8, 2018 - 1:00pm
Title: A Multi-Scale, Progressive GAN Approach for Photorealistic Single-Image Super-Resolution
Committee: Pradeep Sen (chair), Matthew Turk