We propose a scale-invariant deep neural network model based on wavelets for single image super-resolution (SISR). The wavelet approximation images and their corresponding wavelet sub-bands across all predefined scale factors are combined to form a large training dataset. Mappings are then learned between the wavelet sub-band images and their corresponding approximation images. Finally, gradient clipping is used to boost the training speed of the algorithm. Furthermore, the stationary wavelet transform (SWT) is used instead of the discrete wavelet transform (DWT) due to its up-scaling property, which allows more information about the images to be preserved. In the proposed model, the high-resolution image is recovered with detailed features, owing to the redundancy (across scales) property of wavelets. Experimental results show that the proposed model outperforms state-of-the-art algorithms in terms of peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM).
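To make the SWT-versus-DWT distinction concrete, the minimal sketch below (not the authors' code) uses PyWavelets to decompose an image with both transforms; the 'haar' wavelet and the 256x256 random test image are illustrative assumptions. It shows that DWT sub-bands are decimated to half the input size, whereas SWT sub-bands keep the full spatial resolution, which is the information-preserving behaviour the abstract refers to.

```python
import numpy as np
import pywt

# Stand-in for a low-resolution input image (size divisible by 2**level).
img = np.random.rand(256, 256)

# DWT: decimated transform; every sub-band is half the input size.
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')
print(cA.shape, cH.shape)        # (128, 128) (128, 128)

# SWT: undecimated (stationary) transform; sub-bands keep the input size,
# so more spatial detail survives for learning sub-band-to-approximation mappings.
coeffs = pywt.swt2(img, 'haar', level=1)
swt_cA, (swt_cH, swt_cV, swt_cD) = coeffs[0]
print(swt_cA.shape, swt_cH.shape)  # (256, 256) (256, 256)
```

The redundancy across scales seen here (full-size sub-bands at every level) is what the proposed model exploits when recovering detailed features in the high-resolution output.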