Video Super-Resolution
Abstract
Video super-resolution has recently attracted significant interest as a means of providing high-resolution content for ultra-high-definition displays. Recent advances have shown that convolutional neural networks combined with motion compensation can merge information from multiple low-resolution frames to generate high-quality frames. However, most deep-learning-based video super-resolution methods depend heavily on the accuracy of motion estimation and compensation. In contrast, this work proposes an end-to-end deep neural network that compensates for motion implicitly by generating dynamic filters. The dynamic filters are computed from the local spatio-temporal neighborhood of each pixel. With this approach, a high-resolution frame is reconstructed directly from the low-resolution input frames by a series of networks combined with a dynamic local filter network. The proposed network generates sharper high-resolution videos with better temporal consistency than previous methods.
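To make the per-pixel dynamic filtering idea more concrete, the following PyTorch sketch is one possible reading of the abstract, not the thesis implementation: a small filter-generation network (its layer layout, the frame count, the upscaling factor, and the filter size are all illustrative assumptions) predicts a k x k filter for every output sub-pixel position at every pixel, and these filters are applied to the local neighborhood of the center low-resolution frame to reconstruct the high-resolution frame.

# Minimal sketch of per-pixel dynamic upsampling filters for video
# super-resolution. Assumptions: PyTorch, grayscale frames, 5 input
# frames, 4x upscaling, 5x5 filters; not the authors' actual network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicFilterSR(nn.Module):
    def __init__(self, num_frames=5, scale=4, ksize=5):
        super().__init__()
        self.scale, self.ksize = scale, ksize
        # Hypothetical filter-generation network: maps the stacked
        # low-resolution frames to r*r*k*k filter coefficients per pixel.
        self.filter_net = nn.Sequential(
            nn.Conv2d(num_frames, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, scale * scale * ksize * ksize, 1),
        )

    def forward(self, lr_frames):
        # lr_frames: (B, T, H, W); the center frame is super-resolved
        # using information from its spatio-temporal neighborhood.
        b, t, h, w = lr_frames.shape
        center = lr_frames[:, t // 2: t // 2 + 1]            # (B, 1, H, W)

        # Predict one k*k filter per output sub-pixel position per pixel,
        # normalized with a softmax so each filter's weights sum to 1.
        filters = self.filter_net(lr_frames)                  # (B, r*r*k*k, H, W)
        filters = filters.view(b, self.scale ** 2, self.ksize ** 2, h, w)
        filters = F.softmax(filters, dim=2)

        # Extract k x k neighborhoods of the center frame for local filtering.
        patches = F.unfold(center, self.ksize, padding=self.ksize // 2)
        patches = patches.view(b, 1, self.ksize ** 2, h, w)

        # Apply the dynamic filters: weighted sum over each neighborhood,
        # then rearrange the r*r sub-pixel channels into the HR grid.
        hr = (filters * patches).sum(dim=2)                   # (B, r*r, H, W)
        return F.pixel_shuffle(hr, self.scale)                # (B, 1, r*H, r*W)

# Usage: super-resolve the center of 5 consecutive 32x32 frames by 4x.
sr = DynamicFilterSR()
out = sr(torch.randn(2, 5, 32, 32))
print(out.shape)  # torch.Size([2, 1, 128, 128])

Because the filters are predicted from all input frames but applied only locally, motion is handled implicitly by where each filter places its weight, which is the contrast with explicit motion-estimation-and-compensation pipelines drawn in the abstract.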