Abstract

Real-time image and video processing is a challenging problem in smart surveillance applications. In many such applications, a trade-off must be made between high frame rate and high resolution to meet limited-bandwidth requirements, so image super-resolution has become a commonly used technique on surveillance platforms. Existing image super-resolution methods have demonstrated that making full use of image priors can improve performance; however, previous deep-learning-based image super-resolution methods rarely take image priors into account. How to make full use of image priors therefore remains an open problem for deep-network-based single-image super-resolution. In this paper, we establish the relationship between traditional sparse-representation-based single-image super-resolution methods and deep-learning-based ones, and use transfer learning so that our proposed deep network takes the image prior into account. Another open problem for deep-learning-based single-image super-resolution is how to prevent neurons from compromising across different image contents. In this paper, image patches are anchored to dictionary atoms and thereby grouped into categories, so that each neuron works on patches of the same type with similar details, which makes the network more accurate in recovering high-frequency details. By addressing these two problems, we propose an anchored neighborhood deep network for single-image super-resolution. Experimental results show that the proposed method outperforms many state-of-the-art single-image super-resolution methods.
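The abstract only sketches the anchoring step, so the following is a minimal, hypothetical illustration (not the authors' code) of how vectorized image patches might be anchored to dictionary atoms by maximal correlation, in the spirit of anchored neighborhood regression. The function name, array shapes, and random dictionary are assumptions made purely for illustration.

```python
import numpy as np

def anchor_patches(patches, dictionary):
    """Assign each image patch to its most correlated dictionary atom.

    patches:    (num_patches, patch_dim) array of vectorized patches
    dictionary: (num_atoms, patch_dim) array of l2-normalized atoms
    Returns an array of atom indices, one per patch (its category label).
    """
    # Normalize patches so the inner product with a unit-norm atom
    # equals the correlation between patch and atom.
    norms = np.linalg.norm(patches, axis=1, keepdims=True)
    normalized = patches / np.maximum(norms, 1e-8)
    # Correlation of every patch with every atom; anchor to the best match.
    correlation = normalized @ dictionary.T  # (num_patches, num_atoms)
    return np.argmax(np.abs(correlation), axis=1)

# Example (synthetic data): 1000 random 9x9 patches, 512-atom dictionary.
rng = np.random.default_rng(0)
atoms = rng.standard_normal((512, 81))
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)  # l2-normalize atoms
patches = rng.standard_normal((1000, 81))
labels = anchor_patches(patches, atoms)  # one atom index per patch
```

Under this grouping, each category of patches shares similar local details, so a sub-network (or set of neurons) trained per category never has to compromise across dissimilar image contents.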
