Abstract

We propose a deep learning method for single image super-resolution based on a siamese neural network. Our method directly learns an end-to-end mapping between low-resolution and high-resolution images. Siamese structures are commonly used in patch matching and object tracking; here we make an attempt to apply them to image super-resolution and show that they also perform well on this task. Our architecture consists of a deep convolutional network (e.g., the SRCNN structure) and a three-"tower" feature filter with shared parameters. The deep convolutional network produces the final model for generating the high-resolution image, while the three-"tower" filter extracts features from the low-resolution patches, the generated high-resolution patches, and the original high-resolution patches. We further explore weight settings to achieve satisfactory performance. We evaluate our model on standard test images and compare it to the traditional method.
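The abstract does not give implementation details, so the following is a minimal sketch of the described architecture, assuming a PyTorch implementation: an SRCNN-style generator (the standard 9-5-5 convolutions with 64 and 32 channels) plus a single shared-weight feature extractor applied to all three inputs, which is what makes the three "towers" siamese. The tower's layer sizes, the loss combination, and the weights `w_pixel`/`w_feat` are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    """SRCNN-style generator: maps a bicubically upscaled LR patch to an HR patch."""
    def __init__(self, channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.body(x)

class FeatureTower(nn.Module):
    """Shared feature extractor; one instance is reused for the LR patch,
    the generated HR patch, and the original HR patch (shared parameters)."""
    def __init__(self, channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

def siamese_sr_loss(generator, tower, lr_up, hr, w_pixel=1.0, w_feat=0.1):
    """Placeholder loss: pixel-wise reconstruction plus a feature-space term
    from the shared tower. How the paper combines the three feature sets is
    not stated in the abstract; here the generated-HR features are simply
    pulled toward the original-HR features, and the LR features are extracted
    but left unused."""
    sr = generator(lr_up)                                   # generated HR patch
    f_lr, f_sr, f_hr = tower(lr_up), tower(sr), tower(hr)   # same tower, shared weights
    pixel_loss = F.mse_loss(sr, hr)
    feat_loss = F.mse_loss(f_sr, f_hr)
    return w_pixel * pixel_loss + w_feat * feat_loss

if __name__ == "__main__":
    g, t = SRCNN(), FeatureTower()
    lr_up = torch.randn(4, 1, 33, 33)   # bicubically upscaled LR patches
    hr = torch.randn(4, 1, 33, 33)      # ground-truth HR patches
    loss = siamese_sr_loss(g, t, lr_up, hr)
    loss.backward()
    print(loss.item())
```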
