Abstract

In single image super-resolution, recent feed-forward deep learning architectures use residual connections to preserve local features and carry them forward to later layers. In a simple residual skip connection, all the features of an earlier layer are concatenated with the features of the current layer. Such a plain concatenation does not exploit the fact that some features are more useful than others. To overcome this limitation, we propose an extended architecture (a "baby" neural network) that takes the features learned by the previous layer as input and outputs a multiplication factor. This factor weights the importance of each feature and thus helps the current layer learn its features more accurately. The proposed model clearly outperforms existing methods.
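
The abstract does not spell out the exact shape of the "baby" network, so the sketch below is one minimal interpretation in PyTorch, not the paper's confirmed implementation. The names BabyGate and GatedResidualBlock are hypothetical. The assumed design: a small gating network pools each feature map, produces per-channel multiplication factors, and reweights the skip features before they are added to the current layer's output. This closely resembles squeeze-and-excitation style channel attention.

    import torch
    import torch.nn as nn

    class BabyGate(nn.Module):
        """Hypothetical 'baby network': maps previous-layer features to
        per-channel multiplication factors (one reading of the abstract)."""
        def __init__(self, channels: int, reduction: int = 4):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)  # summarize each feature map to one value
            self.mlp = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),  # multiplication factors in (0, 1)
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, _, _ = x.shape
            w = self.mlp(self.pool(x).view(b, c))  # (b, c) factors, one per feature
            return x * w.view(b, c, 1, 1)          # reweight the skip features

    class GatedResidualBlock(nn.Module):
        """Residual block whose skip features are reweighted by the gate
        before being combined with the current layer's output."""
        def __init__(self, channels: int):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            )
            self.gate = BabyGate(channels)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.body(x) + self.gate(x)  # gated skip connection

Under these assumptions the block is shape-preserving, so it can be stacked in a deeper super-resolution network: GatedResidualBlock(64) applied to a (1, 64, 32, 32) tensor returns a tensor of the same shape.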
