Abstract

Deep learning has shown clear superiority over other machine learning algorithms in Single Image Super-Resolution (SISR). To reduce the effort and resources spent on manually designing deep architectures, we apply differentiable architecture search (DARTS) to SISR. Since neural architecture search was originally developed for classification tasks, our experiments show that directly applying DARTS to super-resolution tasks gives rise to many skip connections in the searched architecture, which results in poor performance of the final architecture. Thus, DARTS requires some modifications before it can be applied to SISR. Guided by the characteristics of SISR, we remove redundant operations and redesign some operations in the cell to obtain an improved DARTS. We then use the improved DARTS to search for convolution cells that serve as the nonlinear mapping part of the super-resolution network. The resulting super-resolution architecture demonstrates its effectiveness on standard benchmark datasets and the DIV2K dataset.
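The abstract does not spell out the search mechanism it builds on, so here is a minimal sketch of the core DARTS idea: each edge of a cell computes a softmax-weighted mixture of candidate operations, and the mixture weights (architecture parameters) are learned by gradient descent. The candidate set below is a hypothetical SR-oriented one in which pooling and skip operations have been removed so spatial resolution is preserved; the operation names, channel counts, and `MixedOp` structure are illustrative assumptions, not the paper's actual search space.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical SR-oriented candidate set: pooling and skip connections
# removed; every candidate preserves spatial size and channel count.
OPS = {
    "conv_3x3": lambda c: nn.Conv2d(c, c, 3, padding=1),
    "conv_5x5": lambda c: nn.Conv2d(c, c, 5, padding=2),
    "sep_conv_3x3": lambda c: nn.Sequential(
        nn.Conv2d(c, c, 3, padding=1, groups=c),  # depthwise
        nn.Conv2d(c, c, 1),                       # pointwise
    ),
    "dil_conv_3x3": lambda c: nn.Conv2d(c, c, 3, padding=2, dilation=2),
}

class MixedOp(nn.Module):
    """DARTS-style mixed operation: softmax-weighted sum of candidates."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList(build(channels) for build in OPS.values())
        # One architecture parameter (alpha) per candidate operation.
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=-1)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)   # low-resolution feature map
    print(MixedOp(64)(x).shape)      # torch.Size([1, 64, 32, 32])
```

In full DARTS the architecture parameters are managed at the network level and optimized in a bilevel scheme, alternating with the ordinary weights; after search, each mixed operation is discretized to its highest-weight candidate to yield the final cell.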
