Abstract

Deep convolutional neural networks (CNNs) are widely used to improve the performance of image restoration tasks, including single-image super-resolution (SISR). Typically, researchers manually design deeper and more complex CNNs to further improve performance on a given task. Instead of such hand-crafted CNN architecture design, neural architecture search (NAS) methods have been developed to find an optimal architecture for a given task automatically. For example, NAS-based SR methods find optimized network connections and operations by reinforcement learning (RL) or evolutionary algorithms (EA). These methods can find an optimal architecture automatically, but most of them require very long search times. In this paper, we propose a new search method for SISR that significantly reduces the overall design time by applying a weight-sharing scheme. We also employ a multi-branch structure to enlarge the search space for capturing multi-scale features, resulting in better reconstruction of textured regions. Experiments show that the proposed method finds an optimal SISR network about twenty times faster than existing methods, while showing comparable performance in terms of PSNR versus the number of parameters. Visual comparisons confirm that the obtained SISR network reconstructs texture areas better than previous methods because of the enlarged search space for finding multi-scale features.
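As a rough illustration of the multi-branch idea (this is our own sketch, not the paper's exact multi-scale block; the class name MultiBranchBlock, the kernel sizes, and the fusion scheme are assumptions), a block can feed a shared stem into parallel branches with different receptive fields and fuse their outputs:

```python
# Illustrative multi-branch block: a shared stem (a simple form of
# partial weight sharing) feeds two branches with different kernel
# sizes, whose outputs are fused to capture multi-scale features.
# All layer choices here are assumptions for illustration only.
import torch
import torch.nn as nn

class MultiBranchBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.shared = nn.Conv2d(channels, channels, 3, padding=1)   # shared stem
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)  # fine-scale path
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)  # coarse-scale path
        self.fuse = nn.Conv2d(2 * channels, channels, 1)            # 1x1 fusion
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        h = self.act(self.shared(x))                      # computation shared by both branches
        b1 = self.act(self.branch3(h))
        b2 = self.act(self.branch5(h))
        return x + self.fuse(torch.cat([b1, b2], dim=1))  # residual multi-scale fusion
```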

Highlights

  • Deep convolutional neural networks (CNNs) are widely used to improve the performance of image restoration tasks, including single-image super-resolution (SISR)

  • In this paper, we employ a multi-branch architecture and propose an automated multi-branch SISR network design based on the neural architecture search (NAS) scheme [19], unlike the conventional manual design of single-branch or multi-branch networks

  • Whereas existing search methods attempted to find optimal connections within single-branch networks, we include multi-branch networks to expand the search space and propose a new NAS-based SISR network design

Summary

CONTROLLER AND COMPLEXITY-BASED PENALTY

We use a two-layer LSTM as our controller, as shown in the lower part of Fig. 2. At the end of its fully connected (FC) layer, it generates a sequence S_b = {(s_b)_{m,n}}, 0 < m ≤ M, 0 ≤ n < N, for creating a child network, where the child network consists of B branches, M partially shared nodes (PSNs) in one multi-scale block (MSB), and N layers per node. The example sequence and the constructed block architecture are shown, where the controller generates eight outputs for a two-branch structure (B = 2) with two PSNs (M = 2) of two layers each (N = 2). The complexity-based penalty is defined as n_c / n_max, where n_max denotes the number of parameters of the model that uses all candidates in the search space, and n_c is the number of parameters of the designed child network. To set a trade-off between the number of parameters and the performance, we multiply the complexity-based penalty by λ.
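A minimal sketch of such a controller and reward in PyTorch is given below; the hidden size, the number of candidate operations, and the reward form PSNR − λ·(n_c / n_max) are our assumptions and may differ from the paper's exact formulation:

```python
# Sketch of a two-layer LSTM controller that autoregressively samples
# one operation index per layer of the child network, plus the
# complexity-penalized reward. Hyper-parameters are assumptions.
import torch
import torch.nn as nn

class Controller(nn.Module):
    def __init__(self, num_ops=5, hidden=64, B=2, M=2, N=2):
        super().__init__()
        self.steps = B * M * N                             # 8 decisions for B = M = N = 2
        self.embed = nn.Embedding(num_ops + 1, hidden)     # index 0 is a <start> token
        self.lstm = nn.LSTM(hidden, hidden, num_layers=2)  # two-layer LSTM
        self.fc = nn.Linear(hidden, num_ops)               # FC head producing op logits

    def sample(self):
        """Sample a sequence S_b of operation indices, one per decision."""
        inp = torch.zeros(1, 1, dtype=torch.long)          # <start> token
        state, seq, log_probs = None, [], []
        for _ in range(self.steps):
            out, state = self.lstm(self.embed(inp), state)
            dist = torch.distributions.Categorical(logits=self.fc(out[-1]))
            op = dist.sample()
            log_probs.append(dist.log_prob(op))
            seq.append(op.item())
            inp = op.unsqueeze(0) + 1                      # feed the choice back in
        return seq, torch.stack(log_probs).sum()

def reward(psnr, n_c, n_max, lam=0.1):
    """Complexity-penalized reward: lambda trades off the parameter
    ratio n_c / n_max against validation PSNR (assumed form)."""
    return psnr - lam * (n_c / n_max)
```

In a REINFORCE-style update, the summed log-probability returned by sample() would be scaled by this reward to train the controller.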

MBNASNET
MULTI-SCALE BLOCK WITH PARTIALLY SHARED NODES
EFFECT OF THE COMPLEXITY-BASED PENALTY ON THE PERFORMANCE OF THE CONTROLLER
Findings
EFFECT OF MULTI-BRANCH STRUCTURE AND PARTIAL PARAMETER SHARING SCHEME
CONCLUSION