Abstract

The aim of neuroevolution is to find neural network and convolutional neural network (CNN) architectures automatically through evolutionary algorithms. A crucial problem in neuroevolution is search time, since many CNNs must be trained during evolution. This problem has led to fitness-acceleration approaches, which introduce a trade-off between time and fitness fidelity. Moreover, because search spaces for this problem usually include only a few parameters, human bias in the search increases. In this work, we propose a novel two-level genetic algorithm (GA) to address the fidelity-time trade-off in fitness computation for CNNs. The first level evaluates many individuals quickly; the second evaluates only those with the best results more finely. We also propose a search space with few restrictions, and an encoding with unexpressed genes that facilitates the crossover operation. This search space allows CNN architectures of any size and shape, with skip-connections among nodes. The two-level GA was applied to the pattern recognition problem on seven datasets: the five MNIST-Variants, Fashion-MNIST, and CIFAR-10, achieving significantly better results than all those previously published. Our results show an improvement of 39.89% (4.2% error reduction) on the most complex MNIST-Variants dataset (MRDBI) and, on average, 30.52% (1.35% error reduction) across the five variants. Furthermore, we show that our algorithm performed as well as a precise-training GA while taking only the time of a fast-training GA. These results can be relevant and useful not only for image classification problems but also for GA-related problems in general.
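The two-level fitness evaluation described above can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation: the function name `two_level_evaluate`, the `top_fraction` threshold, and the toy fitness functions are all assumptions. Level 1 scores every individual with a cheap proxy (e.g. a CNN trained for very few epochs); level 2 re-evaluates only the top fraction with a slow, high-fidelity fitness.

```python
import random

def two_level_evaluate(population, cheap_fitness, precise_fitness, top_fraction=0.2):
    """Score a population with a fast proxy, then refine only the best."""
    # Level 1: fast, approximate fitness for the whole population.
    scored = [(ind, cheap_fitness(ind)) for ind in population]
    scored.sort(key=lambda pair: pair[1], reverse=True)

    # Level 2: expensive, high-fidelity fitness for the best individuals only.
    n_top = max(1, int(len(scored) * top_fraction))
    refined = {id(ind): precise_fitness(ind) for ind, _ in scored[:n_top]}

    # Individuals outside the top fraction keep their cheap (level-1) score.
    return [(ind, refined.get(id(ind), score)) for ind, score in scored]

# Toy usage: individuals are plain numbers and the cheap fitness is a noisy
# proxy of the precise one, standing in for short vs. full CNN training.
random.seed(0)
pop = [random.uniform(0, 1) for _ in range(10)]
ranked = two_level_evaluate(pop,
                            cheap_fitness=lambda x: x + random.uniform(-0.1, 0.1),
                            precise_fitness=lambda x: x)
```

Because the expensive fitness is computed for only a fraction of the population per generation, the total evaluation time stays close to that of a purely approximate GA while the best candidates are still ranked with high fidelity.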

Highlights

  • We propose a new coding, with mutation and crossover, through genes that are not expressed in the decoded CNN

  • On the MNIST-Variants datasets, we show that the 2LGA effectively reduces the adverse effects of fitness-approximation techniques, reaching the performance of an ordinary genetic algorithm (GA) that fully trains CNNs while taking only slightly longer than one that approximates fitness
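The encoding with unexpressed genes mentioned above can be sketched as follows. This is a hypothetical illustration, not the paper's encoding: the `expressed` flag, the dictionary gene format, and the layer names are assumptions. The point is that every genome keeps the same length, so a simple one-point crossover always applies, while only the expressed genes become layers in the decoded CNN.

```python
import random

def decode(genome):
    """Only expressed genes become layers of the decoded CNN architecture."""
    return [gene["layer"] for gene in genome if gene["expressed"]]

def one_point_crossover(parent_a, parent_b, rng):
    """Swap gene tails at a random cut point.

    Fixed-length genomes make the cut trivial; unexpressed genes are
    exchanged too, and may later become expressed through mutation.
    """
    cut = rng.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:], parent_b[:cut] + parent_a[cut:]

# Toy usage: two genomes of equal length whose decoded CNNs differ in depth.
rng = random.Random(0)
a = [{"layer": f"conv{i}", "expressed": i % 2 == 0} for i in range(6)]
b = [{"layer": f"pool{i}", "expressed": True} for i in range(6)]
child1, child2 = one_point_crossover(a, b, rng)
```

Carrying inactive genes through crossover is the same mechanism that lets the search space cover architectures of varying depth without variable-length genomes.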

Introduction

Neural networks (NNs) and convolutional neural networks (CNNs) have been inspired, since their beginnings, by mammal brain structures, including those of the visual system [1]–[5]. These networks were designed to learn from examples, with the learning process strongly depending on the architecture of the network [6]–[8]. CNNs succeeded in improving face recognition, with DeepFace (97.35% on LFW) [19] and FaceNet (99.63% on LFW) [20], and He et al. developed ResNet [21]. We use the concept of reproduction to encompass the processes of selection, mutation, and crossover.
