Abstract

The object of this research is the ability to combine a pre-trained feedforward deep neural network model with user data in problems of determining the class of a single object in an image; that is, the processes of transfer learning in convolutional neural networks for classification problems. The research is based on comparing the theoretical and practical results obtained when training convolutional neural networks. The main objective of this research is to conduct two different training processes. The first is traditional training, in which the values of all weights of every layer of the network are adjusted in each training epoch; the network is then trained on a sample of data represented by images. The second process is training with transfer learning methods: when a pre-trained network is initialized, the weights of all its layers are "frozen" except for the last fully connected layer. This layer is replaced by a new one whose number of outputs equals the number of classes in the sample, its parameters are initialized with random values drawn from a normal distribution, and the resulting convolutional neural network is trained on the given sample. Once both training runs were completed, the results were compared. In conclusion, training convolutional neural networks with transfer learning techniques can be applied to a variety of classification tasks, ranging from digits to space objects (stars and quasars). The amount of computational resources spent on such research is also quite important, because not every convolutional neural network model can be fully trained without powerful computer systems and a large number of images in the training sample.

Highlights

  • The concept of transfer learning is to transfer knowledge gained in one or more initial tasks and use it to improve learning in the current task

  • It has become possible to retrain an artificial neural network, trained on one sample of data, to perform tasks on a new data set, which significantly speeds up the network's learning process

  • The aim of this paper is to investigate the ability of convolutional neural networks to transfer between different classification tasks on labeled images


Introduction

The concept of transfer learning is to transfer knowledge gained in one or more initial tasks and use it to improve learning in the current task. It has become possible to retrain an artificial neural network, trained on one sample of data, to perform tasks on a new data set, which significantly speeds up the network's learning process. Much research into transfer learning is currently underway: work is being done on transferring knowledge between texts and images [1], knowledge is being transferred from unlabeled data [2], and so on. For the task of recognizing diabetic retinopathy, some participants used transfer learning [3, 4]. It is worth noting that many different artificial neural network architectures already perform well on image classification tasks. When solving problems on new data, it is easier and more efficient to choose one of the existing neural networks rather than building one from scratch.

