Abstract

Transfer learning has been widely studied in machine learning in recent years. In image recognition and classification tasks in particular, it has shown significant benefits and continues to attract considerable attention in the research community. When transferring knowledge between source and target tasks, a homogeneous dataset is not always available, and a heterogeneous dataset must sometimes be used instead. In this article, we propose a method, called Hebbian transfer learning (HTL), that improves transfer learning efficiency for a heterogeneous source and target by using the Hebbian learning principle. In computer vision, biologically motivated approaches such as Hebbian learning represent associative learning, in which the simultaneous activation of brain cells increases the strength of the synaptic connection between them. The discriminative search for features in image classification fits well with techniques such as the Hebbian learning rule: neurons that fire together wire together. Deep learning models such as convolutional neural networks (CNNs) are widely used for image classification. In transfer learning with such models, the connection weights of the trained model should adapt to the new target dataset with minimal effort. A discriminative learning rule such as Hebbian learning can improve performance by quickly adapting to discriminate between the classes defined by the target task. We apply the Hebbian principle as synaptic plasticity in transfer learning for image classification with heterogeneous source-target datasets, and compare the results with standard transfer learning. Experimental results on various combinations of the CIFAR-10 (Canadian Institute for Advanced Research) and CIFAR-100 datasets show that the proposed HTL algorithm can improve the performance of transfer learning, especially when the source and target datasets are heterogeneous.
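
As a rough illustration (not the authors' exact HTL formulation), the Hebbian principle can be expressed as a local weight change proportional to the product of pre- and post-synaptic activations. The Python sketch below applies such an update, with an Oja-style decay term for stability, to a classifier layer whose weights are assumed to have been copied from a source task; all names and constants here are illustrative assumptions.

    import numpy as np

    def hebbian_update(W, pre, post, lr=0.01):
        """One Hebbian-style plasticity step (illustrative, not the paper's exact HTL rule).

        W    : (n_post, n_pre) connection weights
        pre  : (n_pre,)  pre-synaptic activations
        post : (n_post,) post-synaptic activations
        """
        # "Neurons that fire together wire together": strengthen each weight in
        # proportion to the product of pre- and post-synaptic activity.
        # The Oja-style decay term keeps the weights from growing without bound.
        dW = lr * (np.outer(post, pre) - (post[:, None] ** 2) * W)
        return W + dW

    # Toy usage: adapt a classifier layer copied from a source task to target-task activations.
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(10, 64))   # weights transferred from the source model
    features = rng.random(64)                  # activations from the (frozen) feature extractor
    logits = W @ features                      # post-synaptic response for a target image
    W = hebbian_update(W, features, logits)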

Highlights

  • The biological structure and behavior of real animal brain neurons have inspired artificial neural networks [1], and backpropagation [2] has evolved into one of the most effective standard learning rules for them

  • We present an algorithm called Hebbian Transfer Learning (HTL), which performs transfer learning on convolutional neural networks with synaptic plasticity in connection weights

  • The results show that standard transfer learning (STL) and Hebbian transfer learning (HTL) achieve similar accuracies for homogeneous source-target pairs, but HTL clearly outperforms STL for heterogeneous source-target pairs


Summary

Introduction

The biological structure and behavior of real animal brain neurons have inspired artificial neural networks [1], and backpropagation [2] has evolved into one of the most effective standard learning rules for them. Supervised learning of neural networks uses training datasets and a global loss function. The gradient provided by the loss function [3] is backpropagated from the output layer to the hidden layers to update the parameters of the network. Many advanced optimization techniques have been developed for gradient descent [4], and various neural network models have been proposed and successfully applied to image classification tasks, including convolutional neural networks (CNNs) such as AlexNet [5] and VGGNet [6].
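
For context, this standard setup can be sketched in a few lines of PyTorch (a hypothetical toy model, not one of the cited architectures): a small CNN is trained by backpropagating the gradient of a global cross-entropy loss from the output layer to the hidden layers, with a gradient-descent optimizer applying the parameter updates.

    import torch
    import torch.nn as nn

    # Hypothetical toy CNN for 32x32 RGB inputs (CIFAR-sized images); purely illustrative.
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),                               # 32x32 -> 16x16
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),                               # 16x16 -> 8x8
        nn.Flatten(),
        nn.Linear(32 * 8 * 8, 10),
    )

    criterion = nn.CrossEntropyLoss()                  # global loss function
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # gradient descent

    images = torch.randn(8, 3, 32, 32)                 # a dummy mini-batch
    labels = torch.randint(0, 10, (8,))

    optimizer.zero_grad()
    loss = criterion(model(images), labels)            # forward pass and global loss
    loss.backward()                                    # gradient flows from output to hidden layers
    optimizer.step()                                   # parameter update by gradient descent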
