Abstract

We explore a type of transfer learning in Convolutional Neural Networks in which a network trained on a primary representation of examples (e.g. photographs) is capable of generalizing to a secondary representation (e.g. sketches) without fully training on the latter. We show that the network improves classification on classes for which no examples in the secondary representation were provided, evidence that the model exploits and generalizes concepts learned from examples in the primary representation. We measure this lateral representation learning on a CNN trained on the ImageNet dataset, using overlapping classes in the TU-Berlin and Caltech-256 datasets as secondary representations, and show that the effect cannot be fully explained by the network learning newly specialized kernels. This phenomenon can potentially be used to train classes in domain adaptation tasks for which few examples in a target representation are available.
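The evaluation idea sketched in the abstract can be illustrated with a short, hypothetical experiment: take an ImageNet-pretrained CNN, fine-tune it on sketches from only a subset of the overlapping classes, and check whether sketch accuracy also improves on classes whose sketches were never shown. The snippet below is a minimal sketch of that protocol, not the authors' actual setup; the folder layout, class split, hyperparameters, and the class-name-to-ImageNet-index mapping are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([          # standard ImageNet preprocessing
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical sketch dataset: one subfolder per class, each class also present
# in ImageNet. The index values below are placeholders, not verified ImageNet ids.
SKETCH_TO_IMAGENET = {"airplane": 404, "banana": 954, "umbrella": 879, "violin": 889}

sketches = datasets.ImageFolder("sketches/", transform=preprocess)
idx_to_name = {v: k for k, v in sketches.class_to_idx.items()}
sketches.target_transform = lambda c: SKETCH_TO_IMAGENET[idx_to_name[c]]

# Fine-tune on sketches from the first half of the classes; hold out the rest.
seen_names = set(sketches.classes[: len(sketches.classes) // 2])
seen = [i for i, (_, c) in enumerate(sketches.samples) if idx_to_name[c] in seen_names]
held = [i for i, (_, c) in enumerate(sketches.samples) if idx_to_name[c] not in seen_names]
train_loader = DataLoader(Subset(sketches, seen), batch_size=32, shuffle=True)
heldout_loader = DataLoader(Subset(sketches, held), batch_size=32)

# ImageNet-pretrained network with its original 1000-way classifier kept intact,
# so overlapping classes remain addressable without adding new output units.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

def heldout_accuracy():
    """Sketch accuracy on classes whose sketches were never used for fine-tuning."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in heldout_loader:
            correct += (model(images).argmax(1) == labels).sum().item()
            total += labels.size(0)
    return correct / total

before = heldout_accuracy()

# One illustrative epoch of fine-tuning on the seen sketch classes only.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    criterion(model(images), labels).backward()
    optimizer.step()

after = heldout_accuracy()
print(f"Held-out sketch classes: {before:.3f} -> {after:.3f}")
```

An increase from `before` to `after` on the held-out classes would be the kind of lateral representation learning the abstract describes, under these assumed choices of datasets and split.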
