Abstract

We propose a compact and effective network layer, the Rotational Duplicate Layer (RDLayer), that takes the place of a regular convolution layer, yielding up to 128\(\times \) memory savings. Along with network accuracy, memory and power constraints affect the design choices of computer vision tasks performed on resource-limited devices such as FPGAs (Field Programmable Gate Arrays). To overcome this limited availability, RDLayers are trained in such a way that the whole layer's parameters are obtained by duplicating and rotating a smaller learned kernel. Additionally, we speed up the forward pass via a partial decompression methodology for data compressed with JPEG (Joint Photographic Experts Group) 2000. Our experiments on remote sensing scene classification show that our network achieves a \(\sim \)4\(\times \) reduction in model size in exchange for a \(\sim \)4.5\(\%\) drop in accuracy, and a \(\sim \)27\(\times \) reduction at the cost of a \(\sim \)10\(\%\) drop in accuracy, along with \(\sim \)2.6\(\times \) faster evaluation time on test samples.
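The core idea of materializing a full filter bank from a single small learned kernel can be illustrated with a minimal NumPy sketch. This is a hypothetical reconstruction, not the paper's implementation: the function name, the use of 90° rotations, and the duplication scheme are assumptions made for illustration.

```python
import numpy as np

def expand_rdlayer(base_kernel, num_duplicates):
    """Build a full convolution weight bank from one small learned kernel
    by duplicating it and rotating each copy by multiples of 90 degrees.
    Hypothetical sketch of the RDLayer idea: only `base_kernel` would be
    stored and trained; the rest is derived on the fly."""
    # Four rotated views of the same kernel (0, 90, 180, 270 degrees),
    # rotating over the spatial axes.
    rotations = [np.rot90(base_kernel, r, axes=(-2, -1)) for r in range(4)]
    # Duplicate the rotated set to reach the desired number of filters.
    return np.stack(rotations * num_duplicates)

base = np.arange(9, dtype=np.float32).reshape(1, 3, 3)  # one 3x3 kernel, 1 channel
weights = expand_rdlayer(base, num_duplicates=2)
print(weights.shape)  # (8, 1, 3, 3): 8 filters stored as a single 3x3 kernel
```

Under this scheme the memory footprint is that of one kernel regardless of how many filters the layer exposes, which is the source of the reported compression.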
