Abstract

ConformalLayers are sequential Convolutional Neural Networks (CNNs) whose activation functions are defined as geometric operations in the conformal model of Euclidean geometry. This construction makes the layers of sequential CNNs associative, leading to a significant reduction in computing resources at inference time. Once the layers are composed, both the processing time and the memory used per batch entry become independent of the network's depth (i.e., constant); they depend only on the sizes of the input and output. The cost of conventional sequential CNNs, in contrast, grows linearly with depth. This paper evaluates the robustness of the classification performed by ConformalLayers-based CNNs against different kinds of corruption typically found in natural images. Our results show that the mean top-1 error rates of vanilla CNNs are lower than those of ConformalLayers-based CNNs on clean images. Still, our approach outperforms other optimization techniques based on network quantization, and the relative difference to vanilla networks tends to shrink in the presence of image corruptions. Furthermore, we show that processing time and CO2 emission rates are much lower for ConformalLayers-based CNNs with depth greater than seven and two, respectively.
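The depth-independence claimed above follows from associativity: if every layer is an associative operator, the whole stack can be pre-composed offline into a single operator whose application cost depends only on the input and output sizes. A minimal sketch of this idea, using plain linear layers as a stand-in (this is an illustrative assumption, not the actual ConformalLayers construction, which relies on conformal geometric operations):

```python
import numpy as np

# Sketch: a depth-D stack of associative (here, linear) layers
# W_D ... W_2 W_1 can be pre-multiplied once into a single matrix,
# so per-input inference cost no longer grows with depth.

rng = np.random.default_rng(0)
depth, size = 10, 8
layers = [rng.standard_normal((size, size)) for _ in range(depth)]

def forward_sequential(x):
    # Naive evaluation: cost grows linearly with the number of layers.
    for w in layers:
        x = w @ x
    return x

# Associative pre-composition, done once before inference.
composed = layers[0]
for w in layers[1:]:
    composed = w @ composed

def forward_composed(x):
    # One matrix-vector product per input, regardless of depth.
    return composed @ x

x = rng.standard_normal(size)
assert np.allclose(forward_sequential(x), forward_composed(x))
```

Both paths produce the same output, but the composed form pays the depth-dependent cost once, offline, rather than per batch entry.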
