Abstract

Edge computing can take full advantage of data-driven models only if the eventual inference function can be deployed on low-power, resource-constrained digital devices. In this regard, single-layer feedforward neural networks (SLFNs) based on the threshold activation function are a suitable option, as they can balance generalization performance and computational cost. Their robustness to perturbations at the node level, though, is an issue: a small perturbation affecting the input to the activation may result in a sign inversion at the output of the neuron, which in turn may severely degrade the accuracy of the inference function when implemented in hardware. This paper shows that the robustness of this class of SLFNs can be improved by introducing into the cost function a regularization term specifically designed to limit the impact of perturbations at the node level. Notably, the novel cost function admits a closed-form solution. Experimental validation involved six real-world benchmarks.
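
The following minimal sketch illustrates the general training scheme the abstract describes: a threshold-activation SLFN whose output weights are obtained in closed form from a regularized least-squares cost. The abstract does not specify the perturbation-aware regularizer, so a standard Tikhonov (ridge) term stands in for it here; the function names, the random-weight (ELM-style) hidden layer, and the parameter `lam` are all illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_threshold_slfn(X, y, n_hidden=50, lam=1e-2):
    """Fit output weights of a threshold-activation SLFN in closed form.

    Hidden weights are drawn at random (ELM-style assumption); the
    ridge term lam * I is a stand-in for the paper's perturbation-aware
    regularizer, which the abstract does not detail.
    """
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.sign(X @ W + b)                           # threshold activations in {-1, +1}
    # Regularized least squares: beta = (H^T H + lam I)^{-1} H^T y
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def predict(X, W, b, beta):
    return np.sign(X @ W + b) @ beta

# Toy usage on synthetic data
X = rng.standard_normal((200, 8))
y = np.sign(X[:, 0] - X[:, 1])
W, b, beta = train_threshold_slfn(X, y)
print(predict(X[:5], W, b, beta))
```

Because the solution is closed-form, training reduces to a single linear solve, which is what makes this class of SLFNs attractive for resource-constrained deployment; the paper's contribution is the specific regularization term that replaces the plain ridge penalty above.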
