Abstract

In this paper we present a novel data-driven subsampling method that can be seamlessly integrated into any neural network architecture to identify the most informative subset of samples, within the original acquisition domain, for a variety of tasks that rely on deep learning inference from sampled signals. In contrast to existing methods, which require transforming the signal into a sparse basis, perform expensive signal reconstruction as an intermediate step, and support only a single predefined sampling rate, our approach allows the sampling–inference pipeline to adapt to multiple sampling rates directly in the original signal domain. The key innovations enabling this operation are a custom subsampling layer and a novel training mechanism. Through extensive experiments with four data sets and four different network architectures, our method demonstrates a simple yet powerful sampling strategy that allows the given network to be used efficiently at any sampling rate, with inference accuracy degrading smoothly and gradually as the sampling rate is reduced. Experimental comparison with state-of-the-art sparse sensing and learning techniques shows competitive inference accuracy across sampling rates, coupled with a significant improvement in computational efficiency and the crucial ability to operate at arbitrary sampling rates without retraining.
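To make the idea concrete, the following is a minimal, hypothetical sketch of a score-based subsampling layer of the kind the abstract describes; it is not the authors' actual layer or training mechanism. Each index of the acquisition domain carries an importance score (which, in a real pipeline, would be learned end-to-end with the downstream network); at any requested sampling rate the layer simply keeps the highest-scoring indices, so one trained layer serves every rate without transforms, reconstruction, or retraining. The class name `ScoreSubsampler` and all parameters are illustrative assumptions.

```python
import numpy as np

class ScoreSubsampler:
    """Illustrative score-based subsampling in the original signal domain.

    The per-index scores stand in for importances that would be learned
    jointly with the inference network; here they are random placeholders.
    """

    def __init__(self, n_samples, rng=None):
        rng = rng or np.random.default_rng(0)
        self.scores = rng.standard_normal(n_samples)

    def select(self, x, rate):
        """Keep the ceil(rate * N) highest-scoring samples of x.

        Returns the subsampled signal and the kept indices, preserving the
        original temporal order so downstream layers see a coherent signal.
        """
        n = x.shape[-1]
        k = max(1, int(np.ceil(rate * n)))
        idx = np.sort(np.argsort(-self.scores)[:k])
        return x[..., idx], idx

# A single layer handles multiple sampling rates at inference time:
sub = ScoreSubsampler(n_samples=128)
x = np.random.default_rng(1).standard_normal(128)
for rate in (0.5, 0.25, 0.1):
    y, idx = sub.select(x, rate)
```

A differentiable variant would replace the hard top-k with a relaxation (e.g., a softmax or Gumbel-based selection) during training; the hard selection shown here corresponds to inference-time behavior.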
