Abstract

In data-intensive applications, it is advantageous to perform partial processing close to the data and communicate the partial results, rather than the data itself, to a central processor. When the communication medium is noisy, one must mitigate the resulting degradation in computation quality. We study this problem in the setting of binary classification performed by an ensemble of functions that communicate real-valued confidence levels. We propose a noise-mitigation solution that optimizes the aggregation coefficients at the central processor. To that end, we formulate a post-training gradient algorithm that minimizes the error probability given the dataset and the noise parameters. We further derive lower and upper bounds on the optimized error probability, and present empirical results demonstrating the improved performance of our scheme on real data.
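The abstract does not reproduce the paper's objective or algorithm, so the following is only a minimal Python sketch of the general idea: post-training gradient descent on the aggregation coefficients of a noisy confidence-level ensemble. It assumes additive Gaussian channel noise, a sigmoid-smoothed surrogate for the error probability, and a finite-difference gradient; all names, parameters, and data in it are hypothetical, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical, not the paper's dataset): K ensemble members
# emit real-valued confidence scores for labels y in {-1, +1}; the
# channel adds i.i.d. Gaussian noise with standard deviation sigma.
n, K, sigma = 1000, 5, 0.5
y = rng.choice([-1.0, 1.0], size=n)
quality = np.array([1.0, 0.8, 0.6, 0.4, 0.2])  # members differ in reliability
S = y[:, None] * quality + rng.normal(size=(n, K))

# Fixed channel-noise realizations (common random numbers), so that
# finite differences of the Monte Carlo objective are well-behaved.
noise_draws = [rng.normal(scale=sigma, size=S.shape) for _ in range(20)]

def smoothed_error(alpha, beta=5.0):
    """Sigmoid-smoothed surrogate of the error probability of the
    aggregated decision sign((S + noise) @ alpha)."""
    total = 0.0
    for noise in noise_draws:
        margin = y * ((S + noise) @ alpha)  # > 0 means a correct decision
        total += np.mean(1.0 / (1.0 + np.exp(beta * margin)))
    return total / len(noise_draws)

# Post-training gradient descent on the aggregation coefficients alpha,
# here with a simple central finite-difference gradient.
alpha = np.ones(K) / np.sqrt(K)
lr, eps = 0.5, 1e-4
for _ in range(200):
    grad = np.array([
        (smoothed_error(alpha + eps * e) - smoothed_error(alpha - eps * e))
        / (2 * eps)
        for e in np.eye(K)
    ])
    alpha -= lr * grad
    alpha /= np.linalg.norm(alpha)  # the hard decision is scale-invariant

print("optimized aggregation coefficients:", np.round(alpha, 3))
```

In this toy setting, the optimized coefficients typically up-weight the more reliable members relative to the uniform initialization; as in the abstract, the noise parameter (sigma) enters the objective through the simulated channel noise.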
