Abstract

In data-intensive applications, it is advantageous to perform partial processing close to the data and communicate intermediate results to a central processor, rather than the data itself. When the communication or computation medium is noisy, the resulting degradation in computation quality at the central processor must be mitigated. We study this problem in the setting of binary classification performed by an ensemble of base functions that communicate real-valued confidence levels. We propose a noise-mitigation solution that optimizes the transmission gains and aggregation coefficients of the base functions. To this end, we formulate a post-training gradient-based optimization algorithm that minimizes the error probability given the training dataset and the noise parameters. We further derive lower and upper bounds on the optimized error probability, and present empirical results on real data that demonstrate the improved performance achieved by our approach.
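A minimal sketch of the post-training optimization described above is given below, assuming an additive white Gaussian noise channel, a fixed total transmit-power budget, and a sigmoid surrogate for the error probability; the confidence matrix `F`, labels `y`, noise level `noise_std`, and power budget are hypothetical stand-ins, not data or parameters from the paper.

```python
import torch

# Hypothetical stand-ins: K pre-trained base functions, their real-valued
# confidence levels on N training samples (matrix F), and labels y in {-1, +1}.
torch.manual_seed(0)
K, N = 8, 512
F = torch.randn(N, K)                                   # confidence levels
y = torch.sign(F.mean(dim=1) + 0.3 * torch.randn(N))    # synthetic labels

noise_std = 0.5            # assumed std of the additive channel noise
power = float(K)           # assumed total transmit-power budget
a = torch.ones(K, requires_grad=True)   # transmission gains
c = torch.ones(K, requires_grad=True)   # aggregation coefficients

opt = torch.optim.Adam([a, c], lr=1e-2)
for step in range(500):
    # Project the gains onto the power budget so the channel SNR stays bounded.
    a_proj = a * power**0.5 / a.norm()
    # Monte Carlo sample of the channel: scale, corrupt with noise, aggregate.
    noise = noise_std * torch.randn_like(F)
    margin = y * ((F * a_proj + noise) @ c)
    # Smooth surrogate of the error probability P(margin < 0).
    loss = torch.sigmoid(-margin).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"surrogate error probability after optimization: {loss.item():.4f}")
```

At test time the aggregator would decide by the sign of the noisy weighted sum; the sigmoid here is one differentiable proxy for the zero-one error, and any smooth bound on the step function could serve the same role.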
