Abstract

Suppose $Y^n$ is obtained by observing a uniform Bernoulli random vector $X^n$ through a binary symmetric channel. Courtade and Kumar asked how large the mutual information between $Y^n$ and a Boolean function $b(X^n)$ can be, and conjectured that the maximum is attained by the dictator function. An equivalent formulation of this conjecture is that dictator minimizes the prediction cost in sequentially predicting $Y^n$ under logarithmic loss, given $b(X^n)$. In this paper, we study the question of minimizing the sequential prediction cost under a different (proper) loss function: the quadratic loss. In the noiseless case, we show that majority asymptotically minimizes this prediction cost among all Boolean functions. We further show that for weak noise, majority is better than dictator, and that for strong noise, dictator outperforms majority. We conjecture that, for quadratic loss, no single Boolean function is simultaneously optimal at all noise levels.
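To make the log-loss equivalence concrete, here is a sketch in standard information-theoretic notation; the per-symbol chain-rule decomposition and the Brier-score form of the quadratic cost are our reading of the setup, not verbatim from the paper:

\begin{align*}
I\bigl(b(X^n); Y^n\bigr) &= H(Y^n) - H\bigl(Y^n \mid b(X^n)\bigr) \\
&= n - \sum_{i=1}^{n} H\!\bigl(Y_i \mid Y^{i-1}, b(X^n)\bigr),
\end{align*}

since a uniform input to a binary symmetric channel yields a uniform output, so $H(Y^n) = n$ bits. Each summand is the expected logarithmic loss of the Bayes-optimal sequential predictor of $Y_i$ given the past outputs and the function value, so maximizing mutual information over Boolean $b$ is the same as minimizing cumulative log-loss. Under quadratic (Brier) loss, the analogous cumulative cost of predicting with $\hat{p}_i$ (our notation) is

\begin{align*}
\sum_{i=1}^{n} \mathbb{E}\!\left[\bigl(Y_i - \hat{p}_i\bigr)^2\right],
\qquad
\hat{p}_i = \mathbb{P}\!\bigl(Y_i = 1 \mid Y^{i-1}, b(X^n)\bigr),
\end{align*}

where the conditional probability is the optimal prediction because quadratic loss is proper, and the resulting per-step cost is $\mathbb{E}\!\left[\operatorname{Var}\!\bigl(Y_i \mid Y^{i-1}, b(X^n)\bigr)\right]$.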
