Abstract

The theory of prediction with expert advice usually deals with countable or finite-dimensional pools of experts. In this paper we give similar results for pools of decision rules belonging to an infinite-dimensional functional space which we call the Fermi–Sobolev space. For example, it is shown that for a wide class of loss functions (including the standard square, absolute, and log loss functions) the average loss of the master algorithm, over the first N steps, does not exceed the average loss of the best decision rule with a bounded Fermi–Sobolev norm plus O(N^{-1/2}). Our proof techniques are very different from the standard ones and are based on recent results about defensive forecasting. Given the probabilities produced by a defensive forecasting algorithm, which are known to be well calibrated and to have high resolution in the long run, we use the expected loss minimization principle to find a suitable decision.

Keywords: decision rule, loss function, prediction algorithm, choice function, decision strategy
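To make the stated guarantee concrete, it can be written schematically as follows, where λ is the loss function, y_n the observed outcomes, γ_n the master algorithm's decisions, and D ranges over decision rules applied to the observed signals x_n; the notation and the norm bound c are introduced here only for illustration and are not taken from the abstract itself:

\[
\frac{1}{N}\sum_{n=1}^{N}\lambda(y_n,\gamma_n)
\;\le\;
\inf_{D:\,\|D\|_{\mathrm{FS}}\le c}\;\frac{1}{N}\sum_{n=1}^{N}\lambda\bigl(y_n, D(x_n)\bigr)
\;+\;O\!\bigl(N^{-1/2}\bigr).
\]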
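The final step described in the abstract, choosing a decision by expected loss minimization against the forecast probabilities, can be illustrated by the following toy sketch; the function names and the grid of candidate decisions are our own illustrative choices, not the paper's construction:

```python
import numpy as np

def expected_loss_minimizer(p, loss, decisions):
    """Return the candidate decision minimizing expected loss under forecast p.

    p         -- forecast probability that a binary outcome equals 1
    loss      -- loss(y, d): loss of decision d against outcome y
    decisions -- iterable of candidate decisions
    """
    return min(decisions, key=lambda d: p * loss(1, d) + (1 - p) * loss(0, d))

# Example with the square loss: the expected loss
# p*(1-d)^2 + (1-p)*d^2 is minimized at d = p.
square_loss = lambda y, d: (y - d) ** 2
grid = np.linspace(0.0, 1.0, 101)
print(expected_loss_minimizer(0.7, square_loss, grid))  # prints 0.7
```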
