We study a repeated information design problem faced by an informed sender who tries to influence the behavior of a self-interested receiver through the provision of payoff-relevant information. We consider settings where the receiver repeatedly faces a sequential decision making (SDM) problem. At each round, the sender observes the realizations of random events in the SDM problem, which are only partially observable by the receiver. This begets the challenge of how to incrementally disclose such information to the receiver so as to persuade them to follow (desirable) action recommendations. We study the case in which the sender does not know the probabilities of the random events and thus must gradually learn them while persuading the receiver. We start by providing a non-trivial polytopal approximation of the set of the sender's persuasive information-revelation structures. This is crucial for designing efficient learning algorithms. Next, we prove a negative result that also applies to the non-sequential case: no learning algorithm can be persuasive with high probability. Thus, we relax the persuasiveness requirement, studying algorithms that guarantee that the receiver's regret in following recommendations grows sub-linearly. In the full-feedback setting (where the sender observes the realizations of all the possible random events), we provide an algorithm with Õ(√T) regret for both the sender and the receiver. In the bandit-feedback setting (where the sender only observes the realizations of random events actually occurring in the SDM problem), we design an algorithm that, given an α ∈ [1/2, 1] as input, guarantees Õ(T^α) and Õ(T^{max{α, 1−α/2}}) regrets for the sender and the receiver, respectively. This result is complemented by a lower bound showing that such a regret trade-off is tight for α ∈ [1/2, 2/3].
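The bandit-feedback trade-off can be made concrete with a small numerical sketch. The snippet below simply evaluates the two regret exponents stated in the abstract (sender T^α, receiver T^{max{α, 1−α/2}}) for a few values of α; it is an illustration of the stated rates, not an implementation of the algorithm itself.

```python
def regret_exponents(alpha: float) -> tuple[float, float]:
    """Return the (sender, receiver) regret exponents for alpha in [1/2, 1]."""
    assert 0.5 <= alpha <= 1.0
    sender = alpha
    receiver = max(alpha, 1 - alpha / 2)
    return sender, receiver

# The two exponents coincide when alpha = 1 - alpha/2, i.e. alpha = 2/3,
# matching the endpoint of the range [1/2, 2/3] on which the lower bound
# shows the trade-off is tight.
for a in (0.5, 2 / 3, 1.0):
    s, r = regret_exponents(a)
    print(f"alpha={a:.3f}: sender regret T^{s:.3f}, receiver regret T^{r:.3f}")
```

At α = 1/2 the sender gets the best possible Õ(√T) rate while the receiver pays T^{3/4}; pushing α toward 2/3 equalizes the two at T^{2/3}.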