Abstract
Technological evolution, central to humanity's progress in recent decades, is the process of continually introducing new technologies to replace old ones. A new technology is not necessarily a better one and so should not always be embraced. How can society learn which novelties offer genuine improvements over the existing technology? Whereas the quality of the status-quo technology is well known, that of the new one is a pig in a poke. If sufficiently many individuals are willing to explore the new technology, society can learn whether it is indeed an improvement. Self-motivated agents, however, are often reluctant to explore, in particular when they have observed predecessors who were disappointed by the new technology. Inspired by the classical multi-armed bandit model, we study a setting in which agents arrive sequentially and must pull one of two arms to receive a reward: a risky arm (representing the new technology) and a safe arm (representing the existing one). A central planner must induce sufficiently many agents to experiment with the risky arm. The planner observes the actions and rewards of all agents, while the agents themselves observe only partially. For the setting in which each agent observes his predecessor, we provide the planner with a recommendation algorithm that is (almost) incentive compatible and facilitates social learning.
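To make the two-armed setting concrete, here is a minimal simulation sketch in Python. The Bernoulli reward means, the length of the forced exploration phase, and the recommendation rule are illustrative assumptions, not the paper's algorithm; in particular, the (almost) incentive-compatibility analysis and the agents' partial observation of their predecessors are not modeled here, only the planner's full view of the history.

```python
import random

# Illustrative sketch of the sequential two-armed setting (assumed parameters).
SAFE_MEAN = 0.5    # known quality of the status-quo technology
RISKY_MEAN = 0.7   # quality of the new technology; unknown to agents
N_AGENTS = 1000

def pull(arm):
    """Bernoulli reward from the chosen arm."""
    p = SAFE_MEAN if arm == "safe" else RISKY_MEAN
    return 1 if random.random() < p else 0

def planner_recommendation(t, history):
    """Toy recommendation rule (an assumption for illustration):
    force a short exploration phase, then recommend the empirically
    better arm, using the planner's full view of all past rewards."""
    explore_rounds = 50
    if t < explore_rounds:
        return "risky"
    risky_rewards = [r for (a, r) in history if a == "risky"]
    avg_risky = sum(risky_rewards) / len(risky_rewards)
    return "risky" if avg_risky > SAFE_MEAN else "safe"

history = []  # full (arm, reward) log, visible only to the planner
for t in range(N_AGENTS):
    arm = planner_recommendation(t, history)
    history.append((arm, pull(arm)))

adopted = sum(1 for (a, _) in history if a == "risky") / N_AGENTS
print(f"fraction of agents pulling the risky arm: {adopted:.2f}")
```

Under these assumed parameters the planner learns the risky arm's quality from the exploration phase and, since it exceeds the safe mean, recommends it to all subsequent agents; the paper's contribution is making such a scheme (almost) incentive compatible when agents can second-guess the recommendation based on their predecessor's experience.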