Decision-making is undergoing rapid change due to the introduction of artificial intelligence (AI), as AI recommender systems can help mitigate flaws in human judgment and increase decision accuracy and efficiency. However, AI can also commit errors or suffer from algorithmic bias. Hence, blind trust in such technologies carries risks, as users may follow detrimental advice with undesired consequences. Building upon research on algorithm appreciation and trust in AI, the current study investigates whether users who receive AI advice in an uncertain situation overrely on this advice, to their own detriment and that of other parties. In a domain-independent, incentivized, and interactive behavioral experiment, we find that the mere knowledge that advice is generated by an AI causes people to overrely on it, that is, to follow AI advice even when it contradicts both the available contextual information and their own assessment. This overreliance frequently leads not only to inefficient outcomes for the advisee but also to adverse effects on third parties. The results call into question how AI is currently used in assisted decision-making, emphasizing the importance of AI literacy and effective trust calibration for the productive deployment of such systems.