Algorithms are capable of assisting with, or making, critical decisions in many areas of consumers’ lives. Algorithms have consistently outperformed human decision-makers in multiple domains, and the list of cases where algorithms can make superior decisions will only grow as the technology evolves. Nevertheless, many people distrust algorithmic decisions. One concern is their lack of transparency: it is often unclear, for instance, how a machine learning algorithm produces a given prediction. To address this concern, organizations have started providing post-hoc explanations of the logic behind their algorithmic decisions. However, it remains unclear to what extent such explanations can improve consumer attitudes and intentions. Five experiments demonstrate that algorithmic explanations can improve perceptions of transparency, attitudes, and behavioral intentions, or they can backfire, depending on the explanation method used. The most effective explanations highlight concrete and feasible steps consumers can take to positively influence their future decision outcomes.