ABSTRACT

This study examines the dynamics of human reliance on algorithmic advice in a setting with strategic interaction. Participants played the strategic game of Rock–Paper–Scissors (RPS) under various conditions, receiving algorithmic decision support while facing human or algorithmic opponents. Results indicate that participants often underutilize algorithmic recommendations, particularly after early errors, but rely increasingly on the algorithm following successful early predictions. This behavior reveals a sensitivity to decision outcomes that is asymmetric: rejecting advice consistently reinforces further rejection regardless of outcome, whereas reactions to accepted advice vary with the outcome. We also investigate how personal characteristics, such as algorithm familiarity and domain experience, influence reliance on algorithmic advice. Both factors correlate positively with increased reliance, and algorithm familiarity significantly moderates the relationship between outcome feedback and reliance. Facing an algorithmic opponent increases the frequency of advice rejection, and the determinants of trust and the interaction dynamics differ from those observed with human opponents. Our findings advance the understanding of algorithm aversion and reliance on AI, suggesting that increasing familiarity with algorithms can improve their integration into decision-making processes.