In reinforcement-learning studies, the environment is typically object-based; that is, whole objects are predictive of reward. Recently, studies have also adopted rule-based environments, in which individual stimulus dimensions are predictive of reward. In the current study, we investigated how people learn (1) in an object-based environment, (2) following a switch to a rule-based environment, (3) following a switch to a different rule-based environment, and (4) following a switch back to an object-based environment. To do so, we administered a reinforcement-learning task comprising four consecutive blocks: an object-based environment, a rule-based environment, another rule-based environment, and an object-based environment. Computational-modeling results suggest that people (1) initially adopt rule-based learning despite its suboptimality in an object-based environment, (2) learn rules after a switch to a rule-based environment, (3) experience interference from previously learned rules following a switch to a different rule-based environment, and (4) learn objects after a final switch to an object-based environment. These results imply that people have a hard time adjusting to switches between object-based and rule-based environments, although they eventually do learn to do so.
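To make the object-based versus rule-based distinction concrete, the sketch below contrasts two simple delta-rule (Rescorla-Wagner-style) learners in a toy rule-based environment where only the colour dimension predicts reward. All names, the learning rate, and the averaging of feature values are illustrative assumptions, not the computational models actually fitted in the study.

```python
import random

ALPHA = 0.3  # assumed learning rate

def object_value(q, stimulus):
    # Object-based learner: one value per whole stimulus (feature conjunction).
    return q.get(stimulus, 0.0)

def object_update(q, stimulus, reward):
    q[stimulus] = object_value(q, stimulus) + ALPHA * (reward - object_value(q, stimulus))

def rule_value(w, stimulus):
    # Rule-based learner: values attach to individual features (dimensions),
    # combined here by averaging across dimensions (an assumption).
    return sum(w.get(f, 0.0) for f in stimulus) / len(stimulus)

def rule_update(w, stimulus, reward):
    pe = reward - rule_value(w, stimulus)  # shared prediction error
    for f in stimulus:
        w[f] = w.get(f, 0.0) + ALPHA * pe

# Toy rule-based environment: reward depends only on colour.
stimuli = [("red", "circle"), ("red", "square"),
           ("blue", "circle"), ("blue", "square")]

def reward(s):
    return 1.0 if s[0] == "red" else 0.0

random.seed(0)
q, w = {}, {}
for _ in range(400):
    s = random.choice(stimuli)
    r = reward(s)
    object_update(q, s, r)
    rule_update(w, s, r)
```

After training, both learners predict reward for seen stimuli, but only the rule-based learner carries a value for the colour feature itself (w["red"] high, w["blue"] low), so it would generalise to a novel object such as ("red", "triangle"), whereas the object-based learner would start that object at zero. This is the sense in which rule-based learning is advantageous in rule-based environments and redundant, or even suboptimal, in object-based ones.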