We introduce the Category Abstraction Learning (CAL) model, a cognitive framework that formally describes category learning as built on similarity-based generalization, dissimilarity-based abstraction, two attention learning mechanisms, error-driven knowledge structuring, and stimulus memorization. Our hypotheses draw on an array of empirical and theoretical insights connecting reinforcement and category learning. The key novelty of the model is its explanation of how rules are learned from scratch, based on three central assumptions. (a) Category rules emerge from two processes, stimulus generalization (similarity) and its direct inverse (category contrast), operating on independent dimensions. (b) Two attention mechanisms guide learning by focusing either on rules or on the contexts in which those rules produce errors. (c) Knowledge of these contexts inhibits execution of the rule without correcting it, and consequently leads to applying partial rules in different situations. The model is designed to capture both systematic and individual differences across a broad range of learning paradigms. We illustrate the model's explanatory scope by simulating several benchmarks, including the classic Six Problems, the 5-4 problem, and linear separability. Beyond the common approach of predicting average response probabilities, we also propose explanations for more recently studied phenomena that challenge existing learning accounts, including the effects of task instructions, individual differences in rule extrapolation across three different tasks, and individual attention shifts to stimulus features during learning, among others. We discuss CAL's relation to other models and its potential to measure the cognitive processes of attention, abstraction, error detection, and memorization from multiple psychological perspectives.
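
The following is a minimal illustrative sketch, not the CAL model's actual equations: it shows, under assumed forms, how per-dimension similarity-based generalization and its direct inverse (category contrast) could be computed independently on each dimension and combined with hypothetical attention weights. All function names, the Gaussian similarity form, and the parameters are assumptions introduced here for illustration only.

```python
import numpy as np

# Illustrative sketch only; not the published CAL formulation.

def dimension_similarity(stimulus, exemplar, sensitivity=1.0):
    """Gaussian similarity computed independently on each dimension (assumed form)."""
    diff = np.asarray(stimulus, dtype=float) - np.asarray(exemplar, dtype=float)
    return np.exp(-sensitivity * diff ** 2)

def dimension_contrast(stimulus, exemplar, sensitivity=1.0):
    """Direct inverse of similarity: high when the stimulus differs on a dimension."""
    return 1.0 - dimension_similarity(stimulus, exemplar, sensitivity)

def rule_evidence(stimulus, exemplar, attention):
    """Attention-weighted evidence that a candidate rule applies to the stimulus.

    Positive attention weights emphasize similarity on a dimension; negative
    weights emphasize contrast (a hypothetical encoding, chosen for illustration).
    """
    attention = np.asarray(attention, dtype=float)
    sim = dimension_similarity(stimulus, exemplar)
    con = dimension_contrast(stimulus, exemplar)
    return float(np.sum(np.where(attention >= 0, attention * sim, -attention * con)))

# Example: a two-dimensional stimulus compared with a stored exemplar,
# attending to similarity on dimension 1 and to contrast on dimension 2.
print(rule_evidence(stimulus=[0.2, 0.9], exemplar=[0.2, 0.1], attention=[0.7, -0.3]))
```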