Distributed connectionist networks have difficulty learning incrementally because their internal representations overlap, so reducing this overlap is necessary for incremental learning. At the same time, representational overlap is precisely what gives these networks their ability to generalize. In this study, we use a modified multilayered neural network to numerically examine the trade-off between incremental learning and generalization, and we then propose a novel network model with structural lateral inhibition that reconciles the two abilities. We also analyze the behavior of the proposed model using Formal Concept Analysis, which reveals that the network implements “conceptualization”: the differentiation of, and mediation between, intensional and extensional representations. This study suggests a new paradigm for the traditional question of whether representations in the brain are distributed.
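The trade-off above can be illustrated with a minimal sketch, which is hypothetical and not the paper's actual model: a hidden layer in which each unit is suppressed in proportion to the activity of the other units. With no inhibition the representation stays fully distributed (overlapping, good for generalization); with strong inhibition it becomes sparse and nearly non-overlapping (easing incremental learning).

```python
def lateral_inhibition(activations, strength):
    """Suppress each unit by `strength` times the summed activity of the others.

    Hypothetical illustration of lateral inhibition, not the authors' model:
    strength = 0 leaves a distributed code; large strength yields a sparse,
    near winner-take-all code.
    """
    total = sum(activations)
    return [max(0.0, a - strength * (total - a)) for a in activations]

hidden = [0.9, 0.5, 0.1, 0.05]
print(lateral_inhibition(hidden, 0.0))  # distributed: all units stay active
print(lateral_inhibition(hidden, 0.5))  # sparse: only the strongest unit survives
```

The single `strength` parameter (an assumption for illustration) makes the continuum between overlapping and non-overlapping representations explicit.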