Abstract

Grounded language processing is a crucial component in many artificial intelligence systems, as it allows agents to communicate about their physical surroundings. State-of-the-art approaches typically employ deep learning techniques that perform end-to-end mappings between natural language expressions and representations grounded in the environment. Although these techniques achieve high levels of accuracy, they are often criticized for their lack of interpretability and their reliance on large amounts of training data. As an alternative, we propose a fully interpretable, data-efficient architecture for grounded language processing. The architecture is based on two main components. The first component comprises an inventory of human-interpretable concepts learned through task-based communicative interactions. These concepts connect the sensorimotor experiences of an agent to meaningful symbols that can be used for reasoning operations. The second component is a computational construction grammar that maps between natural language expressions and procedural semantic representations. These representations are grounded through their integration with the learned concepts. We validate the architecture using a variation on the CLEVR benchmark, achieving an accuracy of 96%. Our experiments demonstrate that the integration of a computational construction grammar with an inventory of interpretable grounded concepts can effectively achieve human-interpretable grounded language processing in the CLEVR environment.