Artificial curiosity, grounded in concepts from developmental psychology wherein an agent attempts to maximize its learning progress, has gained much attention in recent years. Meanwhile, social robots are slowly integrating into our daily lives, in schools, factories, and homes. In this contribution, we integrate recent advances in artificial curiosity and social robotics into a single expressive cognitive architecture. It is composed of artificial curiosity and social expressivity modules and a unique link between them: the robot verbally and non-verbally communicates its internally estimated learning progress, or learnability, to its human companion. We implemented this architecture in an interaction in which a fully autonomous robot took turns with a child selecting and solving tangram puzzles on a tablet. During the curious robot’s turn, it selected the tangram it estimated to be most learnable, communicated its selection to the child, and then attempted to solve it. We validated the implemented architecture and showed that the robot learned, estimated its learnability, and improved when its selection was based on its learnability estimation. Moreover, we ran a comparison study between curious and non-curious robots and showed that the robot’s curiosity-based behavior influenced the child’s selections. Based on the robot’s artificial curiosity module, we formulated an equation that estimates each child’s moment-by-moment curiosity from their selections. This analysis revealed an overall significant decrease in estimated curiosity during the interaction; however, this drop was significantly larger with the non-curious robot than with the curious one. These results suggest that the new architecture is a promising approach for integrating state-of-the-art curiosity-based algorithms into the growing field of social robotics.
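To make the learnability-driven selection concrete, the following is a minimal sketch of one common formulation of learning progress, in which a task's learnability is the recent decrease in its prediction error; the function names, the window size, and the sample error histories are all illustrative assumptions, not the paper's actual implementation:

```python
# Sketch of learning-progress ("learnability") based task selection.
# Assumes the standard formulation: learnability = recent drop in error.
# All names and numbers below are illustrative, not from the paper.

def learnability(errors, window=2):
    """Estimate learning progress as the drop in mean error over a recent window."""
    if len(errors) < 2 * window:
        return float("inf")  # unexplored tasks are maximally attractive
    older = sum(errors[-2 * window:-window]) / window
    recent = sum(errors[-window:]) / window
    return older - recent  # positive when performance is improving

def select_task(error_history):
    """Pick the task (e.g., a tangram puzzle) with the highest learnability."""
    return max(error_history, key=lambda task: learnability(error_history[task]))

history = {
    "swan":   [0.9, 0.8, 0.5, 0.3],  # improving fast -> high learnability
    "rabbit": [0.2, 0.2, 0.2, 0.2],  # already mastered -> no progress
    "house":  [0.9, 0.9, 0.9, 0.9],  # too hard -> no progress
}
print(select_task(history))  # -> swan
```

Under this kind of rule, the agent avoids both mastered and currently unsolvable tasks and concentrates on those of intermediate difficulty where its error is still falling.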