The human eye has three rotational degrees of freedom: azimuthal, elevational, and torsional. Although torsional eye movements have the most limited excursion, Hering and Helmholtz argued that they play an important role in optimizing visual information processing. In humans, the relationship between gaze direction and torsional eye angle is described by Listing’s law. However, it is still not clear how this behavior initially develops and remains calibrated during growth. Here we present the first computational model that enables an autonomous agent to learn and maintain binocular torsional eye movement control. In our model, two neural networks connected in series, one for sensory encoding and one for torsion control, are learned simultaneously as the agent behaves in its environment. Learning is based on the active efficient coding (AEC) framework, a generalization of Barlow’s efficient coding hypothesis to include action. Both networks adapt by minimizing the prediction error of the sensory representation, subject to a sparsity constraint on neural activity. The policies that emerge follow the predictions of Listing’s law. Because learning is driven by the sensorimotor contingencies the agent experiences as it interacts with the environment, our system can adapt to changes in the agent’s physical configuration. We propose that AEC provides the most parsimonious expression to date of Hering’s and Helmholtz’s hypotheses. We also demonstrate its practical implications for autonomous artificial vision systems by providing an automatic, adaptive mechanism that corrects orientation misalignments between cameras in a robotic active binocular vision head. Our system’s use of fairly low-resolution (100 × 100 pixel) image windows and perceptual representations amenable to event-based input paves the way toward adaptive, self-calibrating robot control on neuromorphic hardware.
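To make the learning objective concrete, consider a schematic sketch under standard sparse-coding assumptions; the notation below is generic and is not taken from the model itself. If $s$ denotes the binocular sensory input, $a$ the neural activations, and $\Phi$ a learned basis of receptive fields, then minimizing the prediction error of the sensory representation subject to a sparsity constraint corresponds to an objective of the form

$$ E = \lVert s - \Phi a \rVert_2^2 + \lambda \lVert a \rVert_1, $$

where the first term is the reconstruction (prediction) error, the $\ell_1$ term enforces sparsity of neural activity, and $\lambda$ is a hypothetical trade-off weight. Under the AEC framework, the torsion-control network can likewise be viewed as selecting actions that reduce this coding cost.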