With the increasing use and complexity of robotic devices, the requirements for the design of human–robot interfaces are changing rapidly and call for new means of interaction and information transfer. Within that scope, the discussed project—developed by the Hybrid Things Lab at the University of Applied Sciences Augsburg and the Design Research Lab at Bauhaus-Universität Weimar—takes a first step toward characterizing a novel field of research, exploring the design potential of non-mimetic sonification in the context of human–robot interaction. The setup features an industrial seven-axis manipulator, and multiple data streams (for instance, the position of the end-effector, joint positions, and forces) are collected during manipulation; these datasets are used to create a novel augmented audible presence and thus allow new forms of interaction. As such, this article considers (1) research parameters for non-mimetic sonification (such as pitch, volume, and timbre); (2) a comprehensive empirical pursuit, including setup, exploration, and validation; and (3) the overall implications of integrating these findings into a unifying human–robot interaction process. The relation between machinic and auditory dimensionality is of particular concern.
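To make the mapping from manipulation data to the sonification parameters named above (pitch, volume, timbre) concrete, the following minimal sketch renders a stream of robot telemetry as audio. It is an illustration only, not the project's implementation: the sample names, value ranges, and the velocity-to-pitch / force-to-volume assignment are assumptions introduced here for the example.

```python
# Minimal parameter-mapping sonification sketch (illustrative only).
# Assumption: telemetry arrives as (joint_velocity, end_effector_force)
# pairs; the names, ranges, and mappings are hypothetical.
import math
import struct
import wave

SAMPLE_RATE = 44100

def map_range(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map a telemetry value into an audio-parameter range."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

def sonify(telemetry, seconds_per_sample=0.25, path="sonification.wav"):
    """Render each telemetry sample as a short sine segment:
    joint velocity -> pitch, end-effector force -> volume."""
    frames = []
    phase = 0.0
    for joint_velocity, force in telemetry:
        freq = map_range(abs(joint_velocity), 0.0, 2.0, 220.0, 880.0)  # rad/s -> Hz
        amp = map_range(force, 0.0, 50.0, 0.1, 0.9)                    # N -> gain
        for _ in range(int(SAMPLE_RATE * seconds_per_sample)):
            phase += 2.0 * math.pi * freq / SAMPLE_RATE
            frames.append(int(32767 * amp * math.sin(phase)))
    with wave.open(path, "w") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(struct.pack("<" + "h" * len(frames), *frames))

# Example: a brief motion ramping up in joint speed and contact force.
sonify([(0.2, 5.0), (0.8, 12.0), (1.5, 30.0), (1.9, 45.0)])
```

A richer mapping would also vary timbre (for instance, by adding harmonics weighted by joint position), but the basic structure, scaling each machinic dimension into an auditory one, stays the same.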