Abstract

To understand how robots, such as self-driving cars, drones, or rovers, generate spaces of interaction, it is important to move away from the idea that they simply represent the world through their sensors and then decide upon an appropriate action to reach a predefined goal. The question is how they generate their own worlds in the first place. This article reassesses the concept of virtuality as an analytical tool to describe the world-making capacities of autonomous, environmentally adaptive machines. Such robots employ the probabilities of virtual world models, generated by algorithmically filtering sensor data, as statistical specifications. This sensor-algorithmic virtuality enables them to decide upon actions to be actualized in the real world by playing through a multiplicity of virtual options. They don’t have access to an outside view but rather operate based on a multiplicity of virtual, possible, and more or less probable worlds. In a case study on the Mars rover Perseverance, this article describes in detail the technical procedures and the underlying epistemological challenges of machinic virtuality and shows that this virtuality—as a multiplicity of virtual worlds—rests upon the algorithmic filtering of sensor data.
