I walk into the room and the smell of burning wood hits me immediately. The warmth from the fireplace grows as I step nearer to it. The fire needs to heat the little cottage through the night, so I add a log to the fire. There are a few sparks and embers. I throw a bigger log onto the fire and it drops with a thud. Again, there are only a few sparks and embers. The heat and the smell stay the same. They do not change, and I do not become habituated to them. Rather, they are just a steady stream, so I take off my VR headset and give my recommendations to the team programming the gamified world of the virtual museum of the future (one depicting an ancient Turkish settlement, being built now at the institution where one of us works).

As much as this technological world seems almost too futuristic, it actually retrieves obsolete items from the past—a heater, a piece of wood, and a spray bottle—in keeping with McLuhan's (1973) insights regarding media that provide strong participation goals and the rubric for achieving them. Moreover, the VR world extends the progression of game AI that occasioned the love-hate relationship with the "walking sim": the stronger the AI, the more clearly defined the rubric for participation. In the VR interactive museum, the designers want people to be able to "play" with haptic devices—like the smell, smoke, and heat generators—in order to heighten not only the immersion but also the perception of being there, or what Bolter and Grusin (1999) call "immediacy." Indeed, Bolter and Grusin argue that the need for immediacy overwhelmingly takes over, regardless of the medium's intrusion. However, in the example above, the system fell short because the designers had not accounted for someone gently laying the "log" on the virtual fire and expecting it to produce a representative—that is, perceptually plausible, grounded in experience, intuition, and the like—amount of sparks and heat. Someone else could throw the log as hard as they want; the machine senses only "log in" or "log out." This limitation bears precisely on how we feel about phenomena, for machines and AI are based upon a model of intelligence which prioritises mental representation and symbolic manipulation.

For Laird and van Lent (2001), in their field-defining presentation, the "killer app" of human-level AI was going to be computer games. Writing a decade later in the same conference proceedings, Weber, Mateas, and Jhala (2011) are still responding to this original position by way of AI in strategy games. Writing for this year's IEEE meeting, Petrović (2018) also makes the case for human-level AI in games. What becomes clear, then, is that as much as we have wanted games to offer human behaviours, perception has taken a backseat in the extant models. As phenomenology makes clear, the emphasis on behaviour over perception leaves out the crucial, indeed foundational, mode of intelligence: affective intentionality. Simply put, how we feel about phenomena impacts how we perceive phenomena as significant, inconsequential, interesting, and so on. Thus, we should be asking whether machines can understand significance. Can they feel any particular way about a game, a move, or the phenomenon of play? This becomes important when mental representation provides the mode of symbolic manipulation and vice versa, which occurs in and through a given game or gamified world's ability to instill, simulate, or otherwise produce affective intentionality. We would argue that herein lies the crux of the mixed reactions to Red Dead Redemption 2.
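To make the contrast concrete, the difference between the museum system's binary sensing and a force-sensitive, perceptually representative response might be sketched as follows. This is a minimal illustration in Python; the names (BinaryFireplace, PerceptualFireplace, on_log_added, impact_velocity) are hypothetical and do not come from the museum's actual codebase.

    # A minimal sketch of the two sensing models; all names are illustrative.
    import random

    class BinaryFireplace:
        """Event-based model: the system registers only 'log in' or 'log out'."""
        def on_log_added(self):
            # Every log yields the same canned response, however the
            # visitor placed or threw it.
            return {"sparks": 5, "heat_delta": 1.0}

    class PerceptualFireplace:
        """Force-sensitive model: the response scales with the gesture itself."""
        def on_log_added(self, impact_velocity, log_mass):
            # Impact energy drives the spark burst, approximating what a
            # visitor would intuitively expect from a thrown vs. laid log.
            energy = 0.5 * log_mass * impact_velocity ** 2
            sparks = int(energy * random.uniform(0.8, 1.2))
            return {"sparks": sparks, "heat_delta": 0.2 + 0.05 * energy}

    fire = PerceptualFireplace()
    print(fire.on_log_added(impact_velocity=0.3, log_mass=2.0))  # gently laid: ~0 sparks
    print(fire.on_log_added(impact_velocity=3.0, log_mass=2.0))  # thrown hard: a shower

In the binary model the gesture itself is invisible to the system; in the perceptual sketch the same event carries the felt qualities—force, weight—that a visitor intuitively expects the fire to register.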
Similarly, the example from the virtual museum highlights this ongoing omission. Human-level AI should not just reproduce a human's response to inputs; it should produce responses that a human would perceive. In short, human-level AI needs to perceive perception itself. Indeed, this is the primary cognitive and affective response. Phenomenology does not tell us that; first-principles semiotics does. However, phenomenology gives us the means and methods to understand the response to affective intentionality and, more importantly, to develop the contingent hermeneutic (Merleau-Ponty, 2013). Moreover, semiotics will never encompass the materiality required of such a system, let alone the simulated materiality that exists through interaction with the AI device and its interface—a device that bears the mark of its maker just as surely as a bespoke shirt does. Thus, our paper will consider the production of affective intentionality and the ways VR games and gamified systems, like the virtual museum and Red Dead Redemption 2, facilitate, impede, and especially teach the perception of perception. As a corollary, then, our paper necessarily considers meta-cognitive processes—that is, the strategies for learning about learning—that occur in and through interaction with AI in games and devices (cf. Hacker, 1998, 2016). Indeed, meta-cognition becomes a contingent component for instilling affective intentionality.