Abstract

We introduce a scheme for immersing real human users in urban simulations and for enabling them to transpose their embodied behavior into models. We achieve this by inverse augmentation, flipping traditional philosophies of augmented reality. Rather than beginning with real-world scenes and embellishing them with graphics, we proceed from a base of synthetic, modeled streetscapes filled with agent characters, which we augment with real human users. Participants are then free to use their natural abilities to explore the simulation scenarios. We accomplish this using mobile virtual reality, which allows users to build dynamic presence in a fused geosimulation and virtual geographic environment that they can physically view and walk around in. Our central argument is that inversion of this kind allows the detail and nuances of human behavior to be brought directly into simulation, where they would traditionally be difficult to capture and represent. We show that close matches between real physical activity on the ground and actions in the model world can be achieved, as measured by spatial analysis and encephalography of user brain activity. We demonstrate the usefulness of the approach with an application to studying pedestrian road-crossing behavior.
