Abstract

Most existing multi-user Augmented Reality (AR) systems allow multiple co-located users to view a common set of virtual objects but do not let each user directly interact with the other users appearing in their view. Enabling such interaction requires the system to detect human keypoints and estimate device poses (to identify different users) at the same time. However, due to stringent low-latency requirements and the intensive computation of these two capabilities, prior work supports only one of them on mobile devices, even with the aid of an edge server. Integrating the two capabilities is promising but non-trivial in terms of latency, accuracy, and matching. To fill this gap, we propose DiTing, which achieves real-time ID-aware multi-device visual interaction for multi-user AR applications through three key innovations: Shared On-device Tracking, which merges similar computations to reduce latency; Tightly Coupled Dual Pipeline, which improves the accuracy of each task through mutual assistance; and Body Affinity Particle Filter, which precisely matches device poses with human bodies. We implement DiTing on four types of mobile AR devices and develop a multi-user AR game as a case study. Extensive experiments show that DiTing provides high-quality human keypoint detection and pose estimation in real time (30 fps) for ID-aware multi-device interaction and outperforms state-of-the-art baselines.
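Since the abstract only names DiTing's components, a minimal, hypothetical sketch of the particle-filter matching idea may help: particles track a device's 3D position, each particle is weighted by its "affinity" to detected human bodies, and the device is assigned to the body with the highest accumulated affinity. All function names, parameters, and the motion/observation models below are illustrative assumptions, not the paper's actual Body Affinity Particle Filter.

```python
# Hypothetical sketch of particle-filter matching between a device's pose
# stream and detected human bodies. Names and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def affinity(p, body_center, scale=0.5):
    """Gaussian affinity between a particle position and a body's center."""
    d2 = np.sum((p - body_center) ** 2)
    return np.exp(-d2 / (2.0 * scale ** 2))

def step(particles, device_pos_obs, bodies, motion_noise=0.05):
    """One predict-weight-resample cycle; returns particles and per-body scores."""
    # Predict: diffuse particles with a simple random-walk motion model.
    particles = particles + rng.normal(0.0, motion_noise, particles.shape)
    # Weight: combine the device-pose observation with body affinity.
    obs_w = np.exp(-np.sum((particles - device_pos_obs) ** 2, axis=1) / 0.1)
    body_aff = np.array([[affinity(p, b) for b in bodies] for p in particles])
    weights = obs_w * body_aff.sum(axis=1)
    weights = weights / (weights.sum() + 1e-12)
    # Per-body score: affinity accumulated over the weighted particle cloud.
    scores = weights @ body_aff
    # Resample: multinomial resampling proportional to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], scores

# Toy run: two detected bodies; the device-pose observations stay near body 0.
bodies = [np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])]
particles = rng.normal(0.0, 1.0, (500, 3))
for _ in range(10):
    particles, scores = step(particles, np.array([0.1, 0.0, 0.0]), bodies)
print("matched body:", int(np.argmax(scores)))  # expect 0
```

In this toy setup the particle cloud concentrates around the observed device position, so the accumulated affinity identifies the nearby body as the match; the real system would additionally handle multiple devices and occlusion.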
