Abstract

People often form social structures (e.g., leader-follower, companion, or independent groups) to interact more easily, and members of a group consequently share similar perceptions of the visible scenes and invisible wireless signals they encounter while moving. Many mobility-driven applications therefore pay close attention to recognizing trajectory relationships among people. This work models visual and wireless data to quantify the trajectory similarity between a pair of users. We design a visual and wireless sensor fusion system, called ViWise, which incorporates first-person video frames captured by a wearable visual device and wireless packets broadcast by a personal mobile device to recognize finer-grained trajectory relationships within a mobility group. When people take similar trajectories, they usually observe similar visual scenes, and their wireless packets, as overheard by ambient wireless base stations (called wireless scanners in this work), usually exhibit similar patterns. We model the visual characteristics of the physical objects a user sees from two perspectives: micro-scale image structure with pixel-wise features and macro-scale semantic context. In contrast, we model the characteristics of wireless packets based on the wireless scanners encountered along the user's trajectory. Given two users' trajectories, the trajectory characteristics behind their visible video frames and invisible wireless packets are fused to compute a visual-wireless data similarity that quantifies the correlation between the two trajectories. We exploit this modeled similarity to recognize the social structure within user trajectories. Comprehensive experiments in indoor and outdoor environments show that the proposed ViWise recognizes trajectory relationships robustly, with an accuracy above 90%.
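To illustrate the fusion idea described above, the following is a minimal sketch: a visual similarity between two users' frame features is combined with a wireless similarity between the sets of scanners that overheard their packets. The function names, the choice of cosine and Jaccard measures, and the weighted-sum fusion rule are all illustrative assumptions for exposition, not the paper's actual formulation.

```python
# Hypothetical sketch of visual-wireless similarity fusion.
# The specific measures and weights are assumptions, not ViWise's method.
from math import sqrt


def cosine(u, v):
    """Cosine similarity between two visual feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0


def jaccard(s1, s2):
    """Jaccard similarity between the sets of wireless scanners encountered."""
    s1, s2 = set(s1), set(s2)
    return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 0.0


def fused_similarity(vis_a, vis_b, scan_a, scan_b, w_visual=0.5):
    """Weighted fusion of visual and wireless similarities (weight assumed)."""
    return w_visual * cosine(vis_a, vis_b) + (1.0 - w_visual) * jaccard(scan_a, scan_b)


# Two users who see similar scenes and share overheard scanners score high;
# the fused score lies in [0, 1] and can be thresholded to decide whether
# their trajectories are related.
sim = fused_similarity([0.9, 0.1, 0.4], [0.8, 0.2, 0.5],
                       {"AP1", "AP2", "AP3"}, {"AP2", "AP3", "AP4"})
```

In practice, a recognizer would compare such fused scores against a calibrated threshold (or feed them to a classifier) to label a pair of trajectories as leader-follower, companion, or independent.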
