Abstract

The ability to generate, share, and use ambient multimedia and sensory data in real time, using both traditional sensors and non-traditional ones (such as smart device users), is a pioneering practice that requires specialized network capabilities and visualization metaphors. The network must support both service discovery and cache sharing so that users can generate real-time sensory data and upload or share them with end-users searching for the same data. Visualization and coloring schemes must support both streaming and stored sensory data, allowing users to interact with either recent (stored) or up-to-the-minute (streaming) ambient sensory data on smart devices or the server. This article describes the design, and reports on the simulated performance, of a social network application that allows a group of users on an ad-hoc network to share real-time multimedia and ambient data about venues of potential interest. At the graphical interface level, we present an intuitive interface that allows users to capture and share an array of sensory data comfortably and efficiently, often with a single hand, using touch screen smart devices. At the network level, we describe an architectural model supported by a specific design strategy for service discovery and caching to facilitate data sharing. The performance of the architectural model is then evaluated to show that it can efficiently handle large volumes of sensory data when accessed from smart devices in a peer-to-peer environment.
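To make the service-discovery-with-caching idea concrete, the sketch below shows one minimal way such a scheme could work: a peer first answers a request from its own cache, and otherwise floods the request to neighbors within a hop limit, caching any fresh record it retrieves so later requests can be served locally. All class, method, and field names here (Peer, SensoryRecord, discover, and so on) are hypothetical illustrations under assumed semantics, not the authors' actual implementation.

```python
import time
from dataclasses import dataclass, field

@dataclass
class SensoryRecord:
    venue_id: str   # venue the reading refers to
    kind: str       # e.g. "temperature", "noise", "photo"
    value: object   # the captured reading or a media reference
    timestamp: float = field(default_factory=time.time)

class Peer:
    """A node in the ad-hoc network holding a local cache of sensory data."""

    def __init__(self, name: str, max_age: float = 300.0):
        self.name = name
        self.cache: dict[tuple[str, str], SensoryRecord] = {}
        self.neighbors: list["Peer"] = []
        self.max_age = max_age  # records older than this are treated as stale

    def publish(self, record: SensoryRecord) -> None:
        # Generated data is first stored in the local cache so that
        # later discovery requests from other peers can be answered.
        self.cache[(record.venue_id, record.kind)] = record

    def discover(self, venue_id: str, kind: str, ttl: int = 3):
        """Answer from the local cache if the record is fresh; otherwise
        forward the request up to `ttl` hops and cache any result found."""
        record = self.cache.get((venue_id, kind))
        if record and time.time() - record.timestamp <= self.max_age:
            return record
        if ttl == 0:
            return None
        for peer in self.neighbors:
            found = peer.discover(venue_id, kind, ttl - 1)
            if found:
                # Cache sharing: keep a copy so nearby peers asking for
                # the same data later are served without another flood.
                self.cache[(venue_id, kind)] = found
                return found
        return None

# Usage: one peer publishes a reading, a neighbor discovers and caches it.
a, b = Peer("a"), Peer("b")
a.neighbors.append(b)
b.publish(SensoryRecord("cafe-42", "noise", 61.5))
print(a.discover("cafe-42", "noise"))
```

The hop limit (`ttl`) bounds how far a request propagates through the ad-hoc network, trading discovery reach against message overhead; the freshness threshold (`max_age`) is one assumed way to reconcile cached (stored) data with the up-to-the-minute (streaming) data the abstract distinguishes.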
