Abstract

Sensors of various modalities and capabilities, especially cameras, have become ubiquitous in our environment. Their intended uses are wide-ranging and encompass surveillance, transportation, entertainment, education, healthcare, emergency response, disaster recovery, and the like. Technological advances and the low cost of such sensors enable the deployment of large-scale camera networks in metropolises such as London and New York. Multimedia algorithms for analyzing and drawing inferences from video and audio have also matured tremendously in recent times. Despite all these advances, reliable large-scale systems for media-rich sensor-based applications, often classified as situation-awareness applications, are yet to become commonplace. Why is that? There are several forces at work here. First, the system abstractions are simply not at the right level for quickly prototyping such applications at scale. Second, while Moore's law has held true for predicting the growth of processing power, the volume of data that applications are called upon to handle is growing similarly, if not faster. An enormous amount of sensor data is continually generated for real-time analysis in such applications. Further, due to the very nature of the application domain, such analyses have dynamic and demanding resource requirements. The lack of the right set of abstractions for programming such applications, coupled with their data-intensive nature, has hitherto made realizing reliable large-scale situation-awareness applications difficult. Incidentally, situation awareness is a very popular but ill-defined research area that has attracted researchers from many different fields. In this paper, we adopt a strong systems perspective and consider the components that are essential in realizing a fully functional situation-awareness system.
