Camera sensors differ from traditional scalar sensors in that cameras at different positions can form very different views of the same object. Traditional coverage models, however, do not capture this intrinsic property of camera sensors. To address this issue, a novel model called full-view coverage is proposed. It uses the angle between the object's facing direction and the camera's viewing direction to measure the quality of coverage: an object is full-view covered if, no matter which direction it faces, there is always a camera covering it whose viewing direction is sufficiently close to the object's facing direction. An efficient method is proposed for full-view coverage detection in any given camera sensor network, and a sufficient condition on the sensor density needed for full-view coverage under random uniform deployment is derived. In addition, the article establishes a necessary and sufficient condition on the sensor density for full-view coverage under triangular lattice-based deployment. Building on the full-view coverage model, the article further studies the barrier coverage problem. Existing weak and strong barrier coverage models are extended to account for direction issues in camera sensor networks. Under these new models, weak and strong barrier coverage verification problems are introduced, and new detection methods are proposed and evaluated.
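To illustrate the full-view coverage condition (this is a simplified sketch, not the paper's detection algorithm), note a geometric equivalence: if every camera within sensing range is assumed to have the point in its field of view, then a point is full-view covered with effective angle θ exactly when the largest angular gap between consecutive in-range cameras, as seen from the point, is at most 2θ. All names below (`is_full_view_covered`, `cameras`, `sensing_range`, `theta`) are illustrative, not from the article.

```python
import math

def is_full_view_covered(point, cameras, sensing_range, theta):
    """Check whether `point` is full-view covered under a simplifying
    assumption: every camera within sensing range can view the point
    (field-of-view orientation is ignored).

    For every facing direction f, some in-range camera must view the
    point from within angle theta of f; equivalently, all angular gaps
    between consecutive in-range cameras around the point are <= 2*theta.
    """
    px, py = point
    # Bearings from the point to each camera within sensing range.
    bearings = sorted(
        math.atan2(cy - py, cx - px)
        for cx, cy in cameras
        if math.hypot(cx - px, cy - py) <= sensing_range
    )
    if not bearings:
        return False
    # Largest angular gap, including the wrap-around gap.
    gaps = [b2 - b1 for b1, b2 in zip(bearings, bearings[1:])]
    gaps.append(2 * math.pi - (bearings[-1] - bearings[0]))
    return max(gaps) <= 2 * theta

# Example: four cameras surrounding the origin, theta = 60 degrees.
cams = [(1, 0), (0, 1), (-1, 0), (0, -1)]
print(is_full_view_covered((0, 0), cams, sensing_range=2.0,
                           theta=math.pi / 3))  # True
```

In this toy configuration every angular gap around the origin is π/2, which is below 2θ = 2π/3, so whichever direction an object at the origin faces, some camera views it from within 60° of that facing direction.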