Abstract

Visual sensor networks (VSNs) usually generate low-resolution (LR) frame-sequences due to energy and processing constraints. Such LR frames are poorly suited to surveillance applications that require fine detail, so enhancing their resolution with a resolution-enhancement scheme is essential. In this paper, an effective super-resolution (SR) framework is proposed that enhances the resolution of LR key-frames extracted from frame-sequences captured by visual sensors. In a VSN, a visual processing hub (VPH) collects a large amount of visual data from camera sensors. In the proposed framework, key-frames are extracted at the VPH using our recent key-frame extraction technique and are streamed to the base station (BS) after compression. A novel SR scheme is then applied at the BS to produce a high-resolution (HR) output from the received key-frames. The proposed SR scheme uses optimized orthogonal matching pursuit (OOMP) to recover the sparse representation used in SR. OOMP detects the true sparsity pattern more reliably than orthogonal matching pursuit (OMP), which helps produce an HR image closer to the original. The dictionary is learned with the K-SVD procedure, and Batch-OMP speeds up dictionary learning by removing the limitation on handling a large set of observed signals. Experimental results validate the effectiveness of the proposed scheme and show its superiority over other state-of-the-art schemes.
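For readers unfamiliar with the distinction, OMP adds, at every iteration, the atom most correlated with the current residual, whereas OOMP adds the atom whose inclusion most reduces the residual after re-projecting onto the enlarged support. The paper's own pseudocode is not reproduced on this page; the following minimal numpy sketch (all names are ours, and the per-candidate least-squares re-fit is written for clarity rather than speed) illustrates the OOMP selection rule:

```python
import numpy as np

def oomp(D, v, k):
    """Optimized OMP: at each step, add the atom that minimizes the
    least-squares residual of v over the enlarged support. Plain OMP
    would instead pick argmax |D.T @ residual|."""
    m = D.shape[1]
    support, coef = [], np.zeros(0)
    for _ in range(k):
        best_j, best_err = None, np.inf
        for j in range(m):
            if j in support:
                continue
            Dj = D[:, support + [j]]
            # Residual norm if atom j were added (least-squares re-fit).
            cj, *_ = np.linalg.lstsq(Dj, v, rcond=None)
            err = np.linalg.norm(v - Dj @ cj)
            if err < best_err:
                best_j, best_err, coef = j, err, cj
        support.append(best_j)
    x = np.zeros(m)
    x[support] = coef
    return x
```

Because each candidate is scored by the residual it would leave behind, OOMP is less likely to commit to an atom that is merely correlated with the residual; this is the "true sparsity" advantage the abstract refers to.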

Highlights

  • A visual sensor network (VSN) is a distributed wireless system that interacts with the physical environment by observing it through a visual sensor

  • In [25], a Bayesian-based SR scheme was presented that computes wavelet coefficients of the target image from: (1) multiple images observed in VSN and (2) a prior image that is imposed by the component exponential mixture model

  • The dictionary was trained from a set of 10⁶ example patches of size 9×9 randomly sampled from images obtained from various data-sets (a sketch of this sampling step follows below)
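To make the third highlight concrete, the random patch-sampling step might look like the following minimal sketch; the grayscale-image inputs, the DC-removal step, and all names here are our assumptions rather than the paper's code:

```python
import numpy as np

def sample_patches(images, num_patches=1_000_000, size=9, seed=0):
    """Randomly crop `num_patches` patches of size x size from a list of
    grayscale images (2-D numpy arrays) and stack them as columns,
    matching the ~10^6 examples of 9x9 patches quoted above."""
    rng = np.random.default_rng(seed)
    patches = np.empty((size * size, num_patches))
    for i in range(num_patches):
        img = images[rng.integers(len(images))]
        r = rng.integers(img.shape[0] - size + 1)
        c = rng.integers(img.shape[1] - size + 1)
        patches[:, i] = img[r:r + size, c:c + size].ravel()
    # Removing the mean (DC component) of each patch is a common
    # preprocessing step before dictionary learning; assumed here.
    patches -= patches.mean(axis=0, keepdims=True)
    return patches
```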

Summary

Introduction

A visual sensor network (VSN) is a distributed wireless system that interacts with the physical environment by observing it through visual sensors. Most researchers in the area of VSNs agree that transmitting all of the captured visual data to the BS is impractical because of two major constraints: energy and bandwidth [3]. For this reason, various schemes based on key-frame extraction have been developed to produce summaries of the videos captured by camera sensors [4,5,6]. SR algorithms are resource-expensive, and running them onboard the sensors would consume too much energy; they are therefore usually implemented on workstations, where energy consumption is not a problem. In this approach, low-quality frames are transmitted from the visual sensors to the VPH and on to the BS, consuming less energy and bandwidth than transmitting the full data [3]. To cope with these constraints, this paper presents an effective SR framework for summarized frames generated from the frame-sequences captured by the visual sensors.
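The division of labour described above (cheap summarization at the VPH, resource-expensive SR only at the BS) can be sketched with toy stand-ins; the frame-difference key-frame selector and the downscaling "compression" below are our placeholders, not the extraction technique of [4,5,6]:

```python
import numpy as np

def extract_key_frames(frames, thresh=10.0):
    """Toy key-frame selector: keep a frame when its mean absolute
    difference from the last kept frame exceeds a threshold."""
    keys = [frames[0]]
    for f in frames[1:]:
        if np.mean(np.abs(f.astype(float) - keys[-1].astype(float))) > thresh:
            keys.append(f)
    return keys

def vph_side(frames):
    """VPH side: summarize, then 'compress' (2x downscale stand-in)
    before streaming the key-frames to the BS."""
    return [k[::2, ::2] for k in extract_key_frames(frames)]
```

The OOMP-based SR step, being the expensive part, then runs only at the BS, which is the energy argument made above.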

Related Work
Framework of the Proposed System
Summarization of Video Sequences Received from Visual Sensors
Dictionary Learning Process
Atom selection uses the correlations Dᵀ(v − D_I D_I† v), i.e., the residual of v after projection onto the currently selected atoms D_I; the dictionary update minimizes ‖V − DA‖_F = ‖V − Σ_{j=1}^{K} d_j a_j‖_F atom by atom until the while-loop's stopping criterion is met
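The entry above is what survives of the dictionary-learning pseudocode, so the following is a hedged reconstruction of a generic K-SVD loop rather than the paper's exact algorithm; it uses plain OMP for the sparse-coding stage where the paper uses the faster Batch-OMP, and every name in it is ours:

```python
import numpy as np

def omp(D, v, k):
    """Plain OMP sparse coder (stand-in for Batch-OMP)."""
    support, residual = [], v.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], v, rcond=None)
        residual = v - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def ksvd(V, num_atoms, sparsity, iters=10, seed=0):
    """Minimize ||V - D A||_F^2 by alternating sparse coding with
    per-atom rank-1 SVD updates of the restricted error matrix."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((V.shape[0], num_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(iters):
        A = np.column_stack([omp(D, V[:, i], sparsity)
                             for i in range(V.shape[1])])
        for j in range(num_atoms):
            users = np.nonzero(A[j])[0]           # signals that use atom j
            if users.size == 0:
                continue
            # Error without atom j's contribution, restricted to its users.
            E = (V[:, users] - D @ A[:, users]
                 + np.outer(D[:, j], A[j, users]))
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j] = U[:, 0]                      # updated atom
            A[j, users] = s[0] * Vt[0]             # updated coefficients
    return D, A
```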
Image Super-Resolution
Experimental Results and Discussion
Quantitative Evaluation
Energy Consumption Analysis
Subjective Evaluation
Limitation of the Proposed Scheme
Example-Based Study
Conclusions