Abstract

Low-cost radar/optical visual sensing for environmental monitoring and remote sensing through networks of unmanned aerial vehicles (UAVs) has received growing attention, and many intelligent and pervasive computing techniques are now used to make vision-assisted UAV networks more reliable and dependable for distributed surveillance. At the same time, visual information reconstruction has become an active topic in machine vision and UAV-borne remote sensing. In this context, a non-linear blind edge-guided spatial filter based on linear minimum mean square error (LMMSE) estimation theory was recently proposed for video sequence reconstruction in green radar-based monitoring. Although this technique performs acceptably compared to conventional linear techniques in the area, the strategy behind the estimator is not flexible enough to support scalable zooming and its corresponding reconstruction templates. The main objective of this research is to introduce a new version of this filter that supports scalable zooming and remains flexible across different templates with arbitrary scales. The proposed algorithm thus benefits simultaneously from both edge-directed and distance-weighted adaptation. The proposed approach is theoretically proven, and experiments on real UAV visual data demonstrate its suitability for low-cost sensing in the green Internet of multimedia things (G-IoMT).
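The abstract does not give the filter's equations, so the following is only a rough illustrative sketch of the general idea it describes: an arbitrary-scale zoom in which each target pixel is estimated from its low-resolution neighbors using weights that combine inverse spatial distance with an edge-directed penalty. The function name edge_guided_zoom and every implementation detail below are hypothetical assumptions for illustration, not the authors' filter or its LMMSE derivation.

import numpy as np

def edge_guided_zoom(img, scale):
    # Sketch: arbitrary-scale zoom combining edge-directed and
    # distance-weighted adaptation. Each high-resolution pixel is a
    # weighted mean of the four nearest low-resolution samples, which
    # is the MMSE estimate under a simple locally Gaussian model.
    h, w = img.shape
    H, W = int(round(h * scale)), int(round(w * scale))
    out = np.empty((H, W), dtype=np.float64)

    # Local gradient magnitude as a crude edge indicator.
    gy, gx = np.gradient(img.astype(np.float64))
    grad = np.hypot(gx, gy)

    eps = 1e-6
    for i in range(H):
        for j in range(W):
            # Back-project the target pixel into source coordinates.
            y, x = i / scale, j / scale
            y0, x0 = int(np.floor(y)), int(np.floor(x))
            weights, values = [], []
            for dy in (0, 1):
                for dx in (0, 1):
                    yy = min(y0 + dy, h - 1)
                    xx = min(x0 + dx, w - 1)
                    # Distance-weighted term: nearer samples count more.
                    w_dist = 1.0 / (np.hypot(y - yy, x - xx) + eps)
                    # Edge-directed term: down-weight samples that sit
                    # on strong edges, so estimates do not blur across them.
                    w_edge = 1.0 / (1.0 + grad[yy, xx])
                    weights.append(w_dist * w_edge)
                    values.append(img[yy, xx])
            weights = np.asarray(weights)
            out[i, j] = np.dot(weights, values) / weights.sum()
    return out

For example, edge_guided_zoom(frame, 1.5) would upscale a single-channel frame by a factor of 1.5; because the scale enters only through the back-projection, the same template applies unchanged to any zoom factor, which is the flexibility the abstract emphasizes.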
