Abstract

Object segmentation for monitoring systems based on the Internet of Drones plays an important role in practical wide-area smart-city intelligent monitoring applications. It is a key step in extracting objects from remote-sensing images and provides a reliable theoretical basis for key-property monitoring, environmental monitoring, disaster monitoring, and agricultural monitoring. To improve the accuracy of object segmentation and to address inadequate edge recognition, a joint-learning segmentation scheme was designed that combines a conditional random field (CRF) model with an improved U-net model. The improved U-net serves as the front end of the joint-learning framework and performs feature fusion, while the CRF model serves as the back end and is reformulated as a recurrent neural network optimized by gradient descent. The joint-learning framework lets the front and back ends interact with each other so that the location of a target and its classification information are obtained accurately. The framework was evaluated on open datasets and compared with state-of-the-art remote-sensing image segmentation algorithms. The experimental results show that the accuracy of ground-object segmentation improved to 86.1%, which is an encouraging improvement.
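
To make the described architecture concrete, the sketch below shows one plausible way to wire a simplified U-net front end to a CRF back end unrolled as a recurrent mean-field refinement, trained jointly end to end. It is a minimal illustration in PyTorch, not the authors' implementation: the module names, the single-level encoder-decoder, the fixed Gaussian message-passing kernel, and the number of mean-field iterations are all assumptions made for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MiniUNet(nn.Module):
    # Simplified U-net front end: one down/up level with a skip connection for feature fusion.
    def __init__(self, in_ch=3, n_classes=2, base=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(base, base, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(base, base * 2, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(base, n_classes, 1)

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))  # skip-connection feature fusion
        return self.head(d)                              # per-pixel class scores (unary potentials)

class MeanFieldCRF(nn.Module):
    # CRF back end unrolled as a recurrent refinement of the unary scores.
    # Message passing is approximated here by a fixed Gaussian blur; a full model
    # would use bilateral filtering driven by image features (illustrative assumption).
    def __init__(self, n_classes=2, iters=5):
        super().__init__()
        self.iters = iters
        self.n_classes = n_classes
        self.compat = nn.Conv2d(n_classes, n_classes, 1, bias=False)  # learnable label compatibility
        g = torch.tensor([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]]) / 16.0
        self.register_buffer("kernel", g.expand(n_classes, 1, 3, 3).clone())

    def forward(self, unary):
        q = unary
        for _ in range(self.iters):
            p = F.softmax(q, dim=1)
            msg = F.conv2d(p, self.kernel, padding=1, groups=self.n_classes)  # spatial message passing
            q = unary - self.compat(msg)                                      # compatibility transform + unary update
        return q

class JointSegmenter(nn.Module):
    # Joint framework: U-net unaries refined by the recurrent CRF, trained end to end.
    def __init__(self, n_classes=2):
        super().__init__()
        self.unet = MiniUNet(n_classes=n_classes)
        self.crf = MeanFieldCRF(n_classes=n_classes)

    def forward(self, x):
        return self.crf(self.unet(x))

if __name__ == "__main__":
    model = JointSegmenter(n_classes=2)
    img = torch.randn(1, 3, 64, 64)                  # dummy remote-sensing tile
    target = torch.randint(0, 2, (1, 64, 64))        # dummy ground-truth mask
    loss = F.cross_entropy(model(img), target)
    loss.backward()                                  # gradients flow through both CRF and U-net jointly
    print(loss.item())

Because the mean-field iterations are differentiable, the cross-entropy loss trains the U-net features and the CRF parameters together, which is the interaction between front and back ends that the abstract describes.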
