Abstract

Combining image segmentation with Web technology lays a solid foundation for lightweight, cross-platform, and pervasive Web artificial intelligence applications, and further extends the capabilities of Web-of-Things (WoT) applications. However, whether we use a Web real-time communication media server that treats camera input as a video stream for advanced processing, or transfer continuous camera frames to a remote cloud, we cannot obtain a satisfactory real-time experience because of high resource consumption and unacceptable latency. In this article, we present EdgeBooster, a computationally efficient architecture that leverages a common edge server to minimize communication costs, accelerate camera-frame segmentation, and guarantee acceptable segmentation accuracy with the help of prior knowledge. EdgeBooster provides real-time segmentation by applying parallelism that segments slices of a camera frame concurrently, and by using superpixel-based presegmentation to accelerate the graph-based segmentation. It also introduces recent DNN-based segmentation results as prior knowledge to improve the performance of the graph-based segmentation, especially in nonideal scenes such as low light and low contrast. Finally, it provides a pure frontend segmentation mode that delivers continuous and stable service to mobile users under unstable network conditions, such as a weak network or an unreliable edge server. The experimental results show that EdgeBooster achieves considerable accuracy on the mobile Web, running at no less than 30 frames per second in real scenes.
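To illustrate how presegmentation can accelerate graph-based segmentation, the sketch below groups pixels into coarse blocks (a stand-in for superpixels) and then merges adjacent regions with a union-find pass. This is a minimal, hypothetical pure-Python illustration of the general idea, not EdgeBooster's implementation; all function names, the block-based presegmentation, and the intensity threshold `tau` are assumptions for the example.

```python
# Sketch: presegmentation + graph-based region merging.
# Real systems would use superpixels (e.g., SLIC) and the
# Felzenszwalb-Huttenlocher criterion; this toy version merges
# adjacent regions whose mean intensities are close.

def presegment(image, block=2):
    """Group pixels into block x block 'superpixels'; return a label map."""
    h, w = len(image), len(image[0])
    blocks_per_row = (w + block - 1) // block
    return [[(r // block) * blocks_per_row + (c // block)
             for c in range(w)] for r in range(h)]

def region_means(image, labels):
    """Mean intensity of each presegmented region."""
    sums, counts = {}, {}
    for r, row in enumerate(labels):
        for c, lab in enumerate(row):
            sums[lab] = sums.get(lab, 0) + image[r][c]
            counts[lab] = counts.get(lab, 0) + 1
    return {lab: sums[lab] / counts[lab] for lab in sums}

def merge_regions(image, labels, tau=20):
    """Union-find merge of adjacent regions whose means differ by < tau.

    Merging operates on regions rather than pixels, so the graph is far
    smaller than a per-pixel graph -- the point of presegmentation.
    (Single pass: region means are not recomputed after each merge.)
    """
    means = region_means(image, labels)
    parent = {lab: lab for lab in means}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    h, w = len(labels), len(labels[0])
    for r in range(h):
        for c in range(w):
            for dr, dc in ((0, 1), (1, 0)):  # right and down neighbors
                if r + dr < h and c + dc < w:
                    a = find(labels[r][c])
                    b = find(labels[r + dr][c + dc])
                    if a != b and abs(means[a] - means[b]) < tau:
                        parent[b] = a
    return [[find(lab) for lab in row] for row in labels]
```

For a 4x4 frame whose left half is dark and right half is bright, `presegment` with `block=2` yields four regions, and `merge_regions` collapses them into two, one per half.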

