Abstract

Multisensor data fusion combines multiple information sources to produce a more accurate or complete description of the environment. This article studies an object identification (OID) system that uses multiple distributed cameras and Internet-of-Things (IoT) devices for better visualizability and reconfigurability. We first propose a data processing and fusion method that merges the detection results of different IoT devices and video cameras in order to locate, identify, and track target objects in the monitored area. We then develop the FusionTalk system by integrating these data fusion techniques with IoTtalk, an IoT device management platform. FusionTalk is designed for flexibility, modularity, and extensibility: cameras, IoT devices, and network applications are modularized and can be conveniently plugged in or out, reconfigured, and reused through graphical user interfaces. In FusionTalk, the scope and targets of surveillance can be flexibly configured and associated, and administrators can receive alerts and easily visualize the movement and behavior of specific objects. Our experimental evaluation of the data fusion algorithm in various scenarios shows an identification accuracy above 95%. Finally, we present theoretical and numerical analyses of the probability that FusionTalk fails to pair an IoT device with a video object. Extensive experiments demonstrate the pairing effectiveness in real-world scenarios, with a failure probability of less than 0.01%.
