Abstract

In recent years, various indoor social security emergencies have brought major changes to people’s lives. However, state-of-the-art surveillance technology that relies on a single visual tracking method is insufficient: it may produce inaccurate tracking (e.g., blurred faces, lost targets, and occlusion) and is time-consuming for long video sequences. Accurate tracking and localization are therefore essential for indoor localization systems. This paper designs a fused communication system called the Internet of Electronic-Visual Things Localization system (IoEVT), which tracks targets with the assistance of an Adaptive Kalman Filter (AKF). The AKF-based data fusion approach to indoor localization combines electronic (E) and visual (V) data, such as WiFi and camera measurements. The fusion significantly improves EV sensing in occlusion scenarios, where single V data alone cannot resolve the occluded visual information, and reduces the localization error rate. In addition, real-world experimental validation shows that the proposed IoEVT system achieves better accuracy, scalability, and robustness with low network bandwidth consumption than single E or V data in many scenarios, both occluded and occlusion-free.
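To make the fusion idea concrete, the following is a minimal sketch (not the authors' implementation) of how an adaptive Kalman filter could fuse a WiFi position fix with a camera position fix, down-weighting the camera measurement when the target appears occluded. All variable names, noise values, the update interval, and the occlusion flag are illustrative assumptions, not details taken from the paper.

    import numpy as np

    dt = 0.1                                     # assumed update interval (s)
    F = np.array([[1, 0, dt, 0],                 # constant-velocity state transition
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    H = np.array([[1, 0, 0, 0],                  # WiFi measures (x, y)
                  [0, 1, 0, 0],
                  [1, 0, 0, 0],                  # camera also measures (x, y)
                  [0, 1, 0, 0]], dtype=float)
    Q = 0.01 * np.eye(4)                         # assumed process noise
    x = np.zeros(4)                              # state: [x, y, vx, vy]
    P = np.eye(4)

    def akf_step(x, P, z_wifi, z_cam, cam_occluded):
        """One predict/update cycle fusing WiFi and camera position fixes."""
        # Predict with the constant-velocity model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Adaptive measurement noise: trust the camera less when occluded.
        r_wifi = 2.0                             # assumed WiFi position variance (m^2)
        r_cam = 100.0 if cam_occluded else 0.05  # inflate camera variance under occlusion
        R = np.diag([r_wifi, r_wifi, r_cam, r_cam])
        # Standard Kalman update with the stacked measurement vector.
        z = np.concatenate([z_wifi, z_cam])
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
        return x, P

    # Example: camera is occluded, so the fused estimate leans on the WiFi fix.
    x, P = akf_step(x, P, z_wifi=np.array([3.1, 4.2]),
                    z_cam=np.array([0.0, 0.0]), cam_occluded=True)
    print(x[:2])                                 # fused (x, y) position estimate

The key design choice illustrated here is that occlusion only inflates the camera's measurement covariance rather than discarding the camera channel, so the filter degrades gracefully toward the electronic (WiFi) data instead of losing the target.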
