Abstract

Object perception and location data come from a wide range of sources with different structures and presentation methods, so it is worthwhile to study how to fuse data from these heterogeneous sources to improve positioning accuracy. We propose a data fusion method based on RFID virtual reference tags and video multi-feature matching, performed in an edge computing environment. First, the two types of data are processed separately. When RFID data are used to obtain positioning information, virtual reference tags are inserted to improve the positioning accuracy of the tags. When video data are used, valuable segments are extracted via keyframes, and target detection is carried out by combining three-frame differencing with background subtraction; the location of the item to be positioned is then obtained through multi-feature matching and coordinate transformation. Finally, a Kalman filter fuses the separately processed RFID positioning data with the video positioning data: observed values are combined with the predicted values calculated by the model to update the state, and positioning data with higher accuracy are obtained after several iterations. Experimental results show that data fusion at the edge server better meets the real-time requirements of video data, and that the fused localization data are more accurate than localization from RFID or video data alone.
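The fusion step described above can be illustrated with a minimal sketch. The abstract does not give the paper's state model or noise parameters, so the following assumes a one-dimensional constant-position model and hypothetical measurement-noise variances (`rfid_var`, `video_var`); the two sensor streams are fused sequentially within each predict/update cycle, as in a standard Kalman filter.

```python
def kalman_fuse(rfid_positions, video_positions,
                rfid_var=0.25, video_var=0.09, process_var=0.01):
    """Fuse two noisy 1-D position streams with a Kalman filter.

    Hypothetical parameters: rfid_var / video_var are measurement-noise
    variances for each sensor; process_var models target motion between
    steps. These values are illustrative, not from the paper.
    """
    x = rfid_positions[0]   # initial state estimate
    p = rfid_var            # initial estimate variance
    fused = []
    for z_rfid, z_video in zip(rfid_positions, video_positions):
        # Predict: constant-position model; uncertainty grows by process noise.
        p += process_var
        # Update with the RFID observation.
        k = p / (p + rfid_var)        # Kalman gain
        x += k * (z_rfid - x)
        p *= (1.0 - k)
        # Update with the video observation (sequential two-sensor fusion).
        k = p / (p + video_var)
        x += k * (z_video - x)
        p *= (1.0 - k)
        fused.append(x)
    return fused
```

Because each update shrinks the estimate variance `p`, the fused estimate weights whichever sensor is less noisy more heavily, which is the mechanism behind the iterative accuracy improvement the abstract describes.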
