Abstract

Obtaining fine-grained spatial information is of practical importance in Radio Frequency Identification (RFID)-based systems for enabling multi-object identification. However, because high-precision positioning remains impractical in commercial-off-the-shelf (COTS) RFID systems, researchers have proposed combining computer vision (CV) with RFID, turning the positioning problem into a matching problem. Promising as this direction seems, current methods fuse CV and RFID by converting the traces of tagged objects extracted from video into phase sequences for matching, a dimension-reducing step that sacrifices spatial resolution. Consequently, they fail under harsh conditions such as small tag separations and low reading rates. To address this limitation, we propose TagFocus, which achieves fine-grained multi-object identification in RFID systems with visual aids. The key observation is that traces generated by different methods should be consistent if they belong to the same object. Accordingly, a Transformer-based sequence-to-sequence (seq2seq) model is trained to generate a simulated trace for each candidate tag-object pair, and the trace of the correct pair should best match the observed trace directly extracted by CV. A prototype of TagFocus is implemented and extensively evaluated in lab environments. Experimental results show that our system maintains a matching accuracy of over 91% under harsh conditions, outperforming state-of-the-art schemes by 27%.
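As a rough illustration of the matching step summarized above, the sketch below pairs tags with CV-observed objects by generating a simulated trace for every candidate tag-object pair and keeping the pairing whose simulated trace is closest to the observed one. The generator interface (generate_trace), the mean-Euclidean trace distance, and the Hungarian one-to-one assignment are assumptions made for illustration only, not the paper's actual implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def trace_distance(simulated, observed):
    """Mean Euclidean distance between two equal-length 2-D traces.

    A simple stand-in for whatever trace-similarity measure TagFocus uses.
    """
    return float(np.mean(np.linalg.norm(simulated - observed, axis=1)))


def match_tags_to_objects(phase_seqs, cv_traces, generate_trace):
    """Pair each tag with the CV-observed object whose trace best matches
    the trace simulated for that candidate tag-object pair.

    phase_seqs     : list of (T,) arrays of RFID phase readings, one per tag
    cv_traces      : list of (T, 2) arrays of object positions extracted by CV
    generate_trace : callable(phase_seq, observed_trace) -> (T, 2) simulated trace;
                     a hypothetical interface standing in for the Transformer
                     seq2seq generator described in the abstract
    """
    n_tags, n_objs = len(phase_seqs), len(cv_traces)
    cost = np.zeros((n_tags, n_objs))
    for i, phases in enumerate(phase_seqs):
        for j, observed in enumerate(cv_traces):
            # Simulate a trace for this candidate pair and score it against
            # the trace observed by CV; the correct pair should score lowest.
            simulated = generate_trace(phases, observed)
            cost[i, j] = trace_distance(simulated, observed)
    # One-to-one assignment via the Hungarian method (an assumed choice;
    # the paper's matching rule may differ).
    rows, cols = linear_sum_assignment(cost)
    return {int(i): int(j) for i, j in zip(rows, cols)}
```

In use, generate_trace would wrap the trained seq2seq model; the returned dictionary maps each tag index to the index of its best-matching visual object under the assumed cost and assignment scheme.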
