Abstract

In this paper, we propose Anchor-agnostic Transformers (AaTs) that exploit the attention mechanism for Received Signal Strength (RSS) based fingerprinting localization. In real-world applications, the RSS modality is notoriously sensitive to dynamic environments. Because most machine learning algorithms applied to the RSS modality lack an attention mechanism, they capture only superficial representations rather than the subtle but distinct ones that characterize specific locations, leading to significant degradation in the testing phase. In contrast, AaTs can attend exclusively to the relevant anchors in each RSS sequence to capture these subtle but distinct representations. This also allows the model to disregard redundant clues introduced by noisy ambient conditions, thereby achieving better accuracy in fingerprinting localization. Moreover, explicitly resolving collapse at the feature level (i.e., non-informative or homogeneous features) further invigorates the self-attention process, making subtle but distinct location-specific representations easier to capture. To this end, we augment the proposed model with two sub-constraints, namely covariance and variance losses, which are jointly optimized with the main task during representation learning in a novel multi-task learning manner. To evaluate our AaTs, we compare them against state-of-the-art (SoTA) methods on three benchmark indoor localization datasets. The experimental results confirm our hypothesis and show that the proposed models achieve substantially higher accuracy.
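The abstract describes the covariance and variance sub-constraints only at a high level. As an illustration, the sketch below shows one common way such feature-level anti-collapse losses are defined (in the style of VICReg-type regularizers) and combined with a main task loss in a multi-task fashion. This is a minimal Python/PyTorch sketch under those assumptions, not the paper's actual implementation; the names `variance_loss` and `covariance_loss`, the margin `gamma`, and the weights `lambda_var` and `lambda_cov` are hypothetical.

```python
import torch


def variance_loss(z: torch.Tensor, gamma: float = 1.0, eps: float = 1e-4) -> torch.Tensor:
    # Hinge loss on the per-dimension standard deviation across the batch:
    # pushes every feature dimension to keep at least `gamma` spread,
    # discouraging homogeneous (collapsed) features.
    std = torch.sqrt(z.var(dim=0) + eps)
    return torch.relu(gamma - std).mean()


def covariance_loss(z: torch.Tensor) -> torch.Tensor:
    # Penalizes the squared off-diagonal entries of the batch covariance
    # matrix, decorrelating feature dimensions so each stays informative.
    n, d = z.shape
    z = z - z.mean(dim=0)
    cov = (z.T @ z) / (n - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    return off_diag.pow(2).sum() / d


if __name__ == "__main__":
    feats = torch.randn(256, 64)        # (batch, feature_dim) embeddings
    task_loss = torch.tensor(0.0)       # placeholder for the main localization loss
    lambda_var, lambda_cov = 1.0, 0.04  # hypothetical weights, not from the paper
    total = task_loss + lambda_var * variance_loss(feats) + lambda_cov * covariance_loss(feats)
    print(total.item())
```

In this multi-task setup, the two regularizers are simply added to the main localization objective with scalar weights, so the encoder is optimized for the task while being steered away from non-informative or homogeneous feature solutions.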
