Abstract

With the growth of IoT services, there has been increased demand for indoor localization-based services. Wi-Fi access is omnipresent; its high accuracy and its ability to run on commodity devices make it well suited for wide adoption in indoor localization. Recent sequence transduction models such as the recurrent neural network (RNN) and long short-term memory (LSTM) rely mostly on recurrence. Both LSTM and RNN have achieved strong results on localization tasks, but their sequential character prevents effective parallel computation, limiting their performance on very long sequences. Lately, models developed for natural language processing transduction tasks that rely solely on the attention mechanism have performed remarkably well with less computation. This paper is the first to propose using the self-attention mechanism alone for localization time-series modeling. We introduce a self-attention fingerprinting-based model (SAMFI) that uses positional encoding and a masking mechanism. To capture temporal ordering information, we use the extended symbolic aggregate approximation strategy. Moreover, the proposed model utilizes calibrated channel state information as location fingerprints. SAMFI's pivotal concept is simple and empirically potent. The obtained results significantly reduced the location error on the collected dataset, with an accuracy of 86.5%, outperforming both the RNN and LSTM models, which scored 82.6% and 67.5%, respectively.
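The abstract does not give SAMFI's exact architecture, but the two ingredients it names, positional encoding and a masking mechanism on top of self-attention, can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the authors' implementation: it uses sinusoidal positional encoding and scaled dot-product attention in the style of standard Transformer models, with Q = K = V for brevity (a real layer would use learned projections), applied to a random stand-in for a CSI fingerprint sequence.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    # Sinusoidal positional encoding injects temporal order, which
    # plain self-attention otherwise ignores.
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

def self_attention(x, mask=None):
    # Scaled dot-product self-attention; Q = K = V = x here for brevity
    # (a real layer would use learned projection matrices).
    d_k = x.shape[-1]
    scores = x @ x.T / np.sqrt(d_k)
    if mask is not None:
        # Masked-out positions get a large negative score, so their
        # softmax weight is effectively zero.
        scores = np.where(mask, scores, -1e9)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x, weights

# Toy stand-in for a CSI sequence: 6 time steps, 8-dimensional features.
seq_len, d_model = 6, 8
x = np.random.default_rng(0).normal(size=(seq_len, d_model))
x = x + positional_encoding(seq_len, d_model)

# Causal mask: each step may attend only to itself and earlier steps.
mask = np.tril(np.ones((seq_len, seq_len), dtype=bool))
out, w = self_attention(x, mask)
print(out.shape)  # (6, 8): one attended feature vector per time step
```

Because every time step attends to all (unmasked) others in one matrix multiplication, the whole sequence is processed in parallel, which is the computational advantage over RNN/LSTM recurrence that the abstract alludes to.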
