Abstract

Learning meaningful representations of time-series data is difficult because such data are sparsely labeled and unpredictable. Hence, we propose bootstrap inter–intra modality at once (BIMO), an unsupervised representation learning method for time series. Unlike previous works, BIMO learns both inter-sample and intra-temporal representations simultaneously without negative pairs. BIMO comprises a main network and two auxiliary networks: an inter-auxiliary network and an intra-auxiliary network. The main network is trained against each auxiliary network in turn, with the use of each auxiliary network dynamically regulated, so that it thoroughly learns inter- and intra-modality representations together. The experimental results demonstrate that BIMO outperforms state-of-the-art unsupervised methods and achieves performance comparable to existing supervised methods.
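The abstract does not specify an architecture, but the overall setup (a main network bootstrapped against auxiliary networks, with no negative pairs) resembles BYOL-style self-supervised learning. The following is a minimal numpy sketch under that assumption; the linear encoders, the augmentation, and the mixing weight `alpha` are all hypothetical stand-ins, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- the paper does not specify an architecture.
D_IN, D_OUT = 16, 8

def init_linear():
    # A single linear layer stands in for each encoder network.
    return rng.normal(scale=0.1, size=(D_IN, D_OUT))

def encode(w, x):
    return x @ w

def cosine_loss(p, z):
    # Negative cosine similarity between prediction p and target z,
    # the usual objective in negative-pair-free bootstrap methods.
    p = p / np.linalg.norm(p, axis=-1, keepdims=True)
    z = z / np.linalg.norm(z, axis=-1, keepdims=True)
    return -(p * z).sum(axis=-1).mean()

def ema_update(w_aux, w_main, tau=0.99):
    # The auxiliary weights slowly track the main network via an
    # exponential moving average, avoiding the need for negative pairs.
    return tau * w_aux + (1.0 - tau) * w_main

# Main network plus the two auxiliary (target) networks.
w_main = init_linear()
w_inter_aux = init_linear()
w_intra_aux = init_linear()

x = rng.normal(size=(4, D_IN))                 # a batch of time-series windows
x_view = x + 0.01 * rng.normal(size=x.shape)   # an augmented view

p = encode(w_main, x)
z_inter = encode(w_inter_aux, x_view)   # inter-sample target
z_intra = encode(w_intra_aux, x_view)   # intra-temporal target

# Dynamically regulate the contribution of each auxiliary network;
# alpha here is a fixed placeholder for whatever schedule BIMO uses.
alpha = 0.5
loss = alpha * cosine_loss(p, z_inter) + (1 - alpha) * cosine_loss(p, z_intra)

w_inter_aux = ema_update(w_inter_aux, w_main)
w_intra_aux = ema_update(w_intra_aux, w_main)
```

In practice the main network would be updated by gradient descent on `loss`, while the auxiliary networks receive only EMA updates; this asymmetry is what keeps representations from collapsing without negative pairs.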

