Abstract

With the rapid development of mobile Internet techniques, using sensor-rich smartphones to sense various contexts, such as the transportation mode, has attracted much attention. Transportation mode information can help improve urban planning, traffic management, and journey planning. Although much work has been done on transportation mode recognition using classic machine learning algorithms, the performance of these methods is unsatisfactory and heavily relies on the effectiveness of handcrafted features. In this paper, we leverage the strong representation ability of deep learning and present a deep-learning-based transportation mode recognition algorithm, namely CL-TRANSMODE, which is capable of accurately detecting multiple transportation modes. The algorithm first uses a convolutional neural network (CNN) to learn appropriate and robust feature representations for transportation mode recognition. Then, a long short-term memory (LSTM) network further learns the temporal dependencies among the feature vectors output by the CNN. To further enhance the accuracy of transportation mode recognition, several handcrafted segment and peak features are extracted from the raw sensor measurements. These features characterize the transportation modes over a much longer period of time (minutes or hours). By combining the CNN-extracted features with these handcrafted features, the proposed CL-TRANSMODE algorithm can accurately differentiate eight transportation modes, i.e., walking, running, bicycling, driving a car, riding a bus, taking a metro, taking a train, or being stationary. Extensive experiments on both the SHL and HTC datasets demonstrate that the proposed CL-TRANSMODE algorithm outperforms state-of-the-art comparative algorithms. On the SHL dataset, which contains barometric data, CL-TRANSMODE reaches an accuracy of 98.1%.
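
The pipeline described above (a CNN extracting feature representations from raw sensor windows, an LSTM modeling their temporal dependencies, and handcrafted long-window features concatenated before classification into eight modes) can be sketched in PyTorch as follows. This is a minimal, illustrative sketch: the layer sizes, sampling rate, window length, channel count, and handcrafted-feature dimension are assumptions for demonstration, not the authors' published configuration.

# Minimal illustrative sketch of a CNN + LSTM transportation-mode classifier.
# All hyperparameters below are assumptions, not the paper's exact settings.
import torch
import torch.nn as nn

class CNNLSTMTransMode(nn.Module):
    def __init__(self, n_channels=6, n_classes=8, hidden_size=128, n_handcrafted=16):
        super().__init__()
        # 1-D CNN learns local feature representations from raw sensor windows
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # LSTM models temporal dependencies over the CNN feature sequence
        self.lstm = nn.LSTM(input_size=128, hidden_size=hidden_size, batch_first=True)
        # Classifier takes the LSTM summary concatenated with handcrafted features
        self.fc = nn.Linear(hidden_size + n_handcrafted, n_classes)

    def forward(self, x, handcrafted):
        # x: (batch, channels, time); handcrafted: (batch, n_handcrafted)
        feats = self.cnn(x)              # (batch, 128, time/4)
        feats = feats.permute(0, 2, 1)   # (batch, time/4, 128) for the LSTM
        _, (h_n, _) = self.lstm(feats)
        fused = torch.cat([h_n[-1], handcrafted], dim=1)
        return self.fc(fused)

# Example: a batch of 4 ten-second windows at an assumed 50 Hz with 6 inertial channels
model = CNNLSTMTransMode()
logits = model(torch.randn(4, 6, 500), torch.randn(4, 16))
print(logits.shape)  # torch.Size([4, 8]) -- one score per transportation mode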

Highlights

  • With the ever-increasing computing and perception capabilities of smartphones, fine-grained activity recognition has attracted much attention

  • To further improve the accuracy of transportation mode identification, some handcrafted semantic features covering a much longer time interval are used and concatenated with the features learnt by the convolutional neural network (CNN) model (see the illustrative sketch after this list)

  • Experimental results show that the CL-TRANSMODE algorithm achieves 98.1% recognition accuracy for the eight transportation modes, and the recognition performance shows good robustness
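
As an illustration of the long-window handcrafted features mentioned above, the sketch below computes simple segment statistics and peak statistics from an accelerometer-magnitude signal using NumPy and SciPy. The specific features, sampling rate, and peak threshold are hypothetical choices, not the paper's exact feature set; in CL-TRANSMODE such a vector would be concatenated with the CNN/LSTM-learned features before classification.

# Illustrative long-window segment/peak features; feature choices and thresholds
# here are assumptions for demonstration, not the paper's published feature set.
import numpy as np
from scipy.signal import find_peaks

def handcrafted_features(acc_mag, fs=50, min_peak_height=1.5):
    """acc_mag: 1-D accelerometer-magnitude signal over a long window (minutes)."""
    peaks, props = find_peaks(acc_mag, height=min_peak_height)
    duration_s = len(acc_mag) / fs
    return np.array([
        acc_mag.mean(),                        # average motion intensity
        acc_mag.std(),                         # variability of motion
        len(peaks) / max(duration_s, 1e-6),    # peak rate (e.g., step frequency)
        props["peak_heights"].mean() if len(peaks) else 0.0,  # mean peak height
    ])

# Example: two minutes of synthetic data at an assumed 50 Hz sampling rate
feat = handcrafted_features(np.abs(np.random.randn(6000)) + 9.8)
print(feat.shape)  # (4,)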

Introduction

With the ever-increasing computing and perception capabilities of smartphones, fine-grained activity recognition has attracted much attention. As a special kind of activity recognition, transportation mode recognition aims to accurately differentiate various transportation modes. Though much work has been conducted on transportation mode recognition, most of it uses classic machine learning algorithms to determine transportation modes, such as tree-based algorithms [1], [2], support vector machines (SVM), and AdaBoost [3]–[5]. However, the performance of these methods is unsatisfactory and mainly depends on the validity of handcrafted features, whose design is time-consuming and requires quality expert knowledge [6], [7].
