Abstract

With the availability of various sensors in smartphones, identifying a locomotion mode has become convenient and effortless in recent years. Information about locomotion mode helps improve journey planning, travel time estimation, and traffic management. Although there is a significant body of work on locomotion mode recognition, the performance of these approaches is limited and heavily depends on labeled training instances. As it is impractical to gather prior information (labeled instances) about all types of locomotion modes, the recognition model should be able to identify a new or unseen locomotion mode without any corresponding training instance. This paper proposes a sensor-based deep learning model that identifies a locomotion mode using labeled training instances. The approach also incorporates the concept of Zero-Shot Learning to identify unseen locomotion modes. The model obtains an attribute matrix based on the fusion of three semantic matrices and constructs a feature matrix by extracting deep learning and hand-crafted features from the training instances. The model then builds a classifier by learning a mapping between the attribute and feature matrices. Finally, this work evaluates the performance of the approach on collected and existing datasets using accuracy and F1 score.
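To make the attribute-feature mapping concrete, the following is a minimal sketch of attribute-based zero-shot classification in the spirit described above. It is not the paper's exact model: the class names, dimensions, synthetic data, and the ridge-regression mapping are all illustrative assumptions. Instance features are projected into a class-attribute space, and an unseen mode is recognized by its nearest class attribute vector.

```python
# Hedged sketch of attribute-based zero-shot classification (illustrative only):
# map instance features into a class-attribute space, then label an unseen
# class by its nearest attribute vector.
import numpy as np

rng = np.random.default_rng(0)

d, k = 64, 3                                       # feature dim, attribute dim (illustrative)
class_names = ["walk", "bicycle", "car", "train"]  # "train" plays the role of the unseen mode

A = rng.normal(size=(4, k))                    # class-attribute matrix (fused semantics in the paper)
P = rng.normal(size=(k, d))                    # hidden attribute-to-feature map, used only to fake data

n = 300
y = rng.integers(0, 3, size=n)                 # training labels cover only the three seen classes
X = A[y] @ P + 0.1 * rng.normal(size=(n, d))   # synthetic feature matrix (stand-in for deep + hand-crafted features)

# Ridge regression from feature space to attribute space: W = (X^T X + lam*I)^-1 X^T A_y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ A[y])

def predict(x, class_attrs):
    """Project x into attribute space and return the nearest class by cosine similarity."""
    proj = x @ W
    sims = class_attrs @ proj / (np.linalg.norm(class_attrs, axis=1) * np.linalg.norm(proj) + 1e-9)
    return int(np.argmax(sims))

# A sample from the unseen "train" class is classified without any training instance for it.
x_unseen = A[3] @ P + 0.1 * rng.normal(size=d)
print("predicted:", class_names[predict(x_unseen, A)])   # should resolve to "train"
```

Because the seen classes' attribute vectors span the attribute space in this toy setup, the learned mapping generalizes to the held-out class; the paper's actual classifier and semantic fusion are more elaborate than this sketch.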
