Abstract

This paper aims to create an original context recognition system for smartphones using acoustic data. In this paper, the context represents how smartphone users are moving or what their immediate environment is like. In existing systems, the context refers to the place where the users are, which makes it difficult to recognize the context when they are in a new place for which the system has no data. Our system instead categorizes the situation itself: “Train” when the user is on a train, “Quiet Work Place” when they are in a place such as a library, PC room, or laboratory, and so on. The system focuses on sound because it carries a great deal of information. We analyze the data from a built-in microphone, from volume to spectrum, and extract multiple feature values. In addition, accelerometer and luminance data are also used as feature values. These feature values are then classified into context categories. Because the categorization is based on the situation rather than the location, the context can be recognized wherever the user goes. The system would serve as a flexible platform for smartphone applications that need context recognition.
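As a rough illustration of the pipeline the abstract describes (acoustic features from volume to spectrum, combined with accelerometer and luminance readings, then classified into situation categories), the following is a minimal sketch in Python. The specific features (RMS energy, spectral centroid, spectral flatness, acceleration-magnitude statistics), the random-forest classifier, and all variable names are assumptions for illustration only and are not taken from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def acoustic_features(audio_frame, sample_rate=16000):
    """Simple volume- and spectrum-based features from one microphone frame (assumed feature set)."""
    rms = np.sqrt(np.mean(audio_frame ** 2))                  # volume (RMS energy)
    spectrum = np.abs(np.fft.rfft(audio_frame))
    freqs = np.fft.rfftfreq(len(audio_frame), d=1.0 / sample_rate)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)          # spectral centroid
    flatness = np.exp(np.mean(np.log(spectrum + 1e-12))) / (np.mean(spectrum) + 1e-12)
    return [rms, centroid, flatness]

def context_features(audio_frame, accel_window, luminance):
    """Combine acoustic, accelerometer, and luminance values into one feature vector."""
    accel_mag = np.linalg.norm(accel_window, axis=1)          # per-sample acceleration magnitude
    return np.array(acoustic_features(audio_frame)
                    + [accel_mag.mean(), accel_mag.std(), luminance])

# Train on labelled recordings, then predict the situation of a new observation.
# Placeholder random data stands in for real labelled sensor recordings.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 6))
y_train = rng.choice(["Train", "Quiet Work Place", "Street"], size=100)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

new_obs = context_features(rng.normal(size=1024),             # one audio frame
                           rng.normal(size=(50, 3)),          # 3-axis accelerometer window
                           luminance=120.0)
print(clf.predict([new_obs]))
```

In this sketch the classifier maps each combined feature vector directly to a situation label, so recognition does not depend on having seen the user's current location before, which is the property the abstract emphasizes.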
