Abstract

As an essential branch of web service applications, location-based services (LBSs) play an irreplaceable role in daily life. LBSs are usually time-sensitive, requiring the system to process trajectory data in real time. Because trajectory data are sensitive, LBSs may violate users' privacy. Trajectory compression plays a crucial auxiliary role in analyzing and mining massive raw trajectory data, supporting tasks such as trajectory clustering and trajectory similarity computation, and can help protect users' privacy. In other words, trajectory compression is a prerequisite for privacy-preserving trajectory data mining: it retains points with high information content and removes redundant points with low information value while protecting users' privacy. By exploiting trajectory compression, we can speed up application response and save computing resources, providing lightweight data support for big-data-driven web page extraction and enabling fast, accurate responses. Unfortunately, the real-time processing capacity of current trajectory compression methods is still insufficient and not cost-effective. In terms of implementation, most existing works rely on micro-batch processing; consequently, the system consumes excessive resources and responds with high latency as trajectory data stream in. In addition, it is difficult for users to understand and set compression parameters correctly. In this context, we propose an algorithm that incrementally compresses trajectories in real time based on azimuth change, together with two user-perceivable parameters that facilitate targeted real-time compression. For verification, our study uses real-world data sets such as the GeoLife trajectory data.
We also found that, compared with the OPW-TR algorithm, which currently offers the best all-around performance, our algorithm dramatically improves processing speed with minimal loss of accuracy. Furthermore, thanks to the maintenance of incremental stateful computation, memory consumption was reduced by 28.5% when processing about 400k records, and this memory advantage becomes more pronounced as the data volume grows.
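The abstract does not spell out the compression rule, but the core idea it names, incremental compression triggered by azimuth change, can be sketched in a few lines. The sketch below is an illustrative assumption, not the paper's actual algorithm: it keeps constant per-stream state (one anchor point and one pending point) and retains a point only when the heading turns by more than a hypothetical `threshold_deg` parameter.

```python
import math

def azimuth(p, q):
    """Initial bearing in degrees (0-360) from p to q; points are (lat, lon)."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    dlon = lon2 - lon1
    y = math.sin(dlon) * math.cos(lat2)
    x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def angle_diff(a, b):
    """Smallest absolute difference between two bearings, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

class AzimuthCompressor:
    """Hypothetical streaming compressor: a point is retained only when the
    trajectory's heading changes by more than threshold_deg; O(1) state."""

    def __init__(self, threshold_deg=15.0):
        self.threshold = threshold_deg
        self.anchor = None    # last retained point
        self.pending = None   # most recent point, decision deferred

    def push(self, point):
        """Feed one point; returns a newly retained point, or None."""
        if self.anchor is None:
            self.anchor = point
            return point                      # always retain the first point
        if self.pending is None:
            self.pending = point
            return None
        turn = angle_diff(azimuth(self.anchor, self.pending),
                          azimuth(self.pending, point))
        if turn > self.threshold:             # sharp turn: retain the pivot
            kept = self.pending
            self.anchor, self.pending = kept, point
            return kept
        self.pending = point                  # near-straight: drop the point
        return None

    def finish(self):
        """Flush the final point of the stream."""
        return self.pending
```

Because the compressor only stores two points per stream, memory stays flat regardless of input length, which is consistent with the incremental stateful computation the abstract credits for the memory savings. For example, feeding the points `(0,0), (0,1), (0,2), (0,3), (1,3)` with a 15° threshold retains only `(0,0)`, `(0,3)`, and (after `finish()`) `(1,3)`.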
