With the emergence of cloud computing and the Internet of Things, context-aware applications face new challenges. One of them is the big data produced by a huge number of context applications and sources. Mainstream applications use not only the real-time versions but also the historical versions of context data. This paper concerns optimization techniques for storage and reasoning in a context management system (CMS). To store context data from different sources, an FCA (Formal Concept Analysis) lattice is employed as the storage schema, supporting the modeling and fusion of these heterogeneous context data. Furthermore, context conditions on the data are essential for logical reasoning: under suitable conditions, context data can be promoted to knowledge, which makes context reasoning straightforward. In a dynamic environment, reasoning services require their input to remain consistent under changing conditions in order to produce reasonable results. These changing conditions can be represented as context attributes, intervals, relations, and so on. To keep consistent knowledge available under such conditions, our previous work analyzed incremental caching and checking of consistent intervals, and proposed a context-lattice-based distributed optimized update algorithm. In this paper, building on that algorithm, we optimize its split function. A split is needed when the current lattice contains no condition under which the knowledge is consistent. The main aim of this paper is to improve the time performance of splitting attributes, intervals, or fuzzy relations. We propose a new parallel split algorithm. The algorithm computes priorities for the split candidates; to reduce time cost, it narrows the split scope by choosing the candidate with the highest priority value. To decrease the full-lattice update time during the split process, it generates the sub-lattices produced by the candidates concurrently and merges them afterwards. We analyze the feasibility of the algorithm theoretically.
In experiments, as a new component of the whole update algorithm, it is compared with the naive version and shows better time performance. Moreover, it allows multiple threads to execute on the same lattice, avoiding the extra memory cost of copying the lattice for each independent thread.
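The priority-driven, concurrent split described above could be sketched roughly as follows. This is only an illustrative sketch, not the paper's implementation: the priority heuristic (balanced partition size), the flat object/attribute data layout, and the trivial merge step are all our own assumptions.

```python
# Illustrative sketch of a priority-based parallel split of a concept
# lattice. Assumptions (not from the paper): the lattice is a list of
# objects, each a dict with an "attrs" set; candidates are attribute
# names; priority prefers candidates that partition objects most evenly.
from concurrent.futures import ThreadPoolExecutor

def priority(candidate, lattice):
    # Hypothetical heuristic: a balanced split keeps both sub-lattices
    # small, so score a candidate by the size of its smaller part.
    covered = sum(1 for o in lattice if candidate in o["attrs"])
    return min(covered, len(lattice) - covered)

def filter_part(lattice, candidate, has):
    # One half of the split: objects that do (or do not) carry the
    # candidate attribute/interval/relation.
    return [o for o in lattice if (candidate in o["attrs"]) == has]

def parallel_split(lattice, candidates):
    # 1. Narrow the split scope: keep only the highest-priority candidate.
    best = max(candidates, key=lambda c: priority(c, lattice))
    # 2. Build both sub-lattices concurrently. The worker threads read
    #    the same shared lattice; no per-thread copy is made.
    with ThreadPoolExecutor(max_workers=2) as pool:
        fut_a = pool.submit(filter_part, lattice, best, True)
        fut_b = pool.submit(filter_part, lattice, best, False)
        sub_a, sub_b = fut_a.result(), fut_b.result()
    # 3. Merge step placeholder: here we simply return both parts; the
    #    real algorithm would rebuild the lattice order over them.
    return best, sub_a, sub_b
```

The point of step 2 is the memory argument from the abstract: because both workers filter the same in-memory structure, parallelism adds threads but not lattice copies.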