Abstract

In a collaborative, multi-user model-driven engineering context, it becomes important to track who changed which part of a model, how, and why. Operation-based versioning addresses this need by persisting a meaningful edit history, which enables a single user to navigate through a model’s evolution over time, to analyze arbitrary previous model versions, or to trace the impact of an operation. However, loading a distinct prior version requires restoring it by reapplying all previous operations, which is time-consuming and thus interrupts a user’s workflow. Caching with a fixed distance between caches helps to overcome this problem, but at the cost of increased memory requirements. Moreover, no existing caching approach supports branches, merges, and resolved conflicts. We propose two advanced caching strategies for operation-based versioning that support these features: zonal and adaptive caching. Both strategies reduce memory consumption by not applying the same static distance between two caches across the whole edit history. Instead, the distance increases with a version’s age and its distance to a branch’s head. Both strategies aim to keep the restoration time of arbitrary prior versions below a threshold so as not to interrupt a user’s flow of thought. Zonal caching employs predefined distances compatible with a broad range of model sizes. In contrast, adaptive caching derives the distances individually from the initial time to load the model on a user’s computer and the model’s size. We conducted controlled experiments with models of varying sizes and compared the time to restore model versions and the memory in use for no caching, caching with static distances, zonal caching, and adaptive caching on different computers. The developed strategies decrease the time to restore a version considerably while using less memory than static caching. Our results show that, for all considered systems and models, adaptive caching with individually derived distances reduces memory usage even further compared to zonal caching while still satisfying application responsiveness requirements.
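The core idea, growing the cache distance for versions farther from a branch’s head while keeping restoration below a responsiveness threshold, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper’s actual algorithm: the function name, the one-second threshold, and the doubling-per-zone growth rule are introduced only for this example.

```python
# Illustrative sketch only; the paper's concrete distance functions are not
# reproduced here. The threshold and the zone-doubling rule are assumptions.

def adaptive_cache_distance(replay_time_per_op: float,
                            distance_to_head: int,
                            threshold: float = 1.0) -> int:
    """Return how many operations may lie between two caches so that
    restoring any version in between stays below the responsiveness
    threshold (in seconds), allowing larger gaps for older versions
    that are farther from the branch head."""
    # Base distance: operations that can be replayed within the threshold,
    # derived from the replay cost measured on the user's machine.
    base = max(1, int(threshold / replay_time_per_op))
    # Assumed growth rule: double the distance for every "zone" of
    # 1000 versions behind the head, so deep history needs fewer caches.
    zone = distance_to_head // 1000
    return base * (2 ** zone)


# Example: replaying one operation takes ~2 ms on this machine.
print(adaptive_cache_distance(0.002, distance_to_head=150))   # near the head
print(adaptive_cache_distance(0.002, distance_to_head=4200))  # deep history
```

Under these assumptions, recent versions get densely spaced caches (fast restoration where users navigate most), while older history is covered by progressively sparser caches, which is where the memory savings over a static distance come from.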