Abstract

Recent research in scalable model-driven engineering now allows very large models to be stored and queried. Due to their size, rather than transferring such models over the network in their entirety, it is typically more efficient to access them remotely through networked services (e.g. model repositories, model indexes). Little attention has been paid so far to the nature of these services and whether they remain responsive as the number of concurrent clients increases. This paper extends a previous empirical study on the impact of certain key decisions on the scalability of concurrent model queries across two domains, using an Eclipse Connected Data Objects (CDO) model repository, four configurations of the Hawk model index and a Neo4j-based configuration of the NeoEMF model store. The study evaluates the impact of the network protocol, the API design, the caching layer, the query language and the type of database, and analyses the reasons for their varying levels of performance. The design of the API was shown to make a bigger difference than the network protocol (HTTP/TCP) used. Where available, the query-specific indexed and derived attributes in Hawk outperformed the comprehensive generic caching in CDO. Finally, the results illustrate the ongoing evolution of graph databases: two tools using different versions of the same backend showed very different performance, one slower than CDO and the other faster.

Highlights

  • Model-driven engineering (MDE) has received considerable attention due to its demonstrated benefits of improving productivity, quality and maintainability

  • We present the design of an empirical study that evaluates the impact of several factors on the performance of the remote model querying services of multiple tools: a model repository (CDO), several configurations of a model index (Hawk with Neo4j/OrientDB backends and Epsilon Object Language (EOL)/Epsilon Pattern Language (EPL) queries) and a database-backed model storage layer (NeoEMF)

  • In order to provide answers for the above research questions, a networked environment was set up to emulate increasing numbers of clients interacting with a model repository (CDO 4.4.1.v20150914-0747), a model index (Hawk 1.0.0.201609151838) or a graph-based model persistence layer (NeoEMF on commit 375e077 combined with Mogwaï on commit 543fec9) and collect query response times


Summary

Introduction

Model-driven engineering (MDE) has received considerable attention due to its demonstrated benefits in improving productivity, quality and maintainability. Due to their size, very large models are typically accessed remotely through networked services rather than transferred in their entirety, and it is important to stress-test these services, as solutions may exhibit various issues under high load. In this empirical study, we evaluate the impact of several design decisions in the remote model querying services offered by multiple existing solutions (CDO, Hawk and Mogwaï). While these tools have different goals in mind, they all offer this same functionality, and they all had to choose a particular network protocol, messaging style, caching/indexing style, query language and persistence mechanism. The rest of this work is structured as follows: Sect. 2 discusses existing work on model stores, Sect. 3 introduces the research questions and the design of the experiment, Sect. 4 discusses the obtained results, and Sect. 5 presents the conclusions and future lines of work.
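
As a concrete illustration of the kind of remote model query evaluated in the study, the sketch below expresses the GraBaTs'09 "singleton" query in EOL (the Epsilon Object Language supported by Hawk). It is only a minimal sketch: the type and attribute names (TypeDeclaration, MethodDeclaration, Modifier, SimpleType, fullyQualifiedName) assume a JDTAST-style Java metamodel and are not taken verbatim from the queries benchmarked in the paper.

    // Sketch only: assumed JDTAST-style names, not the exact query used in the experiments.
    // A "singleton" class declares a public static method whose return type is the class itself.
    var singletons = TypeDeclaration.all.select(td |
      td.bodyDeclarations.exists(md |
        md.isTypeOf(MethodDeclaration)
        and md.modifiers.exists(m | m.isTypeOf(Modifier) and m.public)
        and md.modifiers.exists(m | m.isTypeOf(Modifier) and m.static)
        and md.returnType.isTypeOf(SimpleType)
        and md.returnType.name.fullyQualifiedName = td.name.fullyQualifiedName));
    return singletons.size();

In Hawk, a query of this shape can additionally be accelerated by precomputing a derived attribute (e.g. a hypothetical isSingleton flag) at indexing time, which is one of the caching/indexing decisions examined in RQ3.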

File-based model persistence
Database-backed model persistence
Model repositories
Heterogeneous model indexing
Experiment design
Research questions
Experiment setup
Queries under study
Singletons in Java models
Railway model validation
Results and discussion
Measurements obtained
RQ1: impact of protocol
RQ2: impact of API design
RQ3: impact of caching and indexing
GraBaTs’09 queries
Train Benchmark
RQ4: impact of mapping from query to backend
RQ5: scalability with demand
Threats to validity
Conclusions and further work