Abstract

Models are used mainly to communicate, among humans, the most relevant aspects of the item being modelled. Moreover, to achieve impact in modern, complex applications, modelling languages and tools must support some level of composition. Furthermore, executable models are the foundation of model-driven development; it is therefore crucial to study the understandability of executable behaviour models, especially from the perspective of modular composition. We examine the match between the delicate semantics of executable models for applications such as reactive and real-time systems and the usually much simpler conception that developers hold. Through a series of experiments with UML statecharts and logic-labelled finite-state machines (LLFSMs), we explore the understandability of event-driven versus logic-labelled state machines, as well as the architectural options for modular composition. We find that expertise in model manipulation is essential, and that the semantics of LLFSMs must be clarified for them to remain formally verifiable and suitable for robotic and embedded systems.
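
To make the contrast between the two modelling styles concrete, here is a minimal, hypothetical sketch (not taken from the paper; all names are invented for illustration) of a logic-labelled finite-state machine in Python. Where an event-driven statechart fires a transition when a matching event is dequeued, an LLFSM labels every transition with a Boolean expression that a scheduler evaluates at predefined points, a discipline that keeps execution time-triggered and deterministic.

```python
# Illustrative sketch only: a minimal logic-labelled finite-state machine.
# All class and variable names are hypothetical, not the paper's tooling.

class LLFSM:
    """State machine whose transitions are guarded by Boolean expressions."""

    def __init__(self, initial, transitions):
        self.state = initial
        # transitions: {state: [(guard, next_state), ...]}
        self.transitions = transitions

    def step(self, context):
        # Called at each scheduled step: guards are polled in sequence;
        # there is no event queue, unlike an event-driven statechart.
        for guard, target in self.transitions.get(self.state, []):
            if guard(context):
                self.state = target
                break


# Example: a door controller whose transitions depend on sensor readings
# sampled by the scheduler rather than on posted events.
door = LLFSM(
    initial="CLOSED",
    transitions={
        "CLOSED": [(lambda ctx: ctx["open_requested"], "OPEN")],
        "OPEN": [(lambda ctx: not ctx["open_requested"], "CLOSED")],
    },
)
door.step({"open_requested": True})
assert door.state == "OPEN"
```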
