Automatic Generation of Logical Specifications for Behavioural Models

Abstract

Logical specifications for behavioural models are crucial for the formal analysis of complex system designs. Automating the generation of such specifications is essential, particularly for promoting logical and deductive methods in software development. This article replicates earlier methods for automatically generating logical specifications equivalent to behavioural models, while also extending the approach to include workflow mining processes. Effective ways of interacting with existing theorem provers are also proposed. We conducted straightforward yet comprehensive experiments covering multiple stages, including workflow extraction, automatic generation of logical specifications, and theorem-prover-based analysis and evaluation of these specifications.
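The staged pipeline described in the abstract (mine a workflow, then emit temporal-logic formulas that a theorem prover can analyse) can be illustrated with a minimal sketch. The `seq_pattern` rule below is a hypothetical example of a pattern-to-formula mapping, not the article's actual generation scheme; names and the LTL encoding are illustrative assumptions.

```python
# Hypothetical sketch: turning a mined linear workflow into LTL-style
# formulas via a pattern rule. The Seq(a, b) -> G(a -> F(b)) mapping
# is an illustrative assumption, not the article's actual rule set.

def seq_pattern(a: str, b: str) -> list[str]:
    """Sequence pattern Seq(a, b): whenever a occurs, b eventually follows."""
    return [f"G({a} -> F({b}))"]

def generate_spec(workflow: list[str]) -> list[str]:
    """Compose formulas for a linear workflow a1 -> a2 -> ... -> an."""
    spec = []
    for a, b in zip(workflow, workflow[1:]):
        spec.extend(seq_pattern(a, b))
    return spec

print(generate_spec(["receive", "validate", "archive"]))
# -> ['G(receive -> F(validate))', 'G(validate -> F(archive))']
```

The resulting formula list could then be handed to a prover front end; composing a specification pattern-by-pattern like this is what makes the generation step mechanical.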

Similar Papers
  • Conference Article
  • Cited by 2
  • 10.1145/2695664.2695914
Concurrent streams in Markov chain usage models for statistical testing of complex systems
  • Apr 13, 2015
  • Daniel Homm + 2 more

Model-based statistical testing with Markov chain usage models (MCUMs) represents a highly automated test approach. However, specifying the usage model by hand is not a trivial task, especially if the system under test (SUT) allows concurrent streams of use. Modeling concurrent streams leads to a state space explosion and therefore is an error-prone task. Relevant usage scenarios may not be tested as they are overlooked during the specification of the model. In this paper we show how composite states with regions can be used to formalize concurrent streams in usage models. This allows a simplified and reasonable specification of usage models as it hides the state space explosion. We further elaborate on the required steps for an automatic and configurable test case generation from such models. We also provide a suitable analysis method taking advantage of the model structure for computing relevant parameters to guide the test process.

  • Research Article
  • Cited by 9
  • 10.1016/j.jlamp.2019.02.005
Pattern-based and composition-driven automatic generation of logical specifications for workflow-oriented software models
  • Feb 19, 2019
  • Journal of Logical and Algebraic Methods in Programming
  • Radosław Klimek

  • Research Article
  • 10.18461/pfsd.2010.1024
Modelling Pricing Behavior with Weak A-Priori Information: Exploratory Approach
  • Oct 1, 2010
  • Proceedings in Food System Dynamics
  • Carlo Russo + 1 more

In the absence of reliable a priori information, choosing the appropriate theoretical model to describe an industry’s behavior is a critical issue for empirical studies of market power. A wrong choice may result in model misspecification, and the conclusions of the empirical analysis may be driven by the wrong assumption about the behavioral model. This paper develops a methodology aimed at reducing the risk of misspecification bias. The approach is based on the sequential application of a sliced inverse regression (SIR) and a nonparametric Nadaraya-Watson regression (NW). The SIR-NW algorithm identifies the factors affecting pricing behavior in an industry and provides a nonparametric characterization of the function linking these variables to price. This information may be used to guide the choice of the model specification for a parametric estimation of market power. The SIR-NW algorithm is designed to complement the estimation of structural models of market behavior, rather than to replace it. The value of this methodology for empirical industrial organization studies lies in its data-driven approach, which does not rely on prior knowledge of the industry. The method reverses the usual hypothesis testing approach: instead of first choosing the model based on a priori information and then testing whether it is compatible with the data, the econometrician selects a theoretical model based on the observed data. Thus, the methodology is particularly suited to cases where the researcher has no a priori information about the behavioral model, or little confidence in the information that is available.

  • Research Article
  • Cited by 11
  • 10.1186/1471-2105-13-251
The Process-Interaction-Model: a common representation of rule-based and logical models allows studying signal transduction on different levels of detail
  • Sep 28, 2012
  • BMC Bioinformatics
  • Katrin Kolczyk + 4 more

Background: Signaling systems typically involve large, structured molecules, each consisting of a large number of subunits called molecule domains. In modeling such systems these domains can be considered the main players. To handle the resulting combinatorial complexity, rule-based modeling has been established as the tool of choice. In contrast to detailed quantitative rule-based modeling, qualitative approaches such as logical modeling rely solely on the network structure and are particularly useful for analyzing structural and functional properties of signaling systems.
Results: We introduce the Process-Interaction-Model (PIM) concept. It defines a common representation (or basis) of rule-based models and site-specific logical models and, furthermore, includes methods to derive models of both types from a given PIM. A PIM is based on directed graphs, with nodes representing processes such as post-translational modifications or binding processes, and edges representing the interactions among processes. The applicability of the concept has been demonstrated by applying it to a model describing EGF/insulin crosstalk. A prototypic implementation of the PIM concept has been integrated into the modeling software ProMoT.
Conclusions: The PIM concept provides a common basis for two modeling formalisms tailored to the study of signaling systems: a quantitative (rule-based) and a qualitative (logical) modeling formalism. Every PIM is a compact specification of a rule-based model and facilitates the systematic set-up of a rule-based model, while at the same time enabling the automatic generation of a site-specific logical model. Consequently, modifications can be made on the underlying basis and then propagated into the different model specifications, ensuring consistency of all models regardless of the modeling formalism. This facilitates the analysis of a system on different levels of detail, as it guarantees that established simulation and analysis methods apply to consistent descriptions (rule-based and logical) of a particular signaling system.

  • Book Chapter
  • Cited by 3
  • 10.1016/bs.aecr.2019.03.002
Modelling land use dynamics in socio-ecological systems: A case study in the UK uplands
  • Jan 1, 2019
  • Mette Termansen + 6 more

  • Book Chapter
  • Cited by 15
  • 10.1007/978-3-642-31788-0_12
Behavioral Modeling of Urban Freight Transport
  • Sep 7, 2013
  • Edoardo Marcucci + 1 more

Decision makers in urban goods movement (UGM) typically need to assess the impact new policy interventions might have on freight distribution. The effects of policy changes are inextricably linked to the extant regulatory framework, which also influences the relationships among the various actors interacting along the supply chain. The operators commonly considered important, given the crucial role they play in UGM, are retailers, transport providers, and own-account operators. Notwithstanding the admittedly important role that detailed knowledge of these three agent categories has for correct policy implementation, there is limited knowledge concerning the specific preferences and behavior of each agent type. It is de facto assumed that retailers, own-account operators and transport providers have homogeneous preferences and can be treated interchangeably. The upsurge of behavioral models, and the acquisition of the data necessary to predict goods and vehicle flows both under current and, more importantly, under altered policy/regulatory conditions, explains the growing importance attributed to an agent-based perspective. This research reports the results of a stated-ranking exercise conducted in 2009 in the Limited Traffic Zone in the city center of Rome, focusing on retailers, who demand freight transport services and play an important role in extended supply chains. The paper compares two different Multinomial Logit model specifications in which non-linear effects of variations in the levels of the attributes considered are studied and detected. A comparison between willingness-to-pay measures derived from the two model specifications is proposed so as to avoid known scale problems. The results are meaningful from a policy perspective, since they show potentially differentiated effects of the implemented policy, in sharp contrast with the often-assumed hypothesis of homogeneous effects.

  • Research Article
  • Cited by 100
  • 10.1111/j.1467-8306.2007.00577.x
Accuracy Assessment for a Simulation Model of Amazonian Deforestation
  • Nov 13, 2007
  • Annals of the Association of American Geographers
  • Robert Gilmore Pontius + 6 more

This article describes a quantitative assessment of the output from the Behavioral Landscape Model (BLM), which has been developed to simulate the spatial pattern of deforestation (i.e. forest fragmentation) in the Amazon basin in a manner consistent with human behavior. The assessment consists of eighteen runs for a section of the Transamazon Highway in the lower basin, where the BLM's simulated deforestation map for each run is compared to a reference map of 1999. The BLM simulates the transition from forest to non-forest in a spatially explicit manner in 20-m × 20-m pixels. The pixels are nested within a hierarchical stratification structure of household lots within larger development rectangles that emanate from the Transamazon Highway. Each of the eighteen runs derives from a unique combination of three model parameters. We have derived novel methods of assessment to consider (1) the nested stratification structure, (2) multiple resolutions, (3) a simpler model that predicts deforestation near the highway, (4) a null model that predicts forest persistence, and (5) a uniform model that has accuracy equal to the expected accuracy of a random spatial allocation. Results show that the model's specification of the overall quantity of non-forest is the most important factor that constrains and correlates with accuracy. A large source of location agreement is the BLM's assumption that deforestation within household lots occurs near roads. A large source of location disagreement is the BLM's less than perfect ability to simulate the proportion of deforestation by household lot. This article discusses implications of these results in the context of land change science and dynamic simulation modeling. Eugenio Arima and Marcellus Caldas were affiliated with Michigan State University during the time the work reported in this article was done.

  • Conference Article
  • Cited by 8
  • 10.1061/9780784413616.042
Representing Requirements of Construction from an IFC Model
  • Jun 17, 2014
  • K W Yeoh + 1 more

This paper presents a generalized, flexible and formal framework for representing various requirements that support the needs of the construction process using the Industry Foundation Classes (IFC) model specification. These are termed construction requirements. The importance of considering construction requirements as a representation of construction knowledge within the context of construction planning and scheduling is discussed, allowing readers to gain an understanding of their applicability. An ontological model for describing these construction requirements is proposed, which aids in formulating a uniform representation schema for construction requirements. This model defines the attributes of the construction requirements ontology from the spatial, temporal and ordinal perspectives. From these attributes, various construction requirements may be represented as construction requirement entities, which demonstrate how the functional and non-functional characteristics of a building element system may be captured for constructability analysis. The paper concludes by explaining how construction requirements may be extended to represent construction methods, and underlines their applicability to automated constructability analysis as well as automatic schedule generation.
INTRODUCTION AND REVIEW OF REQUIREMENTS MODELLING
Construction Requirements are the capabilities and conditions to which the construction process system and the in-progress facility product must conform (Song and Chua 2006). In other words, construction requirements represent the key pre-conditions for construction (Chua and Yeoh 2011). This forms the basis for representing construction knowledge; the knowledge embedded within construction requirements drives the planning process by providing a key tool for constructability analysis of a construction project. Despite this importance, little attention has been accorded to the impact of construction requirements on project schedules through associated schedule (temporal) constraints.

  • Research Article
  • Cited by 7
  • 10.11128/sne.29.tn.10497
Python-based eSES/MB Framework: Model Specification and Automatic Model Generation for Multiple Simulators
  • Dec 1, 2019
  • SNE Simulation Notes Europe
  • Hendrik Folkerts + 3 more

This paper proposes a Python-based infrastructure for studying the characteristics and behavior of families of systems. The infrastructure allows automatic execution of simulation experiments with varying system structures as well as with varying parameter sets in different simulators. Possible system structures and parameterizations are defined using a System Entity Structure (SES). The SES is a high-level approach for variability modeling, particularly in simulation engineering. An SES describes a set of system configurations, i.e. different system structures and parameter settings of system components. In combination with a Model Base (MB), executable models can be generated from an SES. Based on an extended SES/MB approach, an enhanced software framework is introduced that supports variability modeling and automatic model generation for different simulation environments. By means of an engineering application, it is shown how a set of Python-based open-source software tools can be used to model an SES and to automatically generate and execute signal-flow-oriented models.
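The core variability idea behind the SES/MB approach (one compact description expanding into many executable model configurations) can be illustrated with a minimal Python sketch. The dictionary-based spec and the function name below are assumptions for illustration, not the eSES/MB framework's actual API.

```python
# Illustrative sketch only: expanding a tiny SES-like variability
# description into all concrete system configurations. The spec
# format and names are hypothetical, not the framework's real API.
from itertools import product

def configurations(spec: dict) -> list[dict]:
    """Enumerate every combination of the per-component variants."""
    keys = list(spec)
    return [dict(zip(keys, combo)) for combo in product(*spec.values())]

variants = {
    "controller": ["PID", "fuzzy"],
    "gain": [0.5, 1.0],
}

for cfg in configurations(variants):
    print(cfg)  # each dict stands for one executable model configuration
```

Each emitted configuration would then be resolved against a model base and dispatched to the target simulator; the enumeration step itself is just a Cartesian product over the declared variation points.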

  • Research Article
  • 10.18757/ejtir.2022.22.2.5662
Building latent segments of goods to improve shipment size modelling: Confirmatory evidence from France
  • Jan 1, 2022
  • European Journal of Transport and Infrastructure Research
  • Raphael Piendl + 3 more

Freight transport demand models are generally based on administrative commodity-type segmentations, which are usually not tailored to behavioral freight transport demand modelling. Recent literature has explored new approaches to segmenting freight transport demand, notably based on latent class analysis, with promising results. In particular, empirical evidence from road freight transport modelling in Germany hints at the importance of conditioning and handling constraints as a sound basis for segmentation. However, this literature is currently sparse and based on small samples. Before it can be accepted that conditioning should be integrated into the state-of-the-art doctrine of freight data collection and model specification, more evidence is required. The objective of this article is to contribute to this issue. Using detailed data on shipments transported in France, a latent-class model of shipment size choice is estimated. The choice of shipment size is modelled as a process of total logistic cost minimization. Latent class analysis leverages the wide range of variables available in the dataset to provide five categories of shipments that are contrasted, internally homogeneous, and directly usable to update freight collection protocols. The groups are: "standard temperature-controlled food products", "special transports", "bulk cargo", "miscellaneous standard cargo in bags", and "palletised standard cargo". This segmentation is highly consistent with the empirical evidence from Germany and also leads to better estimates of shipment size choices than administrative segmentation. In conclusion, the finding that conditioning and handling information is essential to understanding and modelling freight transport can be regarded as more robust.

  • Research Article
  • Cited by 373
  • 10.1016/0191-2615(83)90023-1
Discrete choice theory, information theory and the multinomial logit and gravity models
  • Feb 1, 1983
  • Transportation Research Part B: Methodological
  • Alex Anas

  • Conference Article
  • Cited by 4
  • 10.1109/icci-cc.2018.8482062
Formal Specification of Cognitive Models in CARINA
  • Jul 1, 2018
  • Alba J Jeronimo + 2 more

Cognitive modeling is a fundamental tool used to understand the processes underlying behavior, and has become a standard technique in the cognitive sciences. The central goals of cognitive modeling are to describe, predict and prescribe human behavior through computational models of cognitive processes, commonly called cognitive models. Cognitive modeling depends on the use of cognitive architectures. A cognitive architecture is a general framework for specifying computational behavioral models of human cognitive performance. CARINA is a cognitive architecture for the development of cognitive agents in digital educational environments. This paper presents a formal representation of a cognitive model for the cognitive architecture CARINA. Denotational mathematics was used to formally describe the specification of cognitive models in CARINA. As an example, a cognitive model in the domain of cognitive arithmetic was implemented in CARINA.

  • Research Article
  • Cited by 15
  • 10.1007/s11219-016-9308-8
Testing of model-driven development applications
  • Feb 10, 2016
  • Software Quality Journal
  • Beatriz Marín + 4 more

Human resource management practices are key to the success of software development projects. Practices that promote knowledge sharing and organizational learning are positively related to development-effort curves, and thus software companies are looking for alternatives oriented towards promoting these practices. The model-driven development (MDD) paradigm is positioned as one of the best alternatives for the reutilization of development knowledge. In particular, this paradigm considers the specification of conceptual models that can be used as input for automatic code generation targeting different platforms. However, testing of applications developed through MDD solutions is still performed by the manual definition and execution of test cases by testers, which negatively impacts the time reduction obtained from automatic code generation and the reutilization of knowledge generated during the MDD project execution. To address this issue, this paper presents a testing approach that automatically generates executable test cases for software developed using MDD technologies.

  • Research Article
  • Cited by 6
  • 10.5555/2872965.2872996
Visual and persistence behavior modeling for DEVS in CoSMoS
  • Apr 12, 2015
  • Mostafa D Fard + 1 more

An integrated visual modeling and simulation tool called Component-based System Modeling and Simulation (CoSMoS) is extended to support behavioral specification of parallel atomic DEVS models. An approach based on Statecharts and the Graphical Modeling Framework has been developed and implemented for specifying the behaviors of atomic models. One or more Statecharts can be developed for any atomic model and stored in a relational database. The behavioral modeling complements atomic and coupled DEVS structural modeling, in which families of models, with semi-automated code generated for the DEVS-Suite simulator, are systematically stored and retrieved. The behavioral modeling enriches visual development of models that have DEVS-compliant specifications with modular, component-based visual representations. The visual representations of hierarchical coupled models are automatically generated or restored. An example model is employed to show the degree of detail supported in both the visual representation and the database representation. Current and future work is briefly described.

  • Research Article
  • Cited by 2
  • 10.1016/s1474-6670(17)51221-5
On-Line Evaluation of Systems with Discrete Observations
  • Aug 1, 1991
  • IFAC Proceedings Volumes
  • L.E Holloway + 1 more
