Abstract

The authors present an iterative and incremental development methodology for simulation models in network engineering projects. Driven by the DEVS (Discrete Event Systems Specification) formal framework for modeling and simulation, the methodology supports network design, test, analysis, and optimization processes. A practical application is presented for a case study in the data acquisition system of the ATLAS particle physics experiment at CERN's Large Hadron Collider. By adopting the DEVS M&S formal framework in combination with software engineering best practices, the authors develop network simulation models, together with enhanced modeling capabilities and improved simulation performance for the supporting tools, in a robust yet flexible way.

Highlights

  • We present a DEVS-based methodology for modeling and simulation (M&S)-driven engineering projects that integrates software development best practices tailored to a large-scale networked data acquisition system in a physics experiment

  • Events accepted by the first-level trigger (L1) are temporarily stored in a readout system (ROS) in the form of data structures called fragments and accessed by a second-level filter called the high-level trigger (HLT)

  • We developed a model for TDAQ using the PowerDEVS tool,[14] which provides a graphical interface to define DEVS models via block diagrams, a C++ editor to code the four dynamic functions of the M tuple, and libraries of reusable models (a minimal sketch of these functions follows this list)
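
Since the summary does not reproduce any PowerDEVS code, the following is a minimal sketch, in plain C++, of what the four dynamic functions of a DEVS atomic model M = <X, Y, S, δint, δext, λ, ta> look like. The QueueServer class and all of its members are hypothetical illustrations, not the actual PowerDEVS API or the authors' TDAQ model.

```cpp
// Minimal sketch of a DEVS atomic model in plain C++ (hypothetical
// interface, not the PowerDEVS API). The four dynamic functions of the
// tuple M = <X, Y, S, delta_int, delta_ext, lambda, ta> are illustrated
// with a single-server queue as the state S.
#include <limits>
#include <queue>

struct Job { int id; };                        // X and Y: input/output events

class QueueServer {                            // S: the model state
    std::queue<Job> backlog;
    bool busy = false;
    const double serviceTime = 1.0;            // assumed constant service time

public:
    // ta(s): how long the model stays in the current state
    double timeAdvance() const {
        return busy ? serviceTime
                    : std::numeric_limits<double>::infinity();  // passive
    }

    // delta_ext(s, e, x): react to an arriving input event
    void externalTransition(double /*elapsed*/, const Job& x) {
        backlog.push(x);
        busy = true;
    }

    // lambda(s): emit an output just before the internal transition;
    // only ever invoked while busy, so backlog is non-empty here
    Job output() const { return backlog.front(); }

    // delta_int(s): state change when ta(s) expires
    void internalTransition() {
        backlog.pop();
        busy = !backlog.empty();
    }
};
```

In PowerDEVS, each block of the graphical diagram is backed by a C++ class of this general shape, and coupled models are built by wiring blocks together in the block-diagram editor.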


Summary

[Figure: storage node / processing node diagram]

ATLAS Experiment

The Large Hadron Collider (LHC)[8] is the world's largest particle accelerator (27 kilometers in circumference), colliding bunches of particles (protons or ions) every 25 ns near large detectors, including ATLAS, CMS,[9] ALICE,[10] and LHCb.[11] Collisions in the ATLAS detector release very high energies, enabling the search for new physics such as the Higgs boson, extra dimensions, and dark matter. Each particle bunch collision is called an Event (we use “Event” for high-energy physics and “event” for DEVS modeling) and consists of particle-induced signals registered in the detector and digitized for further analysis. The raw amount of information generated exceeds 60 terabytes per second. To assimilate this throughput, ATLAS uses a sophisticated layered filtering system (trigger and data acquisition, or TDAQ[12]) that decides in real time whether each Event should be permanently stored or safely discarded. Events accepted by the first-level trigger (L1) are temporarily stored in a readout system (ROS) in the form of data structures called fragments, and accessed by a second-level filter called the high-level trigger (HLT). The TDAQ system and its HLT-ROS data network are our system under study.
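
To make the summary's figures concrete: collisions every 25 ns correspond to a bunch-crossing rate of 1/(25 ns) = 40 MHz, so a raw throughput above 60 TB/s implies on the order of 60e12 B/s ÷ 40e6 /s ≈ 1.5 MB per Event. The sketch below is an illustrative C++ rendering of the buffering pattern just described (L1 accept, ROS stores fragments, HLT pulls or discards them); all names and types are hypothetical, not actual TDAQ software.

```cpp
// Illustrative sketch (not actual TDAQ software) of the ROS/HLT pattern
// described above: L1-accepted Events are buffered in the readout system
// as fragments, and the HLT pulls them on demand or discards them.
//
// Back-of-envelope numbers from the summary:
//   collisions every 25 ns -> 1 / 25e-9 s = 40 MHz bunch-crossing rate
//   > 60 TB/s raw          -> 60e12 B/s / 40e6 /s ~= 1.5 MB per Event
#include <cstddef>
#include <cstdint>
#include <map>
#include <optional>
#include <utility>
#include <vector>

struct Fragment {                  // one detector subsystem's data for an Event
    std::uint64_t eventId;
    std::vector<std::byte> payload;
};

class ReadOutSystem {              // buffers fragments of L1-accepted Events
    std::map<std::uint64_t, std::vector<Fragment>> buffer;

public:
    void store(Fragment f) {       // called on L1 accept
        const auto id = f.eventId;
        buffer[id].push_back(std::move(f));
    }

    // HLT request: hand over all fragments of one Event, if present
    std::optional<std::vector<Fragment>> fetch(std::uint64_t eventId) {
        auto it = buffer.find(eventId);
        if (it == buffer.end()) return std::nullopt;
        auto fragments = std::move(it->second);
        buffer.erase(it);          // Event leaves the ROS once the HLT has it
        return fragments;
    }

    void discard(std::uint64_t eventId) {  // HLT reject: free the buffer
        buffer.erase(eventId);
    }
};
```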

Further sections of the summary cover:

  • Applications and Data Network in the HLT
  • Computer Simulations (performing early risk assessment)
  • DEVS Formal Framework
  • Cycles and Phases
  • Simulation validation
  • Simultaneous PUs
  • Findings