The overlap between large distributed simulations and real-time applications [1-3] has always been present, and it has grown rapidly with recent trends such as cloud computing [4, 5], intelligent transportation systems [1, 6], green computing [7, 8], and virtual environments [2, 9]. As the overlap increases, new challenges appear, mostly in enabling applications such as serious gaming [10], virtual and augmented reality, and collaborative virtual environments [11] to execute seamlessly on distributed resources [12-14]. This special issue collects state-of-the-art research on high-performance computing; the design, modeling, and validation of distributed real-time systems; and multi-agent control and simulation systems. It also aims to highlight future trends and directions in simulation and real-time systems, covering cloud-based simulation, simulation on GPUs, and the implementation of distributed virtual environment applications. Nine articles were chosen through a rigorous selection process; each was thoroughly reviewed by highly qualified anonymous referees. The articles cover a variety of important and challenging topics in the areas of large-scale distributed simulations and real-time systems.

Matthew Forshaw et al. [15] present HTC-Sim, a new scalable simulator of high-throughput computing (HTC) systems that incorporates real workloads and distinguishes itself by modeling multi-use clusters and interactive users. Their model is built to work with HTC-Sim and allows different HTC system policies to be incorporated into the simulation.
As large-scale simulation software, HTC-Sim models multi-use clusters and the presence of interactive users; it thereby supports fault-tolerance analysis and the evaluation of server energy consumption in HTC systems under different resource-allocation policies.

Arthur Valadares, Eugenia Gabrielova, and Cristina Lopes [16] introduce a conceptual framework for facilitating the design and testing of distributed real-time systems and applications. In their work, they identify six properties and concerns in the design of a distributed real-time (DRT) system. These properties are illustrated with popular real-life examples from different domains in the field of distributed virtual environments. The re-evaluation of the concerns raised in this work shows the importance of these aspects in enhancing the user's quality of experience in such environments.

Xiaosong Li, Wentong Cai, and Stephen John Turner [17] propose two major techniques for improving the execution performance of agent-based simulations on GPUs. Their techniques consist of an AgentPool and location-based shared-memory management. The AgentPool supports efficient agent creation and deletion, improving on CUDA's native memory allocation for agent-based simulations. The memory-management scheme employs an efficient neighbour-searching strategy that carefully exploits the GPU memory hierarchy, especially the shared memory, thereby improving the performance of the agent-interaction module on the GPU.

Libero Nigro and Franco Cicirelli [18] present a JADE-based control framework for multi-agent systems. Their framework is centred on the control mechanisms of a simulation, enabling a seamless transformation from simulation to implementation and execution. The authors focus primarily on the application life cycle to foster model continuity, capturing the business logic of an application and the external elements of the environment in which the application runs.
The proposed control framework enables the evolution of a realized system through its interactions with the environment, mediated by control forms.

In their work, Matthias Becker, Helena Szczerbicka, and Desheng Fu [19] propose a technique to enhance simulation performance by avoiding short look-ahead. They estimate simulation look-ahead from information received from neighbours about look-ahead and invalidation announcements. This estimate can consequently be computed further into the future, reducing the need for constant simulation synchronization and the amount of time wasted waiting for slow logical processes.

Christian Engelmann and Thomas Naughton [20] describe the enhancements they have implemented in the Extreme-scale Simulator (xSim) to further improve its scalability and run-time performance. Two improvements target the characteristics of a conservative parallel discrete-event simulation algorithm: an enhanced deadlock-resolution protocol and a more efficient message-matching approach. After a detailed description of their proposed approaches, the authors present a series of extensive benchmark experiments used to assess the validity of the proposals.

In their work, Xiao Xu et al. [21] describe a multi-layered architecture for joint DM&S that aims to integrate the engagement level and the engineering level in a single simulation. The engineering level performs more detailed, fine-grained processing, while the engagement level comprises more general, coarser-grained decision making. To enable such a simulation system, the authors describe the solutions they employ to address the resulting challenges in time synchronization and interoperability.

Robson De Grande and Azzedine Boukerche [22] present a distributed system that enables the simulation of realistic intelligent transportation scenarios.
With its modular design, this system allows real-time 3D visualization of the simulation output. Integration and synchronization among the parts are the major challenges the authors tackle in building their envisioned system. Consistency is the system's key feature, as simulation, visualization, microscopic processing, and real-time data input must be perfectly aligned for proper visualization of the virtual environment.

Alessandro Leite et al. [23] introduce a server-consolidation strategy that can greatly reduce power consumption in cloud federations. The strategy they propose aims to decrease energy consumption without affecting the provisioning of resources needed to meet established QoS requirements. This objective stems from the fact that cloud providers' data centres are strongly constrained by power consumption, which is defined by a third-party agent. The authors address the problem of managing workloads by restraining server usage: migrating virtual machines among servers and shutting down unused ones.

The guest editor of this special issue wishes to express sincere gratitude to all the authors who contributed their work to this issue. A special note of appreciation goes to all the referees who donated their time and effort to the reviewing process. Without them, this high-quality issue would not have been possible. I hope that this selection of works will serve as a great stimulus and foster advancements in distributed simulations and real-time applications.