Abstract

In recent years, we have witnessed tremendous advances in cloud data centers (CDCs) from the point of view of the communication layer. A recent report from Cisco Systems, Inc. projects that CDCs, which are distributed across many geographical locations, will dominate the global data center traffic flow for the foreseeable future. Their importance is highlighted by a top-line projection from this forecast that by 2019, more than four-fifths of total data center traffic will be Cloud traffic. The geographical diversity of the computing resources in CDCs provides several benefits, such as high availability, effective disaster recovery, uniform access to users in different regions, and access to different energy sources. Although Cloud technology is currently predominant, it is essential to leverage new agile software technologies, agile processes, and agile applications close to both the edge and the users; hence, the concept of Fog has been developed. Fog computing (FC) has emerged as an alternative to traditional Cloud computing to support geographically distributed, latency-sensitive, and QoS-aware IoT applications while reducing the burden on the data centers used in traditional Cloud computing. In particular, FC, with features that can support heterogeneity and real-time applications (eg, low latency, location awareness, and the capacity to serve a large number of nodes with wireless access), is an attractive solution for delay- and resource-constrained large-scale applications. The distinguishing feature of the FC paradigm is that a set of Fog nodes (FNs) spreads communication and computing resources over the wireless access network to provide resource augmentation to resource-limited and energy-limited wireless (possibly mobile) devices. The joint management of the Fog and Internet of Things (IoT) paradigms can reduce the energy consumption and operating costs of state-of-the-art Fog-based data centers (FDCs). 
An FDC is dedicated to supervising the transmission, distribution, and communication of FC. As a vital component of the Internet of Everything (IoE) environment, an FDC is capable of filtering and processing a considerable amount of incoming data on edge devices, making the data processing architecture distributed and thereby scalable. An FDC therefore provides a platform for filtering and analyzing the data generated by sensors utilizing the resources of FNs. Increasing interest is emerging in FDCs and CDCs that allow the delivery of various kinds of agile services and applications over telecommunication networks and the Internet, including resource provisioning, data streaming/transcoding, analysis of high-definition videos across the edge of the network, IoE application analysis, etc. Motivated by these issues, this special section solicits original research and practical contributions that advance the use of CDCs/FDCs in new technologies such as IoT, edge networks, and industry. Results obtained from simulations are validated against experimental or analytical results. The main objectives of this special issue are to provide a discussion forum for people interested in Cloud and Fog networking and to present new models, adaptive tools, and applications specifically designed for distributed and parallel on-demand requests received from (mobile) users and Cloud applications. The papers presented in this special issue provide insights into fields related to Cloud and Fog/edge architecture, including parallel processing of Cloudlets/Foglets, the presentation of new emerging models, performance evaluation and improvements, and developments in Cloud/Fog applications. We hope that readers can benefit from the insights in these papers and contribute to these rapidly growing areas. This special issue contains research papers addressing the state-of-the-art technologies related to Cloud and Fog networks and their computing components. 
The set of accepted papers is organized under the following key themes: models/architectures, performance improvements, and emulated applications. There have been several developments in Cloud/Fog models that allow for adaptive and efficient resource allocation and task scheduling, the securing of data and Cloud/edge networks, and the satisfaction of network key performance indicators (KPIs), such as minimizing delay, energy and power consumption, and bandwidth usage, and maximizing throughput, feedback on requests, coverage, and the routing of on-demand Cloud application requests. Tajiki et al1 introduce a new traffic engineering architecture for MPLS-OpenFlow hybrid networks and mathematically formulate two optimization problems. In the first, they formalize the problem of reconfiguring LSPs in MPLS networks when a central controller is used as the path computation element. In the second, the paper concentrates on low-level re-allocation of the resources of tasks scattered across such a network, targeting the traffic traversing from the edge node and addressing KPIs, for example, link utilization and path length, with optimal and heuristic methods. Another work describes the provision of computing services without deploying dedicated infrastructure. In this architecture, the authors solve the problem of constrained computing resources at the edge nodes,2 and present an algorithm to tackle two-dimensional spatial data query transmission between edge nodes. They also present statistics on moving objects that satisfy the differential privacy preserving model, guaranteeing that count queries over the moving objects' position data satisfy differential privacy. They experimentally analyze the data availability and the operating efficiency of the algorithm using real datasets. Fog technology may also incorporate software-defined networking (SDN). 
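The differential-privacy guarantee on count queries mentioned above can be illustrated with the standard Laplace mechanism. The following is a minimal, generic sketch rather than the authors' algorithm; the object positions, query region, and epsilon value are all hypothetical:

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponential draws is Laplace(0, scale).
    lam = 1.0 / scale
    return random.expovariate(lam) - random.expovariate(lam)

def private_count(positions, in_region, epsilon: float = 0.5) -> float:
    # A count query has sensitivity 1 (adding or removing one object changes
    # the count by at most 1), so Laplace(1/epsilon) noise gives epsilon-DP.
    true_count = sum(1 for p in positions if in_region(p))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical moving-object positions and a rectangular query region.
objects = [(1.0, 2.0), (3.5, 0.5), (2.2, 2.8), (9.0, 9.0)]
in_rect = lambda p: 0 <= p[0] <= 4 and 0 <= p[1] <= 3
print(private_count(objects, in_rect, epsilon=0.5))  # true count 3, plus noise
```

Smaller epsilon values add more noise and give a stronger privacy guarantee at the cost of query accuracy, which is the trade-off such schemes must balance against data availability.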
FNs are connected with virtualized SDN-enabled switches, which run on servers or in data centers at the edge of the access network. The remaining non-trivial challenge in such a system is to find a way to recover from failures. This is an NP-hard problem involving real-time routing in the case of a failure in the network, given limited processing and link capacity. The authors address this issue in another work, in which they present a new failure recovery scheme that provides service function chaining in SDN-based networks.3 They formulate the corresponding problem in the form of integer linear programming (ILP), and introduce a fast heuristic algorithm to solve the optimization problem and make the solution scalable. They then present their methodology and a solution concerning the average probability of failure in the selected paths, link utilization, and server utilization for various network topologies and shapes. The last paper in this category is an IoT architecture model that is based on the SDN paradigm applied in Cloud networking.4 The authors utilize the flexibility offered by SDNs, which can be used to allow a large number of IoT devices from various heterogeneous networks to communicate with each other reliably and securely. In their architecture, called CENSOR, new network services can be easily integrated with the underlying communication system. In terms of validation, they show the feasibility of deployment and present an analysis of the security and the limitations of CENSOR with the help of a smart city use case, which also defines its benefits and challenges. We believe that this paper neatly reflects how on-the-fly architecture can be applied in real Cloud and Fog scenarios. Other categories of approaches concentrate on improving the performance of contemporary networking systems such as Cloud computing and IoT, and how these can be integrated with the new tier of data analysis in the network, such as edge and Fog applications. 
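One common way to approach the failure-recovery problem described above is to precompute backup paths. The sketch below is a simplified link-protection heuristic on a hypothetical five-node topology, not the authors' ILP formulation or their service-function-chaining scheme; it only illustrates the idea of keeping, for each link on a primary path, an alternative path that survives that link's failure:

```python
from collections import deque

def shortest_path(adj, src, dst, banned=frozenset()):
    # BFS shortest path in hops, skipping any (undirected) links in `banned`.
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and frozenset((u, v)) not in banned:
                prev[v] = u
                q.append(v)
    return None  # destination unreachable with this link removed

def protect(adj, src, dst):
    # For each link on the primary path, precompute a backup path that
    # survives the failure of that single link (1:1 link protection).
    primary = shortest_path(adj, src, dst)
    backups = {}
    for u, v in zip(primary, primary[1:]):
        link = frozenset((u, v))
        backups[link] = shortest_path(adj, src, dst, banned={link})
    return primary, backups

# Hypothetical topology: two disjoint A-D routes, single link D-E.
adj = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
       "D": ["B", "C", "E"], "E": ["D"]}
primary, backups = protect(adj, "A", "E")
print(primary)                          # ['A', 'B', 'D', 'E']
print(backups[frozenset(("A", "B"))])   # ['A', 'C', 'D', 'E']
```

A backup entry of `None` (here, for link D-E) flags a single point of failure that no rerouting can work around, which is the kind of constraint an ILP formulation would expose globally.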
These groups of methods describe several cutting-edge heuristic/meta-heuristic or joint solutions for tackling open issues such as minimizing task delays and maximizing throughput and network performance. As an example, Rehman et al5 present a new scheduling algorithm based on the genetic algorithm (GA), called the multi-objective genetic algorithm (MOGA), to optimize multiple objectives in Cloud networking. MOGA decreases the makespan of a workflow by making scheduling decisions that meet user-imposed constraints in terms of deadlines and budgets. This method addresses the conflicting interests of the Cloud stakeholders and provides a solution that not only minimizes the makespan subject to budget and deadline constraints but is also energy-efficient, using dynamic voltage and frequency scaling. Another critical problem that must be addressed in a Cloud network is preserving data security between the network entities. Data sharing requires mutual assurance between front-end and back-end entities. Intermediate nodes also play a crucial role in data integrity and safety. A flow control mechanism (FCM) is an efficient and widely used methodology for controlling the sharing of data in this type of network. In the work of Khurshid et al,6 the authors tackle this important problem and present a prototype for an FCM called CamFlow that controls the existence of data files in plaintext in an IoT device network connected to Cloud networks. CamFlow adds security features for data transfer to the Cloud service provider, enabling information flow control policies to regulate data flow to and from customers and within the Cloud. It adds a security assurance to the connection between the device and the Cloud system (ie, the login phase) and adopts an encryption method combining hashing with AES to protect the ciphertext transferred between the entities in a Cloud network. 
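The GA-based scheduling idea of Rehman et al can be illustrated with a single-objective sketch that evolves task-to-VM assignments to minimize makespan. This is a generic genetic algorithm, not MOGA itself (which additionally handles deadline and budget constraints and DVFS-based energy savings); the task lengths and VM speeds are made up:

```python
import random

def makespan(assign, task_len, vm_speed):
    # Finish time of the most loaded VM under a task-to-VM assignment.
    load = [0.0] * len(vm_speed)
    for t, vm in enumerate(assign):
        load[vm] += task_len[t] / vm_speed[vm]
    return max(load)

def ga_schedule(task_len, vm_speed, pop=30, gens=200, seed=1):
    random.seed(seed)
    n, m = len(task_len), len(vm_speed)
    popu = [[random.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=lambda a: makespan(a, task_len, vm_speed))
        elite = popu[: pop // 2]           # truncation selection keeps the best half
        children = []
        while len(elite) + len(children) < pop:
            p1, p2 = random.sample(elite, 2)
            cut = random.randrange(1, n)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.2:      # mutation: reassign one random task
                child[random.randrange(n)] = random.randrange(m)
            children.append(child)
        popu = elite + children
    best = min(popu, key=lambda a: makespan(a, task_len, vm_speed))
    return best, makespan(best, task_len, vm_speed)

tasks = [4, 7, 2, 9, 5, 3]   # hypothetical task lengths
speeds = [1.0, 2.0]          # hypothetical VM speeds
best, span = ga_schedule(tasks, speeds)
print(best, span)
```

A multi-objective variant would replace the single fitness value with Pareto ranking over makespan, cost, and energy, which is where the "conflicting interests of the Cloud stakeholders" enter the formulation.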
Khurshid et al also use a Petri net to formally validate their mechanism, and evaluate the request execution time on both the server and device sides of the network in different phases of the connection. Another important issue is the performance validation of Cloud and Fog technology in providing real-time, delay-tolerant, secure processing of the information shared through Cloud brokers. In the work of Khalid et al,7 the authors use E-Poll as a Fog-aware election commission. E-Poll offers an intelligent way to identify which data can be processed in close-proximity Fog networks and which should be sent to the Cloud for bulk processing. The authors categorize their model and solution using various storage priorities. They emulate their model and solution at different Cloud hops (ie, nodes) and evaluate the delay of this processing over server-client connections. They also simulate their method in a Cloud system to provide efficient and reliable services to clients by adding intelligence at the level of a Fog device, and analyze the failed attempts and the time taken to complete the tasks. E-Poll is used to study the Fog-device connections in a simple voting system application and to report the Fog device hit ratio and an average-time comparison between the Cloud client and the E-Poll system. Fog technology allows existing applications to provide lower-latency services for heterogeneous users. Hence, from a psychological point of view, it is important to understand how users perceive their applications in the presence of Fog technology. To this end, Haghparast et al8 propose a new concept called Fog learning, with the aim of addressing critical thinking as a problematic issue in the academic use of the Cloud network. The distributed approach, as an intrinsic characteristic of FC, facilitates its usage anywhere and anytime, and realizes the vision of low-latency communications for existing mobile applications, thus fostering critical thinking. 
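The core placement decision attributed to E-Poll, choosing between close-proximity Fog processing and bulk Cloud processing, can be caricatured as a threshold rule over request attributes. The thresholds and attributes below are hypothetical illustrations, not values from the paper:

```python
# Hypothetical policy constants; the real E-Poll logic is more elaborate.
FOG_LATENCY_MS = 50.0    # requests with tighter deadlines stay in the Fog
FOG_MAX_SIZE_MB = 10.0   # bulk payloads are forwarded to the Cloud

def place_request(deadline_ms: float, size_mb: float) -> str:
    # Latency-sensitive, small payloads are served by close-proximity
    # Fog nodes; everything else goes to the Cloud for bulk processing.
    if deadline_ms <= FOG_LATENCY_MS and size_mb <= FOG_MAX_SIZE_MB:
        return "fog"
    return "cloud"

# (deadline_ms, size_mb) for three hypothetical requests.
requests = [(20, 1.5), (200, 0.5), (30, 64.0)]
print([place_request(d, s) for d, s in requests])  # ['fog', 'cloud', 'cloud']
```

The Fog device hit ratio reported by the authors corresponds to the fraction of requests such a policy keeps at the Fog tier rather than escalating to the Cloud.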
Haghparast et al validate the effectiveness of their emulated model using the Software Usability Measurement Inventory for several case studies, using a sample t-test adapted to determine strong and weak critical thinking skills and a one-way repeated-measures ANOVA to obtain more detailed information. They use the Friedman and McNemar test methods to evaluate their analysis for various ISP models that are connected to an FN in the Cloud system. In another paper, the authors address the restriction of resources by applying an effective and efficient approach that allows the system to choose the best nodes, create the Fog, and manage resources.9 The authors adopt a multi-criteria decision-making model that can help existing systems to select the best nodes and thereby overcome resource constraints in the Fog. This paper uses the resources of other smart objects in the vicinity of the primary actuator to overcome resource constraints, network traffic issues, and latency in IoT systems. They validate the communication cost based on the ratio of successful packet delivery and the productivity of the model in terms of the rate of completed tasks. The guest editors of this special issue wish to express their sincere gratitude to all of the authors who submitted their papers to this special issue. We are also grateful to the Reviewing Committee for the hard work and the feedback provided to the authors, which are essential in further enhancing the papers. Finally, we would like to express our sincere gratitude to Professor Geoffrey Fox, the Editor-in-Chief, for providing us with this unique opportunity to present this special issue in the journal Concurrency and Computation: Practice and Experience.
