Monitoring Runtime Metrics of Fog Manufacturing via a Qualitative and Quantitative (QQ) Control Chart

Abstract

Fog manufacturing combines Fog and Cloud computing in a manufacturing network to provide efficient data analytics and support real-time decision-making. Detecting anomalies, including imbalanced computational workloads and cyber-attacks, is critical to ensure reliable and responsive computation services. However, such anomalies often co-occur with dynamic offloading events, in which computation tasks are migrated from well-occupied Fog nodes to less-occupied ones to reduce the overall time latency and improve throughput. Such co-occurrences jointly affect the system behaviors, which makes anomaly detection inaccurate. We propose a qualitative and quantitative (QQ) control chart that monitors system anomalies by identifying changes in the relationships among monitored runtime metrics (quantitative variables) in the presence of dynamic offloading (a qualitative variable), using a risk-adjusted monitoring framework. Both the simulation and Fog manufacturing case studies show the advantage of the proposed method over existing methods under the influence of dynamic offloading.
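
The risk-adjusted idea can be sketched compactly: fit a model of a runtime metric on in-control reference data, include the qualitative offloading state as a covariate, and chart the standardized residuals of new observations. The following is a minimal sketch of that generic recipe on synthetic data with invented covariates; it is not the authors' exact QQ control chart.

```python
# Minimal risk-adjusted residual chart sketch (not the authors' QQ chart).
import numpy as np

rng = np.random.default_rng(0)

# Phase I (in-control) data: CPU, memory, offloading flag -> latency
n = 500
X = np.column_stack([
    rng.uniform(0, 1, n),          # CPU utilization
    rng.uniform(0, 1, n),          # memory utilization
    rng.integers(0, 2, n),         # qualitative variable: offloading (0/1)
])
beta_true = np.array([2.0, 1.0, -0.8])
y = 0.5 + X @ beta_true + rng.normal(0, 0.1, n)

# Fit the risk-adjustment model by least squares
A = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
sigma_hat = np.std(y - A @ beta_hat, ddof=A.shape[1])

def residual_chart(x_new, y_new, limit=3.0):
    """Return (z-score, out_of_control) for a new observation."""
    z = (y_new - np.concatenate([[1.0], x_new]) @ beta_hat) / sigma_hat
    return z, abs(z) > limit

# A latency spike during offloading that the adjustment cannot explain
print(residual_chart(np.array([0.6, 0.5, 1]), y_new=2.9))
```

Because the offloading indicator enters the adjustment model, an offloading event alone does not raise an alarm; only deviations the model cannot explain do.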

Similar Papers
  • Conference Article
  • 10.1109/case49439.2021.9551674
Predictive Offloading in Fog Manufacturing for Computational Pipelines using Multi-task Learning
  • Aug 23, 2021
  • Vignesh Raja Nallendran + 2 more

In smart manufacturing, it is important to integrate the computation service with the manufacturing process to support real-time process control and data analytics. A suitable computing architecture to handle the influx of data generated from the manufacturing process is Fog manufacturing. In Fog manufacturing, the Fog-cloud collaborative architecture is enabled through a distributed computing platform to facilitate responsive, scalable, and reliable data analysis in manufacturing networks. However, effective utilization of the Fog-cloud computing service requires optimal offloading strategies due to limited computational and bandwidth resources in Fog manufacturing. Therefore, a predictive offloading method that can properly deploy each computation task based on predicted run-time metrics (e.g., time latency) is desired. However, the run-time metrics collected in Fog manufacturing are heterogeneous in nature and cannot be modeled through conventional predictive analysis, because the computational flow and the data sources vary among different Fog nodes. To overcome this issue, this paper proposes a predictive offloading method based on a multi-task learning model that assigns computation tasks according to their predicted run-time metrics in Fog manufacturing. The proposed method is evaluated on a Fog manufacturing testbed. The results show that it adequately predicts the run-time metrics and effectively offloads the computation tasks to maximize the run-time performance of the computation service.
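
As a hedged illustration of the multi-task idea, the sketch below uses scikit-learn's MultiTaskLasso so that each task predicts one Fog node's run-time metric while all tasks share a feature-selection pattern. The data and feature names are synthetic stand-ins, and the paper's model handles heterogeneity that this sketch does not.

```python
# One common multi-task formulation (joint sparsity via MultiTaskLasso),
# not the paper's model: each "task" predicts a run-time metric for one
# Fog node, and tasks share a support over the input features.
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(1)
n, p, n_nodes = 200, 6, 3          # samples, features, Fog nodes (tasks)

X = rng.normal(size=(n, p))        # e.g., task size, CPU load, bandwidth...
W = np.zeros((p, n_nodes))
W[:2] = rng.normal(size=(2, n_nodes))   # only 2 features truly matter
Y = X @ W + 0.05 * rng.normal(size=(n, n_nodes))

model = MultiTaskLasso(alpha=0.05).fit(X, Y)
latency_pred = model.predict(X[:1])     # predicted metric for each node
print(latency_pred)
print("shared support:", np.flatnonzero(model.coef_.any(axis=0)))
```

Predictive offloading then reduces to sending each incoming task to the node with the best predicted metric.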

  • Research Article
  • Cited by 7
  • 10.1080/24725854.2023.2184884
Distributed data filtering and modeling for fog and networked manufacturing
  • Mar 14, 2023
  • IISE Transactions
  • Yifu Li + 3 more

Fog Manufacturing applies both Fog and Cloud Computing collaboratively in Smart Manufacturing to create an interconnected network through sensing, actuation, and computation nodes. Fog Manufacturing has become a promising research component to be integrated into the existing Smart Manufacturing paradigm and provides reliable and responsive computation services. However, Fog nodes' relatively limited communication bandwidth and computation capabilities call for reduced data communication load and computation time latency for modeling. There has long been a lack of an integrated framework to automatically reduce manufacturing data and perform computationally efficient modeling/machine learning. This research direction is increasingly important as computational demands grow and Fog/networked Manufacturing becomes prevalent. This paper proposes an integrated and distributed framework for data reduction and modeling of multiple systems in a Smart Manufacturing network, considering the similarities among systems. A simulation study and a Fog Manufacturing testbed for ingot growth manufacturing validated that the proposed framework significantly reduces the sample size, improving computational runtime metrics, while outperforming various other data reduction methods in modeling performance.
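
One classic ingredient of such data reduction is subsampling by statistical leverage, sketched below on synthetic regression data. The paper's distributed framework is considerably more elaborate and additionally exploits similarities across systems; this shows only the core idea of importance-based sample reduction.

```python
# Leverage-score subsampling for regression: a hedged sketch of one
# classic data-reduction idea, not the paper's framework.
import numpy as np

rng = np.random.default_rng(2)
n, p = 5000, 4
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(0, 0.2, n)

# Statistical leverage = diagonal of the hat matrix H = X (X'X)^{-1} X'
Q, _ = np.linalg.qr(X)
leverage = np.sum(Q**2, axis=1)
probs = leverage / leverage.sum()

m = 200                                  # reduced sample size
idx = rng.choice(n, size=m, replace=False, p=probs)
beta_sub, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
print("estimate from 4% of the data:", np.round(beta_sub, 2))
```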

  • Research Article
  • Cited by 11
  • 10.1093/gigascience/giz052
Accumulating computational resource usage of genomic data analysis workflow to optimize cloud computing instance selection.
  • Apr 1, 2019
  • GigaScience
  • Tazro Ohta + 2 more

Background: Container virtualization technologies such as Docker are popular in the bioinformatics domain because they improve the portability and reproducibility of software deployment. Along with software packaged in containers, the standardized workflow descriptor Common Workflow Language (CWL) enables data to be easily analyzed on multiple computing environments. These technologies accelerate the use of on-demand cloud computing platforms, which can be scaled according to the quantity of data. However, to optimize the time and budgetary constraints of cloud usage, users must select a suitable instance type that corresponds to the resource requirements of their workflows. Results: We developed CWL-metrics, a utility tool for cwltool (the reference implementation of CWL), to collect runtime metrics of Docker containers and workflow metadata to analyze workflow resource requirements. To demonstrate the use of this tool, we analyzed 7 transcriptome quantification workflows on 6 instance types. The results revealed that the choice of instance type can deliver lower financial costs and faster execution times while using the required amount of computational resources. Conclusions: CWL-metrics can generate a summary of resource requirements for workflow executions, which can help users optimize their use of cloud computing by selecting appropriate instances. The runtime metrics data generated by CWL-metrics can also help users share workflows between different workflow management frameworks.
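
Once per-workflow resource requirements are measured, the instance-selection decision itself is straightforward, as in the illustrative sketch below. The instance names and prices are invented; supplying the measured requirements is the role a tool like CWL-metrics plays.

```python
# Illustration of the instance-selection decision the paper motivates:
# given measured peak resource usage of a workflow, pick the cheapest
# instance type that fits. Specs and prices below are made up.
instances = {                     # name: (vCPU, memory_GB, USD_per_hour)
    "small":  (2,   8, 0.10),
    "medium": (4,  16, 0.20),
    "large":  (8,  32, 0.40),
}

def cheapest_instance(peak_cpu, peak_mem_gb, hours):
    feasible = [(price * hours, name)
                for name, (cpu, mem, price) in instances.items()
                if cpu >= peak_cpu and mem >= peak_mem_gb]
    return min(feasible) if feasible else None

# Peak usage numbers like these are what runtime metrics summarize
print(cheapest_instance(peak_cpu=3, peak_mem_gb=12, hours=5))
```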

  • Conference Article
  • Cited by 3
  • 10.1109/iske47853.2019.9170281
Markov Based Computational Tasks Offloading Decision for Face Detection
  • Nov 1, 2019
  • Mi Swe Zar Thu + 1 more

Smart mobile devices are central to modern lifestyles, and mobile applications are becoming more diverse and complex as device use grows. In the mobile environment, resource limitation is unavoidable; it is a key challenge and can degrade mobile computing performance. Mobile Cloud Computing (MCC) is one of the leading paradigms for executing rich mobile applications across an abundance of mobile devices. Limited computational power, storage, and energy are the hardware constraints that motivate offloading. The proposed system therefore aims to avoid device limitations for computation-intensive tasks by using dynamic computation offloading based on a Markov process. The system reduces the energy consumption of resource-hungry devices by deciding, via a cost model, whether to offload computation-intensive tasks to the remote cloud. Experiments show that the proposed offloading decision solver reduces not only computing time but also battery usage of the mobile device for a face detection application, compared with the MAUI dynamic offloading decision framework.
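
The basic offload-or-not comparison underlying such systems can be illustrated as a one-shot cost rule; the paper's Markov-process decision model is more involved, and every constant below is made up.

```python
# A minimal sketch of the offload-or-not cost comparison; illustrative
# constants only, not the paper's Markov-based cost model.
def should_offload(cycles, data_mb, f_local_hz, f_cloud_hz,
                   uplink_mbps, power_compute_w, power_tx_w):
    t_local = cycles / f_local_hz             # seconds to run on-device
    e_local = power_compute_w * t_local       # joules spent on-device
    t_tx = data_mb * 8 / uplink_mbps          # seconds to upload input
    t_remote = t_tx + cycles / f_cloud_hz
    e_remote = power_tx_w * t_tx              # device only pays for radio
    # Offload when it saves both time and device energy
    return t_remote < t_local and e_remote < e_local

print(should_offload(cycles=5e9, data_mb=2, f_local_hz=1e9,
                     f_cloud_hz=10e9, uplink_mbps=50,
                     power_compute_w=2.0, power_tx_w=1.0))
```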

  • Conference Article
  • Cited by 1
  • 10.1109/iccchina.2019.8855923
Multi-factor based Dynamic Offloading with Coalitional Game in Mobile Cloud Computing
  • Aug 1, 2019
  • Shan Guo + 2 more

Distributed mobile cloud computing is a new paradigm to enhance edge cloud computing capacity and reduce the energy cost of application processing. In this paper, we establish a distributed cloudlet system which consists of several cloudlets and a computation-intensive application. The application can be divided into multiple related tasks. Each task selects the mobile device or an appropriate offloading cloudlet to optimize utility. Besides energy consumption, the utility includes the user-mobility predictive probability, channel availability, and cloudlet availability. Considering these factors overcomes the problems resulting from user mobility and reduces the probability of connection failure while ensuring processing efficiency. We formulate this dynamic offloading problem as a transferable coalition game, because each task is a rational player seeking to maximize overall utility. Simulation results demonstrate that the proposed dynamic offloading decision scheme achieves better offloading decisions with low complexity.
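
As a greatly simplified, hedged illustration of utility-driven placement, the sketch below scores each execution site by energy cost plus connection/availability risk for a single task. The weights and tables are invented, and the paper's coalition game coordinates tasks jointly rather than one at a time.

```python
# Utility-driven placement for one task: a toy stand-in for the paper's
# coalition-game formulation. All numbers and weights are illustrative.
options = {
    "local":     {"energy": 5.0, "p_connect": 1.00, "p_avail": 1.0},
    "cloudlet1": {"energy": 1.5, "p_connect": 0.90, "p_avail": 0.8},
    "cloudlet2": {"energy": 1.0, "p_connect": 0.60, "p_avail": 0.9},
}

def utility(o, w_energy=1.0, w_fail=4.0):
    # Fold energy and the connection-failure risk into one score
    p_success = o["p_connect"] * o["p_avail"]
    return -w_energy * o["energy"] - w_fail * (1 - p_success)

best = max(options, key=lambda name: utility(options[name]))
print("chosen execution site:", best)
```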

  • Research Article
  • Cited by 25
  • 10.1109/tnsm.2021.3123475
Self-Verifiable Attribute-Based Keyword Search Scheme for Distributed Data Storage in Fog Computing With Fast Decryption
  • Mar 1, 2022
  • IEEE Transactions on Network and Service Management
  • Ke Gu + 3 more

Many searchable encryption schemes have been proposed for cloud and fog computing, which use fog nodes (or fog servers) to partly undertake some computational tasks. However, these schemes still retain cloud servers to undertake most computational tasks, which results in large communication costs between edge devices and cloud servers. Therefore, in this paper we propose a self-verifiable attribute-based keyword search scheme for distributed data storage (SV-KSDS) in full fog computing, where each decryption operation on the data required by a user must meet the negotiated decryption rule between fog servers. Our SV-KSDS scheme first provides attribute-based distributed data storage among fog servers through the $(w, \sigma)$ threshold secret-sharing scheme, where fog servers can provide self-verifiable keyword search and data decryption for terminal users. Compared with data storage in cloud computing, our scheme extends it to a distributed structure while providing fine-grained access control for distributed data storage through attribute-based encryption. The access control policy of our scheme is constructed on a linear secret-sharing scheme, whose security is reduced to the decisional bilinear Diffie-Hellman assumption against chosen-keyword attack and the decisional $q$-parallel bilinear Diffie-Hellman assumption against chosen-plaintext attack in the standard model. Based on theoretical analysis and practical testing, our SV-KSDS scheme incurs lower computation and communication costs, and further unloads some computational tasks from terminal users to fog servers so as to reduce their computing costs.
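
The $(w, \sigma)$ threshold secret sharing the scheme builds on can be illustrated with textbook Shamir sharing over a prime field. This sketch shows only the threshold primitive, not the SV-KSDS construction.

```python
# Shamir (w, sigma)-style threshold secret sharing over a prime field:
# any w of the sigma shares reconstruct the secret. Requires Python 3.8+
# for pow(x, -1, P) modular inverses.
import random

P = 2**127 - 1                       # a Mersenne prime field modulus

def make_shares(secret, w, sigma):
    coeffs = [secret] + [random.randrange(P) for _ in range(w - 1)]
    def f(x):                        # degree-(w-1) polynomial, f(0) = secret
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, sigma + 1)]

def reconstruct(shares):             # Lagrange interpolation at x = 0
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=42, w=3, sigma=5)
print(reconstruct(shares[:3]))       # any 3 of the 5 shares suffice -> 42
```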

  • Research Article
  • Cited by 1
  • 10.48175/ijarsct-8166
Service-Oriented Network Virtualization toward Convergence of Networking and Cloud Computing
  • Jan 30, 2023
  • International Journal of Advanced Research in Science, Communication and Technology
  • Mr Inzimam Surve + 1 more

A holistic approach that makes it possible to control, manage, and optimize both computing resources and networking in a Cloud environment is required because of the crucial role that networking plays in Cloud computing. This results in a convergence of networking and Cloud computing. As a crucial feature for the next generation of networking, network virtualization is being implemented in the Internet and telecommunications sectors. It is anticipated that virtualization will bridge the gap between these two fields as a potential enabler of profound changes in the communications and computing domains. When applied to network virtualization, Service-Oriented Architecture (SOA) creates a Network-as-a-Service (NaaS) paradigm that may significantly facilitate the convergence of networking and Cloud computing. The use of SOA in network virtualization has recently received a lot of attention from both academia and industry. Although numerous pertinent research papers have been published, they are currently dispersed across a variety of subject areas in the literature, such as cloud computing, telecommunications, computer networking, and Web services. In this article, we present a comprehensive survey of the most recent developments in service-oriented network virtualization for supporting Cloud computing, particularly from the perspective of network and Cloud convergence through NaaS. Specifically, we first introduce the SOA principle and review recent research progress on applying SOA to support network virtualization in both telecommunications and the Internet. Next, we discuss the most recent advancements in network service description, discovery, and composition, as well as a framework for network-to-cloud convergence based on service-oriented network virtualization. We also discuss the challenges these technologies face in network-Cloud convergence and the research opportunities in these areas. Our goal is to interest researchers in this new interdisciplinary field.

  • Research Article
  • Cited by 11
  • 10.7717/peerj-cs.2211
Hybrid computing framework security in dynamic offloading for IoT-enabled smart home system
  • Aug 23, 2024
  • PeerJ Computer Science
  • Sheharyar Khan + 6 more

In the distributed computing era, cloud computing has completely changed organizational operations by facilitating simple access to resources. However, the rapid development of the IoT has led to collaborative computing, which raises scalability and security challenges. To fully realize the potential of the Internet of Things (IoT) in smart home technologies, there is still a need for strong data security solutions, which are essential in dynamic offloading in conjunction with edge, fog, and cloud computing. This research on smart home challenges covers in-depth examinations of data security, privacy, processing speed, storage capacity restrictions, and analytics inside networked IoT devices. We introduce the Trusted IoT Big Data Analytics (TIBDA) framework as a comprehensive solution to reshape smart living. Our primary focus is mitigating pervasive data security and privacy issues. TIBDA incorporates robust trust mechanisms, prioritizing data privacy and reliability for secure processing and user information confidentiality within the smart home environment. We achieve this by employing a hybrid cryptosystem that combines Elliptic Curve Cryptography (ECC), Post Quantum Cryptography (PQC), and Blockchain technology (BCT) to protect user privacy and confidentiality. Additionally, we comprehensively compared four prominent Artificial Intelligence anomaly detection algorithms (Isolation Forest, Local Outlier Factor, One-Class SVM, and Elliptic Envelope). We utilized machine learning classification algorithms (random forest, k-nearest neighbors, support vector machines, linear discriminant analysis, and quadratic discriminant analysis) for detecting malicious and non-malicious activities in smart home systems. Furthermore, at the core of the research, the TIBDA framework uses an artificial neural network (ANN)-based dynamic offloading algorithm to design a hybrid computing system that integrates edge, fog, and cloud architectures and efficiently supports numerous users while processing data from IoT devices in real time. The analysis shows that TIBDA outperforms comparable systems significantly across various metrics. In terms of response time, TIBDA demonstrated a reduction of 10–20% compared to the other systems under varying user loads, device counts, and transaction volumes. Regarding security, TIBDA's AUC values were consistently higher by 5–15%, indicating superior protection against threats. Additionally, TIBDA exhibited the highest trustworthiness, with an uptime percentage 10–12% greater than its competitors. TIBDA's Isolation Forest algorithm achieved an accuracy of 99.30%, and the random forest algorithm achieved an accuracy of 94.70%, outperforming other methods by 8–11%. Furthermore, our ANN-based offloading decision-making model achieved a validation accuracy of 99% and reduced loss to 0.11, demonstrating significant improvements in resource utilization and system performance.
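
As a small, hedged illustration of the anomaly-detector comparison described above, the sketch below fits scikit-learn's IsolationForest on synthetic "benign" smart-home telemetry and flags injected outliers; the data and contamination rate are invented.

```python
# Anomaly detection on synthetic smart-home telemetry with one of the
# four detectors the paper compares (IsolationForest); illustrative data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
benign = rng.normal(loc=[50, 0.3], scale=[5, 0.05], size=(500, 2))
attack = rng.normal(loc=[90, 0.9], scale=[5, 0.05], size=(10, 2))
X = np.vstack([benign, attack])       # e.g., [packets/s, CPU fraction]

clf = IsolationForest(contamination=0.02, random_state=0).fit(benign)
labels = clf.predict(X)               # +1 = normal, -1 = anomaly
print("flagged anomalies:", int((labels == -1).sum()))
```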

  • Research Article
  • Cited by 25
  • 10.1074/mcp.o114.043380
Processing Shotgun Proteomics Data on the Amazon Cloud with the Trans-Proteomic Pipeline
  • Feb 1, 2015
  • Molecular & Cellular Proteomics
  • Joseph Slagel + 4 more

Cloud computing, where scalable, on-demand compute cycles and storage are available as a service, has the potential to accelerate mass spectrometry-based proteomics research by providing simple, expandable, and affordable large-scale computing to all laboratories regardless of location or information technology expertise. We present new cloud computing functionality for the Trans-Proteomic Pipeline, a free and open-source suite of tools for the processing and analysis of tandem mass spectrometry datasets. Enabled with Amazon Web Services cloud computing, the Trans-Proteomic Pipeline now accesses large-scale computing resources, limited only by the available Amazon Web Services infrastructure, for all users. The Trans-Proteomic Pipeline runs in an environment fully hosted on Amazon Web Services, where all software and data reside on cloud resources to tackle large search studies. In addition, it can also be run on a local computer with computationally intensive tasks launched onto the Amazon Elastic Compute Cloud service to greatly decrease analysis times. We describe the new Trans-Proteomic Pipeline cloud service components, compare the relative performance and costs of various Elastic Compute Cloud service instance types, and present online tutorials that enable users to learn how to deploy cloud computing technology rapidly with the Trans-Proteomic Pipeline. We provide tools for estimating the necessary computing resources and costs given the scale of a job, and demonstrate the use of the cloud-enabled Trans-Proteomic Pipeline by processing over 1100 tandem mass spectrometry files through four proteomic search engines in 9 h at a very low cost.
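
The resource-and-cost estimation the authors provide tooling for amounts to arithmetic like the back-of-envelope sketch below; every rate is a placeholder rather than an AWS price or a figure from the paper.

```python
# Back-of-envelope wall-clock and cost estimate for a batch search job;
# all rates are placeholders, not AWS prices or figures from the paper.
files = 1100
minutes_per_file = 12                 # assumed per-file search time
vcpus_per_instance = 16
files_in_parallel = vcpus_per_instance  # one file per vCPU
instances = 8
usd_per_instance_hour = 0.50

wall_hours = files * minutes_per_file / 60 / (instances * files_in_parallel)
cost = wall_hours * instances * usd_per_instance_hour
print(f"~{wall_hours:.1f} h wall-clock, ~${cost:.2f}")
```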

  • Research Article
  • Cited by 24
  • 10.1609/aaai.v35i1.16123
Queue-Learning: A Reinforcement Learning Approach for Providing Quality of Service
  • May 18, 2021
  • Proceedings of the AAAI Conference on Artificial Intelligence
  • Majid Raeis + 2 more

End-to-end delay is a critical attribute of quality of service (QoS) in application domains such as cloud computing and computer networks. This metric is particularly important in tandem service systems, where the end-to-end service is provided through a chain of services. Service-rate control is a common mechanism for providing QoS guarantees in service systems. In this paper, we introduce a reinforcement learning-based (RL-based) service-rate controller that provides probabilistic upper-bounds on the end-to-end delay of the system, while preventing the overuse of service resources. In order to have a general framework, we use queueing theory to model the service systems. However, we adopt an RL-based approach to avoid the limitations of queueing-theoretic methods. In particular, we use Deep Deterministic Policy Gradient (DDPG) to learn the service rates (action) as a function of the queue lengths (state) in tandem service systems. In contrast to existing RL-based methods that quantify their performance by the achieved overall reward, which could be hard to interpret or even misleading, our proposed controller provides explicit probabilistic guarantees on the end-to-end delay of the system. The evaluations are presented for a tandem queueing system with non-exponential inter-arrival and service times, the results of which validate our controller's capability in meeting QoS constraints.
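
To make the problem setting concrete, here is a toy two-stage tandem queue with a naive proportional rate controller standing in for the learned policy; the paper instead trains the state-to-rate mapping with DDPG and derives probabilistic delay bounds.

```python
# Toy two-stage tandem queue with a proportional rate controller; a
# stand-in for the paper's learned DDPG policy, illustrative numbers.
import random

random.seed(0)
q = [0, 0]                          # queue lengths at the two stages

def service_prob(qlen, base=0.5, gain=0.1, cap=0.95):
    # state (queue length) -> action (service rate): the RL interface,
    # here a naive proportional controller instead of a learned policy
    return min(cap, base + gain * qlen)

backlog, steps = 0, 10000
for _ in range(steps):
    if random.random() < 0.6:       # Bernoulli arrivals to stage 1
        q[0] += 1
    for i in (0, 1):                # attempt one service at each stage
        if q[i] > 0 and random.random() < service_prob(q[i]):
            q[i] -= 1
            if i == 0:              # stage 1 output feeds stage 2
                q[1] += 1
    backlog += sum(q)

print("mean end-to-end backlog:", backlog / steps)  # delay proxy
```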

  • Single Book
  • 10.1007/978-3-031-78131-5
2nd International Conference on Cloud Computing and Computer Networks
  • Jan 1, 2025
  • Lei Meng

  • Single Book
  • Cited by 1
  • 10.1007/978-3-031-47100-1
International Conference on Cloud Computing and Computer Networks
  • Jan 1, 2024
  • Lei Meng

  • Single Book
  • 10.1007/978-3-032-03632-2
3rd International Conference on Cloud Computing and Computer Networks
  • Jan 1, 2025
  • Lei Meng

  • Research Article
  • 10.62643/ijerst.2023.v19i4.pp57-63
MACHINE LEARNING-BASED VM PERFORMANCE PREDICTION IN AWS CLOUD USING GRU AND DTW TECHNIQUES
  • Sep 22, 2023
  • International Journal of Engineering Research and Science & Technology
  • Amar Gujeti

Cloud computing platforms such as AWS are vital for meeting the growing needs of applications that demand economical computing and storage resources. Since dependence on cloud services such as AWS is increasing, it is necessary to optimize the allocation of cloud resources. Cloudprophet introduces a new machine learning methodology to predict the performance of virtual machines (VMs) in cloud settings. This approach uses Dynamic Time Warping (DTW) to classify application types and uses Pearson's correlation to detect strongly correlated runtime metrics. These metrics feed three variants of machine learning models: LSTM without DTW and highly correlated metrics, LSTM with DTW and highly correlated metrics, and GRU with DTW and highly correlated metrics. The GRU variant, which includes both DTW and highly correlated metrics, outperforms the others, achieving an accuracy of 99.3% when predicting VM performance on AWS. The methodology is validated using a cloud dataset from GitHub and then evaluated in real time with real datasets deployed on AWS. This illustrates its efficiency in accurately predicting both application types and VM performance, with the findings underlining the pre-eminence of the GRU for cloud resource prediction in real AWS contexts.
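
The DTW distance used to match runtime-metric traces is the classic dynamic program sketched below.

```python
# Classic dynamic-programming DTW distance between two metric traces;
# O(len(a) * len(b)) time and space.
def dtw(a, b):
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# Two CPU-utilization traces that are similar but time-shifted
print(dtw([0.1, 0.2, 0.8, 0.9, 0.3], [0.1, 0.8, 0.9, 0.9, 0.3]))
```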

  • Book Chapter
  • 10.1201/9780429020582-1
A Survey of Swarm Intelligence for Task Scheduling in Cloud Computing
  • Jul 19, 2020
  • Ahmed A Ewees + 3 more

In the last few decades, a novel branch of intelligent computation algorithms has been inspired by swarm intelligence theory, which imitates the behavior of animals. These algorithms have been successfully applied to solve many kinds of simple and complex problems in several fields, such as optimization, pattern recognition, image processing, feature selection, and task scheduling in cloud and parallel computing. Cloud computing has not only become the preferred environment for several companies, but also helps others to overcome many server-related issues by providing characteristics such as reliability, flexibility, high scalability, and security. Therefore, many intelligent computation algorithms are used to improve this environment. In this chapter, an overview of swarm intelligence for solving task scheduling problems in cloud computing is presented, including particle swarm optimization, the cat optimization algorithm, artificial bee colony, the lion optimization algorithm, the whale optimization algorithm, the bat algorithm, the gray wolf optimizer, the cuckoo search algorithm, hybrid swarm algorithms, and multi-objective swarm optimization. All these algorithms are described and presented with their achievements in solving task scheduling issues in cloud computing.
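
As a concrete taste of the surveyed family, here is a minimal particle swarm optimization (PSO) sketch for task-to-VM scheduling: particle positions encode a task-to-VM assignment and fitness is the makespan. All parameters and workloads are illustrative.

```python
# Minimal PSO for task scheduling: positions in [0, V) per task decode
# to a VM index; fitness is the makespan. Illustrative parameters only.
import random

random.seed(0)
task_len = [4, 7, 2, 9, 5, 3, 8]      # task lengths
vm_speed = [1.0, 2.0, 1.5]            # VM processing speeds
T, V = len(task_len), len(vm_speed)

def makespan(pos):
    load = [0.0] * V
    for t, x in enumerate(pos):
        vm = min(V - 1, max(0, int(x)))   # decode position -> VM index
        load[vm] += task_len[t] / vm_speed[vm]
    return max(load)

n_particles, iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
X = [[random.uniform(0, V) for _ in range(T)] for _ in range(n_particles)]
Vel = [[0.0] * T for _ in range(n_particles)]
pbest = [x[:] for x in X]
gbest = min(pbest, key=makespan)

for _ in range(iters):
    for p in range(n_particles):
        for d in range(T):
            Vel[p][d] = (w * Vel[p][d]
                         + c1 * random.random() * (pbest[p][d] - X[p][d])
                         + c2 * random.random() * (gbest[d] - X[p][d]))
            X[p][d] = min(V - 1e-9, max(0.0, X[p][d] + Vel[p][d]))
        if makespan(X[p]) < makespan(pbest[p]):
            pbest[p] = X[p][:]
    gbest = min(pbest, key=makespan)  # refresh global best each iteration

print("best makespan:", makespan(gbest))
```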
