Resource scheduling and management of multidimensional data stream cloud platforms based on deep neural networks

Abstract

We propose a deep learning-based scheduling framework for cloud platforms handling multidimensional data streams. Our model combines temporal modelling, graph-aware reasoning, and entropy regularisation to achieve both rapid convergence and robust task assignment. Experiments on Google, Alibaba, and Microsoft datasets show that our method significantly reduces response time and queue length while improving task completion rate and concurrent throughput. Compared to SVM, RF, and MSCNet, our approach demonstrates superior performance across system-level and robustness metrics, validating its practical deployment potential.
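As a rough illustration of how an entropy-regularised assignment objective can work (the paper's actual loss and architecture are not given here, so the function names and the `beta` weight below are assumptions, not the authors' method):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of node scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def entropy(probs):
    """Shannon entropy (nats) of an assignment distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def scheduling_objective(node_scores, beta=0.5):
    """Entropy-regularised objective: expected node score plus an
    entropy bonus, so a learned assignment policy does not collapse
    onto a single node (illustrative sketch, not the paper's loss)."""
    probs = softmax(node_scores)
    expected = sum(p * s for p, s in zip(probs, node_scores))
    return expected + beta * entropy(probs), probs
```

With equal node scores the entropy term dominates, which is exactly when an exploration bonus is most useful.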

Similar Papers
  • Research Article
  • Cited by: 27
  • 10.3390/cancers14184457
U-Net Based Segmentation and Characterization of Gliomas.
  • Sep 14, 2022
  • Cancers
  • Shingo Kihira + 10 more

Simple Summary: Gliomas comprise 80% of all malignant brain tumors. We aimed to develop a deep learning-based framework for the automatic segmentation and characterization of gliomas. In this retrospective study, patients were included if they: (1) had a diagnosis of glioma confirmed by histopathology and (2) had preoperative MRI with the inclusion of FLAIR imaging. The deep learning-based U-Net framework was developed based on manual segmentation on FLAIR as the ground truth mask for automatic segmentation and feature extraction, which were used for the prediction of biomarker status and prognosis. A total of 208 patients were included from our internal dataset, with stratified sampling to split the database into training and validation. An external dataset (n = 31) from an outside institution was used for testing. The dice similarity coefficient of the generated mask was 0.93 on the testing dataset. The prediction of the radiomic model achieved an AUC of 0.88 for IDH-1 and 0.62 for MGMT on the testing dataset. Our deep learning-based framework can detect and segment gliomas with excellent performance for the prediction of IDH-1 biomarker status and survival.

(1) Background: Gliomas are the most common primary brain neoplasms, accounting for roughly 40–50% of all malignant primary central nervous system tumors. We aim to develop a deep learning-based framework for automated segmentation and prediction of biomarkers and prognosis in patients with gliomas. (2) Methods: In this retrospective two-center study, patients were included if they (1) had a diagnosis of glioma with known surgical histopathology and (2) had preoperative MRI with a FLAIR sequence. The entire tumor volume, including the FLAIR hyperintense infiltrative component and the necrotic and cystic components, was segmented. A deep learning-based U-Net framework with a symmetric architecture was developed from the 512 × 512 segmented FLAIR maps as the ground truth mask. (3) Results: The final cohort consisted of 208 patients with a mean ± standard deviation age of 56 ± 15 years and an M/F ratio of 130/78. The DSC of the generated mask was 0.93. Prediction of IDH-1 and MGMT status achieved AUCs of 0.88 and 0.62, respectively. Survival prediction of <18 months demonstrated an AUC of 0.75. (4) Conclusions: Our deep learning-based framework can detect and segment gliomas with excellent performance for the prediction of IDH-1 biomarker status and survival.
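The Dice similarity coefficient the study reports (0.93) has a simple closed form; a minimal sketch for binary masks:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as
    flat 0/1 sequences of equal length: 2|A∩B| / (|A| + |B|)."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    size_a, size_b = sum(mask_a), sum(mask_b)
    if size_a + size_b == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / (size_a + size_b)
```

In practice the masks are flattened 2D or 3D segmentation volumes; the formula is unchanged.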

  • Conference Article
  • Cited by: 2
  • 10.1109/icsai48974.2019.9010101
Overview of Cloud Computing Resource Allocation and Management Technology
  • Nov 1, 2019
  • Weijin Zhuang + 1 more

In the process of power system reform, the construction of a power market and trading platform is undoubtedly the most challenging part, and also the key link in the success of power reform. To better design and structure a cloud platform for power trading, this paper summarizes resource management for cloud platforms, which is essential for building an efficient cloud computing architecture. We investigate various cloud computing resource allocation and management technologies, including workload prediction, resource scheduling, and resource mapping, and analyze the advantages and disadvantages of each method. We hope this research and analysis lays a foundation for the future development of a new cloud platform resource management model and, on that basis, for proposing the key resource architecture of a cloud platform for power trading.

  • Research Article
  • Cited by: 4
  • 10.1016/j.jisa.2023.103609
DL-2P-DDoSADF: Deep learning-based two-phase DDoS attack detection framework
  • Sep 26, 2023
  • Journal of Information Security and Applications
  • Meenakshi Mittal + 2 more


  • Research Article
  • Cited by: 10
  • 10.1016/j.isprsjprs.2022.04.004
Deep-learning generation of POI data with scene images
  • Apr 26, 2022
  • ISPRS Journal of Photogrammetry and Remote Sensing
  • Jinbao Zhang + 3 more


  • Conference Article
  • Cited by: 2
  • 10.1109/infocom42981.2021.9488916
A Sum-of-Ratios Multi-Dimensional-Knapsack Decomposition for DNN Resource Scheduling
  • May 10, 2021
  • Menglu Yu + 3 more

In recent years, to sustain the resource-intensive computational needs of training deep neural networks (DNNs), it is widely accepted that exploiting the parallelism in large-scale computing clusters is critical for the efficient deployment of DNN training jobs. However, existing resource schedulers for traditional computing clusters are not well suited for DNN training, which results in unsatisfactory job completion time performance. The limitations of these resource scheduling schemes motivate us to propose a new computing cluster resource scheduling framework that is able to leverage the special layered structure of DNN jobs and significantly improve their job completion times. Our contributions in this paper are three-fold: i) We develop a new resource scheduling analytical model by considering DNN's layered structure, which enables us to analytically formulate the resource scheduling optimization problem for DNN training in computing clusters; ii) Based on the proposed performance analytical model, we then develop an efficient resource scheduling algorithm based on the widely adopted parameter-server architecture, using a sum-of-ratios multi-dimensional-knapsack decomposition (SMD) method to offer a strong performance guarantee; iii) We conduct extensive numerical experiments to demonstrate the effectiveness of the proposed scheduling algorithm and its superior performance over the state of the art.
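The SMD decomposition itself is not reproduced in the abstract; as a hedged stand-in, a greedy multidimensional-knapsack heuristic illustrates the kind of per-dimension capacity reasoning the formulation involves (the job format and density rule below are assumptions, not the paper's algorithm):

```python
def greedy_mdk(jobs, capacity):
    """Greedy multidimensional-knapsack heuristic: each job is a
    (utility, demand) pair where demand is a per-dimension resource
    vector (e.g. GPU, CPU, RAM). Jobs are packed in order of utility
    per unit of aggregate demand, subject to every dimension staying
    within capacity. A simple illustrative stand-in for the paper's
    SMD decomposition."""
    def density(job):
        utility, demand = job
        return utility / max(sum(demand), 1e-9)

    remaining = list(capacity)
    chosen = []
    for idx, (utility, demand) in sorted(
            enumerate(jobs), key=lambda t: density(t[1]), reverse=True):
        if all(d <= r for d, r in zip(demand, remaining)):
            chosen.append(idx)
            remaining = [r - d for r, d in zip(remaining, demand)]
    return chosen, remaining
```

Greedy packing gives no optimality guarantee, which is precisely why the paper pursues a decomposition method instead.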

  • Conference Article
  • Cited by: 4
  • 10.1109/icdcs47774.2020.00031
Classification of Channel Access Attacks in Wireless Networks: A Deep Learning Approach
  • Nov 1, 2020
  • Xianglin Wei + 4 more

Coping with diverse channel access attacks (CAAs) has been a major obstacle to realizing the full potential of wireless networks as a basic building block of smart applications. Identifying and classifying different types of CAAs in a timely manner is a great challenge because of the inherently shared nature and randomness of the wireless medium. To overcome the difficulties encountered in existing methods, such as long latency, high data collection overhead, and limited applicable range, a deep learning-based CAA detection framework is proposed in this paper. First, we show the challenges of CAA classification by analyzing the impacts of CAAs on wireless network performance using an event-driven network simulator. Second, a state-transition model is built for the channel access process at a node, whose output sequences characterize the changing patterns of the node's transmission status in different CAA scenarios. Third, a deep learning-based CAA classification framework is presented, which takes state-transition sequences of a node as input and outputs predicted CAA types. The performance of three deep neural networks, i.e., fully-connected, convolutional, and Long Short-Term Memory (LSTM) networks, for classifying CAAs is evaluated under our CAA classification framework in five CAA scenarios and the normal scenario without CAA. Experimental results show that LSTM outperforms the other two neural network architectures, and its CAA classification accuracy is higher than 95%. We successfully transferred the learned LSTM model to classify CAAs on other nodes in the same network and on nodes in other networks, which verifies the generality of our proposed framework.
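One way the state-transition input might be built (the state labels and packing scheme below are illustrative assumptions, not the paper's exact encoding):

```python
# Channel-access states a node can be in (illustrative labels).
STATES = ["idle", "backoff", "transmit", "collision"]
STATE_ID = {s: i for i, s in enumerate(STATES)}

def transition_sequence(trace):
    """Turn a node's observed status trace into a sequence of
    state-transition ids: each consecutive pair (a, b) is mapped to
    a single integer a*N + b, yielding the kind of discrete token
    sequence an LSTM classifier can consume."""
    n = len(STATES)
    return [STATE_ID[a] * n + STATE_ID[b] for a, b in zip(trace, trace[1:])]
```

A trace of length T yields T-1 transition tokens; attack scenarios would show up as characteristic transition patterns.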

  • Research Article
  • Cited by: 2
  • 10.1142/s0218126624501640
A Deep Learning-Based Knowledge Graph Framework for Intelligent Management Scheduling Decision of Enterprises
  • Dec 27, 2023
  • Journal of Circuits, Systems and Computers
  • Shiyong Ma + 1 more

Due to the huge expansion in data volume, the application of cloud computing and the Internet of Things is growing year by year, and more and more industrial production requires real-time, efficient resource scheduling. Therefore, this paper develops a deep learning-based knowledge graph framework for enterprise resource scheduling decisions. Single-objective and multi-objective problems for computing resources are studied, and the network nodes of computing resources are set with the help of network topology theory. For the single-objective problem, the mathematical model is constructed with the optimization objective of minimizing time delay. For the multi-objective problem, the mathematical model is constructed with the optimization objectives of minimizing both time delay and energy consumption. Combining historical scheduling schemes with a genetic algorithm, an initial optimization method is proposed for the mixed flow shop scheduling problem, and the optimization problem is solved to minimize the maximum completion time. Simulation experiments evaluate the proposed method, and the results show that it effectively realizes intelligent management scheduling decisions for enterprises.
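A common way to scalarise a delay/energy trade-off for a genetic algorithm is a weighted sum; a sketch, with the weights and callback names assumed rather than taken from the paper:

```python
def weighted_fitness(schedule, delay_of, energy_of, w_delay=0.6, w_energy=0.4):
    """Scalarised multi-objective fitness for a candidate schedule,
    given as a list of (task, node) assignments: a weighted sum of
    total delay and total energy. Weights are illustrative; the
    paper's exact objective combination is not specified here."""
    total_delay = sum(delay_of(task, node) for task, node in schedule)
    total_energy = sum(energy_of(task, node) for task, node in schedule)
    # GA conventionally maximises fitness, so negate the cost.
    return -(w_delay * total_delay + w_energy * total_energy)
```

A GA would evaluate this fitness for each chromosome (schedule) per generation; Pareto-based selection (e.g. NSGA-II style) is the usual alternative to fixed weights.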

  • Research Article
  • Cited by: 1
  • 10.1504/ijwgs.2020.10031644
Resource scheduling of information platform for general grid computing framework
  • Jan 1, 2020
  • International Journal of Web and Grid Services
  • Meihong You + 2 more

The concepts and calculation methods of grid computing align closely with the requirements of information platform resource scheduling. Based on this, this paper discusses the related concepts of the information grid in detail, including the key elements of grid computing. It also analyses the key technology of the information grid, web services, and puts forward an information grid framework for resource scheduling of the information platform. To better realise the storage and management of information resources in the information platform, this paper proposes a meta-database data storage and management algorithm for the integration of grid computing and cloud computing in the information grid environment, and designs the corresponding data management and storage framework. Furthermore, we compare it with a cloud platform resource scheduling platform to demonstrate the advantages of our proposed method.

  • Conference Article
  • Cited by: 13
  • 10.1109/bigdata.2016.7840634
SLA-based profit optimization for resource management of big data analytics-as-a-service platforms in cloud computing environments
  • Dec 1, 2016
  • Yali Zhao + 3 more

The value that can be extracted from big data greatly motivates organizations to explore data analytics technologies for better decision making and problem solving in a wide range of application domains. Cloud computing greatly eases and benefits big data analytics by offering on-demand and scalable computing infrastructures, platforms, and applications as services. Big data Analytics-as-a-Service (AaaS) platforms aim to deliver data analytics as consumable services in cloud computing environments in a pay-as-you-go model with Service Level Agreement (SLA) guarantees. Resource scheduling for AaaS platforms is significant as big data analytics requires large-scale computing, which can consume huge amounts of resources and incur high resource costs. Our research focuses on proposing automatic and scalable resource scheduling algorithms to maximize the profits for AaaS platforms while delivering AaaS services to users with SLA guarantees on budgets and deadlines to allow timely responses with controllable costs. In this paper, we model and formulate the profit optimization resource scheduling problem and propose an optimization scheduling algorithm that maximizes profits for AaaS platforms and guarantees SLAs for query requests. Experimental evaluations show that the profit optimization scheduling algorithm performs significantly better in cost saving and profit enhancement compared to the state-of-the-art scheduling algorithms.
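A toy version of the per-query profit logic under budget and deadline SLAs might look like this (the parameter names and pricing model are assumptions, not the paper's formulation):

```python
def query_profit(price, resource_cost, runtime, deadline, budget):
    """Profit for serving one analytics query under its SLA: revenue
    accrues only if the query finishes within its deadline and its
    resource cost stays within budget; either way the platform pays
    the resource cost. An illustrative model of a profit objective."""
    cost = resource_cost * runtime
    meets_sla = runtime <= deadline and cost <= budget
    revenue = price if meets_sla else 0.0
    return revenue - cost
```

A scheduler maximising total profit would pick resource configurations (which determine `resource_cost` and `runtime`) to keep every admitted query on the SLA-satisfying side of this function.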

  • Conference Article
  • Cited by: 1
  • 10.2118/207266-ms
Document Layout Analysis Using Detection Transformers
  • Dec 9, 2021
  • Prashanth Pillai + 1 more

In the O&G (Oil & Gas) industry, unstructured data sources such as technical reports on hydrocarbon production, daily drilling, well construction, etc. contain valuable information. This information, however, is conveyed through various formats such as tables, forms, text, and figures. Detecting these different entities in documents is essential for building a structured representation of the information within and for automated processing of documents at scale. Our work presents a document layout analysis workflow to detect and localize different entities based on a deep learning framework. The workflow comprises a deep learning-based object-detection framework based on transformers to identify the spatial location of entities in a document page. The key elements of the object-detection pipeline include a residual network backbone for feature extraction and an encoder-decoder transformer based on the latest detection transformers (DETR) to predict object bounding boxes and category labels. The object detection is formulated as a direct set prediction task using bipartite matching, eliminating conventional operations like anchor box generation and non-maximal suppression. The limited availability of publicly available document layout datasets that incorporate the artifacts observed in historical O&G technical reports is often a major challenge, which we attempt to address with a novel training data augmentation methodology. The dense occurrence of elements on a page can introduce uncertainties, resulting in bounding boxes cutting through text content; we adopt a bounding-box post-processing methodology to refine the box coordinates and minimize undercuts. The proposed document layout analysis pipeline was trained to detect entity types such as headings, text blocks, tables, forms, and images/charts in a document page. A wide range of pages from lithology, stratigraphy, drilling, and field development reports was used for model training, including a considerable number of historical scanned reports. The trained object-detection model was evaluated on a test dataset prepared from the O&G reports, where DETR demonstrated superior performance compared with Mask R-CNN on our dataset.

  • Research Article
  • 10.26483/ijarcs.v8i5.3223
Enhancing Provider’s Profit on Cloud Market Infrastructure
  • Jun 20, 2017
  • International Journal of Advanced Research in Computer Science
  • Gaurav Mishra

Cloud infrastructure and platform services are becoming increasingly popular all over the world; however, resource scheduling and allocation of multiple virtual machines on clouds is still a difficult task. Optimizing these processes can improve energy savings and load balancing in large datacenters. Since user resource demand fluctuates with time, the load on the cloud generally remains low during normal hours and high during peak hours. A single cloud service provider may not have the required resources to satisfy user requests during peak hours, while other providers may have under-utilized resources. These difficulties can be overcome through cloud federation, which allows outsourcing at peak times: under-utilized providers can rent their resources to other IaaS (Infrastructure as a Service) providers. Resource allocation and scheduling also matter in federated clouds, where resources can be purchased from other members of the federation. Existing cloud federation and resource management practices are intricate, which makes them less dynamic and in some cases decreases the revenue and profit of the cloud service provider. We present a prototype of a new cloud federation method that aims to enhance the profit of the cloud service provider. Our parameters include the free resources to be sold, the number of outsourced resources, the cost of maintaining servers, the cost of third-party resources, and the workload.
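The profit parameters the abstract lists can be combined into a simple linear model; a sketch under assumed semantics (the exact formula is not given in the abstract):

```python
def federation_profit(sold_units, rent_price, outsourced_units,
                      third_party_price, server_upkeep):
    """Provider profit in a cloud federation: income from renting
    spare capacity to peer IaaS providers, minus the cost of
    outsourced third-party resources and server upkeep. Mirrors the
    parameters the abstract lists; the combination is assumed."""
    income = sold_units * rent_price
    outsourcing_cost = outsourced_units * third_party_price
    return income - outsourcing_cost - server_upkeep
```

Under this model a provider profits from federation whenever the rent earned on spare units exceeds what it pays to cover its own peak-hour shortfall.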

  • Research Article
  • Cited by: 3
  • 10.1016/j.jconhyd.2025.104672
Harnessing deep learning for fusion-based heavy metal contamination index prediction in groundwater.
  • Sep 1, 2025
  • Journal of contaminant hydrology
  • Ali Asghar Rostami + 6 more


  • Conference Article
  • Cited by: 7
  • 10.1109/ijcnn.2019.8852044
Multi-Satellite Resource Scheduling Based on Deep Neural Network
  • Jul 1, 2019
  • Huan Meng + 5 more

Resource scheduling is one of the main problems for multi-satellite Tracking, Telemetry and Command (TT&C) networks. Traditional multi-resource joint scheduling algorithms suffer from long solution times, low efficiency, high computational cost, and overly simple system descriptions. Deep Neural Networks (DNNs) provide a possible new way to solve these problems, but it is difficult to handle correlations among the input data. This motivates our work to solve the strong-correlation problem using accumulated historical data, and thus enable DNNs for TT&C resource scheduling. By discretizing the data, multiple constraints and related attributes are transformed into different flags, and some binary bits of the data are used to reflect the constraint relationships. We can then use a DNN model to construct an intelligent TT&C resource scheduling system that handles multiple constraints and data attributes (such as priorities among tasks). This improves the efficiency of TT&C resource utilization and automation. The effectiveness of the proposed model is verified by simulations.
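The flag-based discretization could be sketched as bit packing (the exact bit layout below is an assumption for illustration, not the paper's encoding):

```python
def encode_task(priority, needs_antenna, needs_downlink, time_slot,
                n_slots=16):
    """Pack a TT&C task's attributes and constraints into one integer
    whose binary bits a DNN can consume as flags: bits 0-3 hold the
    time slot, bits 4-5 hold two resource-constraint flags, and bits
    6-7 hold a priority in 0-3. Layout is a hypothetical example."""
    flags = time_slot & (n_slots - 1)        # bits 0..3: slot index
    flags |= int(needs_antenna) << 4         # bit 4: antenna needed
    flags |= int(needs_downlink) << 5        # bit 5: downlink needed
    flags |= (priority & 0b11) << 6          # bits 6..7: priority 0-3
    return flags

def decode_bits(flags, width=8):
    """Expose the packed flags as a 0/1 feature vector (LSB first)."""
    return [(flags >> i) & 1 for i in range(width)]
```

The 0/1 vector from `decode_bits` is the form a DNN input layer would actually see, with each bit acting as one binary feature.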

  • Conference Article
  • Cited by: 8
  • 10.1109/vtc2021-spring51267.2021.9448879
5G Air-to-Ground Network Design and Optimization: A Deep Learning Approach
  • Apr 1, 2021
  • Yun Chen + 4 more

Direct air-to-ground (A2G) communications leveraging the fifth-generation (5G) new radio (NR) can provide high-speed broadband in-flight connectivity to aircraft in the sky. A2G network deployment entails optimizing various design parameters such as inter-site distances, number of sectors per site, and the up-tilt angles of sector antennas. The system-level design guidelines in the existing work on A2G network are rather limited. In this paper, a novel deep learning-based framework is proposed for efficient design and optimization of a 5G A2G network. The devised architecture comprises two deep neural networks (DNNs): the first DNN is used for approximating the 5G A2G network behavior in terms of user throughput, and the second DNN is developed as a function optimizer to find the throughput-optimal deployment parameters including antenna up-tilt angles and inter-site distances. Simulation results are provided to validate the proposed model and reveal system-level design insights.

  • Research Article
  • Cited by: 22
  • 10.1371/journal.pone.0239746
On the performance of fusion based planet-scope and Sentinel-2 data for crop classification using inception inspired deep convolutional neural network.
  • Sep 28, 2020
  • PLOS ONE
  • Nasru Minallah + 5 more

This research work aims to develop a deep learning-based crop classification framework for remotely sensed time series data. Tobacco is a major revenue generating crop of Khyber Pakhtunkhwa (KP) province of Pakistan, with over 90% of the country’s Tobacco production. In order to analyze the performance of the developed classification framework, a pilot sub-region named Yar Hussain is selected for experimentation work. Yar Hussain is a tehsil of district Swabi, within KP province of Pakistan, having highest contribution to the gross production of the KP Tobacco crop. KP generally consists of a diverse crop land with different varieties of vegetation, having similar phenology which makes crop classification a challenging task. In this study, a temporal convolutional neural network (TempCNNs) model is implemented for crop classification, while considering remotely sensed imagery of the selected pilot region with specific focus on the Tobacco crop. In order to improve the performance of the proposed classification framework, instead of using the prevailing concept of utilizing a single satellite imagery, both Sentinel-2 and Planet-Scope imageries are stacked together to assist in providing more diverse features to the proposed classification framework. Furthermore, instead of using a single date satellite imagery, multiple satellite imageries with respect to the phenological cycle of Tobacco crop are temporally stacked together which resulted in a higher temporal resolution of the employed satellite imagery. The developed framework is trained using the ground truth data. The final output is obtained as an outcome of the SoftMax function of the developed model in the form of probabilistic values, for the classification of the selected classes. The proposed deep learning-based crop classification framework, while utilizing multi-satellite temporally stacked imagery resulted in an overall classification accuracy of 98.15%. 
Furthermore, because the classification framework was developed with a specific focus on the Tobacco crop, it achieved a best Tobacco-crop classification accuracy of 99%.
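The multi-sensor, multi-date stacking idea amounts to concatenating band values per pixel; a minimal pure-Python sketch with an assumed data layout (real pipelines would use co-registered raster arrays):

```python
def stack_pixels(acquisitions):
    """Combine several co-registered acquisitions, each a dict
    mapping (row, col) -> list of band values, into one feature
    vector per pixel by concatenating bands across sensors and
    dates. Data layout is illustrative, not the paper's pipeline."""
    stacked = {}
    for image in acquisitions:
        for pixel, bands in image.items():
            stacked.setdefault(pixel, []).extend(bands)
    return stacked
```

Stacking, say, three Sentinel-2 dates with two Planet-Scope dates gives each pixel a single longer feature vector, which is what lets the classifier exploit both spectral diversity and the crop's phenological cycle.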
