Abstract

As smart metering technology evolves, power suppliers can make low-cost, low-risk estimates of customer-side power consumption by analyzing energy demand data collected in real time. With advances in network infrastructure, smart sensors, and various monitoring technologies, a standardized energy metering infrastructure, called advanced metering infrastructure (AMI), has been introduced and deployed to urban households, allowing suppliers to develop efficient power generation plans. Compared to traditional stochastic approaches to time-series data analysis, deep-learning methods have shown superior accuracy in many prediction applications. Because smart meters and infrastructure monitors produce series of measurements over time, a large amount of data accumulates into a large data stream, which lengthens the delay between data generation and the deployment of a newly trained deep-learning model. In this article, we propose an accelerated computing system that considers time-variant properties for accurate prediction of energy demand by processing AMI stream data. The proposed system is a real-time training/inference system that deploys AMI data over a distributed edge cloud. It comprises two core components: an adaptive incremental learning solver and deep-learning acceleration with FPGA-GPU resource scheduling. The adaptive incremental learning scheme adjusts the batch size and epoch count in each training iteration to reduce the time delay of the latest trained model, while preventing biased training caused by the sub-optimal optimization inherent in incremental learning. In addition, a resource scheduling scheme manages various accelerator resources for accelerated deep-learning processing while minimizing computational cost. Experimental results demonstrate that our method adapts batch size and epoch count effectively for incremental learning while guaranteeing low inference error, a high model score, and queue stability with cost-efficient processing.
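
As a concrete illustration of the adaptive batch/epoch idea, the minimal sketch below picks a mini-batch size and epoch count from the current stream backlog and a wall-clock training budget. It is not the paper's actual solver; the function name `choose_batch_and_epochs`, its parameters, and the heuristics are all illustrative assumptions.

```python
# Minimal sketch of adaptive batch/epoch selection for incremental
# training on a data stream. Hypothetical names and heuristics; the
# paper's actual solver is not reproduced here.

def choose_batch_and_epochs(backlog, arrival_rate, time_budget_s,
                            step_time_s, min_batch=32, max_batch=1024,
                            max_epochs=5):
    """Pick (batch_size, epochs) so one incremental training round
    fits in the wall-clock budget while draining the queued samples.

    backlog       -- queued AMI samples awaiting training
    arrival_rate  -- samples arriving per second
    time_budget_s -- wall-clock budget for this training round
    step_time_s   -- measured duration of one gradient step
    """
    # Batch at least what arrives during the round, within fixed limits.
    batch = int(min(max(min_batch, arrival_rate * time_budget_s), max_batch))
    steps_affordable = max(1, int(time_budget_s / step_time_s))
    steps_per_epoch = max(1, backlog // batch)
    # Spend fewer epochs when the queue is long so the model stays fresh.
    epochs = max(1, min(max_epochs, steps_affordable // steps_per_epoch))
    return batch, epochs

# Example: 10,000 queued samples, 200 samples/s arriving,
# a 30 s budget, and 50 ms per gradient step.
print(choose_batch_and_epochs(10_000, 200, 30.0, 0.05))  # -> (1024, 5)
```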

Highlights

  • As the global energy demand of countless electronic devices increases rapidly, many researchers are paying close attention to energy data analysis to reduce energy waste

  • We implemented heterogeneous accelerator (Field-Programmable Gate Array (FPGA), Graphics Processing Unit (GPU)) resource scheduling through layer partitioning in the edge cloud

  • In this paper, to accelerate the deployment procedure of a deep neural network after model training, we propose an accelerated edge cloud system for energy data stream processing based on an adaptive incremental deep-learning scheme


Summary

INTRODUCTION

As the global energy demand of countless electronic devices increases rapidly, many researchers are paying close attention to energy data analysis to reduce energy waste. Traditional training methods repeat the same model-update operation on stream data sets that accumulate over time, so existing distributed deep-learning computing frameworks recompute over the entire accumulated data set. This is inefficient and wastes computational resources, and the retraining period gradually lengthens as training time grows with the accumulated data, reducing predictive performance on short-term, non-stationary AMI data. Our system determines the number of data instances and the number of epochs for temporary mini-batch training to reduce the processing time and computational cost of learning, while retraining the model to immediately reflect new features in the recently incoming data stream. We also implemented heterogeneous accelerator (FPGA, GPU) resource scheduling through layer partitioning in the edge cloud; a simplified sketch of this idea follows.
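
The sketch below is a hedged illustration of layer partitioning across heterogeneous accelerators, not the paper's scheduler: it assumes a single split point and made-up per-layer latencies, and chooses the split that minimizes end-to-end latency including one boundary transfer.

```python
# Illustrative layer partitioning across heterogeneous accelerators:
# layers [0, split) run on the FPGA, layers [split, n) on the GPU,
# with one data-transfer cost paid at the boundary. A single split
# point is an assumption made for this sketch.

def best_split(fpga_lat, gpu_lat, transfer_s):
    """Return (split_index, total_latency) minimizing end-to-end time."""
    n = len(fpga_lat)
    assert len(gpu_lat) == n
    best = None
    for split in range(n + 1):
        total = sum(fpga_lat[:split]) + sum(gpu_lat[split:])
        if 0 < split < n:  # crossing devices costs one transfer
            total += transfer_s
        if best is None or total < best[1]:
            best = (split, total)
    return best

# Example with made-up per-layer latencies (seconds): early layers
# favor the FPGA, later layers favor the GPU.
fpga = [0.8, 0.9, 2.5, 3.0]
gpu  = [1.5, 1.6, 1.0, 0.9]
print(best_split(fpga, gpu, transfer_s=0.3))  # -> (2, ~3.9)
```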

RELATED WORK
AN ONLINE LEARNING FRAMEWORK
OPTIMAL TRAINING SCHEME
PERFORMANCE EVALUATION
EXPERIMENTAL ENVIRONMENT
PERFORMANCE METRICS FOR EVALUATION
EXPERIMENTAL RESULTS
Findings
CONCLUSION