Abstract

QoS-aware big data analysis is critical in Information-Centric Internet of Things (IC-IoT) systems to support applications such as smart cities, smart grids, smart health, and intelligent transportation systems. The employment of non-volatile memory (NVM) in cloud or edge systems provides a good opportunity to improve the quality of data analysis tasks. However, we must face the data recovery problem caused by NVM failures, which stem from NVM's limited write endurance. In this paper, we investigate the data recovery problem for QoS guarantee and system robustness, and propose a rarity-aware data recovery algorithm. The core idea is to establish a rarity indicator that jointly evaluates the replica distribution and the service requirement. With this indicator, we assign distinct priorities to the lost replicas and eliminate unnecessary ones. The data replicas are then recovered stage by stage to guarantee QoS and provide system robustness. Our extensive experiments and simulations show that the proposed algorithm achieves significant improvements in QoS and robustness over the traditional direct data recovery method, while keeping the data recovery time acceptable.
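The rarity indicator described above can be sketched as a simple scoring function. This is a minimal illustration, not the paper's exact formula; the function name, signature, and normalization are our assumptions. The idea it captures is that data with fewer surviving replicas and a higher service bandwidth requirement is "rarer" and should be recovered with higher priority:

```python
def rarity(replica_count: int, bandwidth_demand: float,
           total_bandwidth: float = 1.0) -> float:
    """Hypothetical rarity score: a higher value means the data is
    rarer, so its lost replicas get higher recovery priority."""
    if replica_count == 0:
        # the last surviving replica is gone: recovery is most urgent
        return float("inf")
    # fewer surviving replicas and higher bandwidth demand => rarer
    return (bandwidth_demand / total_bandwidth) / replica_count
```

Under this score, a data item with one surviving replica outranks one with two at equal demand, and at equal replica counts the item with the higher bandwidth demand is recovered first.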

Highlights

  • Big data analysis is of great importance to Information-Centric Internet of Things (IC-IoT) systems

  • We focus on the data recovery problem to cope with non-volatile memory (NVM) failures during IoT real-time big data analysis

  • It is straightforward to assume that the direct data replica recovery method achieves a shorter recovery time, which decreases the system's window of vulnerability


Summary

INTRODUCTION

Big data analysis is of great importance to IC-IoT systems. On the one hand, it is easy to collect high-volume, multisource data in IC-IoT thanks to the pervasive use of smart devices such as smartphones, cameras, and sensors. We focus on the data recovery problem to cope with NVM failures during IoT real-time big data analysis. Robustness here represents the ability of the system to sustain another NVM failure or rack malfunction during the data recovery stage, since at least one replica of each data item must survive to satisfy the data analysis service. To conduct data replica recovery, we could start the recovery process immediately when a failure occurs to achieve a shorter recovery time; however, this direct method decreases QoS and robustness significantly. We instead propose a staged data recovery method that considers the replica distribution and achieves improved QoS performance. The method is built on a data rarity model that takes both the data replica distribution and the service bandwidth requirement into account.
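The staged, rarity-driven recovery order can be sketched as follows. This is a minimal sketch under our own assumptions (the function name, the fixed replica target, and the urgency formula are hypothetical, not taken from the paper): data whose last replica is gone is restored first, under-replicated data follows in decreasing rarity order, and data already holding enough replicas is skipped as unnecessary.

```python
import heapq

def staged_recovery(replicas: dict, demand: dict, target: int = 2) -> list:
    """Return the order in which lost replicas are recovered.

    replicas: surviving replica count per data item
    demand:   service bandwidth requirement per data item
    target:   desired replica count (assumed fixed here)
    """
    heap = []
    for d, count in replicas.items():
        if count >= target:
            continue  # enough replicas survive: recovery is unnecessary
        # fewer surviving replicas and higher demand => more urgent;
        # a data item with no replica left is the most urgent of all
        urgency = float("inf") if count == 0 else demand[d] / count
        heapq.heappush(heap, (-urgency, d))  # max-heap via negation
    order = []
    while heap:
        _, d = heapq.heappop(heap)
        order.append(d)
    return order
```

For example, with `replicas = {"a": 0, "b": 1, "c": 2}` and equal demand, item `a` (no replica left) is recovered before `b`, while `c` already meets the target and is skipped; this reflects the stage-by-stage prioritization described above.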

RELATED WORKS
STAGED DATA REPLICA RECOVERY
PERFORMANCE EVALUATION
CONCLUSION
