Abstract

This paper presents an initiative data prefetching scheme that runs on the storage servers of distributed file systems for cloud computing. To put this technique to work, information about client access patterns is piggybacked onto real client I/O requests and then forwarded to the relevant storage server. Next, two prediction algorithms are proposed to forecast future block access operations, directing which data the storage servers should fetch in advance. Finally, the prefetched data is pushed from the storage server to the relevant client machine. Through a series of evaluation experiments with a group of application benchmarks, we demonstrate that the proposed initiative prefetching technique helps distributed file systems in cloud environments achieve better I/O performance. In particular, resource-limited client machines in the cloud are not responsible for predicting I/O access operations, which further contributes to better system performance on them.
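The piggybacking step described above can be illustrated with a minimal sketch. This is not the paper's actual implementation: the message format, field names, and history length are assumptions chosen only to show how access history can ride along on real I/O requests instead of requiring separate control traffic.

```python
from dataclasses import dataclass, field

# Hypothetical message format: each real I/O request carries ("piggybacks")
# the client's recent block-access history, so the storage server can learn
# access patterns without extra round trips.
@dataclass
class IORequest:
    client_id: str
    block_id: str
    op: str                                               # "read" or "write"
    recent_accesses: list = field(default_factory=list)   # piggybacked history

class Client:
    def __init__(self, client_id, history_len=8):
        self.client_id = client_id
        self.history = []
        self.history_len = history_len

    def make_request(self, block_id, op="read"):
        # Attach a snapshot of recent accesses, then record this access.
        req = IORequest(self.client_id, block_id, op, list(self.history))
        self.history.append(block_id)
        self.history = self.history[-self.history_len:]
        return req

c = Client("node-1")
c.make_request("b1")
req = c.make_request("b2")
print(req.recent_accesses)   # → ['b1']  (history shipped with the request)
```

On the server side, the `recent_accesses` field would feed the prediction algorithms without the client doing any prediction work itself, which is the point of keeping resource-limited clients out of the loop.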

Highlights

  • Cloud computing means storing and accessing data over the internet instead of on local client computers

  • A distributed file system (DFS) allows many clients to share information in parallel over a network; it commonly serves as the backend storage system providing I/O services to many kinds of data-intensive applications in cloud computing environments [4], [5], [6], [7]

  • This paper describes prefetching based on clients' I/O access patterns using two prediction algorithms: Apriori, to identify frequently accessed data, and logistic regression, to decide whether the accessed data is already on the server or must be piggybacked and forwarded to the relevant machine


Summary

INTRODUCTION

Cloud computing means storing and accessing data over the internet instead of on local client computers. A distributed file system (DFS) allows many clients to share information in parallel over a network; it commonly serves as the backend storage system providing I/O services to many kinds of data-intensive applications in cloud computing environments [4], [5], [6], [7]. This paper describes prefetching based on clients' I/O access patterns using two prediction algorithms: Apriori, to identify frequently accessed data, and logistic regression, to decide whether the accessed data is already on the server or must be piggybacked and forwarded to the relevant machine. Previous research applied linear regression and chaotic time series algorithms to predict future accesses from client machines and respond using the piggybacked data
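The Apriori-based side of the prediction can be sketched as follows. This is a minimal illustration, not the paper's implementation: the access log, block names, and support threshold are assumptions, and only the first Apriori pass (frequent items and frequent pairs) is shown. The idea is that blocks frequently requested together in past client sessions are good prefetch candidates when one of them is requested again.

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """One Apriori pass: count single blocks, then pairs built only from
    frequent blocks, keeping those that meet the support threshold."""
    item_counts = Counter()
    for t in transactions:
        item_counts.update(set(t))
    frequent_items = {i for i, c in item_counts.items() if c >= min_support}

    pair_counts = Counter()
    for t in transactions:
        items = sorted(set(t) & frequent_items)
        pair_counts.update(combinations(items, 2))
    frequent_pairs = {p for p, c in pair_counts.items() if c >= min_support}
    return frequent_items, frequent_pairs

def prefetch_candidates(current_block, frequent_pairs):
    """Blocks that frequently co-occur with the block just requested."""
    out = set()
    for a, b in frequent_pairs:
        if a == current_block:
            out.add(b)
        elif b == current_block:
            out.add(a)
    return out

# Hypothetical access log: each transaction is one client session's requests.
log = [
    ["b1", "b2", "b3"],
    ["b1", "b2"],
    ["b2", "b3"],
    ["b1", "b2", "b4"],
]
items, pairs = frequent_itemsets(log, min_support=3)
print(prefetch_candidates("b1", pairs))   # → {'b2'}: push b2 alongside b1
```

The logistic-regression step described in the paper would then act as a binary classifier on top of such candidates, deciding per request whether the data is already resident on the server or needs to be fetched and pushed; its features are not specified here, so it is omitted from the sketch.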

CLOUD COMPUTING
CLOUD DEPLOYMENT MODELS
CLOUD SERVICE MODELS
APRIORI ALGORITHM
LOGISTIC REGRESSION
PIGGYBACKING CLIENT INFORMATION
PROCESS OF DFS WITH PREDICTION ALGORITHM
IMPLEMENTATION
CONCLUSION
REFERENCES
