Abstract
Erasure codes have been widely used to enhance data resiliency at low storage overhead. However, in geo-distributed cloud storage systems, erasure codes may incur high service latency because they require end users to access remote storage nodes to retrieve data. An elegant way to achieve low latency is to deploy caching services at edge servers close to end users. In this paper, we propose adaptive and scalable caching schemes that achieve low latency in cloud-edge storage systems. Based on data popularity and network latencies measured in real time, an adaptive content replacement scheme updates caching decisions upon the arrival of each request. Theoretical analysis shows that the data access latency reduced by the replacement scheme is at least 50% of the maximum reducible latency. Thanks to its low computational complexity, our design introduces almost no extra overhead even when handling intensive data flows. To further improve performance without sacrificing efficiency, an adaptive content adjustment scheme is presented to replace the subset of cached contents responsible for the aforementioned performance loss. Driven by real-world data traces, extensive experiments on Amazon Simple Storage Service demonstrate the effectiveness and efficiency of our design.
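The replacement idea described above — caching the contents whose measured popularity and latency savings yield the largest expected latency reduction — can be illustrated with a minimal sketch. This is a hypothetical, simplified model (the `Item` fields, the `benefit` score, and the greedy eviction rule are illustrative assumptions, not the paper's exact algorithm):

```python
from dataclasses import dataclass


@dataclass
class Item:
    """A content object with measured statistics (hypothetical units)."""
    key: str
    popularity: float       # measured request rate
    latency_saving: float   # remote access latency minus edge latency

    @property
    def benefit(self) -> float:
        # Expected latency reduction per unit time if this item is cached.
        return self.popularity * self.latency_saving


class AdaptiveCache:
    """Greedy benefit-driven replacement: on each request, admit the item
    if its benefit exceeds that of the least-beneficial resident item."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store: dict[str, Item] = {}

    def on_request(self, item: Item) -> bool:
        """Return True if the item is (now) served from the edge cache."""
        if item.key in self.store:
            return True                      # cache hit
        if len(self.store) < self.capacity:
            self.store[item.key] = item      # free slot: admit directly
            return True
        victim = min(self.store.values(), key=lambda it: it.benefit)
        if item.benefit > victim.benefit:
            del self.store[victim.key]       # evict lowest-benefit item
            self.store[item.key] = item
            return True
        return False                         # not worth caching; fetch remotely
```

Replacement here is O(cache size) per request; a heap keyed on benefit would make it logarithmic, consistent with the low-complexity goal stated in the abstract.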