Abstract
The breadth and depth of information being generated and stored continue to grow rapidly, causing an information explosion. Observational devices and remote sensing equipment are no exception, giving researchers new avenues for detecting and predicting phenomena at a global scale. To cope with increasing storage loads, hybrid clouds offer an elastic solution that also satisfies processing and budgetary needs. In this article, the authors describe their algorithms and system design for dealing with voluminous datasets in a hybrid cloud setting. Their distributed storage framework autonomously tunes in-memory data structures and query parameters to ensure efficient retrievals and minimize resource consumption. To circumvent processing hotspots, they predict changes in incoming traffic and federate their query resolution structures to the public cloud for processing. They demonstrate their framework's efficacy on a real-world, petabyte-scale dataset consisting of more than 20 billion files.
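To make the federation idea in the abstract concrete, the sketch below shows one way a controller might forecast incoming query traffic and offload query-resolution structures to public cloud nodes when a hotspot is predicted. This is a minimal, hypothetical illustration only: the class names, the exponentially weighted moving-average forecaster, and the capacity thresholds are assumptions for illustration and are not taken from the article.

# Hypothetical sketch: forecast query traffic with an exponentially weighted
# moving average (EWMA) and decide when to federate in-memory query-resolution
# structures to the public cloud. All names and thresholds are illustrative
# assumptions, not the authors' actual design.

from dataclasses import dataclass, field


@dataclass
class TrafficForecaster:
    """Smooths observed queries-per-second and projects the next interval."""
    alpha: float = 0.3          # EWMA smoothing factor
    estimate: float = 0.0       # current smoothed queries-per-second

    def observe(self, queries_per_second: float) -> float:
        self.estimate = (self.alpha * queries_per_second
                         + (1 - self.alpha) * self.estimate)
        return self.estimate


@dataclass
class FederationController:
    """Offloads query structures when a processing hotspot is predicted."""
    private_capacity_qps: float                       # load the private cloud can absorb
    forecaster: TrafficForecaster = field(default_factory=TrafficForecaster)
    federated: bool = False

    def on_traffic_sample(self, queries_per_second: float) -> None:
        predicted = self.forecaster.observe(queries_per_second)
        if predicted > self.private_capacity_qps and not self.federated:
            self.federate_to_public_cloud()
        elif predicted <= 0.8 * self.private_capacity_qps and self.federated:
            self.recall_from_public_cloud()

    def federate_to_public_cloud(self) -> None:
        # Placeholder: serialize and ship query-resolution structures
        # (e.g., in-memory indexes) to rented public cloud instances.
        self.federated = True
        print("hotspot predicted: federating query structures to the public cloud")

    def recall_from_public_cloud(self) -> None:
        # Placeholder: release public cloud resources once load subsides.
        self.federated = False
        print("load subsided: resolving queries on the private cloud again")


if __name__ == "__main__":
    controller = FederationController(private_capacity_qps=500.0)
    for qps in [120, 300, 650, 900, 880, 400, 150]:
        controller.on_traffic_sample(qps)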