Abstract

In modern walkthrough applications, storing massive datasets has become easy and inexpensive thanks to the availability of large disk-based storage devices such as hard drives, DVDs, and Blu-ray discs. However, fetching data from these devices for processing and rendering in interactive environments remains a bottleneck, as data transfer speed has not kept pace with the sizes of both secondary storage and main memory. Out-of-core algorithms are commonly used to transfer data efficiently from secondary storage to main memory. Existing algorithms rely heavily on suitable data layout schemes to reduce data fetch time. Even with these commonly used techniques, however, the total time required to seek and transfer data can still easily exceed the available fetch-time budget. In this work, we propose an orthogonal approach that aggregates data and stores them redundantly in multiple locations on the storage device to ensure consistent data-fetching performance. We pose this as an integer linear programming problem that minimizes the amount of redundancy subject to the fetch-time budget constraint. We provide an implementation on datasets with hundreds of millions of triangles to demonstrate how this data clustering can be created in practice and how the optimal solution is found.
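To make the optimization concrete, the following is a minimal sketch, in Python with the PuLP modelling library, of how a redundant-placement problem of this kind could be posed and solved as an integer linear program. The block, disk-region, and view-cell names, the fetch-time cost model, and the budget value are illustrative assumptions for this sketch, not the paper's actual formulation.

```python
# Sketch: minimize the number of stored copies (redundancy) subject to a
# per-view-cell fetch-time budget, using binary placement and usage variables.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, LpStatus

blocks = ["b0", "b1", "b2"]   # aggregated data blocks (e.g., triangle clusters)
regions = ["r0", "r1"]        # candidate disk regions where copies may be stored
cells = ["v0", "v1"]          # view cells that must be served within the budget
budget = 8.0                  # assumed fetch-time budget per view cell

# Assumed cost model: estimated seek + transfer time if the copy of block b
# stored in region r is used while the viewer is in cell v (synthetic values).
fetch_time = {
    (b, r, v): 2.0 + 1.5 * ((bi + ri + vi) % 3)
    for bi, b in enumerate(blocks)
    for ri, r in enumerate(regions)
    for vi, v in enumerate(cells)
}

prob = LpProblem("redundant_placement", LpMinimize)

# place[b][r] = 1 if a copy of block b is stored in region r
place = LpVariable.dicts("place", (blocks, regions), cat=LpBinary)
# use[b][r][v] = 1 if the copy of b in r is the one fetched for view cell v
use = LpVariable.dicts("use", (blocks, regions, cells), cat=LpBinary)

# Objective: minimize total redundancy, i.e., the number of stored copies.
prob += lpSum(place[b][r] for b in blocks for r in regions)

for v in cells:
    for b in blocks:
        # Each block needed by a view cell is fetched from exactly one copy.
        prob += lpSum(use[b][r][v] for r in regions) == 1
        for r in regions:
            # A copy can only be used if it is actually placed.
            prob += use[b][r][v] <= place[b][r]
    # The total fetch time for the view cell must stay within the budget.
    prob += lpSum(fetch_time[(b, r, v)] * use[b][r][v]
                  for b in blocks for r in regions) <= budget

prob.solve()
print(LpStatus[prob.status])
for b in blocks:
    for r in regions:
        if place[b][r].value() == 1:
            print(f"store a copy of {b} in {r}")
```

With the synthetic costs above, the tight budget forces the solver to duplicate some blocks across regions, which is the behaviour the redundancy/fetch-time trade-off is meant to capture.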
