Abstract

A well-designed cache system has a positive impact on a 3D real-time rendering engine, and the effect becomes more obvious as the amount of visualization data grows. The cache is the foundation that allows the engine to browse smoothly through data that resides out of core memory or comes from the Internet. In this article, a new kind of cache based on multiple threads and large files is introduced. The memory cache consists of three parts: the rendering cache, the pre-rendering cache, and the elimination cache. The rendering cache stores the data currently being rendered by the engine; the pre-rendering cache stores the data dispatched according to the position of the viewpoint in the horizontal and vertical directions; the elimination cache stores the data evicted from the other two caches, which is later written to the disk cache. The disk cache uses multiple large files. When a disk cache file reaches its size limit (128 MB in our experiment), no item is eliminated from the file; instead, a new large cache file is created. If the number of large files exceeds a pre-set maximum, the earliest file is deleted from the disk. In this way only one file is open for both writing and reading while the rest are read-only, so the disk cache can be used in a highly asynchronous way. The size of each large file is limited so that it can be mapped into core memory to save loading time. Multiple threads update the cache data: they load data into the rendering cache as soon as possible for rendering, into the pre-rendering cache for the next few frames, and into the elimination cache when the data is not needed for the moment. In our experiment, two threads are designed.
The first thread organizes the memory cache according to the viewpoint and maintains two lists: the adding list, which indexes the data that should be loaded into the pre-rendering cache immediately, and the deleting list, which indexes the data that is no longer visible in the scene and should be moved to the elimination cache. The second thread moves data between the memory and disk caches according to these lists, creates download requests when data indexed in the adding list is found in neither the memory cache nor the disk cache, and moves elimination-cache data to the disk cache when the adding and deleting lists are empty. In our experiment the cache designed as described above proved reliable and efficient, and both data loading time and file I/O time decreased sharply, especially as the rendering data grew larger.
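The large-file rotation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name `DiskCache`, the method `put`, and the file-naming scheme are all assumptions introduced here.

```python
import os

# Hypothetical sketch of the rotating large-file disk cache described above.
# Only the newest file is writable; older files are read-only until deleted.
class DiskCache:
    def __init__(self, directory, file_limit=128 * 1024 * 1024, max_files=4):
        self.directory = directory
        self.file_limit = file_limit      # e.g. 128 MB in the experiment
        self.max_files = max_files        # pre-set maximum number of large files
        self.serial = 0
        self.files = []                   # paths, oldest first
        self._open_new_file()

    def _open_new_file(self):
        # Create the next large cache file and drop the earliest one if the
        # pre-set maximum number of files is exceeded.
        path = os.path.join(self.directory, f"cache_{self.serial:04d}.bin")
        self.serial += 1
        self.files.append(path)
        self.current = open(path, "wb")
        if len(self.files) > self.max_files:
            oldest = self.files.pop(0)
            os.remove(oldest)             # delete the earliest file from disk

    def put(self, data: bytes):
        # No item is ever evicted from a full file; a new file is created instead.
        if self.current.tell() + len(data) > self.file_limit:
            self.current.close()
            self._open_new_file()
        self.current.write(data)
```

Because full files are never modified, readers can memory-map them without coordinating with the writer, which is what makes the highly asynchronous access pattern possible.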

Highlights

  • A well-designed cache system has a positive impact on a 3D real-time rendering engine: it makes scene rendering smooth, especially when visualizing massive geographic data

  • A cache system can be complicated when combined with a real-time rendering engine: both the memory cache and the disk cache must be taken into account, and their replacement policies may differ

  • Two measures keep elimination-cache locking cheap. First, the number of data items written to the disk cache each time is bounded: a constant N defines the maximum, at most N items are written per pass, and the lock time is controlled by the choice of N. Second, instead of locking the whole elimination cache, only the N items being written are locked while the rest remain available for retrieval or status resetting; the status of the locked items is set to “UnLoad”
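The batched flush in the last highlight can be sketched as below. This is an illustrative assumption, not the paper's code: the class `EliminationCache`, the method `flush_batch`, and the status strings are names introduced here.

```python
import threading

# Hypothetical sketch of the batched elimination-cache flush: at most N items
# are locked per pass, and the disk write happens outside the lock.
class EliminationCache:
    def __init__(self, max_batch=8):
        self.max_batch = max_batch        # the constant N from the text
        self.lock = threading.Lock()
        self.items = {}                   # key -> {"status": ..., "data": ...}

    def add(self, key, data):
        with self.lock:
            self.items[key] = {"status": "Loaded", "data": data}

    def flush_batch(self, write_to_disk):
        # Lock only up to N items; the rest stay available for retrieval
        # or status resetting while the batch is flushed.
        with self.lock:
            batch = list(self.items.items())[: self.max_batch]
            for key, item in batch:
                item["status"] = "UnLoad"  # mark the locked items as "UnLoad"
        for key, item in batch:            # slow I/O runs without the lock held
            write_to_disk(key, item["data"])
        with self.lock:
            for key, _ in batch:
                self.items.pop(key, None)
        return len(batch)
```

Tuning N trades throughput against lock time: a larger batch amortizes I/O, while a smaller one keeps the rendering threads from stalling on the cache lock.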


INTRODUCTION

A well-designed cache system has a positive impact on a 3D real-time rendering engine: it makes scene rendering smooth, especially when visualizing massive geographic data. A cache system can be complicated when combined with a real-time rendering engine; both the memory cache and the disk cache must be taken into account, and their replacement policies may differ. LRU[2] is one of the best-known replacement policies: it evicts the block that has gone unvisited for the longest time, so it exploits temporal locality[3] rather than spatial locality. Megiddo et al. propose ARC[5], a strategy that uses two LRU queues to manage the page cache: one queue manages pages that have been visited only once, the other manages pages visited more than once, and the strategy adjusts the sizes of the two queues according to temporal or spatial locality. In a real-time rendering engine, the cache replacement policy should be considered together with scene information. Our cache system consists of three parts: the memory cache, the disk cache, and a multi-threading mechanism.
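The LRU policy cited as [2] can be sketched in a few lines with an ordered dictionary. This is a minimal illustration of the policy itself, not code from the rendering engine; the class name `LRUCache` and its API are assumptions.

```python
from collections import OrderedDict

# Minimal LRU replacement sketch for a fixed-capacity key/value cache.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()      # least recently used first

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)     # a hit refreshes recency
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the longest-unvisited block
```

Because recency is the only signal, plain LRU captures temporal locality but ignores spatial locality, which is the gap ARC's two-queue design addresses.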

Pre-rendering Cache
Elimination Cache
Memory Cache Elimination Methods
Index Files And Data Files
Disk Cache Eliminate Method
The Dispatch Thread
The Data Thread
The Download Thread
3. EXPERIMENTS AND RESULTS
4. CONCLUSIONS AND FUTURE WORK