Abstract
These days, along with read()/write(), mmap() is used for file I/O in data-intensive applications as an alternative I/O method on emerging low-latency devices such as flash-based SSDs. Although memory-mapped file I/O has many advantages, it does not yield much benefit when combined with large-scale data and fast storage devices. When the working set of an application accessing a file with mmap() is larger than the available physical memory, I/O performance degrades severely compared to the same application using read()/write(). This is mainly because the virtual memory subsystem does not reflect the performance characteristics of the underlying storage device. In this paper, we examine the Linux virtual memory subsystem and the mmap() I/O path to determine how low-latency storage devices affect the existing virtual memory subsystem. We also propose optimization policies that reduce the overheads of mmap() I/O and implement a prototype in a recent Linux kernel. Our solution guarantees that 1) memory-mapped I/O is several times faster than read/write I/O when the cache-hit ratio is high, and 2) it performs at least as well as read/write I/O even when cache misses occur frequently and the overhead of mapping and unmapping pages becomes significant, neither of which is achievable with the existing virtual memory subsystem.
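For readers less familiar with the two interfaces being compared, the following is a minimal C sketch, not taken from the paper, that accesses the same file once with pread() and once through an mmap() mapping; the file name and buffer sizes are illustrative assumptions.

```c
/*
 * Minimal sketch (assumptions: file "data.bin" exists and is non-empty)
 * contrasting the two I/O paths discussed in the abstract:
 *   1) explicit read()/pread() into a user buffer, and
 *   2) memory-mapped access via mmap(), where page faults pull file
 *      pages into the page cache and map them on demand.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "data.bin";          /* hypothetical input file */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Path 1: explicit read-based I/O into a user-space buffer. */
    char buf[4096];
    ssize_t n = pread(fd, buf, sizeof(buf), 0);
    if (n < 0) { perror("pread"); return 1; }

    /* Path 2: memory-mapped I/O; the kernel services accesses to the
     * mapping through page faults instead of explicit system calls. */
    char *map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    /* Touching the mapping triggers the fault-driven path the paper
     * analyzes; here we simply checksum the first page as an example. */
    unsigned long sum = 0;
    for (off_t i = 0; i < st.st_size && i < 4096; i++)
        sum += (unsigned char)map[i];
    printf("bytes read: %zd, first-page checksum: %lu\n", n, sum);

    munmap(map, st.st_size);
    close(fd);
    return 0;
}
```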