Abstract

Motivated by the significant storage footprint of OS/Apps on mobile devices, this paper studies the realization of transparent OS/Apps compression. Despite its obvious advantage, this feature is not widely available in commercial mobile devices, largely due to the justifiable concern over its read latency penalty. In conventional implementations of transparent compression, read latency overhead comes from two sources: read amplification and decompression computational latency. This paper presents simple yet effective design solutions that eliminate read amplification at the filesystem level and eliminate the computational latency overhead at the computer architecture level. To demonstrate practical feasibility, we first implemented a prototype filesystem to empirically verify transparent compression with zero read amplification. We further demonstrated that the OS/Apps footprint can be reduced by up to 39 percent on a Nexus 7 tablet running Android 5.0. Through application-specific integrated circuit (ASIC) synthesis, we show that the proposed architecture-level design solution can eliminate the decompression latency overhead at very small silicon cost.
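To make the read amplification problem concrete, the following is a minimal sketch (not from the paper; all sizes and names are illustrative assumptions) of why conventional block-based transparent compression inflates small reads: data is compressed in fixed-size clusters, so serving a small read forces decompression of the entire cluster containing it.

```python
# Illustrative sketch of read amplification in block-based transparent
# compression. Sizes are hypothetical, chosen only for demonstration.
import os
import zlib

CLUSTER_SIZE = 128 * 1024   # hypothetical compression cluster: 128 KiB
REQUEST_SIZE = 4 * 1024     # a typical 4 KiB filesystem read

# Build one cluster of compressible data (a repeated 1 KiB pattern).
cluster = os.urandom(1024) * (CLUSTER_SIZE // 1024)
compressed = zlib.compress(cluster)

# To serve a 4 KiB read at some offset, the whole cluster must be
# decompressed first; only then can the requested slice be extracted.
offset = 40 * 1024
decompressed = zlib.decompress(compressed)
data = decompressed[offset:offset + REQUEST_SIZE]

# Read amplification: bytes decompressed per byte actually requested.
amplification = CLUSTER_SIZE / REQUEST_SIZE
print(f"compressed {CLUSTER_SIZE} B down to {len(compressed)} B")
print(f"read amplification for a {REQUEST_SIZE} B request: {amplification:.0f}x")
```

Under these assumed sizes, every 4 KiB read pays for a 128 KiB decompression, a 32x amplification; the paper's filesystem-level solution targets exactly this overhead.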
