Abstract

Head-mounted displays (HMDs) have made virtual reality (VR) accessible to a widespread consumer market, introducing a revolution in many applications. Among the limitations of current HMD technology, the need to generate high-resolution images and stream them at adequate frame rates is one of the most critical. Super-resolution (SR) convolutional neural networks (CNNs) can be exploited to alleviate the timing and bandwidth bottlenecks of video streaming by reconstructing high-resolution images locally (i.e., near the display). However, such techniques involve a significant amount of computation, which often makes their deployment within area- and power-constrained wearable devices unfeasible. This work originated from the observation that the human eye captures details with high acuity only within a certain region, called the fovea. We therefore designed a custom hardware architecture that reconstructs high-resolution images by treating the foveal region (FR) and the peripheral region (PR) with accurate and inaccurate operations, respectively. Hardware experiments demonstrate the effectiveness of our proposal: a customized fast SR CNN (FSRCNN) accelerator, realized as described here and implemented in a 28-nm process technology, processes up to 214 ultrahigh-definition frames/s while consuming just 0.51 pJ/pixel without compromising perceptual visual quality, thus achieving a 55% energy reduction and a 14× higher throughput rate with respect to state-of-the-art competitors.
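To illustrate the general idea of foveated super-resolution described above (not the authors' hardware design), the following minimal Python sketch upscales a frame by applying an accurate reconstruction path inside a circular foveal region around the gaze point and a cheaper, lower-precision path in the periphery. All names here (sr_accurate, upscale_cheap, fovea_radius) are hypothetical placeholders, and the accurate path is stood in by simple pixel repetition so the sketch stays runnable.

```python
# Illustrative sketch of foveated super-resolution: accurate compute in the
# foveal region, approximate compute in the peripheral region.
import numpy as np

def sr_accurate(frame, scale):
    # Placeholder for a full-precision FSRCNN-style reconstruction;
    # nearest-neighbour repetition is used here only for runnability.
    return np.repeat(np.repeat(frame, scale, axis=0), scale, axis=1)

def upscale_cheap(frame, scale):
    # Inaccurate/approximate path for the peripheral region, e.g. reduced
    # precision or simple interpolation; modeled here as a float16 copy.
    up = np.repeat(np.repeat(frame, scale, axis=0), scale, axis=1)
    return up.astype(np.float16)

def foveated_sr(lr_frame, gaze_xy, fovea_radius, scale=2):
    """Upscale a low-resolution frame, spending accurate compute only
    inside the foveal region centered on the gaze point."""
    h, w = lr_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    fovea_mask = (xs - gaze_xy[0]) ** 2 + (ys - gaze_xy[1]) ** 2 <= fovea_radius ** 2

    # For brevity both paths process the whole frame; a real design would
    # route only foveal tiles through the accurate datapath.
    accurate = sr_accurate(lr_frame, scale)
    cheap = upscale_cheap(lr_frame, scale).astype(np.float32)

    hr_mask = np.repeat(np.repeat(fovea_mask, scale, axis=0), scale, axis=1)
    hr = np.where(hr_mask, accurate, cheap)
    return hr

# Example: a 270x480 low-resolution frame upscaled 2x, fovea near the centre.
lr = np.random.rand(270, 480).astype(np.float32)
hr = foveated_sr(lr, gaze_xy=(240, 135), fovea_radius=60, scale=2)
print(hr.shape)  # (540, 960)
```

The design choice mirrored here is that perceptual quality is preserved because the inaccurate path is confined to the peripheral region, where the eye's acuity is low, while the energy and throughput savings come from avoiding full-accuracy computation outside the fovea.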
