Abstract

A tone mapping operator (TMO) is a module in the image signal processing pipeline that converts high dynamic range images to low dynamic range images for display. State-of-the-art TMOs typically rely on complex algorithms and are implemented on graphics processing units, making them difficult to run with low latency on edge devices, while TMOs implemented in hardware circuits often lack additional noise suppression because of latency and hardware resource constraints. To address these issues, we propose a low-latency noise-aware TMO for hardware implementation. Firstly, a locally weighted guided filter is proposed to decompose the luminance image into a base layer and a detail layer, with the weight function symmetric about the central pixel value of each window. Secondly, the means and standard deviations of the base layer and the detail layer are used to estimate noise visibility according to human visual characteristics. Finally, a gain for the detail layer is calculated to achieve adaptive noise suppression. Throughout this process, luminance is converted by the log2 function before filtering and symmetrically converted back to the linear domain by the exp2 function after compression. The algorithms within the proposed TMO were also optimized for hardware implementation to minimize latency and cache usage, achieving a low latency of 60.32 μs for 1080p video at 60 frames per second, while the objective smoothness metric in dark flat regions improves by more than 10% compared with similar methods.
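
The sketch below illustrates the overall flow described above: log2 conversion, base/detail decomposition with a guided filter, a noise-visibility-driven gain on the detail layer, base-layer compression, and exp2 conversion back to the linear domain. It is only a rough approximation under stated assumptions, not the paper's method: a plain self-guided filter (He et al.) stands in for the proposed locally weighted variant, the visibility proxy, the Durand-Dorsey-style base compression, and all parameter values (r, eps, display_contrast, k_vis) are illustrative choices, and the code is a floating-point NumPy model rather than a hardware design.

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window via an integral image."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode='edge')
    csum = np.pad(np.cumsum(np.cumsum(pad, axis=0), axis=1), ((1, 0), (1, 0)))
    h, w = img.shape
    return (csum[k:k + h, k:k + w] - csum[:h, k:k + w]
            - csum[k:k + h, :w] + csum[:h, :w]) / (k * k)

def local_std(img, r):
    """Local standard deviation from the same box statistics."""
    m = box_mean(img, r)
    return np.sqrt(np.maximum(box_mean(img * img, r) - m * m, 0.0))

def guided_filter(p, r, eps):
    """Plain self-guided filter; a stand-in for the paper's locally weighted
    variant whose weights are symmetric about the window's central pixel value."""
    mean_p = box_mean(p, r)
    var_p = np.maximum(box_mean(p * p, r) - mean_p ** 2, 0.0)
    a = var_p / (var_p + eps)
    b = (1.0 - a) * mean_p
    return box_mean(a, r) * p + box_mean(b, r)

def noise_aware_tmo(luminance, r=7, eps=0.04, display_contrast=100.0, k_vis=2.0):
    """Illustrative pipeline: log2 -> base/detail split -> noise-aware detail
    gain -> base compression -> exp2. All parameter values are assumptions."""
    log_l = np.log2(np.maximum(luminance, 1e-6))   # log2 domain before filtering

    base = guided_filter(log_l, r, eps)            # edge-preserving base layer
    detail = log_l - base                          # detail (and noise) layer

    # Crude noise-visibility proxy (assumption, not the paper's model): noise
    # is most visible where the region is dark and flat, i.e. where the local
    # brightness and base-layer structure are both low relative to the
    # fluctuation of the detail layer.
    threshold = 0.05 + 0.5 * local_std(base, r) + 0.02 * np.exp2(box_mean(base, r))
    vis = local_std(detail, r) / threshold

    gain = 1.0 / (1.0 + k_vis * vis)               # shrink detail where noise is visible

    # Durand-Dorsey-style compression of the base layer to an assumed 100:1 display range.
    scale = np.log2(display_contrast) / max(base.max() - base.min(), 1e-6)
    out_log = (base - base.max()) * scale + gain * detail

    return np.exp2(out_log)                        # back to the linear domain
```

In this sketch the detail gain falls toward zero exactly where the visibility proxy is large (dark, flat regions), which is where the abstract reports the smoothness improvement; elsewhere the gain stays near one so local contrast is preserved.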
