Abstract

As display devices develop rapidly, high dynamic range (HDR) video is becoming increasingly applicable. However, most research focuses on generating a single HDR image, and HDR video has received less attention. HDR video can be captured directly, but this requires a specialized camera that is expensive and not widely available. A more practical approach, explored in much existing work, is to use a single camera that alternates exposures from frame to frame. This approach is feasible for most people, but not all video cameras support it, and existing methods built on it require alternating-exposure video sequences as input. To remove this equipment restriction, we propose a patch-based method that uses a single low dynamic range (LDR) video to reconstruct moving objects in the scene and generate HDR video. We compute the motion flow between two adjacent frames and assign an appropriate search window size to each flow vector. Using the motion flow and the search window map, we can reconstruct regions with complex motion without introducing artifacts and ensure temporal coherence between adjacent frames. Compared with the method of Kalantari et al., the state of the art for producing HDR video, our method reduces total execution time by 14 percent. Several evaluations also show that our results are more similar to the reference video than theirs.

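The abstract does not specify how the motion flow is estimated or how flow magnitude maps to a search window size. The following is a minimal sketch of that idea under illustrative assumptions: dense optical flow is computed with OpenCV's Farneback estimator (the paper's flow method may differ), and the flow magnitude is linearly rescaled into a per-pixel window size between assumed bounds `min_win` and `max_win`.

```python
import cv2
import numpy as np

def search_window_map(prev_frame, next_frame, min_win=5, max_win=21):
    """Estimate per-pixel motion between two adjacent frames and map the
    flow magnitude to a patch-search window size (larger motion -> larger
    search window). The mapping constants and flow estimator are
    illustrative assumptions, not the paper's exact formulation."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)

    # Dense optical flow (Farneback): one (dx, dy) vector per pixel.
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, next_gray, None,
        0.5,   # pyramid scale
        3,     # pyramid levels
        15,    # averaging window size
        3,     # iterations per level
        5,     # pixel neighborhood size for polynomial expansion
        1.2,   # Gaussian sigma for polynomial expansion
        0)     # flags

    magnitude = np.linalg.norm(flow, axis=2)

    # Normalize magnitude to [0, 1] and rescale into [min_win, max_win].
    norm = magnitude / (magnitude.max() + 1e-8)
    win = min_win + norm * (max_win - min_win)

    # Force odd window sizes so every window has a center pixel.
    win = (np.round(win).astype(np.int32) // 2) * 2 + 1
    return flow, win
```

A patch-based reconstruction step could then, for each pixel, restrict its correspondence search in the adjacent frame to the window size given by this map, keeping small windows in static regions and larger ones where motion is complex.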