To enhance the viewing experience of standard dynamic range (SDR) video content on high dynamic range (HDR) displays, inverse tone mapping (ITM) is employed. Objective visual quality assessment (VQA) models are needed to evaluate ITM algorithms effectively. However, there is a lack of specialized VQA models for assessing the visual quality of inversely tone-mapped HDR videos (ITM-HDR-Videos). This paper addresses both an algorithmic and a dataset gap by introducing a novel SDR-referenced HDR (SD-R-HD) VQA model tailored for ITM-HDR-Videos, along with the first public dataset constructed specifically for this purpose. The innovations of the SD-R-HD VQA model include 1) utilizing the available SDR video as a reference signal, 2) extracting features that characterize typical ITM operations such as global mapping and local compensation, and 3) directly modeling inter-frame inconsistencies introduced by ITM operations. The newly created ITM-HDR-VQA dataset comprises 200 ITM-HDR-Videos annotated with mean opinion scores collected over 320 man-hours of psychovisual experiments. Experimental results demonstrate that the SD-R-HD VQA model significantly outperforms existing state-of-the-art VQA models.
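To make the three ingredients named above concrete, the sketch below illustrates, in a minimal and purely hypothetical form, how an SDR-referenced feature extractor might look: it estimates a global SDR-to-HDR luminance mapping curve, uses the residual from that curve as a rough proxy for local compensation, and compares temporal variation in the HDR output against the SDR reference as a simple inter-frame inconsistency cue. This is not the paper's actual SD-R-HD model; the function names, bin counts, and statistics are assumptions chosen for illustration only.

```python
# Illustrative sketch only -- NOT the SD-R-HD model from the paper.
import numpy as np

def global_mapping_feature(sdr_luma, hdr_luma, n_bins=64):
    """Estimate a global SDR->HDR luminance mapping by binning SDR luma and
    averaging the corresponding HDR luma, then report how far the HDR frame
    deviates from that global curve (a crude local-compensation proxy)."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(sdr_luma.ravel(), bins) - 1, 0, n_bins - 1)
    curve = np.full(n_bins, np.nan)
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            curve[b] = hdr_luma.ravel()[mask].mean()
    # Fill empty bins by interpolation so the curve is defined everywhere.
    valid = ~np.isnan(curve)
    curve = np.interp(np.arange(n_bins), np.flatnonzero(valid), curve[valid])
    predicted = curve[idx].reshape(hdr_luma.shape)   # globally mapped estimate
    local_deviation = np.abs(hdr_luma - predicted)   # residual beyond the global curve
    return curve, float(local_deviation.mean())

def interframe_inconsistency(hdr_t, hdr_prev, sdr_t, sdr_prev):
    """Ratio of temporal variation in the HDR output to that of the SDR
    reference; large values hint at ITM-induced flicker/inconsistency."""
    hdr_diff = np.abs(hdr_t - hdr_prev).mean()
    sdr_diff = np.abs(sdr_t - sdr_prev).mean() + 1e-8
    return float(hdr_diff / sdr_diff)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sdr0 = rng.random((64, 64))                                   # toy SDR luma in [0, 1]
    sdr1 = np.clip(sdr0 + 0.01 * rng.standard_normal((64, 64)), 0, 1)
    hdr0 = np.clip(sdr0 ** 2.2 + 0.02 * rng.standard_normal((64, 64)), 0, 1)
    hdr1 = np.clip(sdr1 ** 2.2 + 0.05 * rng.standard_normal((64, 64)), 0, 1)
    _, local_dev = global_mapping_feature(sdr0, hdr0)
    print("mean local deviation:", local_dev)
    print("inter-frame inconsistency:", interframe_inconsistency(hdr1, hdr0, sdr1, sdr0))
```

In a full model, features of this kind would be pooled over frames and regressed onto mean opinion scores; the design choice worth noting is that every feature is computed against the SDR source, which is always available in the ITM setting and so serves as a natural reference signal.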