Considerable progress has been made toward developing standard dynamic range (SDR) blind video quality assessment (BVQA) models that do not require any baseline reference for quality prediction. However, no such method exists for high dynamic range (HDR) content. Unlike SDR video, HDR video provides a high-fidelity representation of the real-world scene by preserving a wide luminance range and color gamut. SDR BVQA models are therefore not suitable for HDR BVQA. To address this gap, a first-of-its-kind BVQA model for HDR content is presented in this work. The proposed HDR blind video quality model (HDR-BVQM) is inspired by the spatio-temporal natural scene statistics model previously employed in SDR blind quality assessment metrics. To build the proposed model, we first develop a comprehensive subjective HDR video quality dataset comprising 228 distorted videos, generated from 19 pristine HDR videos through three distortion processes (H.264 compression, HEVC compression, and packet drop). The dataset is then used to extract HDR-relevant features, which vary across distortion types, to train and test the proposed HDR-BVQM. The features are based on pointwise, pairwise log-derivative, and motion-coherence-based statistics. Finally, detailed validation and performance comparisons are carried out against full-reference HDR and no-reference SDR quality assessment methods. The results reveal that the quality predictions of HDR-BVQM correlate well with human judgments of quality.