The new generation of organic light-emitting diode (OLED) displays is designed to support high dynamic range (HDR), going beyond the standard dynamic range (SDR) of traditional display devices. However, a large quantity of videos is still in SDR format. Moreover, most pre-existing videos are compressed to varying degrees to minimize storage and bandwidth demands. To enable a cinematic viewing experience on new-generation devices, converting compressed SDR videos to the HDR format (i.e., compressed-SDR-to-HDR conversion) is in great demand. The key challenge of this new problem is solving the intrinsic many-to-many mapping issue. However, existing SDR-to-HDR methods cannot formulate the HDR video generation process explicitly without constraining the solution space or simply imitating the inverse camera imaging pipeline in stages. Besides, they ignore the fact that videos are often compressed. To address these challenges, in this work we propose a novel imaging-knowledge-inspired parallel network (termed KPNet) for compressed-SDR to HDR (CSDR-to-HDR) video reconstruction. KPNet has two key designs: the Knowledge-Inspired Block (KIB) and the Information Fusion Module (IFM). Concretely, guided by a mathematical formulation built on priors about compressed videos, our CSDR-to-HDR video reconstruction is conceptually divided into four synergistic parts: reducing compression artifacts, recovering missing details, adjusting imaging parameters, and reducing image noise. We approximate this process with a compact KIB. To capture richer details, we learn HDR representations with a set of KIBs connected in parallel and fused by the IFM. Extensive evaluations show that our KPNet achieves superior performance over the state-of-the-art methods.
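
The described architecture, several Knowledge-Inspired Blocks running in parallel with their outputs merged by a fusion module, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the four stage functions inside `kib`, the `strength` parameter, and the averaging fusion in `ifm` are all hypothetical stand-ins for the learned components the abstract names.

```python
import numpy as np

def kib(frame: np.ndarray, strength: float) -> np.ndarray:
    """One Knowledge-Inspired Block: toy stand-ins for the four synergistic stages."""
    x = frame - strength * 0.01                    # 1. reduce compression artifacts (toy)
    x = x + strength * 0.02 * x ** 2               # 2. recover missing details (toy)
    x = np.clip(x * (1.0 + strength), 0.0, None)   # 3. adjust imaging parameters (toy gain)
    x = 0.9 * x + 0.1 * x.mean()                   # 4. reduce image noise (toy smoothing)
    return x

def ifm(branches: list) -> np.ndarray:
    """Information Fusion Module: merge parallel KIB outputs (simple average here)."""
    return np.mean(np.stack(branches, axis=0), axis=0)

def kpnet_forward(sdr_frame: np.ndarray, num_branches: int = 3) -> np.ndarray:
    """Run several KIBs in parallel and fuse them into one HDR estimate."""
    branches = [kib(sdr_frame, strength=0.1 * (i + 1)) for i in range(num_branches)]
    return ifm(branches)

sdr = np.random.rand(8, 8)   # stand-in for one compressed SDR frame (grayscale)
hdr = kpnet_forward(sdr)
print(hdr.shape)             # output keeps the input's spatial shape
```

In the actual network each stage would be a learned module and the fusion would weight branches adaptively; the sketch only illustrates the parallel-blocks-plus-fusion data flow.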