Offline Reinforcement Learning (RL) faces challenges such as distributional shift and unreliable value estimation, especially for out-of-distribution (OOD) actions. To address these issues, existing uncertainty-based methods penalize the value function via uncertainty quantification, but they require large ensembles of networks, leading to high computational cost and suboptimal results. In this paper, we introduce a novel strategy that employs diverse randomized value functions to estimate the posterior distribution of Q-values. This approach provides robust uncertainty quantification and yields lower confidence bounds (LCB) of the Q-values. By applying moderate value penalties to OOD actions, our method achieves provable pessimism. We further promote diversity among the randomized value functions and improve efficiency through a diversity regularization method, which reduces the number of networks required. Together, these components yield reliable value estimation and efficient policy learning from offline data. Theoretical analysis shows that our method recovers the provably efficient LCB penalty under linear MDP assumptions. Extensive empirical results demonstrate that our method significantly outperforms baselines in both performance and parameter efficiency.
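
As an illustration of the LCB penalty referred to above, a standard ensemble-based formulation (a sketch in notation introduced here, not necessarily the exact form used in the paper) penalizes the mean Q-estimate by its predictive standard deviation:
\[
\hat{Q}_{\mathrm{LCB}}(s, a) \;=\; \hat{\mu}_Q(s, a) \;-\; \beta \,\hat{\sigma}_Q(s, a),
\]
where $\hat{\mu}_Q(s, a)$ and $\hat{\sigma}_Q(s, a)$ denote the mean and standard deviation of the Q-values produced by the randomized value functions at $(s, a)$, and $\beta > 0$ controls the degree of pessimism; the symbols $\hat{\mu}_Q$, $\hat{\sigma}_Q$, and $\beta$ are illustrative and not taken from the abstract.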