This paper addresses the challenges of rendering massive indoor point clouds in Virtual Reality. In this kind of visualization the point of view is never static, imposing the need for a one-shot (i.e., non-iterative) rendering strategy, in contrast to progressive-refinement approaches, which assume that the camera position does not change between most consecutive frames. Our approach benefits from the static nature of indoor environments to pre-compute a visibility map that enables us to boost real-time rendering performance. The key idea behind our visibility map is to exploit the cluttered topology of buildings to effectively cull the regions of space that are occluded by structural elements such as walls. This improves not only performance but also the visual quality of the final render, allowing us to display the visible space in full detail and preventing the user from seeing adjacent spaces through the walls. Additionally, we introduce a novel hierarchical data structure that enables us to display the point cloud at a continuous level of detail with minimal impact on performance. Experimental results show that our approach outperforms state-of-the-art techniques in complex indoor environments and achieves comparable results in outdoor ones, demonstrating the generality of our method.
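To make the culling idea concrete, the following is a minimal sketch of how a precomputed cell-to-cell visibility map might be queried at render time; it is not the paper's actual implementation, and all names here (VisibilityMap, cellContaining, drawCellPoints, the cell decomposition itself) are hypothetical assumptions for illustration only.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

// Hypothetical precomputed visibility map: for each pair of indoor cells,
// one flag saying whether any part of the target cell can be seen from
// the viewer cell (walls block everything else).
struct VisibilityMap {
    int cellCount = 0;
    std::vector<uint8_t> bits; // cellCount*cellCount entries, row-major; 1 = visible

    bool visible(int fromCell, int toCell) const {
        return bits[static_cast<size_t>(fromCell) * cellCount + toCell] != 0;
    }
};

// Placeholder: a real system would locate the head-tracked camera in the
// cell decomposition (e.g., the room the position falls into).
int cellContaining(const Vec3&) { return 0; }

// Placeholder for issuing the draw call(s) for one cell's points.
void drawCellPoints(int cell) { std::printf("draw cell %d\n", cell); }

// One-shot, per-frame culling: a single map lookup per cell, with no
// iterative refinement, so it remains valid even though the VR camera
// moves every frame.
void renderFrame(const VisibilityMap& map, const Vec3& cameraPos) {
    const int viewerCell = cellContaining(cameraPos);
    for (int c = 0; c < map.cellCount; ++c)
        if (map.visible(viewerCell, c))
            drawCellPoints(c); // cells occluded by walls are skipped entirely
}

int main() {
    // Toy 3-cell scene: from cell 0 one can see cells 0 and 1,
    // while cell 2 lies behind a wall and is culled.
    VisibilityMap map;
    map.cellCount = 3;
    map.bits = {1, 1, 0,
                0, 1, 1,
                0, 1, 1};
    renderFrame(map, {0.f, 1.7f, 0.f});
}
```

Under these assumptions, the per-frame cost of occlusion culling reduces to a table lookup per cell, which is what allows the visible space to be drawn in full detail without rooms behind walls bleeding through.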