We introduce a novel primary visibility algorithm based on ray casting that provides real-time performance and a feature set well suited to rendering for virtual reality. The flexibility of our approach allows features such as lens distortion, sub-pixel rendering, very wide fields of view, foveation, and stochastic depth-of-field blur to be implemented and composed naturally while maintaining real-time performance. In contrast, the rasterization pipelines currently implemented in hardware require multiple passes and/or post-processing to approximate these features, and current highly optimized ray tracers, which focus primarily on Monte Carlo path tracing, do not achieve real-time performance on current VR displays (2×1080×1200 @ 90 Hz). Our approach uses a bounding volume hierarchy (BVH) acceleration structure and a two-level frustum culling/entry-point search algorithm to optimize the traversal of coherent primary visibility rays. We introduce an adaptation of MSAA for ray casting that significantly lowers memory bandwidth, and we combine an AVX-optimized CPU traversal, which performs the majority of the culling, with an optimized CUDA GPU implementation for triangle intersection, multi-sample antialiasing, and shading. The implementation supports animation and physically based shading and lighting. We believe this approach is a concrete, viable alternative to rasterization that is significantly better suited to rendering for virtual and augmented reality. To engage the community, we have released our implementation under an open-source license.