Ray pointing, the status-quo pointing technique for virtual reality, becomes challenging in scenes with many occluded or overlapping objects. In this work, we investigate how eye-tracking input can assist gestural ray pointing in disambiguating targets in densely populated scenes. We explore the concept of Gaze + Plane, where the intersection between the user's gaze and a hand-controlled plane facilitates 3D position specification. In particular, two techniques are investigated: Gaze&Wall, which employs an indirect plane positioned in depth using a hand ray, and Gaze&Racket, featuring a hand-held and rotatable plane. In a first experiment, we reveal the speed-error trade-offs between Gaze + Plane techniques. In a second study, we compare the best-performing techniques to newly designed gesture-only techniques, finding that Gaze&Wall is less error-prone and significantly faster. Our research is relevant to spatial interaction, specifically to advanced techniques for complex 3D tasks.
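To illustrate the geometric core of the Gaze + Plane concept, the following minimal sketch computes the 3D point where a gaze ray meets a hand-controlled plane via a standard ray-plane intersection. It is not taken from the paper; all names (`gaze_plane_intersection`, `gaze_origin`, `plane_point`, `plane_normal`) are hypothetical, and the actual techniques additionally involve how the plane is positioned and oriented by the hand.

```python
# Illustrative sketch (assumption, not the paper's implementation):
# intersecting the user's gaze ray with a hand-controlled plane to obtain
# a 3D position, as in the Gaze + Plane concept.
import numpy as np

def gaze_plane_intersection(gaze_origin, gaze_dir, plane_point, plane_normal, eps=1e-6):
    """Return the 3D point where the gaze ray hits the plane,
    or None if the ray is parallel to the plane or points away from it."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    plane_normal = plane_normal / np.linalg.norm(plane_normal)
    denom = np.dot(gaze_dir, plane_normal)
    if abs(denom) < eps:          # gaze ray (nearly) parallel to the plane
        return None
    t = np.dot(plane_point - gaze_origin, plane_normal) / denom
    if t < 0:                     # intersection lies behind the eye
        return None
    return gaze_origin + t * gaze_dir

# Example: gaze from approximate eye height looking down -z,
# plane held 2 m in front of the user and facing them.
point = gaze_plane_intersection(
    np.array([0.0, 1.6, 0.0]),    # gaze origin (eye position)
    np.array([0.0, 0.0, -1.0]),   # gaze direction
    np.array([0.0, 1.6, -2.0]),   # a point on the hand-controlled plane
    np.array([0.0, 0.0, 1.0]),    # plane normal
)
print(point)                      # -> [ 0.   1.6 -2. ]
```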