Handheld devices have become an inclusive alternative to head-mounted displays in virtual reality (VR) environments, enhancing accessibility and enabling cross-device collaboration. Object manipulation techniques for 3D space on handheld devices, such as those in handheld augmented reality (AR), have typically been evaluated at table-top scale, and we currently lack an understanding of how these techniques perform in larger-scale environments. We conducted two studies, each with 30 participants, to investigate how different techniques affect usability and performance for room-scale handheld VR object translations. We compared three translation techniques similar to commonly studied handheld AR techniques: 3DSlide, VirtualGrasp, and Joystick. We also examined the effects of target size, target distance, and user mobility conditions (stationary vs. moving). Results indicated that the Joystick technique, which allowed translation relative to the user's perspective, was the fastest and most preferred, with no difference in precision. Our findings provide insights for designing room-scale handheld VR systems, with potential implications for mixed reality systems involving handheld devices.