The Smartphone Video Guidance Sensor (SVGS) is an emerging technology developed by NASA Marshall Space Flight Center that uses a vision-based approach to accurately estimate the six-state position and orientation vector of an illuminated target of known dimensions with respect to a coordinate frame fixed to the camera. SVGS is a software-based sensor that can be deployed using a host platform's own resources (CPU and camera) for proximity operations and formation flight of drones or spacecraft. The SVGS output is computed through photogrammetric analysis of the light blobs detected in each image; its accuracy in linear and angular motion at different velocities has previously been demonstrated [9]. SVGS has several potential applications in guidance, navigation, motion control, and proximity operations as a reduced-cost, compact, and reliable alternative to competing technologies such as LiDAR or infrared sensing. One of the applications envisioned by NASA for SVGS is planetary/lunar autonomous landing. This paper compares the performance of SVGS in autonomous landing against that of an existing technology combination: an infrared beacon sensor (IRLock) paired with LiDAR. The comparison is based on a hardware-in-the-loop emulation of a precision landing experiment, in which a computer-controlled linear motion system emulates the approach motion and the ROS/Gazebo environment emulates the flight controller's response to the environment during landing. Results suggest that SVGS performs better than the existing IRLock/LiDAR sensor combination.
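To illustrate the type of photogrammetric computation described above, the sketch below shows a minimal blob-detection and Perspective-n-Point (PnP) pose-estimation pipeline in Python/OpenCV. It is a conceptual sketch under assumed conditions, not the SVGS implementation: the square beacon layout, camera intrinsics, brightness threshold, and the assumption that detected blobs are already matched one-to-one to the target beacons are all hypothetical.

```python
# Illustrative sketch only -- not the actual SVGS algorithm. The beacon layout,
# camera intrinsics, brightness threshold, and the assumption that blobs are
# already matched to beacons are hypothetical simplifications.
import cv2
import numpy as np

# Assumed 3D beacon positions on the target, in the target frame (meters).
TARGET_POINTS = np.array([
    [-0.10, -0.10, 0.0],
    [ 0.10, -0.10, 0.0],
    [ 0.10,  0.10, 0.0],
    [-0.10,  0.10, 0.0],
], dtype=np.float32)

# Assumed pinhole camera intrinsics and zero lens distortion.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]], dtype=np.float32)
DIST = np.zeros(5, dtype=np.float32)


def detect_light_blobs(image_bgr, threshold=220):
    """Return pixel centroids of bright blobs (the target's lights)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centroids.append([m["m10"] / m["m00"], m["m01"] / m["m00"]])
    return np.array(centroids, dtype=np.float32)


def estimate_relative_pose(matched_image_points):
    """Solve for the target's pose relative to the camera from matched blobs.

    `matched_image_points` must be ordered one-to-one with TARGET_POINTS; the
    blob-to-beacon matching step (which a real sensor derives from the target's
    known light pattern) is omitted here for brevity.
    """
    ok, rvec, tvec = cv2.solvePnP(TARGET_POINTS, matched_image_points, K, DIST)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix in the camera frame
    return tvec.ravel(), rotation       # 3-state position + 3-state orientation
```

Because this class of computation requires only a camera and general-purpose CPU, it is consistent with SVGS's role as a software-based sensor hosted on the platform's own resources.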