Abstract

Recent advances in autonomy and robotics have motivated a renewed interest in performing rendezvous and other mission-critical maneuvers autonomously with robotic spacecraft. A key enabler for all autonomous proximity operations is knowledge of the relative state of one spacecraft with respect to the other. A variety of sensing approaches are available for estimating this relative state, including visible-spectrum cameras, LiDAR devices, or a combination of the two. For on-orbit proximity operations using vision-based sensing, a particular challenge is the variability of lighting conditions caused by shadows cast by the target spacecraft or eclipses caused by planetary bodies. A complete position and velocity state estimate of the target spacecraft may not be attainable during periods of high temporal lighting variability or in a lighting environment that exceeds the dynamic range of the vision-based sensor. This work focuses on low-cost vision-based sensing systems and computer vision algorithms for enabling target spacecraft velocity state estimation in challenging lighting conditions. A hardware demonstration of velocity state estimation in variable lighting conditions was conducted using a Raspberry Pi camera, stepper and DC motors, and a planar grid of optical fiducial markers created using the ArUco module of the OpenCV Python library. Performance characterizations of camera pose estimation and marker identification for single-axis rotation in variable lighting conditions were conducted to establish baselines for noise and marker identification limitations. A variable-intensity LED panel light was used to control the lighting conditions over a range of 100 lm to 1,000 lm. Pose estimate measurement noise was observed to be inversely related to lighting intensity: maximum 1σ noise figures for the 100 lm and 1,000 lm test cases were measured to be ±0.080° and ±0.034°, respectively, demonstrating a degradation in pose estimation accuracy in low-light conditions. Additionally, the performance of the computer vision-based ArUco marker detection algorithm was shown to degrade for images with significant amounts of motion blur. To address this limitation, long-exposure test cases were conducted to characterize the performance of a computer vision-based velocity state estimation algorithm developed under the assumption of single-axis relative rotation between the target and chaser satellites. Using a 0.33-second exposure time for the Raspberry Pi camera and rotation rates about the camera boresight axis of 6°/s to 20°/s, images of ArUco markers with varying degrees of motion blur were collected. The velocity state estimation algorithm developed as part of this work produced estimates within 5°/s of the true rotation rate as measured by incremental encoders mounted on the stepper motors. These results demonstrate the ability of the developed algorithm to produce accurate velocity estimates for single-axis relative rotation in variable lighting conditions. The significance of this work is the extension of vision-based velocity state estimation to challenging lighting conditions, analogous to those encountered on-orbit, which render current methods unusable.
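The abstract does not include implementation details, but the marker detection and pose estimation pipeline it describes maps naturally onto OpenCV's ArUco module. The sketch below shows one plausible form of that step; the dictionary choice, marker size, and camera intrinsics are illustrative assumptions rather than values from the paper, and the ArucoDetector class assumes opencv-contrib-python 4.7 or later.

```python
# A minimal sketch of ArUco marker detection and per-marker pose estimation,
# assuming the ArucoDetector API of opencv-contrib-python >= 4.7. The marker
# size and camera intrinsics below are placeholders; real intrinsics would
# come from calibrating the Raspberry Pi camera.
import cv2
import numpy as np

MARKER_LENGTH = 0.05  # marker side length in meters (assumed)

camera_matrix = np.array([[1000.0,    0.0, 640.0],
                          [   0.0, 1000.0, 360.0],
                          [   0.0,    0.0,   1.0]])
dist_coeffs = np.zeros(5)

# 3D corner coordinates of one marker in its own frame (z = 0 plane), in the
# top-left, top-right, bottom-right, bottom-left order used by ArUco.
obj_points = np.array([[-MARKER_LENGTH / 2,  MARKER_LENGTH / 2, 0.0],
                       [ MARKER_LENGTH / 2,  MARKER_LENGTH / 2, 0.0],
                       [ MARKER_LENGTH / 2, -MARKER_LENGTH / 2, 0.0],
                       [-MARKER_LENGTH / 2, -MARKER_LENGTH / 2, 0.0]],
                      dtype=np.float32)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

def estimate_marker_poses(image):
    """Detect ArUco markers and solve for the camera-relative pose of each."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    corners, ids, _rejected = detector.detectMarkers(gray)
    poses = []
    if ids is not None:
        for marker_corners in corners:
            img_points = marker_corners.reshape(-1, 2).astype(np.float32)
            ok, rvec, tvec = cv2.solvePnP(obj_points, img_points,
                                          camera_matrix, dist_coeffs)
            if ok:
                poses.append((rvec, tvec))  # axis-angle rotation, translation
    return ids, poses
```

Converting each `rvec` to a rotation matrix with `cv2.Rodrigues` and extracting the roll component would give the boresight rotation angle whose frame-to-frame noise the abstract characterizes at the two lighting levels.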
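The velocity state estimation algorithm itself is not described in the abstract. For pure rotation about the camera boresight, however, one natural formulation divides the arc angle swept by a blurred marker feature during the known exposure by the exposure time. The helper below is a hypothetical illustration of that geometry, not the paper's method; the streak endpoints and rotation center are assumed to come from some upstream processing of the blurred image.

```python
# Illustrative only: rotation-rate recovery from a single motion-blur streak,
# assuming pure single-axis rotation about the camera boresight. This is not
# the paper's algorithm, which the abstract does not detail.
import numpy as np

EXPOSURE_TIME = 0.33  # seconds, per the abstract

def boresight_rate_from_blur(p_start, p_end, center, exposure=EXPOSURE_TIME):
    """Estimate rotation rate (deg/s) from the endpoints of one blur streak.

    p_start, p_end, center: (x, y) pixel coordinates; the streak endpoints
    and rotation center are hypothetical inputs from upstream processing.
    """
    v0 = np.asarray(p_start, dtype=float) - np.asarray(center, dtype=float)
    v1 = np.asarray(p_end, dtype=float) - np.asarray(center, dtype=float)
    # Signed angle swept between the two radial vectors during the exposure.
    cross = v0[0] * v1[1] - v0[1] * v1[0]
    dot = float(np.dot(v0, v1))
    delta_theta = np.arctan2(cross, dot)
    return np.degrees(delta_theta) / exposure

# A feature 200 px from the rotation center sweeping ~2 deg during a 0.33 s
# exposure implies a rate of ~6 deg/s, the low end of the rates tested.
print(boresight_rate_from_blur((200.0, 0.0), (199.9, 7.0), (0.0, 0.0)))
```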
