Abstract

The star sensor offers the highest accuracy among spacecraft attitude sensors. However, it is vulnerable to disturbances such as high-dynamic motion, stray light, and other in-orbit environmental factors, which can significantly degrade attitude accuracy or even cause abnormal output, potentially leaving the spacecraft disoriented. It is therefore usually coupled with a high-frequency gyroscope to compensate for this limitation; yet long-term attitude estimation with a gyroscope alone degrades because of gyro bias. We propose an optimization-based tightly coupled scheme that enhances attitude estimation accuracy under dynamic conditions and improves the star sensor's robustness in situations such as lost-in-space. The approach begins with visual–inertial measurement preprocessing and estimator initialization; attitude and bias estimates are then refined by jointly minimizing visual and inertial constraints. In addition, a keyframe-based sliding window is employed to mitigate potential failures of the visual sensor measurements. Numerical tests validate that, under identical dynamic conditions, the proposed method improves the accuracy of the yaw, pitch, and roll angles by 50% compared with the star sensor alone.
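As a rough sketch of the tightly coupled idea described above (not the authors' implementation; the single-step window, residual forms, measurement names, and toy values are all assumptions), the following Python snippet jointly estimates an attitude and a gyroscope bias by stacking star-vector (visual) residuals and a gyro-propagation (inertial) residual into a single least-squares problem:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

def residuals(params, prev_att, stars_body, stars_inertial, gyro_rate, dt):
    """Stacked visual + inertial residuals for one window step (illustrative).

    params = [current attitude as a rotation vector (3), gyro bias (3)].
    """
    att = R.from_rotvec(params[:3])          # body-to-inertial attitude
    bias = params[3:]

    # Visual constraint: rotated star observations should match the
    # catalogue (inertial-frame) directions.
    r_vis = (att.apply(stars_body) - stars_inertial).ravel()

    # Inertial constraint: the attitude increment from bias-corrected gyro
    # integration should match the increment between the window attitudes.
    delta = R.from_rotvec((gyro_rate - bias) * dt)
    r_imu = (att.inv() * prev_att * delta).as_rotvec()
    return np.concatenate([r_vis, r_imu])

# Toy, hypothetical data: two catalogue stars, a small rotation, a small bias.
prev_att = R.identity()
true_att = R.from_rotvec([0.01, -0.02, 0.03])
stars_i = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
stars_b = true_att.inv().apply(stars_i)                 # simulated star sensor
true_bias = np.array([1e-3, -1e-3, 5e-4])
gyro = true_att.as_rotvec() / 1.0 + true_bias           # simulated biased rate

sol = least_squares(residuals, x0=np.zeros(6),
                    args=(prev_att, stars_b, stars_i, gyro, 1.0))
print("estimated attitude (rotvec):", sol.x[:3])
print("estimated gyro bias:", sol.x[3:])
```

In the scheme summarized by the abstract, such residuals would presumably be accumulated over a keyframe-based sliding window rather than a single step, which is what allows the estimator to ride through intervals of failed star-sensor measurements.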
