Abstract

The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. A single optical navigation sensor generally cannot produce well-exposed images of both the target celestial body and stars in one frame because their irradiance levels differ greatly. Multi-sensor integration or complex image processing algorithms are commonly utilized to address this problem. This study analyzes and demonstrates the feasibility of simultaneously imaging the target celestial body and stars, both well exposed, within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established for the case in which the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratory and night sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters.

Highlights

  • Optical autonomous navigation is a key technology in deep space exploration

  • Compared with conventional navigation sensors, this study provides a feasible approach to a miniaturized single field of view (FOV) optical navigation sensor that is lower in cost and weight, simpler to design, and can simultaneously obtain attitude information and the LOS vector from the sensor to the centroid of the target celestial body

  • Laboratory single-star imaging and accuracy analysis experiments and a night sky experiment are conducted to validate the correctness of the proposed models, the accuracy performance analysis, and the optimal exposure parameters



Introduction

Optical autonomous navigation is a key technology in deep space exploration. This process is usually accomplished by multi-sensor integration, combining star sensors, navigation cameras, inertial measurement devices, and other equipment. The navigation camera captures an image of the target celestial body with background stars and extracts the target's line-of-sight (LOS) vector according to the current spacecraft attitude. The best solution for deep space exploration missions is a single navigation sensor that can simultaneously obtain the attitude and the LOS vector from the sensor to the centroid of the target celestial body. This approach requires the sensor to image the stars and the target celestial body and to extract their navigation measurements simultaneously.
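To illustrate the LOS extraction step described above, the sketch below back-projects an image-plane centroid of the target celestial body into a unit LOS vector and rotates it into a reference frame using the current attitude. This is a minimal pinhole-camera sketch, not the paper's method; all names and parameters (`fx`, `fy`, `cx`, `cy`, `R_ref_from_cam`) are illustrative assumptions.

```python
import numpy as np

def pixel_to_los(u, v, fx, fy, cx, cy, R_ref_from_cam):
    """Convert a pixel centroid (u, v) to a unit LOS vector in a reference frame.

    fx, fy: focal lengths in pixels; cx, cy: principal point (pixels).
    R_ref_from_cam: 3x3 attitude rotation from camera frame to reference frame.
    """
    # Back-project the centroid through the pinhole model (camera frame)
    d_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    d_cam /= np.linalg.norm(d_cam)  # normalize to a unit direction
    # Rotate into the reference frame using the current spacecraft attitude
    return R_ref_from_cam @ d_cam

# A centroid at the principal point with identity attitude maps to the boresight
los = pixel_to_los(320.0, 240.0, 1000.0, 1000.0, 320.0, 240.0, np.eye(3))
```

With identity attitude and the centroid at the principal point, the LOS vector is the camera boresight [0, 0, 1].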

