Optical image sequences of spacecraft captured by space-based monocular cameras are typically acquired through exposure bracketing. The spacecraft feature deformable alignment network for multi-exposure image fusion (SFDA-MEF) aims to synthesize a High Dynamic Range (HDR) spacecraft image from a set of Low Dynamic Range (LDR) images with varying exposures. The HDR image preserves details of the observed target that each LDR image captures only within a limited luminance range. The relative attitude of the spacecraft in the camera coordinate system changes continuously during orbital rendezvous, which produces a large proportion of moving pixels between adjacent frames. Concurrently, subsequent tasks of the In-Orbit Servicing (IOS) system, such as attitude estimation, are highly sensitive to variations in multi-view geometric relationships, so the fusion result should preserve the shape of the spacecraft with minimal distortion. However, traditional methods and unsupervised deep-learning methods exhibit inherent limitations in handling complex overlapping regions, and supervised methods are unsuitable when ground-truth data are scarce. Therefore, we propose an unsupervised learning framework for the multi-exposure fusion of optical spacecraft image sequences. We introduce deformable convolution in a feature deformable alignment module and construct an alignment loss function to preserve the spacecraft's shape with minimal distortion. We also design a feature point extraction loss function that makes our output more conducive to subsequent IOS tasks. Finally, we present a multi-exposure spacecraft image dataset. Subjective and objective experimental results validate the effectiveness of SFDA-MEF, especially in retaining the shape of the spacecraft.
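Since the abstract only names the alignment mechanism, the following is a minimal sketch of how deformable-convolution-based feature alignment is commonly implemented in PyTorch (using torchvision's DeformConv2d). The module name, channel count, and the scheme of predicting offsets from concatenated reference/non-reference features are illustrative assumptions, not the paper's actual SFDA-MEF architecture.

```python
# Sketch: aligning one exposure's features to a reference exposure via
# deformable convolution. Assumes PyTorch + torchvision; all hyperparameters
# below are hypothetical choices, not values from the paper.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class FeatureDeformableAlignment(nn.Module):
    """Warp a non-reference frame's features toward the reference frame."""

    def __init__(self, channels: int = 64, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # Predict per-position sampling offsets (2 coordinates per kernel tap)
        # from the concatenated reference and non-reference feature maps.
        self.offset_conv = nn.Conv2d(
            2 * channels, 2 * kernel_size * kernel_size, kernel_size, padding=pad
        )
        # The deformable convolution resamples the non-reference features at
        # the predicted offsets, aligning them to the reference geometry.
        self.deform_conv = DeformConv2d(channels, channels, kernel_size, padding=pad)

    def forward(self, feat_ref: torch.Tensor, feat_nbr: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_conv(torch.cat([feat_ref, feat_nbr], dim=1))
        return self.deform_conv(feat_nbr, offsets)


if __name__ == "__main__":
    align = FeatureDeformableAlignment(channels=64)
    ref = torch.randn(1, 64, 32, 32)  # features of the reference exposure
    nbr = torch.randn(1, 64, 32, 32)  # features of another exposure
    aligned = align(ref, nbr)
    print(aligned.shape)  # torch.Size([1, 64, 32, 32])
```

Predicting offsets rather than a dense optical-flow field is the usual motivation for this design: the learned sampling locations can handle the large inter-frame motion caused by continuous attitude change while keeping the warp smooth, which is consistent with the shape-preservation goal stated in the abstract.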