Abstract

Introduction: Understanding the distribution and characteristics of impact craters on planetary surfaces is essential for understanding the geological processes that constantly reshape them, such as tectonic deformation, volcanism [1], erosion, transport, and impact cratering itself [2]. By analysing the distribution and density of craters using the "crater counting" approach, it is possible to estimate the age of a planetary surface at regional scale [3]. Historically, crater detection has been performed manually. For Mars and the Moon, the existing handmade databases [4,5,6] bring inestimable value to the community. Nevertheless, even with corrective analysis [7], manual databases are subject to human limitations: studies have shown that human attention to repetitive tasks such as crater counting decreases rapidly after 30 minutes [8], after which errors begin to occur. Several machine learning and AI-based approaches have been proposed to automatically detect craters on planetary surface images [9,10,11].

Data: To train our machine learning algorithm, we need two kinds of data: images for the detection and a global crater database providing the ground truth. In this work we used images from the Context Camera (CTX) on board the Mars Reconnaissance Orbiter, preprocessed by the Bruce Murray Laboratory at Caltech [12]. For the crater database we used A. Lagain's global Martian database, which contains more than 376,000 craters larger than 1 km in diameter [7].

Method: In this work, we present a novel approach using the Faster Region-based Convolutional Neural Network (Faster R-CNN) for automatic crater detection [13]. As shown in Fig. 1, Faster R-CNN comprises three stages: a CNN backbone extracts features, a Region Proposal Network (RPN) generates regions of interest using anchor boxes that predict object presence and box adjustments, and a detector head refines the object classification and bounding box coordinates, ensuring precise detection and localization in images.

Fig. 1: Faster R-CNN architecture as described by S. Ren et al., 2016 [13], with our image configuration.

The proposed method involves a preprocessing step in which we cut the images into 224×224-pixel tiles, reproject them so that craters keep a circular shape at every latitude, and split the crater database to obtain a ground-truth label for each image. We then train our model on 82,874 images and test the detector on 4,828 images.

Results: Extensive experiments on high-resolution planetary imagery demonstrate excellent performance, with a mean average precision mAP50 > 0.82 under an intersection-over-union criterion IoU ≥ 0.5, independent of crater scale (Figs. 2 and 3). Fig. 4 shows an example on an equatorial Mars quadrangle.

Fig. 2: Precision as a function of recall for six different IoU thresholds.

Fig. 3: Precision vs. recall curves for IoU = 0.5 and for different bounding box sizes of the test dataset.

Fig. 4: Inference on an equatorial region of Mars. The lower right corner of this 4°×4° quadrangle is located at 44°W, 0°N. The yellow boxes represent the ground-truth information and the blue boxes the predictions. Note that craters smaller than 10 pixels in diameter are ignored.

The results also highlight the versatility and potential of our robust model for automating the analysis of craters across different celestial bodies. This automatic crater detection tool holds great promise for future scientific research and space exploration missions.
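As an illustration of the tiling and label-splitting step described under Method, a minimal Python/NumPy sketch follows. The helper names (tile_image, craters_in_tile), the centre-based assignment of craters to tiles, and the dropping of partial border tiles are our assumptions for illustration; the abstract does not specify these details, and the latitude-dependent reprojection is omitted.

```python
import numpy as np

def tile_image(mosaic: np.ndarray, tile: int = 224):
    """Yield ((y, x), patch) for each full tile x tile patch of a 2-D image.

    Border pixels that do not fill a complete tile are dropped here for
    simplicity (an assumption; the abstract does not say how edges are handled).
    """
    rows, cols = mosaic.shape[0] // tile, mosaic.shape[1] // tile
    for r in range(rows):
        for c in range(cols):
            y, x = r * tile, c * tile
            yield (y, x), mosaic[y:y + tile, x:x + tile]

def craters_in_tile(craters, y, x, tile=224):
    """Keep crater boxes (x_min, y_min, x_max, y_max, in mosaic pixels) whose
    centre falls inside the tile, shifted into tile coordinates."""
    kept = []
    for x0, y0, x1, y1 in craters:
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        if x <= cx < x + tile and y <= cy < y + tile:
            kept.append((x0 - x, y0 - y, x1 - x, y1 - y))
    return kept

# Example: one catalogue entry landing in the top-left tile of a small mosaic.
mosaic = np.zeros((448, 448), dtype=np.float32)
for (y, x), patch in tile_image(mosaic):
    labels = craters_in_tile([(30, 40, 80, 90)], y, x)
    # -> [(30, 40, 80, 90)] for the (0, 0) tile, [] elsewhere
```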
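The abstract names Faster R-CNN [13] but not the framework or backbone used. The sketch below assumes the off-the-shelf torchvision implementation with a ResNet-50 FPN backbone, configured for a single "crater" class plus background, to show how the three stages (backbone, RPN, detector head) are trained and queried together; it is illustrative, not the authors' training code.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Assumed for illustration: torchvision's Faster R-CNN with a ResNet-50 FPN
# backbone. num_classes=2 -> background (0) and crater (1).
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=2)

# One training batch of 224x224 tiles: images are 3xHxW float tensors in
# [0, 1]; targets hold ground-truth boxes as (x_min, y_min, x_max, y_max).
images = [torch.rand(3, 224, 224)]
targets = [{
    "boxes": torch.tensor([[40.0, 50.0, 90.0, 100.0]]),  # one example crater
    "labels": torch.tensor([1]),
}]

model.train()
loss_dict = model(images, targets)   # RPN + detector-head losses
sum(loss_dict.values()).backward()   # an optimizer step would follow here

# Inference returns per-image boxes, labels and confidence scores.
model.eval()
with torch.no_grad():
    pred = model(images)[0]
keep = pred["scores"] > 0.5          # illustrative confidence threshold
print(pred["boxes"][keep])
```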
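The reported mAP50 score rests on the standard intersection-over-union overlap between predicted and ground-truth boxes: a prediction counts as a true positive only when IoU ≥ 0.5. A minimal sketch of that criterion is given below (the function name `iou` is ours, and the full mAP computation over confidence thresholds is omitted).

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes (x_min, y_min, x_max, y_max)."""
    ix0, iy0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix1, iy1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    union = ((box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
             + (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]) - inter)
    return inter / union if union > 0 else 0.0

# A detection is a true positive at mAP50 only if it overlaps a ground-truth
# crater with IoU >= 0.5.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.14 -> not a match
print(iou((0, 0, 10, 10), (1, 1, 11, 11)))  # 81 / 119 ≈ 0.68 -> a match
```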
