Abstract

This paper presents a novel approach for automatically detecting and classifying rockfalls in Lunar Reconnaissance Orbiter narrow angle camera (NAC) images using a single-stage dense object detector (RetinaNet). The convolutional neural network has been trained with a data set of 2932 original rockfall images. In order to avoid overfitting, the initial training data set has been augmented during training using random image rotation, scaling, and flipping. Test images have been labelled by human operators and used to evaluate RetinaNet's performance. Testing shows that RetinaNet reaches recall values between 0.98 and 0.39, precision values between 1 and 0.25, and average precision values ranging from 0.89 to 0.69, depending on the confidence threshold and intersection-over-union values used. The mean processing time for a single NAC image in RetinaNet is around 10 s using a GeForce GTX 1080 Ti and a GeForce Titan Xp, which is orders of magnitude faster than a human operator. This processing speed allows the currently available NAC data stack of more than 1 million images to be exploited efficiently within a reasonable timeframe. The combination of speed and detection performance can be used to produce lunar rockfall distribution maps on large spatial scales for use by the scientific and engineering communities.
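The reported recall and precision ranges depend on two evaluation parameters: the detector's confidence threshold and the intersection-over-union (IoU) value required for a detection to count as a true positive. The following minimal sketch (not from the paper; the helper names and the greedy matching scheme are assumptions for illustration) shows how such precision and recall values are typically computed for one image from a list of detections and ground-truth boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def precision_recall(detections, ground_truth, conf_threshold=0.5, iou_threshold=0.5):
    """Illustrative (assumed) evaluation: greedily match detections (box, score)
    against ground-truth boxes and report precision and recall."""
    # Keep only detections above the confidence threshold, highest score first.
    kept = sorted((d for d in detections if d[1] >= conf_threshold),
                  key=lambda d: d[1], reverse=True)
    matched = set()
    true_positives = 0
    for box, _score in kept:
        best_iou, best_idx = 0.0, None
        for i, gt_box in enumerate(ground_truth):
            if i in matched:
                continue
            overlap = iou(box, gt_box)
            if overlap > best_iou:
                best_iou, best_idx = overlap, i
        if best_iou >= iou_threshold:
            matched.add(best_idx)
            true_positives += 1
    false_positives = len(kept) - true_positives
    false_negatives = len(ground_truth) - true_positives
    precision = true_positives / (true_positives + false_positives) if kept else 0.0
    recall = true_positives / (true_positives + false_negatives) if ground_truth else 0.0
    return precision, recall
```

Raising the confidence threshold typically increases precision while lowering recall, which is consistent with the ranges reported above; average precision summarizes this trade-off across thresholds.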
