Abstract

Breakthroughs in deep convolutional neural networks for image classification and object detection have pushed forward both precision and speed in these domains. The future of space exploration relies on novel systems for autonomous operations and onboard data handling, especially for computer vision and deep learning. However, previous work on object detection and image classification typically operates on the rigid assumption that representative data is available and reliable, focusing instead on offline optimization of architectures for accuracy. This assumption does not hold for onboard processing, especially in a space environment where unforeseen changes in the visual scene directly affect the performance of machine vision systems. The performance of a deep neural network is as dependent on the input data as it is on the network itself. We propose a multi-sensory computer vision system that accounts for data reliability and availability through an adaptive input policy. We train and test deep convolutional neural network models for object detection on custom datasets containing RGB and depth images of a reference satellite mission. These datasets are generated by our simulation testbed and cover all poses as well as a range of distances, lighting conditions, and visual environments. The trained models use multi-sensory input data from both an optical sensor (RGB data) and a time-of-flight (ToF) sensor (depth data). This input data is passed through the adaptive input layer so that the two sensors complement each other and provide the most reliable output in a harsh space environment that tolerates neither missing nor unreliable data. For instance, the ToF sensor provides visual data that reliably covers close ranges and, most importantly, operates regardless of ambient light. The optical sensor provides RGB data at farther ranges and, unlike ToF sensors, is not susceptible to saturation from Earth's infrared emissions. This selective multi-sensory input approach ensures that the CNN model receives reliable input data regardless of changes in the visual environment, meeting the strict operational requirements of space missions. Our work is validated using a sensory-data reliability assessment and state-of-the-art object detection models based on Faster R-CNN and YOLO. Average precision on the validation dataset improved significantly with our approach, rising from 50% with RGB alone and 40% with depth alone to 80% with the input-selective system.
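
The adaptive input policy described above can be pictured as a simple selector over sensor streams driven by range, lighting, and saturation conditions. The sketch below is illustrative only: the thresholds, field names, and decision order are hypothetical assumptions, not the authors' actual adaptive input layer.

```python
# Illustrative sketch of a selective multi-sensory input policy.
# All thresholds, names, and readings here are hypothetical; the paper's
# actual adaptive input layer may differ.

from dataclasses import dataclass


@dataclass
class SceneConditions:
    target_range_m: float  # estimated distance to the target satellite
    ambient_light: float   # normalized illumination, 0 (eclipse) to 1 (full sun)
    tof_saturated: bool    # ToF sensor saturated (e.g., by Earth IR emissions)


def select_input(conditions: SceneConditions) -> str:
    """Choose the more reliable sensor stream to feed the detection CNN."""
    # ToF depth is unusable when saturated by infrared background.
    if conditions.tof_saturated:
        return "rgb"
    # The optical sensor needs ambient light; ToF does not.
    if conditions.ambient_light < 0.2:
        return "depth"
    # ToF reliably covers close ranges (hypothetical 10 m cutoff).
    if conditions.target_range_m < 10.0:
        return "depth"
    # Far range, well lit, no saturation: RGB is the reliable choice.
    return "rgb"


# Example: far target, good lighting, ToF saturated by Earth in the background.
print(select_input(SceneConditions(target_range_m=50.0,
                                   ambient_light=0.8,
                                   tof_saturated=True)))  # -> "rgb"
```

Under this kind of policy, exactly one reliable stream reaches the CNN at each step, which is what lets the reported average precision exceed either single-sensor baseline.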
