Abstract

Hard time constraints in space missions create a need for fast video processing across numerous autonomous tasks. Video processing involves separating distinct image frames, extracting image descriptors, and applying machine learning algorithms for object detection, obstacle avoidance, and many other tasks involved in the automatic maneuvering of a spacecraft. These tasks require the most informative description of an image within the time constraints, and tracking these informative points across consecutive image frames is needed in flow-estimation applications. Classical algorithms such as SIFT and SURF are milestones in the development of feature description, but their computational complexity and high time requirements prevent time-critical missions from adopting them for real-time processing. Hence, a time-conservative and less complex pre-trained Convolutional Neural Network (CNN) model is chosen in this paper as a feature descriptor. A 7-layer CNN model is designed and initialized with pre-trained VGG model parameters, and these CNN features are used to match points of interest between consecutive image frames of a lunar-descent video. The performance of the system is evaluated through visual and empirical keypoint matching. The matching scores between two consecutive images from the video using CNN features are then compared with state-of-the-art algorithms such as SIFT and SURF. The results show that CNN features are more reliable and robust for time-critical video processing in keypoint-tracking applications of space missions.
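The matching step described in the abstract can be sketched as a nearest-neighbour search over CNN descriptors from two consecutive frames. The helper below is a minimal, hypothetical illustration (the paper does not specify its exact matching rule here); it uses mutual nearest neighbours with Lowe's ratio test on L2-normalized descriptors:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match L2-normalized descriptors between two frames by mutual
    nearest neighbour with a ratio test. Returns (index_a, index_b) pairs.
    Hypothetical helper -- the paper's exact matching rule is not given here."""
    # Cosine similarity, since descriptors are unit-length
    sim = desc_a @ desc_b.T
    matches = []
    for i in range(sim.shape[0]):
        order = np.argsort(-sim[i])          # candidates, best first
        best, second = order[0], order[1]
        # Ratio test on distances (1 - similarity)
        if (1 - sim[i, best]) < ratio * (1 - sim[i, second]):
            # Mutual check: i must also be the best match for `best`
            if np.argmax(sim[:, best]) == i:
                matches.append((i, int(best)))
    return matches

# Tiny synthetic example: 3 orthogonal 4-D descriptors in frame A,
# and a permuted, slightly noisy copy of them in frame B.
rng = np.random.default_rng(0)
a = np.eye(3, 4)
b = a[[2, 0, 1]] + rng.normal(scale=0.01, size=(3, 4))
b /= np.linalg.norm(b, axis=1, keepdims=True)
print(match_descriptors(a, b))  # [(0, 1), (1, 2), (2, 0)]
```

The ratio test discards keypoints whose best and second-best matches are nearly as good, which is the standard way to suppress ambiguous correspondences before flow estimation.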

Highlights

  • Although many space missions have been successfully completed by national and international government bodies, automation of space-related tasks is still developing

  • Real-time space mission tasks are time-critical, and for such tasks processing time is an important evaluation parameter

  • Keypoints, the special points of interest inside an image, must be tracked between consecutive frames of a real-time video captured by on-board spacecraft cameras


Introduction

Although many space missions have been successfully completed by national and international government bodies, automation of space-related tasks is still developing, and many research challenges related to space exploration are in their early stages. While revolving around a target planet, a spacecraft continuously records video of the scene ahead using on-board cameras for study purposes. These real-time videos need to be processed within time constraints for various tasks. Detecting the most informative keypoints from videos in real time is a challenging task, and tracking those keypoints between two consecutive video frames is an important step for many flow-estimation algorithms. In this paper, tracking keypoints between consecutive video frames is achieved using CNN features. We propose a methodology for keypoint tracking suitable for time-critical space applications.
