Abstract

Summary form only given. Recent commercially available 3D time-of-flight (TOF) cameras may represent the next generation of 3D robotic vision, with a broad range of applications in the space environment. TOF cameras typically return depth-maps in near real time that can be analyzed and interpreted in a much more integrated and computationally efficient manner than stereographic imaging. The goal of our project is to investigate software for analyzing range data and building a map of the environment that can be used for navigation and planning. To that end, we review computer vision software already available in the literature, developed mainly for military applications, and assess its potential for space applications: this includes evaluating each package's capabilities, its computational and memory requirements, and its potential implementation on on-board/in-situ hardware such as reconfigurable computers.

In this paper, we discuss the use of such a camera to provide simple navigation capabilities, tested on a prototype rover, the MIKE (Multi-Terrain Investigative Kit for Exploration) rover; the camera technology, the software and hardware architectures, and the field-test experiments are described. As the 3D TOF camera, we used the Swissranger SR-2 (from CSEM, Switzerland) to gather 3D data in real time. The camera uses a "wall of light" technique, created with an array of infrared LEDs, to produce a depth-map across a pixel plane. Using a modulated infrared (870 nm) light source, emitted light pulses are reflected by objects in the scene and travel back to the camera, where their precise time of arrival is measured locally in each smart pixel of a custom image sensor. The time taken by these pulses to travel back and forth is proportional to the distance to the objects; the output of the camera is therefore a range image.

The navigation software first applies filters to reduce the noise in the data, then applies a feature recognition algorithm to find objects in the field of view. The detected objects are analyzed to determine the best path the rover can immediately take, and the navigation software commands the rover to move based on this directional analysis. The system runs in a simple open-loop mode, processing each new frame of data as it becomes available. The project showed that on-board processing requirements can be simplified to the point that the 3D data is acquired in real time by camera hardware with no moving parts. The software is now being analyzed through trade-off studies to determine the power and computational requirements of implementing it on reconfigurable or hybrid microprocessor/FPGA architectures.
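To make the range relation concrete, the following is a minimal worked form. The first equation restates the round-trip timing described in the abstract; the phase-based form and the 20 MHz modulation frequency are standard continuous-wave TOF assumptions for illustration, not figures taken from the paper.

    % Round-trip time-of-flight relation, as described above: a pulse
    % covers the camera-object distance d twice at the speed of light c,
    % so the measured delay \Delta t gives
    \[ d = \frac{c\,\Delta t}{2} \]
    % For a camera whose light source is modulated at frequency f_mod
    % (a standard continuous-wave TOF formulation, assumed here), the
    % delay appears as a phase shift \varphi of the returned signal:
    \[ d = \frac{c\,\varphi}{4\pi f_{\mathrm{mod}}}, \qquad
       d_{\mathrm{max}} = \frac{c}{2 f_{\mathrm{mod}}} \]
    % e.g. an assumed f_mod of 20 MHz yields an unambiguous range of
    % d_max = (3 x 10^8 m/s) / (4 x 10^7 /s) = 7.5 m.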
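The navigation pipeline outlined above (noise filtering, feature recognition, directional analysis, open-loop commanding) can be sketched in code. The following is a minimal sketch only: grab_range_image, send_drive_command, and every threshold and field-of-view value are hypothetical placeholders, not the MIKE rover's actual software.

    import numpy as np
    from scipy.ndimage import median_filter, label

    # --- Hypothetical hardware interfaces (not the project's real API) ---
    def grab_range_image():
        """Return one depth-map frame (H x W, metres) from the TOF camera."""
        raise NotImplementedError

    def send_drive_command(heading_rad):
        """Command the rover to drive toward the given heading (radians)."""
        raise NotImplementedError

    OBSTACLE_RANGE_M = 1.5    # ranges closer than this count as obstacles
    MIN_OBJECT_PIXELS = 30    # discard smaller blobs as residual noise
    FOV_RAD = np.deg2rad(45)  # assumed horizontal field of view

    def process_frame(depth):
        # 1. Noise reduction: a median filter suppresses the speckle-like
        #    range noise typical of TOF depth-maps.
        depth = median_filter(depth, size=3)

        # 2. Feature recognition: segment connected regions of pixels
        #    closer than the obstacle threshold; drop tiny components.
        obstacle_mask = depth < OBSTACLE_RANGE_M
        labels, n = label(obstacle_mask)
        for i in range(1, n + 1):
            if (labels == i).sum() < MIN_OBJECT_PIXELS:
                obstacle_mask[labels == i] = False

        # 3. Directional analysis: score each image column by its
        #    fraction of obstacle-free pixels, steer toward the clearest.
        clearance = 1.0 - obstacle_mask.mean(axis=0)
        best_col = int(np.argmax(clearance))
        width = depth.shape[1]
        # Map the column index to a heading within the field of view.
        return (best_col / (width - 1) - 0.5) * FOV_RAD

    # Simple open-loop mode: process each frame as it becomes available.
    while True:
        send_drive_command(process_frame(grab_range_image()))

The column-wise clearance score stands in for whatever directional analysis the project actually used; the loop structure mirrors the open-loop mode described in the abstract, with no odometry or closed-loop feedback.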
