Abstract

Unmanned exploration of the Moon has increased steadily in recent years owing to renewed interest in establishing a permanent human settlement on our natural satellite. After detailed remote sensing from orbiters, the focus is now shifting to autonomous landing and surface locomotion by robotic devices on and around specific areas of highest interest. In particular, robotic exploration is a precursor to future human settlements on the Moon, and in-situ resource utilization (ISRU) is an important criterion in selecting candidate locations for such settlements. In the early 1960s, Watson (Watson et al., 1961) estimated that ice deposits could be found at the bottom of craters located in the Moon’s polar regions. He argued that the shadows cast inside these craters, a consequence of the very small inclination of the Moon’s equatorial plane with respect to the Sun, create an environment with the proper conditions to retain such deposits. Robotic exploration of these regions is therefore a vital first step toward building a permanent base of operations where humans can live for extended periods. Navigating a vehicle on the Moon raises a series of issues, such as trafficability over the lunar soil (called “regolith”), the irregularity of the topography, and the poor illumination in the polar regions, especially inside the craters. In this chapter, attention is centered on the irregular topography and the poor illumination. Because we assume an environment with a very low illumination angle, which results in large shadowed areas, we propose the use of Light Detection and Ranging (LIDAR) systems to perceive the surroundings of the vehicle. These systems are not impaired by the assumed lighting conditions and have been used successfully in several outdoor mobile robotic applications (Langer et al., 2000; Morales et al., 2008; Skrzypcznski, 2008). Features of the lunar topography may act as obstacles to the navigation of the vehicle and may also limit the visual range of the exteroceptive sensors at a given location. Such obstacles cause an “occluding” effect on the sensor readings, thereby decreasing the size of the perceivable area. Several researchers have addressed the occlusion problem in indoor and outdoor autonomous navigation of mobile robotic systems. In (Dupuis et al., 2005; Rekleitis et al., 2009), an irregular triangulated mesh is created from LIDAR data and then “filtered” to find “shadows”, or occluding obstacles, and eliminate them from the map. A description of the problems that occluding obstacles pose for teleoperated navigation is given in (Kunii & Kubota, 2006). In (Heckman et al., 2007), a method is presented to classify voxels according to their location with respect to an occluding obstacle.
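
To make the occlusion effect described above concrete, the following minimal Python sketch (not taken from the chapter; the grid size, sensor pose, sensing range, and obstacle placement are assumptions chosen purely for illustration) casts rays over a toy 2-D occupancy grid and counts how many cells within sensing range lie in the “shadow” of a single obstacle and therefore go unperceived.

    # Illustrative sketch only (not from the chapter): how an occluding obstacle
    # reduces the area perceivable by a range sensor on a toy 2-D grid.
    # Grid size, sensor pose, range, and obstacle placement are all assumptions.
    import math

    GRID = 21            # 21 x 21 cell grid
    SENSOR = (10, 10)    # vehicle / sensor position (cell coordinates)
    MAX_RANGE = 10       # maximum sensing range, in cells

    # A small "boulder" three cells tall, placed to the right of the sensor.
    obstacles = {(13, 9), (13, 10), (13, 11)}

    def cells_seen_along(angle):
        """Step outward along one ray; cells behind the first obstacle hit
        are never added, i.e. they remain unobserved (occluded)."""
        seen = set()
        for r in range(1, MAX_RANGE + 1):
            x = SENSOR[0] + round(r * math.cos(angle))
            y = SENSOR[1] + round(r * math.sin(angle))
            if not (0 <= x < GRID and 0 <= y < GRID):
                break
            seen.add((x, y))
            if (x, y) in obstacles:
                break
        return seen

    # Cast one ray per degree and collect every perceived cell.
    visible = set()
    for deg in range(360):
        visible |= cells_seen_along(math.radians(deg))

    in_range = {(x, y) for x in range(GRID) for y in range(GRID)
                if 0 < math.dist((x, y), SENSOR) <= MAX_RANGE}
    occluded = in_range - visible
    print(f"cells within sensing range: {len(in_range)}")
    print(f"cells actually perceived  : {len(in_range & visible)}")
    print(f"cells occluded (shadowed) : {len(occluded)}")

Moving the sensor or removing the obstacle changes the occluded count, which conveys the basic intuition behind the occlusion problem discussed in the cited works: the perceivable area depends on where the sensor stands relative to the occluding terrain features.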
