Abstract

Proposed missions to explore comets and moons will encounter environments that are hostile and unpredictable. Any successful explorer must be able to adapt to a wide range of possible operating conditions in order to survive. The traditional approach of constructing special-purpose control methods requires information about the environment that is not available a priori for these missions. An alternative is a general control method with significant capability to adapt its behavior, a so-called adaptive problem-solving methodology. With adaptive problem-solving, a spacecraft equipped with a general problem solver and a flexible control architecture can use reinforcement learning to adapt its search strategy to the specific environment it encounters. The resulting methods would enable the spacecraft to improve its performance with respect to both survival probability and mission goals. We discuss an application of this approach to learning control strategies in planning and scheduling for three space mission models: Space Technologies 4, a Mars Rover, and Earth Observer One.
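To make the idea of learning an environment-specific search strategy concrete, the following minimal sketch shows one way such an adaptation loop could look: an epsilon-greedy reinforcement-learning loop that evaluates candidate control strategies for a planner/scheduler on simulated episodes and converges on the best-performing one. The strategy names, the reward model, and all function names here are illustrative assumptions, not the method or mission models from the paper.

```python
import random

# Candidate control strategies for the planner's search (hypothetical names).
STRATEGIES = ["depth_first_repair", "iterative_sampling", "heuristic_backtrack"]

def run_episode(strategy: str) -> float:
    """Stand-in for running the planner/scheduler on one simulated scenario
    and scoring the resulting plan (e.g., goals achieved, resource margins).
    Here we simply draw a noisy reward whose mean depends on the strategy."""
    base = {"depth_first_repair": 0.55,
            "iterative_sampling": 0.70,
            "heuristic_backtrack": 0.60}[strategy]
    return random.gauss(base, 0.1)

def adapt_strategy(episodes: int = 500, epsilon: float = 0.1) -> str:
    """Epsilon-greedy value estimation over the candidate strategies."""
    value = {s: 0.0 for s in STRATEGIES}   # running mean reward per strategy
    count = {s: 0 for s in STRATEGIES}
    for _ in range(episodes):
        if random.random() < epsilon:
            s = random.choice(STRATEGIES)          # explore
        else:
            s = max(STRATEGIES, key=value.get)     # exploit current best
        reward = run_episode(s)
        count[s] += 1
        value[s] += (reward - value[s]) / count[s] # incremental mean update
    return max(STRATEGIES, key=value.get)

if __name__ == "__main__":
    print("selected strategy:", adapt_strategy())
```

In an onboard setting, run_episode would be replaced by executing (or simulating) the general problem solver under the chosen control strategy and scoring the outcome against survival and mission-goal metrics; the learning loop itself stays the same.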
