This article develops KL-ergodic exploration from equilibrium (KL-E3), a method that lets robotic systems incorporate stability while actively generating informative measurements through ergodic exploration. Ergodic exploration enables robotic systems to indirectly sample from informative spatial distributions globally, avoiding local optima, without needing to evaluate derivatives of the distribution against the robot dynamics. Using hybrid systems theory, we derive a controller that allows a robot to exploit equilibrium policies (i.e., policies that solve a task) while exploring and generating informative data using an ergodic measure that extends to high-dimensional states. We show that our method maintains Lyapunov attractiveness with respect to the equilibrium task while actively generating data for learning tasks such as Bayesian optimization, model learning, and off-policy reinforcement learning. In each example, we show that our proposed method generates an informative distribution of data while synthesizing smooth control signals. We illustrate these examples on simulated systems and provide simplifications of our method for real-time online learning on robotic systems.

Note to Practitioners — Robotic systems need to adapt to sensor measurements and learn to exploit an understanding of the world around them so that they can truly begin to experiment in the real world. Standard learning methods place no restrictions on how the robot may explore and learn, leaving the robot dynamically volatile. Methods that do impose restrictions are often too conservative about the robot's stability, and the resulting poor data collection yields little improvement in learning. Applying our method would allow robotic systems to adapt online without the need for human intervention. We show that by considering both the dynamics of the robot and the statistics of where the robot has been, we can naturally encode where the robot needs to explore and collect measurements for learning that is both efficient and dynamically safe. With our method, we learn effectively while being more energetically efficient than state-of-the-art active learning methods. Our approach accomplishes such tasks in a single execution of the robotic system, i.e., the robot does not need human intervention to reset it. Future work will consider multiagent robotic systems that actively learn and explore as a team of collaborative robots.
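As a rough illustration of the measure at the core of the approach, the Python sketch below estimates a sample-based KL divergence between a target spatial distribution p and the time-averaged statistics of a trajectory, with the trajectory statistics approximated by a Gaussian kernel placed at each visited state. This is a minimal sketch under stated assumptions, not the paper's implementation: the kernel choice, the width `sigma`, and the function names `traj_stats` and `kl_ergodic_measure` are all illustrative.

```python
import numpy as np

def traj_stats(s, traj, sigma=0.1):
    """Approximate time-averaged trajectory statistics q(s): the mean of a
    Gaussian kernel centered at each visited state. (Kernel form and width
    are assumptions made for illustration.)"""
    diff = s[:, None, :] - traj[None, :, :]              # (M, T, d)
    k = np.exp(-0.5 * np.sum(diff**2, axis=-1) / sigma**2)
    return k.mean(axis=1) + 1e-12                        # guard against log(0)

def kl_ergodic_measure(s, p_vals, traj, sigma=0.1):
    """Monte Carlo estimate of D_KL(p || q) from samples s ~ p with
    densities p_vals; lower values mean the trajectory's statistics
    better match the target distribution."""
    q = traj_stats(s, traj, sigma)
    return float(np.mean(np.log(p_vals + 1e-12) - np.log(q)))

# Usage: a 2-D target concentrated at (0.7, 0.3). A trajectory that dwells
# near the target scores a lower (better) measure than one that wanders
# uniformly over the unit square.
rng = np.random.default_rng(0)
mode, std = np.array([0.7, 0.3]), 0.1
s = rng.normal(mode, std, size=(500, 2))                 # samples from p
p_vals = (np.exp(-0.5 * np.sum((s - mode)**2, axis=1) / std**2)
          / (2 * np.pi * std**2))                        # Gaussian density at s
near = mode + 0.05 * rng.standard_normal((100, 2))      # exploring trajectory
far = rng.uniform(0.0, 1.0, size=(100, 2))              # wandering trajectory
print(kl_ergodic_measure(s, p_vals, near))               # smaller divergence
print(kl_ergodic_measure(s, p_vals, far))                # larger divergence
```

In the paper's setting, a measure of this kind is what the controller decreases while the equilibrium policy keeps the system stable; the sampling scheme here (drawing directly from p) is one simple way to estimate the divergence.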