This research explores the application of reinforcement learning (RL) to the motion control of bio-inspired robots across diverse environments. Focusing on underwater, terrestrial, and aerial robotic models, the study applies the model-free algorithms Q-learning, State-Action-Reward-State-Action (SARSA), Deep Q-Network (DQN), and Double Deep Q-Network (DDQN) to learn adaptive, efficient movement strategies without relying on a predefined environmental model. The methodologies range from real-time data capture via live camera feeds for terrestrial robots to simulation of aquatic and flight dynamics in controlled environments. Experimental results confirm the efficacy of these RL methods, showing marked improvements in the robots' ability to adapt to dynamic, unknown environments, optimize movement efficiency, and navigate complex scenarios. The findings point to promising directions for future robotic applications and underscore the need for further work on optimizing these algorithms and extending their real-world applicability. Overall, the study highlights the potential of RL to transform robotic motion control, making robots more versatile and capable in varied and unpredictable settings.
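To make the model-free setting concrete, the following is a minimal sketch of tabular Q-learning, the simplest of the algorithms named above. The toy one-dimensional "corridor" environment, the reward scheme, and all hyperparameter values are illustrative assumptions, not taken from the study; the update rule itself is the standard Q-learning bootstrap, which requires no model of the environment's dynamics.

```python
import random

# Toy environment (assumed for illustration): states 0..4 along a corridor,
# actions {0: left, 1: right}, reward 1 for reaching the goal state 4.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Move one cell left or right; return (next_state, reward, done)."""
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def greedy(state):
    """Greedy action with random tie-breaking."""
    best = max(Q[state])
    return random.choice([a for a in range(N_ACTIONS) if Q[state][a] == best])

random.seed(0)
for _ in range(500):                      # training episodes
    s = 0
    for _ in range(100):                  # safety cap on episode length
        # epsilon-greedy action selection
        a = random.randrange(N_ACTIONS) if random.random() < EPSILON else greedy(s)
        s2, r, done = step(s, a)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2
        if done:
            break

# Learned greedy policy: from every non-terminal state, move right toward the goal.
policy = [greedy(s) for s in range(N_STATES)]
```

SARSA differs from this sketch only in the update target (it bootstraps from the action actually taken next rather than the max), while DQN and DDQN replace the table `Q` with a neural-network approximator, which is what makes the approach scale to the continuous sensory inputs of the robots described above.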