This study investigates the integration of reinforcement learning (RL) with optimal control to enhance precision and energy efficiency in industrial robotic manipulation. A novel framework is proposed that combines Deep Deterministic Policy Gradient (DDPG) with a Linear Quadratic Regulator (LQR) controller, applied to the ABB IRB120, a six-degree-of-freedom (6-DOF) industrial manipulator, for pick-and-place tasks in warehouse automation. The methodology employs an actor–critic RL architecture with a 27-dimensional state input and a 6-dimensional joint action output. The RL agent was trained using MATLAB’s Reinforcement Learning Toolbox and integrated with ABB’s RobotStudio simulation environment via TCP/IP communication. LQR controllers were incorporated to optimize joint-space trajectory tracking, minimizing energy consumption while maintaining precise control. The novelty of this research lies in combining RL with LQR control to address energy efficiency and precision simultaneously, an area that has seen limited exploration in industrial robotics. Experimental validation across 100 diverse scenarios confirmed the framework’s effectiveness, achieving a mean positioning accuracy of 2.14 mm (a 28% improvement over traditional methods), a 92.5% success rate in pick-and-place tasks, and a 22.7% reduction in energy consumption. The system demonstrated stable convergence after 458 episodes and maintained a mean joint angle error of 4.30°, validating its robustness and efficiency. These findings highlight the potential of RL for broader industrial applications. The demonstrated accuracy and success rate suggest applicability to complex tasks such as electronic component assembly, multi-step manufacturing, delicate material handling, precision coordination, and quality inspection tasks like automated visual inspection, surface defect detection, and dimensional verification. Successful implementation in such contexts requires addressing challenges in task complexity, computational efficiency, and adaptability to process variability, alongside ensuring safety, reliability, and seamless system integration. This research builds on existing advances in warehouse automation, inverse kinematics, and energy-efficient robotics, contributing to the development of adaptive and sustainable control strategies for industrial manipulators in automated environments.
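To make the described setup concrete, the following MATLAB sketch shows how a DDPG agent over a 27-dimensional observation and 6-dimensional joint action could be defined with the Reinforcement Learning Toolbox, alongside an LQR gain for joint-space tracking. It is a minimal illustration under stated assumptions, not the authors' implementation: the step/reset functions are trivial placeholders standing in for the TCP/IP link to RobotStudio, the action limits are assumed to be normalized, and the double-integrator joint model and Q/R weights are assumptions chosen only for illustration.

```matlab
%% Hypothetical sketch: DDPG agent + LQR joint-space gain
% Observation/action dimensions follow the abstract (27-D state, 6-D joint action).
obsInfo = rlNumericSpec([27 1]);                       % 27-D state vector
actInfo = rlNumericSpec([6 1], ...                     % 6 joint commands,
    'LowerLimit', -1, 'UpperLimit', 1);                % assumed normalized range

% Placeholder environment; in the paper this role is played by RobotStudio over TCP/IP.
env = rlFunctionEnv(obsInfo, actInfo, @stepPlaceholder, @resetPlaceholder);

agent = rlDDPGAgent(obsInfo, actInfo);                 % default actor-critic networks

trainOpts = rlTrainingOptions( ...
    'MaxEpisodes', 500, ...                            % abstract reports convergence near 458 episodes
    'MaxStepsPerEpisode', 200, ...                     % assumed episode length
    'StopTrainingCriteria', 'AverageReward', ...
    'StopTrainingValue', 480, ...                      % assumed reward threshold
    'Plots', 'none', 'Verbose', false);
% trainStats = train(agent, env, trainOpts);           % uncomment to launch training

% LQR gain for joint-space tracking, using an assumed double-integrator model per
% joint (state x = [q; qdot], inertia-normalized torque input u).
n = 6;
A = [zeros(n) eye(n); zeros(n) zeros(n)];
B = [zeros(n); eye(n)];
Q = blkdiag(100*eye(n), eye(n));                       % assumed tracking weights
R = 0.1*eye(n);                                        % assumed effort weight
K = lqr(A, B, Q, R);                                   % apply as u = -K*(x - x_ref)

% --- Placeholder environment functions (stand-ins for the RobotStudio link) ---
function [obs, logged] = resetPlaceholder()
    obs = zeros(27, 1);
    logged = struct();
end

function [nextObs, reward, isDone, logged] = stepPlaceholder(action, logged)
    nextObs = zeros(27, 1);                            % would come from RobotStudio over TCP/IP
    reward  = -norm(action);                           % placeholder effort penalty
    isDone  = false;
end
```

In this sketch the RL agent supplies high-level joint commands while the LQR gain handles low-level joint-space tracking with a quadratic trade-off between tracking error and control effort, which is the kind of division of labor the abstract describes.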