Abstract

Industry 4.0 introduces modular stations and better communication between agents to improve manufacturing efficiency and to shorten the time between a customer order and the finished product. Among the novel mechanisms with high potential in this new industrial paradigm are cable-suspended parallel robots (CSPRs): their payload-to-mass ratio is high compared to their serial-robot counterparts, and their setup is fast compared to other types of parallel robots such as gantry systems, which are popular in the automotive industry but difficult to set up and to adapt when the production line changes. A CSPR can cover the workspace of a manufacturing hall and provide assistance to operators before they arrive at their workstation. One challenge is to generate the desired trajectories so that the CSPR can move to the desired area. Reinforcement Learning (RL) is a branch of Artificial Intelligence in which an agent interacts with an environment to maximize a reward function. This paper proposes the use of an RL algorithm called Soft Actor-Critic (SAC) to train a two-degree-of-freedom (DOF) CSPR to perform pick-and-place trajectories. Although pick-and-place trajectory generation based on artificial intelligence has been an active research topic for serial robots, this technique has yet to be applied to parallel robots.
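To illustrate the kind of setup the abstract describes, the sketch below shows a minimal planar two-cable CSPR environment and an SAC training call using the Gymnasium and Stable-Baselines3 libraries. This is not the paper's implementation: the point-mass dynamics, cable anchor positions, tension limits, and reward shaping are illustrative assumptions only.

```python
# Hypothetical sketch (not the paper's code): a planar two-cable CSPR modeled as a
# point-mass end-effector driven by cable tensions, trained with SAC.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from gymnasium.wrappers import TimeLimit
from stable_baselines3 import SAC


class PlanarCSPREnv(gym.Env):
    """Two-DOF end-effector suspended by two cables; actions are cable tensions."""

    def __init__(self):
        super().__init__()
        self.anchors = np.array([[-1.0, 2.0], [1.0, 2.0]])  # assumed cable exit points (m)
        self.mass, self.g, self.dt = 1.0, 9.81, 0.02
        self.goal = np.array([0.5, 1.0])                     # assumed place position (m)
        # Observation: end-effector position and velocity; action: two cable tensions (N).
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(4,), dtype=np.float32)
        self.action_space = spaces.Box(0.0, 50.0, shape=(2,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.pos = np.array([-0.5, 0.5])   # assumed pick position (m)
        self.vel = np.zeros(2)
        return self._obs(), {}

    def step(self, action):
        # Each tension pulls the end-effector along the unit vector toward its anchor;
        # gravity acts downward on the point mass.
        force = np.array([0.0, -self.mass * self.g])
        for anchor, tension in zip(self.anchors, action):
            direction = anchor - self.pos
            force += tension * direction / (np.linalg.norm(direction) + 1e-9)
        self.vel += (force / self.mass) * self.dt
        self.pos += self.vel * self.dt
        dist = np.linalg.norm(self.pos - self.goal)
        reward = -dist - 0.01 * np.linalg.norm(action)  # reach the goal with low effort
        terminated = bool(dist < 0.02)
        return self._obs(), reward, terminated, False, {}

    def _obs(self):
        return np.concatenate([self.pos, self.vel]).astype(np.float32)


if __name__ == "__main__":
    env = TimeLimit(PlanarCSPREnv(), max_episode_steps=400)
    model = SAC("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=10_000)  # short run for illustration only
```

In practice, a full pick-and-place task would also need the cable tension constraints and actuator dynamics of the real mechanism; the sketch only conveys the agent-environment loop that SAC optimizes.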
