Abstract

Automated guided vehicle (AGV) systems have been widely used in warehouses to improve productivity and reduce costs. For almost every warehouse, order picking is the most costly activity, and within order picking, the picker’s travel time is the dominant component. To eliminate this travel time, we have developed a picking system in which AGVs transport entire shelves containing the required items to the pickers instead of the pickers moving to the shelves, which improves picking efficiency. To minimize the pickers’ shelf waiting time, an intelligent AGV control method such as route planning is required. While some existing approaches use reinforcement learning for this, reinforcement learning often requires a hand-engineered, low-dimensional state representation, which discards some state information. In this paper, we present an AGV route planning method for an AGV picking system using deep reinforcement learning. This method takes raw, high-dimensional map information as input instead of a hand-engineered low-dimensional state representation, enabling the acquisition of a successful AGV route planning policy. We evaluated the validity of the proposed method using an AGV picking system simulator and found that it outperforms other route planning strategies, including our previous method.
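To make the idea of feeding raw map information directly to a deep reinforcement learning agent concrete, the sketch below shows a small convolutional Q-network over a grid-shaped warehouse map. This is not the authors' implementation: the channel layout, map size, network architecture, and five-action move set are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's method): a convolutional
# Q-network that consumes a raw grid map of the warehouse instead of a
# hand-engineered low-dimensional state vector.
import torch
import torch.nn as nn


class MapQNetwork(nn.Module):
    def __init__(self, map_channels: int = 3, map_size: int = 32, n_actions: int = 5):
        super().__init__()
        # Convolutional encoder over the raw map (hypothetical channels,
        # e.g. shelves/obstacles, AGV positions, picking-station locations).
        self.encoder = nn.Sequential(
            nn.Conv2d(map_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 * map_size * map_size, 256), nn.ReLU(),
            nn.Linear(256, n_actions),  # one Q-value per candidate move
        )

    def forward(self, map_obs: torch.Tensor) -> torch.Tensor:
        # map_obs: (batch, channels, height, width) raw map observation
        return self.head(self.encoder(map_obs))


if __name__ == "__main__":
    net = MapQNetwork()
    obs = torch.zeros(1, 3, 32, 32)          # dummy map observation
    greedy_action = net(obs).argmax(dim=1)   # choose the highest-Q move
    print(greedy_action.item())
```

In a setup like this, the network would be trained with a standard deep Q-learning loop against the picking simulator, with the reward shaped around the pickers' shelf waiting time; those training details are likewise assumptions here.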
