Abstract

This research focuses on providing an optimal dispatching algorithm for an automated guided vehicle (AGV) in a mobile metal board manufacturing facility. The target process comprises multiple computerized numerical control (CNC) machines and an AGV. The AGV feeds materials between two rows of CNC machines that process metal boards, or conveys work in process. Because deriving a mathematically optimal working order is difficult owing to the high computational cost, simple dispatching rules have typically been applied in such environments. However, these rules are generally not optimal, and expert knowledge is required to determine which rule to choose. To overcome some of these disadvantages and increase productivity, a deep reinforcement learning (RL) algorithm is used to learn the AGV's dispatching policy. The target production line is modeled as a virtual, simulated grid-shaped workspace in order to develop a deep Q-network (DQN)-based dispatching algorithm. A convolutional neural network (CNN) takes raw pixels as input and outputs a value function estimating future rewards, and an agent is trained to successfully learn the control policies. To create an elaborate dispatching strategy, the hyper-parameters of the DQN are tuned and a reasonable modeling method is determined experimentally. The proposed method automatically develops an optimal dispatching policy without requiring human control or prior expert knowledge. Compared with general heuristic dispatching rules, the results illustrate the improved performance of the proposed methodology.
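As a minimal illustration of the value-learning idea summarized above, the sketch below trains a tabular Q-learning agent that moves a toy AGV along a one-dimensional track toward a machine requesting material. All specifics (track length, goal cell, rewards) are hypothetical; the paper itself uses a convolutional DQN over raw pixels of a grid-shaped workspace, not this tabular simplification, but the Bellman-style update is the same principle.

```python
import random

# Toy stand-in for the dispatching setting (all constants are assumptions):
# an AGV on a 1-D track of N cells must reach the machine that requested
# material. The paper uses a CNN-based DQN over raw pixels; this tabular
# Q-learning sketch only illustrates the underlying value update.
N = 5                  # track length (assumed)
GOAL = 4               # cell of the requesting machine (assumed)
ACTIONS = (-1, +1)     # move left / move right

alpha, gamma, eps = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def step(s, a):
    """Deterministic transition with a small per-step cost (assumed)."""
    s2 = min(max(s + a, 0), N - 1)
    r = 1.0 if s2 == GOAL else -0.1    # reward for serving the machine
    return s2, r, s2 == GOAL

random.seed(0)
for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r, done = step(s, a)
        # Q-learning target: r + gamma * max_a' Q(s', a')
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# Greedy policy: in every non-terminal cell the AGV should head right,
# toward the requesting machine.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N)}
print(policy)
```

In the paper's full setting the state is an image of the whole workspace and the table is replaced by a CNN that generalizes across states, but the temporal-difference update shown in the comment is what the DQN optimizes as well.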
