Abstract

In design for additive manufacturing, an essential task is to determine the optimal build orientation of a part according to one or multiple factors. Most part orientation methods use heuristic search to select the optimal orientation from a large solution space. These search algorithms, however, can converge to local optima and waste considerable time on trial and error. The above issues could be addressed if there were an intelligent agent that knew the optimal search/rotation path for a given 3D model. A straightforward way to construct such an agent is reinforcement learning (RL). Adopting this idea moves the time-consuming online search in existing part orientation methods to an offline learning stage, potentially improving part orientation performance. This is a challenging research problem because the goal is to build an agent capable of rotating arbitrary 3D models, whereas RL agents frequently struggle to generalize to new scenarios. Therefore, this paper proposes a generalizable reinforcement learning (GRL) framework to train the agent, and a GPU-accelerated GRL benchmark to support the training, testing, and comparison of part orientation approaches. Experimental results demonstrate that the proposed part orientation method on average outperforms others in terms of effectiveness and efficiency. It is shown to have the potential to solve the local minima problems that arise in existing approaches, to swiftly discover the global (sub-)optimal solution (i.e., on average 2.62x to 229.00x faster than the random search algorithm), and to generalize beyond the environment in which it was trained.
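
To make the RL formulation concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of how part orientation could be cast as an RL environment: the state is the part's current rotation, the actions are small incremental rotations, and the reward is the negative of an orientation cost. The class name OrientationEnv, the 15-degree step size, and the toy cost function are all illustrative assumptions; a real setup would evaluate factors such as support volume or surface quality on the actual 3D model.

# Hypothetical sketch: build orientation framed as an RL environment.
# All names and parameters here are illustrative assumptions, not the
# paper's actual GRL framework or benchmark.

import numpy as np

class OrientationEnv:
    """Toy environment: rotate a part about the X/Y axes in fixed steps."""

    STEP_DEG = 15.0  # rotation increment per action (assumed)

    def __init__(self, cost_fn, max_steps=24):
        self.cost_fn = cost_fn          # maps (rx, ry) in degrees -> scalar cost
        self.max_steps = max_steps
        self.reset()

    def reset(self):
        self.rx, self.ry, self.t = 0.0, 0.0, 0
        return np.array([self.rx, self.ry], dtype=np.float32)

    def step(self, action):
        # Actions: 0/1 rotate about X by +/- STEP_DEG, 2/3 about Y, 4 stop.
        if action == 0: self.rx += self.STEP_DEG
        elif action == 1: self.rx -= self.STEP_DEG
        elif action == 2: self.ry += self.STEP_DEG
        elif action == 3: self.ry -= self.STEP_DEG
        self.t += 1
        done = (action == 4) or (self.t >= self.max_steps)
        reward = -self.cost_fn(self.rx, self.ry)   # lower cost = higher reward
        return np.array([self.rx, self.ry], dtype=np.float32), reward, done, {}


# Placeholder cost surface with several local minima, mimicking the
# local-optimum issue that heuristic orientation searches run into.
def toy_cost(rx, ry):
    r = np.radians([rx, ry])
    return float(2.0 + np.cos(3 * r[0]) + np.cos(2 * r[1]) + 0.01 * (r[0]**2 + r[1]**2))

env = OrientationEnv(toy_cost)
obs = env.reset()
obs, reward, done, _ = env.step(0)   # rotate +15 degrees about X
print(obs, reward, done)

In this framing, the expensive online search over orientations is replaced by training an agent (offline) to pick rotation actions that maximize the cumulative reward, which is the idea the abstract describes.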
