Abstract

The advancement of artificial intelligence and machine learning technologies has led to significant changes in work processes. Computer agents are applied not only to routine and repetitive jobs but also to highly complex tasks such as driving a car and steering a ship. Given sensory information about the environment, reinforcement learning allows agents to learn how to perform complex tasks by trial and error through interactions with that environment. To overcome issues such as limited and sparse training data, researchers are attempting to reuse previously learned knowledge in new task situations. In this paper, we investigate how feature extraction and fine-tuning methods can be combined to allow computer agents to perform transfer reinforcement learning more effectively and efficiently in the context of ship collision avoidance. Taking a computer simulation-based empirical approach, we first develop a ship collision avoidance gameplay environment by introducing the own ship, the target ships, and the base and target cases. A deep neural network comprising four convolutional layers and three fully connected layers is devised to capture work-process features through deep reinforcement learning. The case study results show that features do exist in work processes and that they can be captured and reused. The similarity between the source case and the target case is a key factor that determines how the feature extraction and fine-tuning methods should be combined to achieve effective task results and efficient learning.
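To make the described architecture and transfer setup concrete, the sketch below shows one way the four-convolutional-layer, three-fully-connected-layer Q-network and the two transfer options (pure feature extraction versus fine-tuning of the convolutional layers) might be written in PyTorch. All layer widths, kernel sizes, the 84x84 input resolution, and the number of discrete steering actions are illustrative assumptions, not values reported in the paper.

    import torch
    import torch.nn as nn

    class CollisionAvoidanceNet(nn.Module):
        """Sketch of a 4-conv / 3-FC Q-network for ship collision avoidance.
        Layer sizes and the 84x84 input resolution are assumptions for
        illustration only."""

        def __init__(self, in_channels: int = 4, num_actions: int = 5):
            super().__init__()
            # Four convolutional layers extract spatial features from the
            # stacked observation frames of the encounter situation.
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            )
            # Three fully connected layers map the extracted features to
            # Q-values over the discrete steering actions.
            # 64 * 5 * 5 = 1600 assumes an 84x84 input with the strides above.
            self.head = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 5 * 5, 512), nn.ReLU(),
                nn.Linear(512, 128), nn.ReLU(),
                nn.Linear(128, num_actions),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.head(self.features(x))

    def transfer(source_net: CollisionAvoidanceNet,
                 finetune_conv: bool) -> CollisionAvoidanceNet:
        """Reuse weights learned on the source case in a target case.
        finetune_conv=False freezes the convolutional layers (feature
        extraction); finetune_conv=True lets them be fine-tuned together
        with the fully connected head."""
        target_net = CollisionAvoidanceNet()
        target_net.load_state_dict(source_net.state_dict())
        for param in target_net.features.parameters():
            param.requires_grad = finetune_conv
        return target_net

In this sketch, choosing between the two transfer options reduces to the finetune_conv flag; in practice the choice would depend on how similar the target case is to the source case, which is the factor the study examines.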
