Abstract
Artificial agents have often been compared to humans in their ability to categorize images or play strategic games. However, comparisons between human and artificial agents are frequently based on overall performance on a particular task, not on the specifics of how each agent behaves. In this study, we directly compared human behaviour with a reinforcement learning (RL) model. Human participants and an RL agent navigated through different grid world environments containing high- and low-value targets. The artificial agent consisted of a deep neural network trained with RL to map pixel input of a 27x27 grid world onto cardinal directions, using an epsilon-greedy policy to maximize reward. Behaviour of both agents was evaluated under four different conditions. Results showed that both humans and RL agents consistently chose the higher reward over a lower reward, demonstrating an understanding of the task. Although both humans and RL agents weighed movement cost against reward, the artificial agent weighted movement cost more heavily, trading off effort against reward differently than humans did. We found that humans and RL agents both consider long-term rewards as they navigate through the world, yet unlike humans, the RL model completely disregards limitations on movement (e.g., the total number of moves available). Finally, we rotated pseudorandom grid arrangements to study how decisions change with visual differences. We unexpectedly found that the RL agent changed its behaviour under visual rotations, yet remained less variable than humans. Overall, the similarities between humans and the RL agent show that RL agents have the potential to be an adequate model of human behaviour. Additionally, the differences between human and RL agents suggest refinements to RL methods that may improve their performance. This research compares the human mind with artificial intelligence, creating opportunities for future innovation.
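As a rough sketch of the kind of agent described above, the snippet below combines a small network that maps a 27x27 pixel observation to values for the four cardinal directions with an epsilon-greedy action rule. The architecture, layer sizes, class names, and epsilon value are illustrative assumptions, not the exact model used in the study.

```python
# Minimal sketch (assumed details): a network mapping a 27x27 grid-world image
# to action values for the four cardinal directions, with epsilon-greedy
# action selection.
import random
import torch
import torch.nn as nn

N_ACTIONS = 4  # up, down, left, right


class GridWorldQNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                 # 1x27x27 pixel input -> 729 features
            nn.Linear(27 * 27, 128),      # hidden size is an arbitrary choice
            nn.ReLU(),
            nn.Linear(128, N_ACTIONS),    # one value per cardinal direction
        )

    def forward(self, x):
        return self.net(x)


def epsilon_greedy_action(q_net, observation, epsilon=0.1):
    """Pick a random action with probability epsilon, else the greedy one."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)          # explore
    with torch.no_grad():
        q_values = q_net(observation.unsqueeze(0))  # add batch dimension
    return int(q_values.argmax(dim=1).item())       # exploit


# Example: one action choice for a blank 27x27 observation.
agent = GridWorldQNet()
obs = torch.zeros(1, 27, 27)  # single-channel grid image
print(epsilon_greedy_action(agent, obs, epsilon=0.1))
```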
Highlights
How do humans and artificial agents make decisions in different environments? Reinforcement Learning (RL) is a branch of machine learning that optimizes rewards in different environments. We built a grid world of foraging tasks to be completed by human participants and used to train artificial agents, allowing us to compare the behaviour of the RL agent with that of humans.
Condition #2: How much are humans and artificial agents willing to trade off rewards for movement?
The RL agent weights movement cost more heavily than humans.
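To make the trade-off in Condition #2 concrete, the toy calculation below compares the net value of a distant high-value target with a nearby low-value target when each step carries a cost. The grid positions, reward magnitudes, and per-move cost are assumed numbers for illustration, not values from the study.

```python
# Toy illustration (assumed numbers, not the paper's task) of the
# reward-versus-movement trade-off probed in Condition #2.
HIGH_REWARD = 10
LOW_REWARD = 2
MOVE_COST = 0.5  # cost deducted per step taken


def net_reward(start, target, reward):
    """Reward minus the cost of walking (Manhattan distance) to the target."""
    steps = abs(start[0] - target[0]) + abs(start[1] - target[1])
    return reward - MOVE_COST * steps


start = (13, 13)       # centre of a 27x27 grid
far_high = (1, 1)      # distant high-value target
near_low = (13, 16)    # nearby low-value target

# An agent that weights movement cost heavily may prefer the nearby
# low-value target even though the distant target pays more.
print(net_reward(start, far_high, HIGH_REWARD))  # 10 - 0.5*24 = -2.0
print(net_reward(start, near_low, LOW_REWARD))   # 2  - 0.5*3  =  0.5
```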