Large Language Models (LLMs) excel at tasks ranging from translation to complex reasoning. For AI systems to assist people effectively, they must understand and predict human behavior and biases, yet it remains an open question whether LLMs can do so. This paper addresses this gap by leveraging the reasoning and generative capabilities of LLMs to predict human behavior in two sequential decision-making tasks. These tasks require balancing exploratory and exploitative actions and handling delayed feedback, both of which are essential for simulating real-life decision processes. We compare the performance of LLMs with a cognitive instance-based learning (IBL) model, which imitates human experiential decision-making. Our findings indicate that LLMs excel at rapidly incorporating feedback to enhance prediction accuracy. In contrast, the IBL model better accounts for human exploratory behaviors and effectively captures loss aversion bias, that is, the tendency to choose a sub-optimal goal with fewer step-cost penalties rather than exploring to find the optimal choice, even with limited experience. These results highlight the benefits of integrating LLMs with cognitive architectures, suggesting that this synergy could enhance the modeling and understanding of complex human decision-making patterns.
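For readers unfamiliar with instance-based learning, the sketch below illustrates the general IBL mechanism the comparison relies on: choices are made from blended values of previously experienced outcomes, weighted by ACT-R-style memory activation with decay and noise. This is a minimal illustration under standard IBLT assumptions; the toy two-option task, parameter values (`d`, `sigma`, `default_utility`), and class and function names are illustrative, not the paper's implementation.

```python
import math
import random
from collections import defaultdict

# Minimal sketch of an instance-based learning (IBL) chooser, assuming the
# standard IBLT formulation: instances are (option, outcome) experiences,
# activation follows the ACT-R form ln(sum (t_now - t_i)^-d) plus logistic
# noise, and options are ranked by blended (activation-weighted) value.
# All parameter values here are illustrative defaults, not fitted values.

class IBLAgent:
    def __init__(self, options, default_utility=10.0, d=0.5, sigma=0.25):
        self.options = options
        self.d = d                      # memory decay
        self.sigma = sigma              # activation noise scale
        self.t = 0                      # decision counter (model time)
        # memory: option -> outcome -> timestamps at which it was observed
        self.memory = {o: defaultdict(list) for o in options}
        for o in options:               # optimistic prepopulated instance
            self.memory[o][default_utility].append(0)

    def _activation(self, timestamps):
        base = math.log(sum((self.t - ts) ** (-self.d) for ts in timestamps))
        u = random.uniform(1e-6, 1 - 1e-6)
        return base + self.sigma * math.log((1 - u) / u)   # logistic noise

    def _blended_value(self, option):
        acts = {out: self._activation(ts) for out, ts in self.memory[option].items()}
        tau = self.sigma * math.sqrt(2)
        denom = sum(math.exp(a / tau) for a in acts.values())
        return sum(out * math.exp(a / tau) / denom for out, a in acts.items())

    def choose(self):
        self.t += 1
        return max(self.options, key=self._blended_value)

    def feedback(self, option, outcome):
        self.memory[option][outcome].append(self.t)

# Usage: a toy task where the "risky" option is better on average.
agent = IBLAgent(options=["safe", "risky"])
for _ in range(100):
    choice = agent.choose()
    reward = 3.0 if choice == "safe" else random.choice([0.0, 8.0])
    agent.feedback(choice, reward)
```

In this standard formulation, the optimistic prepopulated default utility is what drives early exploration, and the decay-plus-noise activation is what lets recent and frequent outcomes dominate later choices, the two properties contrasted with LLM predictions in the abstract.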