Robustly interacting with and navigating a dynamic environment has been a long-standing challenge in intelligent transportation systems. Autonomous agents can use models that mimic the human brain to learn how to respond to other participants' actions and to proactively adapt to the environment's dynamics. Modeling such brain-like learning procedures is challenging for several reasons, including stochasticity, multimodality, and unobservable intents. Active inference can be defined as the Bayesian modeling of the brain through a biologically plausible model of the agent. Its central idea rests on the free energy principle and the agent's prior preferences, enabling the agent to choose actions that lead to its preferred future observations. We introduce an exploring action-oriented model to address the inference complexity and to resolve the exploration–exploitation dilemma in unobserved environments. This is achieved by adapting active inference to an imitation learning approach and establishing a theoretical connection between the two. We present a multimodal self-awareness architecture for autonomous driving systems, in which the proposed techniques are evaluated on their ability to model proper driving behavior. Experimental results provide the basis for the intelligent driving system to make more human-like decisions and to improve the agent's performance in avoiding collisions.
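As a point of reference, and not necessarily the exact objective used in this work, active inference commonly scores a candidate policy $\pi$ at time $\tau$ by its expected free energy, whose standard decomposition combines a risk term (divergence of predicted observations from the prior preference $P(o_\tau)$) and an ambiguity term:

\[
G(\pi,\tau) \;=\; \underbrace{D_{\mathrm{KL}}\!\left[\,Q(o_\tau \mid \pi)\,\big\|\,P(o_\tau)\,\right]}_{\text{risk}} \;+\; \underbrace{\mathbb{E}_{Q(s_\tau \mid \pi)}\!\left[\,\mathcal{H}\!\left[P(o_\tau \mid s_\tau)\right]\,\right]}_{\text{ambiguity}},
\]

where $Q(s_\tau \mid \pi)$ and $Q(o_\tau \mid \pi)$ are the predicted hidden-state and observation distributions under policy $\pi$, and actions are typically sampled from a softmax over $-\sum_\tau G(\pi,\tau)$, so that the agent favors policies whose predicted observations match its preferences while avoiding ambiguous outcomes.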