Vehicular communications have advanced data exchange and real-time services in intelligent transportation systems by exploiting communication between vehicles and infrastructure. The emergence of Multi-access Edge Computing (MEC) has further elevated this field by placing distributed computing resources near vehicles for low-latency data processing and high-reliability communication. In this dynamic environment, effective resource allocation and task offloading are pivotal to ensuring high performance, low latency, and efficient use of network resources, thereby enhancing Quality of Service (QoS) and the overall driving experience and safety. This paper presents an enhanced vehicular network and offloading mechanism, introducing a resource management model with real-time allocation and load balancing. The proposed method integrates task prioritization, multi-agent collaboration, context-aware decision-making, and distributed learning to optimize network performance. The proposed algorithm initializes Q-networks and target networks, sets up an experience replay buffer, and configures agents with local state representations. Agents use an ε-greedy policy for action selection, update Q-values through experience replay, and prioritize tasks by urgency while sharing state information for collaborative decision-making. Simulation results demonstrate improved performance and efficiency in vehicular MEC networks compared to baseline and other well-known algorithms.
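The agent loop described above (Q-network and target network, experience replay, ε-greedy selection, urgency-based task ordering) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: a tabular Q-value map stands in for the Q-network, and the class/function names, hyperparameters, and the deadline-based priority rule are all assumptions for illustration.

```python
import random
from collections import deque, defaultdict

class EdgeAgent:
    """Sketch of one offloading agent: a tabular Q-map stands in for the
    Q-network, a deque serves as the experience replay buffer.
    All names, sizes, and defaults here are illustrative assumptions."""

    def __init__(self, actions, epsilon=0.1, alpha=0.5, gamma=0.9,
                 buffer_size=1000, batch_size=32):
        self.actions = actions               # e.g. ["local", "edge", "cloud"]
        self.q = defaultdict(float)          # Q[(state, action)]
        self.target_q = defaultdict(float)   # periodically synced target copy
        self.buffer = deque(maxlen=buffer_size)
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma
        self.batch_size = batch_size

    def act(self, state):
        # ε-greedy action selection: explore with probability ε,
        # otherwise pick the action with the highest Q-value.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def remember(self, state, action, reward, next_state):
        # Store the transition in the replay buffer.
        self.buffer.append((state, action, reward, next_state))

    def replay(self):
        # Sample past transitions and update Q-values toward the
        # target network's bootstrap estimate.
        if len(self.buffer) < self.batch_size:
            return
        for s, a, r, s2 in random.sample(self.buffer, self.batch_size):
            best_next = max(self.target_q[(s2, a2)] for a2 in self.actions)
            td_target = r + self.gamma * best_next
            self.q[(s, a)] += self.alpha * (td_target - self.q[(s, a)])

    def sync_target(self):
        # Periodically copy the learned values into the target network.
        self.target_q = self.q.copy()

def prioritize(tasks):
    """Order tasks by urgency (earliest deadline first) — a simple
    stand-in for the task-prioritization step."""
    return sorted(tasks, key=lambda t: t["deadline"])
```

In a full multi-agent setting, each edge node would run one such agent, exchange local state summaries with its neighbors before acting, and replace the tabular Q-map with a neural network trained on replayed minibatches.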