Conventional studies on vehicular edge computing frequently overlook the high-speed mobility of vehicles and the dynamic nature of the vehicular edge environment. Moreover, when deep reinforcement learning is applied to vehicular edge problems, insufficient attention is paid to the risk of the algorithm converging to a local optimum. This paper presents a content caching solution for vehicular edge cloud computing that integrates content prediction and deep reinforcement learning. To account for the rapid mobility of vehicles and the time-varying vehicular edge environment, we propose a content prediction model based on Informer; its predictions of the environment dynamics inform the caching of vehicle task content. Because policy decisions such as content updating, vehicle scheduling, and bandwidth allocation operate on different time scales, we formulate the problem as a dual time-scale Markov decision process. To mitigate the local-optimality issue of the A3C algorithm, we introduce an enhanced A3C algorithm that incorporates an ε-greedy strategy to encourage exploration; since a fixed exploration rate ε can limit performance, a dynamic baseline mechanism is proposed to update ε adaptively. Experimental results show that, compared with alternative content caching approaches, the proposed solution substantially reduces content access cost. To support further research, we have publicly released the source code and pre-trained models at https://github.com/JYAyyyyyy/Informer.git.
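The dynamic-ε mechanism described above can be illustrated with a minimal sketch. The abstract does not specify the exact baseline rule, so the class below is a hypothetical realization that assumes the baseline is a running average of episode returns: ε is raised when the latest return falls below the baseline (more exploration is needed) and decayed otherwise. All names and parameters (`eps_min`, `eps_max`, `step`, `momentum`) are illustrative, not taken from the paper.

```python
import random

class DynamicEpsilonGreedy:
    """ε-greedy action selection with a dynamically updated ε.

    Hypothetical sketch: a running-average return baseline is assumed;
    ε grows when the latest episode return falls below the baseline
    and decays when it exceeds it.
    """

    def __init__(self, eps=0.5, eps_min=0.01, eps_max=0.9,
                 step=0.05, momentum=0.9):
        self.eps = eps
        self.eps_min, self.eps_max = eps_min, eps_max
        self.step = step          # how much ε moves per update
        self.momentum = momentum  # smoothing factor for the baseline
        self.baseline = None      # running average of episode returns

    def select(self, greedy_action, n_actions, rng=random):
        # With probability ε pick a random action, otherwise the
        # action proposed by the (actor) policy.
        if rng.random() < self.eps:
            return rng.randrange(n_actions)
        return greedy_action

    def update(self, episode_return):
        # Update the running baseline, then move ε against performance:
        # below-baseline returns raise ε, above-baseline returns lower it.
        if self.baseline is None:
            self.baseline = episode_return
            return
        self.baseline = (self.momentum * self.baseline
                         + (1 - self.momentum) * episode_return)
        if episode_return < self.baseline:
            self.eps = min(self.eps_max, self.eps + self.step)
        else:
            self.eps = max(self.eps_min, self.eps - self.step)
```

In an A3C-style training loop, each worker would call `select` when choosing actions and `update` at the end of every episode, so that poor recent returns automatically push the policy back toward exploration instead of letting it settle into a local optimum.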