The rapid advancement of autonomous vehicle technology over the past decade has significantly increased the complexity of intelligent transportation systems. This complexity is reflected in the demand for high-performance computing in autonomous vehicles, a demand that the integration of artificial intelligence, machine learning, and big data analytics has come to meet. Real-time data processing, sensor fusion, and decision-making all require substantial CPU and GPU power. We investigate how CPUs and GPUs perform in autonomous driving scenarios, concluding that the two are complementary, and we describe where each is best used within a heterogeneous computing architecture. The research examines how these processors can best be deployed under the demands of real-time processing, reliability, and efficiency. The study aims to better understand techniques for improving CPU-GPU collaboration and applies these findings to performance-intensive tasks such as image recognition and trajectory construction. The technical work runs real-world driving scenarios on three types of CPU and several GPUs, measuring key metrics such as processing latency, power consumption, and accuracy. The experimental results show the advantages of GPU parallel computing for deep learning, while the CPU retains a clear advantage in multitasking and logical computation. These findings are important for the implementation of future autonomous driving systems, highlighting heterogeneous computing architectures as a means to optimize and support safe, effective vehicle operation.