Transportation big data (TBD) are increasingly combined with artificial intelligence to mine novel patterns and information, owing to the powerful representational capability of deep neural networks (DNNs), especially for anti-COVID-19 applications. The distributed cloud-edge-vehicle training architecture has been applied to accelerate DNN training while ensuring low latency and high privacy for TBD processing. However, the multiple intelligent devices (e.g., intelligent vehicles and edge computing chips at base stations) and different networks in intelligent transportation systems lead to computing-power and communication heterogeneity among distributed nodes. Existing parallel training mechanisms perform poorly on heterogeneous cloud-edge-vehicle clusters. The synchronous parallel mechanism forces fast workers to wait for the slowest worker at each synchronization point, wasting their computing power. The asynchronous mechanism suffers from communication bottlenecks and can exacerbate the straggler problem, increasing the number of training iterations and even causing convergence to incorrect solutions. In this paper, we introduce a distributed training framework, Heter-Train. First, we propose a communication-efficient semi-asynchronous parallel mechanism (SAP-SGD), which exploits the acceleration that asynchronous strategies offer on heterogeneous clusters while constraining the straggler problem through global interval synchronization. Second, considering the differences in node bandwidth, we design a solution for heterogeneous communication. Moreover, a novel weighted aggregation strategy is proposed to aggregate model parameters of different versions. Finally, experimental results show that our proposed strategy achieves up to $6.74\times$ speedup in training time, with almost no decrease in accuracy.
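The abstract does not give SAP-SGD's exact update rule, so the following is only a minimal sketch of the general idea in Python/NumPy: workers push updates at heterogeneous rates, each push is down-weighted by its staleness (an assumed 1/(1+staleness) rule, not necessarily the paper's weighting), and a global synchronization every fixed interval bounds how stale any worker can become. All constants and names here (SYNC_INTERVAL, push_period, the toy objective) are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of semi-asynchronous SGD with global interval
# synchronization and staleness-weighted aggregation, simulated serially.
# The weighting rule and constants are assumptions, not the paper's exact method.

NUM_WORKERS = 4        # heterogeneous workers (e.g., vehicles, edge chips)
SYNC_INTERVAL = 10     # global synchronization every SYNC_INTERVAL steps
LEARNING_RATE = 0.1
DIM = 5                # toy model dimension
STEPS = 100

rng = np.random.default_rng(0)
target = rng.normal(size=DIM)   # optimum of the toy quadratic objective

def gradient(w):
    """Gradient of the toy objective f(w) = 0.5 * ||w - target||^2."""
    return w - target

# Server state: global model, each worker's last-pulled model copy,
# and the global version that copy corresponds to.
global_w = np.zeros(DIM)
worker_w = [global_w.copy() for _ in range(NUM_WORKERS)]
worker_version = [0] * NUM_WORKERS
global_version = 0

# Simulated heterogeneity: slower workers push updates less often.
push_period = [1, 2, 3, 5]

for step in range(1, STEPS + 1):
    for k in range(NUM_WORKERS):
        if step % push_period[k] != 0:
            continue  # this (slower) worker has not finished a local step yet
        # Asynchronous phase: apply the worker's gradient immediately,
        # down-weighted by staleness (assumed 1/(1+staleness) rule).
        staleness = global_version - worker_version[k]
        weight = 1.0 / (1.0 + staleness)
        global_w -= LEARNING_RATE * weight * gradient(worker_w[k])
        global_version += 1
        # The worker pulls the fresh model and records its version.
        worker_w[k] = global_w.copy()
        worker_version[k] = global_version

    if step % SYNC_INTERVAL == 0:
        # Global interval synchronization: align every worker to the
        # current global model, bounding the maximum possible staleness.
        for k in range(NUM_WORKERS):
            worker_w[k] = global_w.copy()
            worker_version[k] = global_version

print("distance to optimum:", np.linalg.norm(global_w - target))
```

The interval synchronization is the key trade-off: between barriers, fast workers proceed asynchronously and never idle behind stragglers, while the periodic global alignment caps staleness so that stale updates cannot accumulate and derail convergence.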