Existing video-based traffic parameter extraction methods rely heavily on manual annotation, and a single viewing perspective cannot effectively correct the deviations that vehicles produce while driving dynamically through the scene. To address these problems, a multi-perspective video traffic parameter extraction method based on segmenting and crossing lane dividing lines is proposed. The method consists of an automatic annotation point generation module and a multi-perspective correction module. The automatic annotation point generation module generates annotation points automatically by constructing reference blocks from equal-length lane dividing line segments. The multi-perspective correction module introduces multiple mapping schemes between vehicles and lane dividing lines, together with a corrected speed measurement method based on the probability density function of the average speed, to compensate for the two types of deviation that vehicles produce during dynamic driving. Experimental results on public and field-collected data sets show that the proposed method extracts vehicle speed more accurately than other speed measurement methods and generalizes reasonably well.
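The abstract does not specify how the average-speed probability density function is used for correction. The following is only a minimal sketch of the general idea of density-based speed correction, assuming per-segment speeds are obtained from crossings of equal-length lane-dividing-line segments and that the corrected speed is taken as the mode of a kernel-density estimate; the segment length, timestamps, and function names are hypothetical and do not come from the paper.

```python
# Illustrative sketch only, not the paper's algorithm: estimate a corrected speed
# as the most probable value under a density fitted to per-segment speeds.
import numpy as np
from scipy.stats import gaussian_kde

def segment_speeds(segment_length_m, crossing_times_s):
    """Speeds over consecutive equal-length segments, from crossing timestamps."""
    dt = np.diff(np.asarray(crossing_times_s, dtype=float))
    return segment_length_m / dt  # m/s for each segment

def corrected_speed(speeds_mps):
    """Take the mode of a KDE of the observed segment speeds as the corrected value."""
    kde = gaussian_kde(speeds_mps)
    grid = np.linspace(speeds_mps.min(), speeds_mps.max(), 512)
    return grid[np.argmax(kde(grid))]

if __name__ == "__main__":
    # Hypothetical example: a 9 m marking segment crossed at these timestamps (s).
    times = [0.0, 0.62, 1.21, 1.84, 2.43, 3.05]
    v = segment_speeds(9.0, times)
    print(f"raw mean speed:       {v.mean() * 3.6:.1f} km/h")
    print(f"corrected (KDE mode): {corrected_speed(v) * 3.6:.1f} km/h")
```

Taking the density mode rather than the raw mean is one plausible way to suppress outlier segment speeds caused by detection jitter; the paper's actual correction may differ.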