Abstract

Freeway congestion monitoring can rely either on sampling-based methods, such as probe vehicle runs, or on continuous data from loop detector infrastructure. Sample size, in terms of the number of days sampled, affects the accuracy of sampling-based methods; detector spacing, or detector density, affects the accuracy of the detector-based method. This paper presents an empirical model of the effect of these two parameters—sample size and detector spacing—on the accuracy of both methods in estimating the annual average of three congestion parameters: total delay, average duration of congestion, and average spatial extent of congestion. The model is developed with data from four urban freeway corridors in California. Among other conclusions, the model predicts that measuring the congestion parameters with 10% error requires 4 to 6 days' worth of good probe vehicle data or loop detector data with half-mile spacing. The proposed model facilitates comparison of the two alternatives with regard to the cost of achieving the same target accuracy. The result can also serve as a guide for determining the sample size or detector spacing when planning new congestion monitoring.
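
The sketch below is a hypothetical illustration (not the paper's empirical model) of the kind of relationship the abstract describes: how the relative error of an annual-average congestion estimate shrinks as more days are sampled. The daily delay distribution and the helper `median_relative_error` are invented for demonstration only.

```python
import numpy as np

# Hypothetical illustration of sample size vs. estimation error.
# The paper's model is fit to real data from four California freeway
# corridors; the synthetic daily-delay values here are assumptions.
rng = np.random.default_rng(0)

# Simulated "ground truth": total delay (veh-hrs) for 250 weekdays.
daily_delay = rng.lognormal(mean=7.0, sigma=0.5, size=250)
annual_avg = daily_delay.mean()

def median_relative_error(sample_days, trials=5000):
    """Median |sample mean - annual mean| / annual mean over random day samples."""
    errors = []
    for _ in range(trials):
        sample = rng.choice(daily_delay, size=sample_days, replace=False)
        errors.append(abs(sample.mean() - annual_avg) / annual_avg)
    return float(np.median(errors))

for n in (2, 4, 6, 10, 20):
    print(f"{n:2d} sampled days -> median relative error "
          f"{median_relative_error(n):.1%}")
```

Under these assumed inputs, the printed errors fall as the number of sampled days grows, which is the qualitative trade-off the paper quantifies empirically for both probe vehicle sampling and detector spacing.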
