It is known that simulation of the mean position of a Reflected Random Walk (RRW) $\{W_n\}$ exhibits non-standard behavior, even for light-tailed increment distributions with negative drift. The Large Deviation Principle (LDP) holds for deviations below the mean, but for deviations above the mean at the usual speed the rate function is zero. This paper takes a deeper look at this phenomenon. Conditional on a large sample mean, a complete sample-path LDP analysis is obtained. Let $I$ denote the rate function for the one-dimensional increment process. If $I$ is coercive, then, given a large simulated mean position, our results imply that under general conditions the most likely asymptotic behavior $\psi^*$ of the paths $n^{-1} W_{\lfloor tn \rfloor}$ is to be zero outside an interval $[T_0, T_1] \subset [0,1]$ and to satisfy the functional equation $\nabla I\bigl(\tfrac{d}{dt}\psi^*(t)\bigr) = \lambda^*(T_1 - t)$ whenever $\psi^*(t) \neq 0$. If $I$ is non-coercive, a similar but slightly more involved result holds. These results prove, in broad generality, that Monte Carlo estimates of the steady-state mean position of an RRW have a high likelihood of over-estimation. This has serious implications for the performance evaluation of queueing systems by simulation, where the steady-state expected queue length and waiting time are key performance metrics. The results show that naïve simulation estimates of these quantities are highly likely to be conservative.
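A minimal worked example may help make the functional equation concrete. The Gaussian increment law, the zero boundary values of $\psi^*$, and the sign of $\lambda^*$ below are illustrative assumptions for this sketch and are not taken from the abstract.

```latex
% Minimal illustrative sketch, not from the paper: Gaussian increments are assumed
% purely to make the functional equation explicit.
\documentclass{article}
\usepackage{amsmath}
\begin{document}

For increments $X_k \sim \mathcal{N}(\mu, \sigma^2)$ with $\mu < 0$, Cram\'er's theorem gives
\[
  I(x) = \frac{(x - \mu)^2}{2\sigma^2},
  \qquad
  \nabla I(x) = \frac{x - \mu}{\sigma^2},
\]
so on $[T_0, T_1]$ the equation
$\nabla I\bigl(\tfrac{d}{dt}\psi^*(t)\bigr) = \lambda^*(T_1 - t)$ becomes
\[
  \frac{d}{dt}\psi^*(t) = \mu + \sigma^2 \lambda^* (T_1 - t).
\]
Integrating from $T_0$ with $\psi^*(T_0) = 0$,
\[
  \psi^*(t)
  = \mu\,(t - T_0)
    + \sigma^2 \lambda^* \Bigl[ T_1 (t - T_0) - \tfrac{1}{2}\bigl(t^2 - T_0^2\bigr) \Bigr],
  \qquad t \in [T_0, T_1],
\]
which for $\lambda^* > 0$ is a concave parabolic arc; it returns to zero at $t = T_1$
exactly when $\lambda^* = -2\mu / \bigl(\sigma^2 (T_1 - T_0)\bigr)$.

\end{document}
```

In this sketch the remaining constants $T_0$ and $T_1$ (and hence $\lambda^*$) would be pinned down by the value of the conditioned sample mean.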