Abstract

In sequential paging (SP) schemes, the paging process is handled on a per-user basis. When an incoming call arrives for a mobile terminal (MT), the associated location area is divided into several paging areas (PAs), and the PAs are paged one by one until the MT is found. Although SP algorithms reduce the paging cost compared to blanket paging (BP), they introduce extra, often unnecessary, delay and are therefore inefficient. We present a pipeline paging (PP) scheme in which multiple paging requests (PRs) can be served in a pipeline manner across different paging areas. We study the proposed scheme via extensive simulations in terms of discovery rate, total delay, and cost under different traffic loads. Our study shows that the PP scheme outperforms both the BP and SP schemes in terms of discovery rate and total delay, while maintaining a cost similar to that of the SP scheme. The study also shows that when the paging delay constraint D is as large as 6, the PP scheme achieves almost 171% of BP's discovery rate at only 58% of BP's cost, whereas SP's discovery rate falls far below that of the BP scheme.
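To make the comparison concrete, the Python sketch below simulates a highly simplified slot-based paging model: one paging message per PA per round, Poisson arrivals, and an MT located uniformly at random among the PAs. The number of PAs, the arrival rate, the delay bound, and the PP admission rule (one new paging request entering the first PA each round) are illustrative assumptions, not the model or parameters used in the paper; in this toy setting PP pays roughly the same per-request polling cost as SP while avoiding most of SP's queueing delay, which is the qualitative behaviour the abstract describes.

# Toy slot-based comparison of blanket (BP), sequential (SP), and pipeline (PP)
# paging. All parameters and the PP admission rule are illustrative assumptions,
# not the exact model evaluated in the paper.
import math
import random
from collections import deque

N_PA = 4        # paging areas per location area (assumed)
LAMBDA = 0.3    # mean paging-request arrivals per paging round (assumed)
D = 6           # delay constraint in paging rounds (assumed)
ROUNDS = 20000  # number of simulated rounds (assumed)


def poisson(lam):
    # Knuth's Poisson sampler (avoids a numpy dependency).
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1


def simulate(scheme):
    random.seed(1)                 # same arrival sequence for every scheme
    queue = deque()                # waiting PRs: (arrival_round, true_pa)
    in_service = []                # active PRs: [arrival_round, true_pa, next_pa]
    delays, cost = [], 0

    for t in range(ROUNDS):
        # Paging requests arriving this round; each MT sits in a uniform random PA.
        for _ in range(poisson(LAMBDA)):
            queue.append((t, random.randrange(N_PA)))

        if scheme == "BP":
            # Blanket paging: serve one PR per round, polling every PA at once.
            if queue:
                arr, pa = queue.popleft()
                cost += N_PA
                delays.append(t - arr + 1)
            continue

        # SP admits a new PR only after the previous one finishes; PP admits one
        # new PR per round into PA 0, so concurrent PRs occupy different PAs
        # (each advances one PA per round and they never collide).
        if queue and (scheme == "PP" or not in_service):
            arr, pa = queue.popleft()
            in_service.append([arr, pa, 0])

        # Every active PR polls its current PA this round.
        still_active = []
        for arr, pa, stage in in_service:
            cost += 1
            if stage == pa:                      # the MT answered the page
                delays.append(t - arr + 1)
            else:
                still_active.append([arr, pa, stage + 1])
        in_service = still_active

    n = len(delays)
    discovery = sum(d <= D for d in delays) / n
    return discovery, sum(delays) / n, cost / n


for s in ("BP", "SP", "PP"):
    rate, dly, cst = simulate(s)
    print(f"{s}: discovery(D={D})={rate:.2f}  mean delay={dly:.2f}  cost/PR={cst:.2f}")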
