Abstract

Modern computers with hard-disk storage and networks with dynamic spectrum access illustrate systems having a resource R that allows fragmented allocations. We model R as a sequence of M > 1 units for which requests are made by items in a FIFO queue; each request is for a desired amount of R, the item size, and the residence time during which it is needed. So long as there are enough currently available units of R, an item at the head of the queue can be divided into fragments accommodated by the gaps in R formed by these units. Under the key assumption that allocations given to items cannot be changed prior to their departures, fragmentation in the form of alternating gaps and allocated resource builds up randomly as items come and go. The improvements in resource utilization created by fragmentation come at a processing cost, so how fragmentation evolves is an important performance issue. We define a probability model specifying distributions for item sizes and residence times, and then analyze the system operating at capacity. If M is much larger than the maximum item size, then as the fragmentation process approaches equilibrium, nearly all of the allocated items are completely fragmented, i.e., the occupied units are mutually disjoint. In a suite of four theorems, we show how this result specializes for certain classes of item-size distributions. However, as a counterpoint to these rather discouraging results, we present the findings of extensive experiments, which show that the delays in reaching the inefficient states of nearly complete fragmentation can be surprisingly long, and hence that even moderately fragmented states are usually of no concern.
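
To make the dynamics concrete, the following Python sketch simulates one plausible instance of the model described above. The specific distributions (uniform item sizes, unit-rate exponential residence times), the first-fit fragmentation rule, and the names simulate and fragments are illustrative assumptions for this sketch, not details taken from the paper; the only properties carried over from the abstract are that the system runs at capacity, allocations may be split across gaps, and an allocation is frozen until its item departs.

```python
import heapq
import random

def fragments(units):
    """Count maximal runs of consecutive units in a sorted allocation.

    An item whose allocation is completely fragmented (occupied units
    mutually non-adjacent) has fragments(units) == len(units).
    """
    return 1 + sum(b - a > 1 for a, b in zip(units, units[1:]))

def simulate(M=512, max_size=8, arrivals=50_000, seed=1):
    """Illustrative at-capacity simulation; all parameters are assumptions."""
    rng = random.Random(seed)
    free = set(range(M))    # currently available units of R
    departures = []         # min-heap of (departure_time, allocated_units)
    t = 0.0
    frag_total = 0
    for _ in range(arrivals):
        size = rng.randint(1, max_size)   # item size (assumed uniform)
        # operating at capacity: the head item waits only for departures
        while len(free) < size:
            t, units = heapq.heappop(departures)
            free.update(units)
        # first-fit: fragment the item across the lowest-numbered free
        # units; the allocation is then frozen until the item departs
        units = sorted(free)[:size]
        free.difference_update(units)
        heapq.heappush(departures, (t + rng.expovariate(1.0), units))
        frag_total += fragments(units)
    return frag_total / arrivals

print(f"mean fragments per item: {simulate():.2f}")
```

Tracking the mean number of fragments per item over long runs is one simple way to observe the slow drift toward the heavily fragmented equilibrium states that the paper's theorems describe, and to see how long moderately fragmented states can persist before that drift matters.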
