Abstract

The path computation element (PCE) framework provides functions and protocol extensions to address the computation of paths that span multiple routing areas or administrative domains in support of traffic engineering (TE) in multi-protocol label switching (MPLS) or generalized MPLS (GMPLS) networks. A PCE node typically resides at the domain or area border and is capable of computing optimal or diverse TE label switched paths (LSPs), and of providing dynamic inter-layer resource optimization (e.g., between the optical and packet layers) for the network's primary and backup capacity. A path computation request for an inter-area or inter-domain TE LSP can be handled either by a centralized PCE instance within a domain that has TE visibility over all of the other areas/domains, or in a distributed way among multiple PCEs, one responsible for each domain. In the latter case, PCE-based path computation relies on more than one PCE to compute the overall end-to-end path. When a PCE is unable to compute the full end-to-end path, it must select a downstream PCE node and forward the computation request to it. The downstream PCE selection process is a major factor in the overall time taken to compute the full end-to-end path. Typically, routing information, such as reachability to the destination announced by area border routers or autonomous system border routers, is used to generate a set of candidate PCEs capable of further processing the path computation request. However, among the set of candidate PCEs, the decision to elect a particular PCE and forward the path computation request to it can significantly affect the overall end-to-end path computation response time, and hence the overall time to signal the inter-area (or inter-domain) TE LSP.

A number of schemes can be considered for electing a preferred PCE from a set of candidates; in this paper, we present three: a selection scheme using round-robin scheduling, a least-response-delay selection, and an adaptive approach based on the individual path computation response times received from each of the candidate PCEs. The first scheme distributes requests locally in a round-robin fashion among the PCEs that are capable of progressing the path computation process, so requests from a given source can be assumed to be spread evenly among the available candidate PCEs. This scheme, however, does not guarantee global request balancing among all candidate PCEs and can therefore leave some PCEs overloaded with large request queues, increasing the overall path computation response delay. The second scheme assumes that the request originator maintains a performance measure, for example the average path computation response time, for each candidate PCE; the requestor then always picks the PCE with the least response time. This scheme achieves comparatively better request load balancing among PCEs, but it may still overload some PCEs relative to others because all local requests are directed to the PCE with the lowest response time. The third scheme also maintains an average response time for each candidate PCE, but partitions the requests arriving at the source among the candidate PCEs according to the ratio of the recorded average response times. We believe this scheme results in improved load balancing of path computation requests among the candidate PCEs, and hence minimizes the overall path computation time of inter-area or inter-domain LSPs.
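The following is a minimal sketch of the three candidate-PCE selection strategies described above, not the paper's implementation. The class name (PceSelector), the use of an exponentially weighted moving average to track response times, and the weighting of the adaptive scheme (requests split so that a PCE with a lower average response time receives a larger share) are illustrative assumptions; the paper's exact bookkeeping and weighting may differ.

```python
import itertools
import random


class PceSelector:
    """Illustrative downstream-PCE selection strategies (hypothetical sketch)."""

    def __init__(self, candidate_pces):
        self.candidates = list(candidate_pces)
        # Average path computation response time per candidate (seconds),
        # seeded with an arbitrary neutral value.
        self.avg_response = {pce: 1.0 for pce in self.candidates}
        self._rr = itertools.cycle(self.candidates)

    def record_response(self, pce, response_time, alpha=0.2):
        # Update the average with an exponentially weighted moving average
        # of the observed path computation response times (assumed smoothing).
        self.avg_response[pce] = (1 - alpha) * self.avg_response[pce] + alpha * response_time

    def select_round_robin(self):
        # Scheme 1: cycle through the candidates, spreading local requests evenly.
        return next(self._rr)

    def select_least_response(self):
        # Scheme 2: always pick the candidate with the lowest average response time.
        return min(self.candidates, key=lambda pce: self.avg_response[pce])

    def select_adaptive(self):
        # Scheme 3: partition requests among candidates according to their
        # recorded average response times; here a faster PCE gets a larger
        # share (weight = 1 / average response time).
        weights = [1.0 / self.avg_response[pce] for pce in self.candidates]
        return random.choices(self.candidates, weights=weights, k=1)[0]


if __name__ == "__main__":
    selector = PceSelector(["PCE-A", "PCE-B", "PCE-C"])
    selector.record_response("PCE-A", 0.8)
    selector.record_response("PCE-B", 2.5)
    selector.record_response("PCE-C", 1.2)
    print(selector.select_round_robin())
    print(selector.select_least_response())
    print(selector.select_adaptive())
```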
