Abstract

This work introduces and evaluates a technique for speedy packet lookups, called SPAL, in high-performance routers, realized by fragmenting the BGP routing table into subsets. Such a router contains multiple line cards (LCs), each equipped with a forwarding engine (FE) that performs table lookups locally on its own forwarding table (one fragmented subset). The number of table entries per FE drops as the number of LCs in the router grows; this reduction in forwarding-table size drastically lowers the amount of SRAM (e.g., L3 data cache) required in each LC to hold the trie constructed by the matching algorithm. SPAL caches the lookup result of a given IP address at its home LC (denoted LC_ho) in the LR-cache, so that the result can quickly satisfy subsequent lookup requests for the same address, not only from LC_ho but also from other LCs, provided the switching fabric interconnecting the LCs has low latency. Lookup results obtained from remote LCs are also held in the local LC's LR-cache. Our trace-driven simulation reveals that SPAL indeed yields substantial improvement in mean lookup performance. SPAL may also shorten the worst-case lookup time (thanks to fewer memory accesses during the longest-prefix-matching search) compared with a current router that does not partition its routing table. Because SPAL selects the partitioning bits without regard to any specific traffic, it promises good scalability and a small mean lookup time per packet.
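The sketch below is a minimal, hypothetical illustration (not the authors' code) of the two ideas the abstract describes: a traffic-independent mapping from a destination address to its home LC via fixed partitioning bits, and an LR-cache that holds lookup results both at the home LC and at the requesting LC. The names NUM_LCS, PARTITION_BITS, FwdSubset, home_lc, and lookup are assumptions for illustration; the paper's actual bit selection, cache policy, and prefix-partitioning rules may differ.

```python
NUM_LCS = 4          # line cards in the router (a power of two in this sketch)
PARTITION_BITS = 2   # destination-address bits used to pick the home LC


class FwdSubset:
    """One LC's forwarding-table subset: (prefix_value, prefix_len, next_hop) routes."""
    def __init__(self, routes):
        self.routes = routes

    def longest_prefix_match(self, dest_ip):
        # Naive LPM over the subset; a real FE would walk a trie held in SRAM.
        best_len, best_hop = -1, None
        for prefix, plen, hop in self.routes:
            if plen > best_len and (dest_ip >> (32 - plen)) == (prefix >> (32 - plen)):
                best_len, best_hop = plen, hop
        return best_hop


def home_lc(dest_ip):
    """Traffic-independent mapping: the top PARTITION_BITS select the home LC."""
    return (dest_ip >> (32 - PARTITION_BITS)) % NUM_LCS


def lookup(dest_ip, local_lc, lr_caches, subsets):
    """Resolve the next hop for dest_ip from line card local_lc."""
    # Fast path: the local LR-cache may already hold the result,
    # whether it was produced locally or fetched from a remote LC earlier.
    hit = lr_caches[local_lc].get(dest_ip)
    if hit is not None:
        return hit

    # Otherwise the home LC answers: it runs LPM on its smaller subset only
    # if its own LR-cache misses, then caches the result for future requests.
    ho = home_lc(dest_ip)
    result = lr_caches[ho].get(dest_ip)
    if result is None:
        result = subsets[ho].longest_prefix_match(dest_ip)
        lr_caches[ho][dest_ip] = result

    # The requesting LC also caches remote results locally.
    if ho != local_lc:
        lr_caches[local_lc][dest_ip] = result
    return result


if __name__ == "__main__":
    # Toy routes: 192.168.0.0/16 -> eth1, 10.0.0.0/8 -> eth2.
    routes = [(0xC0A80000, 16, "eth1"), (0x0A000000, 8, "eth2")]
    # Partition prefixes among LCs by the same bits used for home_lc();
    # this simple rule assumes every prefix is at least PARTITION_BITS long.
    per_lc = [[] for _ in range(NUM_LCS)]
    for r in routes:
        per_lc[home_lc(r[0])].append(r)
    subsets = [FwdSubset(rs) for rs in per_lc]
    lr_caches = [dict() for _ in range(NUM_LCS)]
    print(lookup(0xC0A80101, local_lc=1, lr_caches=lr_caches, subsets=subsets))  # eth1
```

In this toy setup, the first lookup for 192.168.1.1 issued from LC 1 is resolved by its home LC (LC 3, since the top two address bits are 11) and cached at both LCs, so repeated lookups from any LC hit an LR-cache without another longest-prefix-matching search.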
