Abstract

A hardware-assisted design, dubbed the cache-oriented multistage structure (COMS), is proposed for fast packet forwarding. COMS incorporates small on-chip cache memory in the constituent switching elements (SEs) of the multistage interconnect that a parallel router uses to connect its line cards (LCs) and forwarding engines (FEs, where table lookups are performed). Each lookup result in COMS is cached in a series of SEs between the FE that performs the lookup and the LC where the lookup request originates. The cached lookup results satisfy subsequent lookup requests for identical addresses immediately, without resorting to the FEs for time-consuming lookups, thereby greatly reducing the mean lookup time. COMS calls for partitioning the set of prefixes in a routing table into subsets of roughly equal size, so that each FE holds only a small fraction of the table. This yields substantial savings in the SRAM required in each FE to hold its forwarding table, and the total SRAM saved across a parallel router far exceeds the amount of SRAM employed in all SEs of COMS combined. A COMS-based router of size 16 exhibits mean packet forwarding over 10 times faster than a comparable router without caching or table partitioning. The worst-case lookup time in COMS depends on the matching algorithm employed in the FEs and can often be shorter than that of a comparable router. With its ability to forward packets swiftly, COMS is ideally suited to the new generation of parallel routers.
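The core mechanism described above — lookup requests probing a chain of SE caches before falling back to an FE that holds only its partition of the prefix table, with the result cached along the return path — can be illustrated with a minimal sketch. All class and function names here (`SE`, `FE`, `coms_lookup`) are hypothetical, and addresses are modeled as bit strings; the paper's actual hardware design is not shown.

```python
# Illustrative sketch (not the paper's implementation) of the COMS idea:
# a lookup travels through a series of switching elements (SEs), each with
# a small cache; a hit at any SE answers immediately, while a miss falls
# through to the forwarding engine (FE), whose result is then cached in
# every SE between the FE and the requesting line card.
from collections import OrderedDict

class SE:
    """Switching element with a small LRU cache of lookup results."""
    def __init__(self, capacity=4):
        self.cache = OrderedDict()
        self.capacity = capacity

    def get(self, addr):
        if addr in self.cache:
            self.cache.move_to_end(addr)        # refresh LRU position
            return self.cache[addr]
        return None

    def put(self, addr, result):
        self.cache[addr] = result
        self.cache.move_to_end(addr)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)      # evict least recently used

class FE:
    """Forwarding engine holding one partition (subset) of the prefix table."""
    def __init__(self, prefixes):
        self.prefixes = prefixes                # {bit-string prefix: next hop}

    def lookup(self, addr):
        # Longest-prefix match over this FE's subset of the table.
        best = max((p for p in self.prefixes if addr.startswith(p)),
                   key=len, default=None)
        return self.prefixes.get(best)

def coms_lookup(addr, ses, fe):
    """Probe each SE cache on the path; on a miss, consult the FE and
    cache the result in every SE on the way back."""
    for se in ses:
        hit = se.get(addr)
        if hit is not None:
            return hit
    result = fe.lookup(addr)
    for se in ses:
        se.put(addr, result)
    return result
```

In this toy model, a second lookup for the same address is answered by the first SE on the path, which is the source of the mean-lookup-time reduction the abstract claims.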
