Abstract
Memory efficiency with compact data structures for Internet Protocol (IP) lookup has recently regained much interest in the research community. In this paper, we revisit the classic trie-based approach for solving the longest prefix matching (LPM) problem used in IP lookup. In particular, we target our solutions at a class of large and sparsely-distributed routing tables, such as those potentially arising in the next-generation IPv6 routing protocol. Due to longer prefix lengths and a much larger address space, a straightforward implementation of trie-based LPM can significantly increase the number of nodes and/or memory required for IP lookup. Additionally, due to the limited on-chip memory and number of I/O pins of Field Programmable Gate Arrays (FPGAs), state-of-the-art designs cannot support large IPv6 routing tables consisting of over $300$K prefixes. We propose two algorithms to compress the uni-bit-trie representation of a given routing table: (1) \emph{single-prefix distance-bounded path compression} and (2) \emph{multiple-prefix distance-bounded path compression}. These algorithms determine the optimal maximum \emph{skip distance} at each node of the trie to minimize the total memory requirement. Our algorithms demonstrate substantial reductions in memory footprint compared with the uni-bit-trie algorithm ($1.86\times$ for IPv4 and $6.16\times$ for IPv6) and with the original path compression algorithm ($1.77\times$ for IPv4 and $1.53\times$ for IPv6). Furthermore, implementation on a state-of-the-art FPGA device shows that our algorithms achieve $466$ million lookups per second and are well suited for $100$Gbps lookup. This implementation also scales to support larger routing tables and longer prefix lengths when moving from IPv4 to IPv6.
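To illustrate the baseline the abstract builds on, the following is a minimal sketch of longest prefix matching on a path-compressed binary (uni-bit) trie, where each node stores a skip string of compressed bits. All names (`Node`, `insert`, `lookup`) are illustrative; this shows classic path compression only, not the paper's distance-bounded variants.

```python
# Minimal sketch of LPM on a path-compressed binary trie.
# Prefixes and addresses are given as bit strings, e.g. "101".

class Node:
    def __init__(self, skip=""):
        self.skip = skip      # compressed bit path leading into this node
        self.children = {}    # branch bit '0'/'1' -> child Node
        self.value = None     # next hop if a prefix ends at this node

def insert(root, prefix_bits, value):
    node, bits = root, prefix_bits
    while True:
        # Consume this node's skip string, splitting the node on mismatch.
        s = node.skip
        common = 0
        while common < len(s) and common < len(bits) and s[common] == bits[common]:
            common += 1
        if common < len(s):
            # Split: push the remainder of the skip string into a new child.
            child = Node(s[common + 1:])
            child.children, child.value = node.children, node.value
            node.skip = s[:common]
            node.children = {s[common]: child}
            node.value = None
        bits = bits[common:]
        if not bits:
            node.value = value   # prefix ends exactly here
            return
        b, rest = bits[0], bits[1:]
        if b in node.children:   # descend; the branch bit is implicit
            node, bits = node.children[b], rest
        else:                    # new leaf absorbs the remaining bits
            leaf = Node(rest)
            leaf.value = value
            node.children[b] = leaf
            return

def lookup(root, addr_bits):
    """Return the value of the longest matching prefix, or None."""
    best, node, bits = None, root, addr_bits
    while node is not None:
        if not bits.startswith(node.skip):
            break                # compressed path diverges from the address
        bits = bits[len(node.skip):]
        if node.value is not None:
            best = node.value    # remember the longest match seen so far
        if not bits:
            break
        node = node.children.get(bits[0])
        bits = bits[1:]
    return best
```

Without path compression, a uni-bit trie spends one node per bit of every prefix; storing chains of single-child nodes as skip strings is what yields the memory savings that the paper's distance-bounded variants then optimize further by bounding and choosing the skip distance per node.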