Routing lookup, a core function of routers for forwarding and filtering packets, faces serious challenges today in memory efficiency, update performance, and throughput. Rather than seeking optimization techniques for the traditional lookup model, this paper presents a brand-new parallel lookup model, named the Split Routing Lookup Model. Exploiting partial similarities among prefixes, we split all prefixes to expose redundancies, which are then removed during information integration; as a result, the on-chip structure is compressed sharply. Moreover, this splitting makes route updates more targeted and decomposes the lookup process to support parallel processing. Using 14 real-world routing datasets, the proposed model is evaluated with four classic trie-based approaches and compared against their traditional implementations. The results demonstrate the advantages of the proposed model from a comprehensive perspective: on-chip memory savings reach up to 99.2% and 94.8% for IPv4 and IPv6, respectively, while update overhead is reduced by at least 50% and 30%, respectively, even in the worst case. The pipeline depth is also reduced by 25–50%. In addition, two further techniques are selected to evaluate the proposed model on a virtual router platform. The results show that, based on the proposed model, 160 KB of on-chip memory is enough to store 14 virtual routers, each consuming only 11 KB on average, clearly demonstrating the model's scalability to virtual routers.
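To make the prefix-splitting idea concrete, below is a minimal Python sketch under assumptions of ours: it splits each prefix (as a bit string) at a fixed position and groups the low parts under their shared high part, so that a high part repeated across many prefixes is stored only once. The split position, data structures, and names are illustrative assumptions for exposition, not the paper's actual algorithm.

```python
# Illustrative sketch only: the split point, data structures, and names
# are assumptions for exposition, not the paper's actual design.
from collections import defaultdict

SPLIT_BIT = 16  # assumed fixed split position within each prefix


def split_prefix(prefix_bits: str):
    """Split a prefix (given as a bit string) into a high part and a low part."""
    return prefix_bits[:SPLIT_BIT], prefix_bits[SPLIT_BIT:]


def build_split_table(prefixes):
    """Group low parts under their shared high part, so a high part shared by
    many prefixes is stored once (redundancy removal after splitting)."""
    table = defaultdict(set)
    for p in prefixes:
        high, low = split_prefix(p)
        table[high].add(low)
    return table


# Example: three /24 prefixes sharing the same 16-bit high part collapse
# into one high-part entry with three low-part suffixes.
prefixes = [
    "1100000010101000" + "00000001",  # 192.168.1.0/24 as bits
    "1100000010101000" + "00000010",  # 192.168.2.0/24
    "1100000010101000" + "00000011",  # 192.168.3.0/24
]
for high, lows in build_split_table(prefixes).items():
    print(high, "->", sorted(lows))
```

In this toy example the shared high part appears once instead of three times, which hints at how splitting plus integration can shrink the on-chip structure and let the high- and low-part lookups proceed in parallel.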