The adoption of cloud computing and the deployment of new services require ever more computing and networking resources in large data centers. In particular, there is a need for faster, feature-rich switches. Today, switching integrated circuits can handle more than 50 Tb/s and are expected to reach 100 Tb/s next year. This trend is likely to continue, approaching petabit capacities in a few years. At the same time, switches need to support more advanced packet classification features. This poses many challenges to the designers of switching integrated circuits, creating a need for innovations to move forward. One of those challenges is supporting programmable packet classification with large rulesets at wire speed within a limited silicon footprint. Unfortunately, traditional solutions such as ternary content addressable memories (TCAMs) incur high area and power costs and are therefore not a viable option for large on-chip rulesets, so new alternatives are needed. In the last two decades, researchers have extensively studied the packet classification problem, proposing many algorithms based on standard memories, mostly targeting software implementations. These packet classification algorithms could enable large on-chip classification rulesets using standard memories. However, porting standard packet classification algorithms to a hardware implementation is not a trivial task, due to the different requirements and the different kinds of memory and computing resources available on application-specific integrated circuit (ASIC) switching chips. This article describes a packet classification design tailored for high-speed switching integrated circuits, currently used in the NVIDIA® Mellanox® Spectrum® switching ASIC product line, illustrating how to bridge the mismatch between common software-based classification algorithms and the specific requirements of a hardware implementation.