Abstract

Packet classification algorithms are central to processing packets rapidly in packet-processing network systems, and they are particularly important in the processing equipment used in Internet backbones. Ternary content addressable memories (TCAMs) are used to perform parallel search in hardware implementations of these algorithms. Although TCAMs provide high-speed search, one of the main obstacles to their use is their high power consumption. In this study, a new technique is presented to reduce memory consumption in the TCAM blocks used in a hardware classifier. In the classifier architecture, a decision tree is first built and the classifier rules are distributed among its leaf nodes. Since each leaf of the tree corresponds to one TCAM block, the rules are then placed, in the second stage, into different TCAM blocks according to the tree structure. In this architecture, a supplementary TCAM block serves as the general block. Recent architectures have suffered from wasted memory and a considerable increase in power consumption because rules are distributed unevenly across the main TCAM blocks and the number of replicated rules in the general block grows unexpectedly. In this study, a new algorithm is proposed to optimize rule distribution across the TCAM blocks during the first stage of packet classification. The efficiency of the proposed architecture, which uses intelligent cuts, has been compared with recent architectures. The comparison shows that the proposed method distributes rules across TCAM blocks more evenly than competing architectures while also reducing the number of rules assigned to the general TCAM block.
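
To make the two-stage scheme described above concrete, the following Python sketch shows one way a decision tree can partition a small two-field rule set into leaf buckets (one per main TCAM block) while pulling heavily replicated rules into a shared general block. It is only an illustration of the general idea: the halving cut heuristic, the block capacity, and the replication threshold are assumptions made for the example, not the intelligent-cut algorithm proposed in the paper.

from collections import Counter
from dataclasses import dataclass


@dataclass
class Rule:
    # Each rule matches a range on two header fields (e.g. source/destination ranges).
    f1: tuple  # (low, high) on field 1
    f2: tuple  # (low, high) on field 2


def overlaps(a, b):
    # Standard closed-interval overlap test.
    return a[0] <= b[1] and b[0] <= a[1]


def build_leaves(rules, region, capacity, depth=0, max_depth=8):
    # Recursively cut the search space until each leaf holds at most `capacity`
    # rules (or the depth limit is reached). A rule overlapping several
    # sub-regions is replicated into each of them, which is exactly the
    # duplication that the general block is meant to absorb.
    if len(rules) <= capacity or depth == max_depth:
        return [rules]
    # Assumed heuristic: cut the wider dimension of the current region in half.
    widths = [hi - lo for lo, hi in region]
    axis = 0 if widths[0] >= widths[1] else 1
    lo, hi = region[axis]
    mid = (lo + hi) // 2
    leaves = []
    for sub in ((lo, mid), (mid + 1, hi)):
        sub_region = list(region)
        sub_region[axis] = sub
        sub_rules = [r for r in rules if overlaps((r.f1, r.f2)[axis], sub)]
        leaves += build_leaves(sub_rules, tuple(sub_region), capacity,
                               depth + 1, max_depth)
    return leaves


def split_general(leaves, replication_threshold):
    # Rules replicated into at least `replication_threshold` leaves are removed
    # from the main blocks and stored once in the shared general TCAM block.
    counts = Counter()
    for leaf in leaves:
        for r in leaf:
            counts[id(r)] += 1
    general_ids = {rid for rid, c in counts.items() if c >= replication_threshold}
    main_blocks = [[r for r in leaf if id(r) not in general_ids] for leaf in leaves]
    general_block, seen = [], set()
    for leaf in leaves:
        for r in leaf:
            if id(r) in general_ids and id(r) not in seen:
                seen.add(id(r))
                general_block.append(r)
    return main_blocks, general_block


if __name__ == "__main__":
    rules = [Rule((0, 15), (0, 15)), Rule((16, 31), (16, 31)),
             Rule((0, 255), (0, 255)),   # wide rule: replicated into many leaves, so moved to the general block
             Rule((32, 63), (128, 255)), Rule((200, 255), (0, 63))]
    leaves = build_leaves(rules, ((0, 255), (0, 255)), capacity=2)
    blocks, general = split_general(leaves, replication_threshold=3)
    print("rules per main TCAM block:", [len(b) for b in blocks])
    print("rules in general block:", len(general))

In a hardware classifier each leaf bucket would be loaded into its own TCAM block and the general block would be searched in parallel with the selected leaf block; the balance of the per-block rule counts and the size of the general block are the two quantities the proposed distribution algorithm aims to keep small.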
