Abstract
Architectures for tree structures on both FPGAs and ASICs have been proposed over the years. Because memory size grows exponentially with the number of tree levels, the scalability of these architectures is limited. In this paper, we propose a scalable lookup engine on FPGA for large decision-trees; the engine sustains high throughput even as the tree is scaled up in (1) the number of fields and (2) the number of leaf nodes. The proposed engine is a 2-dimensional pipelined architecture that also supports dynamic updates of the decision-tree. Each leaf node of the tree is mapped onto a horizontal pipeline; each field of the tree corresponds to a vertical pipeline. We use dual-port distributed RAM (distRAM) in each individual Processing Element (PE), so the resulting architecture for a generic decision-tree accepts two search requests per clock cycle. Post place-and-route results show that, for a typical decision-tree consisting of 512 leaf nodes, with each node storing 320-bit data, our lookup engine sustains 536 Million Lookups Per Second (MLPS). Compared to the state-of-the-art implementation of a binary decision-tree on FPGA, we achieve a 2× speed-up; the throughput is sustained even when frequent dynamic updates are performed.
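The sketch below is a minimal behavioral (not cycle-accurate) model of the 2-D organization described above, written purely for illustration. It assumes each PE holds one field's [low, high] range for one leaf node and that a leaf matches when a key satisfies all of its field ranges; the names `PE`, `LookupEngine`, and `lookup_pair` are hypothetical and are not taken from the paper's actual design or HDL. The dual-port distRAM is modeled simply as the ability to serve two independent lookups per call.

```python
# Behavioral sketch of the 2-D PE array: rows = horizontal pipelines (one per
# leaf node), columns = vertical pipelines (one per field). Illustrative only.

from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class PE:
    """One Processing Element: the j-th field range stored for the i-th leaf node."""
    low: int
    high: int

    def match(self, key_field: int) -> bool:
        # Functionally, a lookup at this PE is a range check on one field.
        return self.low <= key_field <= self.high


class LookupEngine:
    """2-D array of PEs; assumes a leaf matches when every field range matches."""

    def __init__(self, leaves: List[List[Tuple[int, int]]]):
        # leaves[i][j] = (low, high) range of field j in leaf node i
        self.rows = [[PE(lo, hi) for (lo, hi) in leaf] for leaf in leaves]

    def _lookup_one(self, key: Tuple[int, ...]) -> Optional[int]:
        for leaf_id, row in enumerate(self.rows):
            if all(pe.match(k) for pe, k in zip(row, key)):
                return leaf_id  # index of the matching leaf node
        return None

    def lookup_pair(self, key_a: Tuple[int, ...], key_b: Tuple[int, ...]):
        # Dual-port distRAM in every PE admits two search requests per clock
        # cycle; here that is modeled as two independent lookups per call.
        return self._lookup_one(key_a), self._lookup_one(key_b)


# Example: 2 leaf nodes, 2 fields each
engine = LookupEngine([[(0, 9), (100, 199)], [(10, 19), (200, 299)]])
print(engine.lookup_pair((5, 150), (12, 250)))  # -> (0, 1)
```

In hardware, the same comparisons would be spread across the pipeline stages so that one (or, with dual ports, two) new keys can enter every clock cycle; the Python model collapses that pipelining into a single function call.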