Abstract

Artificial Intelligence (AI) and networking applications make extensive use of Ternary Content Addressable Memories (TCAMs) implemented on Field Programmable Gate Arrays (FPGAs). Since FPGAs do not provide native TCAM blocks, TCAMs must be emulated with SRAM-based memories, which consumes FPGA resources. Compared with state-of-the-art designs, the proposed FPGA-based TCAM implementation saves significant resources. The methodology uses Lookup Table RAMs (LUTRAMs) and slice carry-chains for simultaneous mapping of rules, and flip-flops (FFs) for deeper pipelining. The resulting TCAM achieves lower power consumption, shorter delay, and lower resource utilization. It outperforms conventional FPGA-based TCAMs in energy efficiency (EE) by at least 3.34 times and in performance per area (PA) by at least 8.4 times, and is 56% better than existing FPGA designs. The proposed method outperforms all previous approaches owing to its low dynamic power consumption, which is critical when emulating large TCAMs on SRAM-based FPGAs.
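To illustrate the general idea of SRAM-based TCAM emulation referred to above, the following is a minimal software sketch of the classic scheme (not the paper's specific LUTRAM/carry-chain architecture, whose details are not given here): the search key is split into chunks, each chunk indexes a small SRAM whose words hold one match bit per rule, and a rule matches when its bit survives a bitwise AND across all chunks. All names (`EmulatedTCAM`, `CHUNK_BITS`) are illustrative.

```python
# Sketch of generic SRAM-based TCAM emulation: per-chunk SRAMs store
# rule-match bitvectors; a key matches a rule iff every chunk agrees.
CHUNK_BITS = 4  # width of the sub-key addressing each emulated SRAM/LUTRAM

class EmulatedTCAM:
    def __init__(self, width, rules):
        # rules: pattern strings over {'0', '1', 'x'} ('x' = don't care),
        # listed in priority order (index 0 = highest priority)
        self.width = width
        self.n_chunks = (width + CHUNK_BITS - 1) // CHUNK_BITS
        # one "SRAM" per chunk: 2^CHUNK_BITS words, each a rule bitmask
        self.srams = [[0] * (1 << CHUNK_BITS) for _ in range(self.n_chunks)]
        for rule_idx, pattern in enumerate(rules):
            assert len(pattern) == width
            for c in range(self.n_chunks):
                sub = pattern[c * CHUNK_BITS:(c + 1) * CHUNK_BITS]
                # set this rule's bit in every SRAM word whose address
                # is compatible with the (possibly don't-care) sub-pattern
                for value in range(1 << len(sub)):
                    bits = format(value, f'0{len(sub)}b')
                    if all(p in ('x', b) for p, b in zip(sub, bits)):
                        self.srams[c][value] |= 1 << rule_idx

    def search(self, key):
        # key: binary string of length `width`; returns the index of the
        # highest-priority matching rule, or None on a miss
        match = ~0
        for c in range(self.n_chunks):
            sub = key[c * CHUNK_BITS:(c + 1) * CHUNK_BITS]
            match &= self.srams[c][int(sub, 2)]
        if match == 0:
            return None
        return (match & -match).bit_length() - 1  # lowest set bit wins
```

For example, with rules `["10xx", "1100"]` a lookup of `"1011"` returns rule 0, `"1100"` returns rule 1, and `"0000"` misses. In hardware the per-chunk reads happen in parallel and the AND/priority logic is pipelined, which is where FPGA flip-flops and carry-chains come in.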
