Abstract

Timing violations in high-performance communication channels in systems-on-chip (SoCs) may occur in the late stages of the physical design process. To address this, latency-insensitive systems (LISs) pipeline the communication channels through the insertion of relay stations. Although the functionality of an LIS is robust with respect to communication latencies, imbalances in relay station insertion may degrade the throughput of the system. While a large number of buffer queues can eliminate such performance loss, the system may not have adequate area to accommodate these buffers. The problem of buffer queue sizing for maximizing throughput under buffer area constraints has previously been solved using a mixed-integer linear program (MILP) formulation; however, that approach is not scalable. In this work, we formulate the buffer queue sizing problem as a parameterized graph optimization problem in which every communication channel is represented by a parameterized edge whose weight is the buffer count. We then use a minimum cycle mean algorithm to determine from which edges buffers can be removed safely. Experimental results on large LISs suggest that the proposed approach is scalable. Moreover, the quality of the solutions, in terms of throughput and buffer queue sizes, is observed to be as good as that of the MILP-based approach.
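The abstract does not specify which minimum cycle mean algorithm is used; a standard choice for this subroutine is Karp's algorithm, sketched below for illustration. The graph representation (vertex count plus a list of weighted edges) and the function name are assumptions for this sketch, not the paper's implementation; in the paper's setting the edge weights would be derived from the parameterized buffer counts on each channel.

```python
import math

def min_cycle_mean(n, edges):
    """Karp's algorithm for the minimum cycle mean of a directed graph.

    n: number of vertices, labeled 0..n-1.
    edges: list of (u, v, w) tuples for an edge u -> v of weight w.
    Assumes every vertex is reachable from vertex 0 (e.g. the graph
    is strongly connected). Returns None if the graph is acyclic.
    This is an illustrative sketch, not the paper's implementation.
    """
    INF = math.inf
    # D[k][v] = minimum weight of a walk of exactly k edges from vertex 0 to v
    D = [[INF] * n for _ in range(n + 1)]
    D[0][0] = 0.0
    for k in range(1, n + 1):
        for u, v, w in edges:
            if D[k - 1][u] < INF and D[k - 1][u] + w < D[k][v]:
                D[k][v] = D[k - 1][u] + w
    # Karp's characterization:
    # min cycle mean = min over v of max over k of (D[n][v] - D[k][v]) / (n - k)
    best = INF
    for v in range(n):
        if D[n][v] == INF:
            continue  # no walk of length n ends here; v is on no relevant cycle
        worst = -INF
        for k in range(n):
            if D[k][v] < INF:
                worst = max(worst, (D[n][v] - D[k][v]) / (n - k))
        best = min(best, worst)
    return None if best == INF else best
```

For example, a single 3-cycle with edge weights 1, 2, and 3 has minimum cycle mean (1 + 2 + 3) / 3 = 2, which the function reproduces. In a buffer-sizing loop one could, in principle, re-evaluate this quantity after tentatively decrementing an edge's buffer count to check whether the system's throughput bound is preserved.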
