Abstract
The trend in computing architectures has been toward multi-core central processing units (CPUs) and graphics processing units (GPUs). The affordable and highly parallelizable GPU is a practical example of a Single Instruction, Multiple Data (SIMD) architecture oriented toward stream processing. While GPU architectures and languages are fairly easily employed for inherently time-synchronous simulation models, it is less clear if or how one might employ them for queuing model simulation, which exhibits asynchronous behavior. We have derived a two-step process that enables SIMD-style simulation of queuing networks, initially performing SIMD computation over a cluster and then following this research with a GPU experiment. The two-step process simulates approximate time events synchronously and then reduces the error in the output statistics by compensating for it based on observed error-analysis trends. We present our findings to show that, while the outputs are approximate, one may obtain reasonably accurate summary statistics quickly.
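To make the first step concrete, the following is a minimal illustrative sketch (not the authors' implementation; all names, rates, and the step size are assumptions) of a synchronous, SIMD-style approximation of many independent M/M/1 queues advancing in lockstep on a fixed time grid. Restricting each lane to at most one arrival and one departure per step is the approximation whose error the second step would then compensate for.

```python
# Hypothetical sketch: time-stepped, lockstep simulation of many
# independent M/M/1 queues, one queue per SIMD-style lane.
# All parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_queues = 4096       # parallel lanes (one queue per lane)
dt = 0.01             # synchronous time step; coarser dt -> larger error
lam, mu = 0.9, 1.0    # arrival and service rates
horizon = 1000.0      # simulated time

q_len = np.zeros(n_queues, dtype=np.int64)   # current queue lengths
area = np.zeros(n_queues)                    # time-integrated queue length

t = 0.0
while t < horizon:
    # Approximate asynchronous events on the synchronous grid:
    # at most one arrival/departure per lane per step (error source).
    arrivals = rng.random(n_queues) < lam * dt
    departures = (rng.random(n_queues) < mu * dt) & (q_len > 0)
    q_len += arrivals.astype(np.int64) - departures.astype(np.int64)
    area += q_len * dt
    t += dt

# Raw, biased estimate of mean queue length; the paper's second step
# would correct such statistics using observed error trends vs. dt.
print("mean queue length estimate:", area.mean() / horizon)
```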