Abstract

Links of 10 Gbps and above are now widely deployed in data centers, and novel high-speed packet I/O frameworks have emerged to keep pace with them. These frameworks mainly rely on techniques such as memory preallocation, busy polling, zero copy, and batch processing to replace costly operations (e.g., interrupts, packet copies, and system calls) in the native OS kernel stack. For such frameworks, per-packet cost, saturation throughput, and latency are the performance metrics of greatest concern, and various factors affect these metrics. To gain a comprehensive understanding of high-speed packet I/O, we propose an analytical model that formulates its packet forwarding (receiving--processing--sending) flow. The model accounts for the four main techniques adopted by these frameworks, and the performance metrics of concern are derived from it. The validity and correctness of the model are verified by experiments on a real system. Moreover, through a model analysis we explore how each factor impacts the three metrics and then provide several useful insights and suggestions for performance tuning.
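
To make the four techniques concrete, the sketch below shows the skeleton of a busy-polling, batch-oriented receive--process--send loop in the style of DPDK, one representative framework of this kind. It is an illustrative sketch only, not the model or code from the paper: EAL initialization, mempool creation, and port/queue setup are omitted, and BURST_SIZE and process_packet() are hypothetical placeholders.

```c
/* Illustrative sketch: a DPDK-style receive--process--send forwarding loop.
 * Initialization (EAL, mempool, port/queue configuration) is omitted;
 * BURST_SIZE and process_packet() are hypothetical placeholders. */
#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32   /* batch size: amortizes per-call overhead over many packets */

static void process_packet(struct rte_mbuf *m)
{
    /* application-specific processing on the preallocated buffer (zero copy:
     * the packet is handled in place, never copied into user memory) */
    (void)m;
}

static void forward_loop(uint16_t rx_port, uint16_t tx_port)
{
    struct rte_mbuf *bufs[BURST_SIZE];

    for (;;) {  /* busy polling: the core spins instead of waiting for interrupts */
        /* batched receive into mbufs drawn from a preallocated memory pool */
        uint16_t nb_rx = rte_eth_rx_burst(rx_port, 0, bufs, BURST_SIZE);
        if (nb_rx == 0)
            continue;

        for (uint16_t i = 0; i < nb_rx; i++)
            process_packet(bufs[i]);

        /* batched send; free any packets the TX queue could not accept */
        uint16_t nb_tx = rte_eth_tx_burst(tx_port, 0, bufs, nb_rx);
        for (uint16_t i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);
    }
}
```

The loop structure also hints at the quantities the model analyzes: the per-packet cost depends on how batching amortizes fixed per-call costs, while busy polling trades CPU cycles for lower latency.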
