Abstract
Many Internet, mobile Internet, and IoT services in datacenters require both low tail latency and high concurrency. Current protocol stack designs emphasize throughput and average performance, paying little attention to tail latency and priority. We address this problem by proposing a hardware-software co-designed Labeled Network Stack (LNS) for future datacenters. The key innovation is a payload labeling mechanism that distinguishes data packets within a TCP connection across the full network stack, including the application, TCP/IP, and Ethernet layers. This design enables prioritized processing and forwarding of data packets along the full data path, reducing the tail latency of critical requests. We built a prototype datacenter server to evaluate the LNS design against a standard Linux kernel stack and the mTCP research stack, using the IoT kernel benchmark MCC. Experimental results show that the LNS design provides an order-of-magnitude improvement in tail latency and concurrency.
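The payload labeling idea described above can be illustrated with a minimal sketch. The 1-byte label prepended to the payload, the label values, and the dispatcher class are all hypothetical simplifications; the actual LNS carries the label through the NIC, TCP/IP, and application layers in hardware and software:

```python
import heapq
from itertools import count

def label_payload(label: int, payload: bytes) -> bytes:
    """Prepend a hypothetical 1-byte priority label (0 = most critical)."""
    assert 0 <= label <= 255
    return bytes([label]) + payload

def parse_labeled(packet: bytes):
    """Split a labeled packet back into (label, payload)."""
    return packet[0], packet[1:]

class PriorityDispatcher:
    """Serves labeled packets lowest-label-first, FIFO within a label."""
    def __init__(self):
        self._heap = []
        self._seq = count()  # tie-breaker keeps arrival order per label

    def enqueue(self, packet: bytes):
        label, payload = parse_labeled(packet)
        heapq.heappush(self._heap, (label, next(self._seq), payload))

    def dequeue(self) -> bytes:
        return heapq.heappop(self._heap)[2]

dispatcher = PriorityDispatcher()
dispatcher.enqueue(label_payload(2, b"bulk"))
dispatcher.enqueue(label_payload(0, b"critical"))
dispatcher.enqueue(label_payload(1, b"normal"))
first = dispatcher.dequeue()  # the label-0 packet jumps the queue
```

Because the label travels inside the payload rather than in a flow-level header, every stage of the data path can make the same per-packet scheduling decision.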
Highlights
For the new generation of cloud computing server applications such as mobile Internet and IoT, which are characterized by high concurrency and low-latency constraints, the behavior, motivation, and access time of concurrent clients are all uncertain [1]; the resulting unregulated resource competition among massive concurrent requests leads to fluctuations in service latency
When an encrypted packet arrives at the receiver, it must be decrypted before protocol analysis and then placed into the correct priority queue according to its label
Based on the Labeled Network Stack (LNS) idea, we developed a prototype that achieves label identification and scheduling across processing stages, including a customized network interface card (NIC) named Sando, an mTCP-based user-mode protocol stack, and an epoll-based event-driven server framework with priority enhancement
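The receive path highlighted above (decrypt first, then enqueue by label) can be sketched as follows. The XOR "cipher", the label position, and the `LabeledReceiver` class are illustrative stand-ins, not the prototype's actual implementation:

```python
from collections import deque

KEY = 0x5A  # toy symmetric key; a real stack would use TLS or similar

def toy_crypt(data: bytes) -> bytes:
    """XOR stand-in for encryption/decryption (symmetric)."""
    return bytes(b ^ KEY for b in data)

class LabeledReceiver:
    """One FIFO queue per priority level; label 0 is the most critical."""
    def __init__(self, num_priorities: int = 4):
        self.queues = [deque() for _ in range(num_priorities)]

    def on_packet(self, encrypted: bytes):
        packet = toy_crypt(encrypted)        # decrypt before protocol analysis
        label, payload = packet[0], packet[1:]
        self.queues[label].append(payload)   # label selects the priority queue

    def next_payload(self) -> bytes:
        for q in self.queues:                # scan from highest priority down
            if q:
                return q.popleft()
        raise IndexError("all queues empty")

rx = LabeledReceiver()
rx.on_packet(toy_crypt(bytes([3]) + b"background"))
rx.on_packet(toy_crypt(bytes([0]) + b"urgent"))
served = rx.next_payload()  # the label-0 payload is served first
```

The key ordering constraint is that decryption must happen before the label can be read, so the queue assignment necessarily sits after the crypto stage in the pipeline.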
Summary
For the new generation of cloud computing server applications such as mobile Internet and IoT, which are characterized by high concurrency and low-latency constraints, the behavior, motivation, and access time of concurrent clients are all uncertain [1]; the resulting unregulated resource competition among massive concurrent requests leads to fluctuations in service latency. The LNS supports distinguishing, isolating, and prioritizing traffic at packet granularity across the full data path through payload labeling, unlike traditional flow-level control methods that rely only on predefined protocol headers. Test results showed that our LNS prototype achieves an order-of-magnitude improvement in tail latency and concurrency over mainstream systems. Beyond IoT and mobile microservices, our server broadly fits applications characterized by long-lived connections, high concurrency, and strict user-experience requirements