Abstract

Datacenters currently deploy shallow-buffered switches to achieve low latency by avoiding long queueing delays in the data plane. However, the limited buffer space in the switch causes frequent overflows and the notorious TCP incast problem. Moreover, the simple scheduling strategy in the buffer deprives the switch of the ability to offer deeply differentiated services. Therefore, we present a novel priority-based flow-aware in-network caching scheme, named Poche, which gives the network side more control capabilities by introducing additional cache resources into switches. Poche classifies network traffic into multiple priorities according to the latency requirements of flows. The end server adds priority tags to packets and sets different RTO values for flows with distinct priorities. The switch monitors the buffer utilization of each port and performs priority-based, flow-aware caching and injection strategies based on an analysis of the scheduling model between the buffer and the cache. We conduct comprehensive experiments to compare Poche with state-of-the-art traffic optimization schemes. The results demonstrate that Poche reduces the flow completion times (FCTs) of latency-sensitive flows by at least 59.1% and improves network throughput by at least 54.4%, while bounding the cached volume and effectively addressing the incast problem.
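The buffer-to-cache scheduling described above can be illustrated with a minimal sketch. This is an interpretive toy model, not the paper's actual design: the class name, watermark thresholds, and eviction rule (offload the lowest-priority buffered packet to the cache when utilization is high, inject cached packets back once the buffer drains) are all assumptions made for illustration.

```python
from collections import deque

class SwitchPort:
    """Toy model of a Poche-style port: a shallow buffer backed by a cache.

    Hypothetical sketch; thresholds and policies are illustrative assumptions.
    A packet is a (priority, payload) tuple; a smaller number means higher priority.
    """

    def __init__(self, buffer_capacity, cache_capacity,
                 high_watermark=0.9, low_watermark=0.5):
        self.buffer = deque()               # shallow on-chip buffer
        self.cache = deque()                # additional cache resource
        self.buffer_capacity = buffer_capacity
        self.cache_capacity = cache_capacity
        self.high = high_watermark          # start caching above this utilization
        self.low = low_watermark            # inject back below this utilization

    def utilization(self):
        return len(self.buffer) / self.buffer_capacity

    def enqueue(self, packet):
        if len(self.buffer) < self.buffer_capacity:
            self.buffer.append(packet)
        elif len(self.cache) < self.cache_capacity:
            # Buffer full: divert to the cache instead of dropping,
            # which is how overflow (and hence incast loss) is avoided.
            self.cache.append(packet)
        else:
            return False                    # both full -> drop
        # At high utilization, move the lowest-priority buffered packets
        # into the cache so latency-sensitive packets stay in the buffer.
        while (self.utilization() > self.high
               and len(self.cache) < self.cache_capacity):
            victim = max(self.buffer, key=lambda p: p[0])
            self.buffer.remove(victim)
            self.cache.append(victim)
        return True

    def dequeue(self):
        pkt = self.buffer.popleft() if self.buffer else None
        # Inject cached packets back once the buffer has drained enough.
        while self.cache and self.utilization() < self.low:
            self.buffer.append(self.cache.popleft())
        return pkt
```

Under this sketch, a burst that exceeds the buffer is absorbed by the cache rather than dropped, and low-priority packets are the ones deferred, which is consistent with the differentiated-service goal stated above.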
