Abstract

Transmission schemes in data centers are expected to accurately distinguish flow types so that different flows can be scheduled differently. However, prior efforts have failed to meet these needs at all levels in a cost-effective way, nor have existing schemes proved applicable to diverse scenarios or dynamic traffic patterns. Therefore, we propose Differentiated Traffic schEduling in dAta cenTers (DiffTREAT), based on the Recurrent Neural Network (RNN), aiming to simplify transmission in dynamic and diverse network scenarios. First, DiffTREAT utilizes deep learning methods for traffic classification and flow size prediction. Second, according to the classification results, DiffTREAT adopts multilevel priority queues to ensure the preferential transmission of latency-sensitive flows while optimizing the overall average flow completion time (FCT). Third, DiffTREAT employs the network cache to increase the capacity of data center networks (DCN), which effectively absorbs traffic bursts and improves the throughput of latency-insensitive flows. DiffTREAT has been tested on different topologies under diverse network loads and real-world workloads. Experimental results show that, compared with state-of-the-art schemes, DiffTREAT yields both a lower average flow completion time for latency-sensitive flows and higher throughput for latency-insensitive flows.
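To make the abstract's first two steps concrete, the following is a minimal sketch, not the paper's implementation, of how an RNN classifier could map the first few packets of a flow to one of several priority queues. It assumes PyTorch, four priority levels, packet size as the only per-packet feature, and the names `FlowClassifier` and `assign_priority`, all of which are illustrative choices rather than details taken from DiffTREAT.

```python
# Hypothetical sketch: an LSTM over the first packets of a flow predicts a
# priority class; the flow is then placed into the corresponding queue.
import torch
import torch.nn as nn

NUM_PRIORITIES = 4          # assumed number of priority queues
FEATURES_PER_PACKET = 1     # here: normalized packet size only

class FlowClassifier(nn.Module):
    """LSTM over a flow's first packets -> priority-class logits."""
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(FEATURES_PER_PACKET, hidden, batch_first=True)
        self.head = nn.Linear(hidden, NUM_PRIORITIES)

    def forward(self, pkt_sizes):             # pkt_sizes: (batch, seq_len, 1)
        _, (h, _) = self.rnn(pkt_sizes)
        return self.head(h[-1])                # (batch, NUM_PRIORITIES)

def assign_priority(model, pkt_sizes):
    """Return the predicted priority (0 = most latency-sensitive)."""
    with torch.no_grad():
        return int(model(pkt_sizes).argmax(dim=-1).item())

# Usage: classify a flow from its first five (normalized) packet sizes and
# enqueue it into the matching priority queue.
model = FlowClassifier()                       # untrained, for illustration
first_packets = torch.tensor([[[0.12], [0.34], [0.10], [0.08], [0.05]]])
queues = [[] for _ in range(NUM_PRIORITIES)]
queues[assign_priority(model, first_packets)].append("flow-42")
```

In practice the classifier would be trained on labeled traces, and the queue assignment would feed a strict-priority or weighted scheduler; those details are outside the scope of this sketch.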
