Abstract

Equal load balancing when dispatching incoming packets to multiple threads is a crucial requirement for the stateful forwarding of multi-threaded software routers to achieve high-speed forwarding and low packet loss simultaneously. However, equal load balancing is not trivial for Named Data Networking (NDN) routers because of their stateful forwarding: the consistency of flow states must be maintained so that multiple threads do not access the same state simultaneously. Sharding, wherein packets of the same flow are dispatched to the same thread while keeping thread loads equal, has been proposed; however, in this study, we reveal that heavy hitters, such as packets for popular content, cause load imbalance, which may eventually cause packet losses. We therefore propose a load balancing mechanism for NDN routers that exploits the fact that flow states need not be rigorously maintained when content packets are returned from caches at intermediate routers.
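The sharding idea described above can be sketched as a hash of the flow identifier (for NDN, the content name) onto a thread index, so that all packets of the same flow land on the same thread. This is a minimal illustration; the function name and choice of hash are our own, not the paper's implementation:

```python
import hashlib

def shard(content_name: str, num_threads: int) -> int:
    """Map a content name to a worker-thread index by hashing.

    All packets carrying the same name map to the same thread,
    so per-flow state is only ever touched by one thread.
    """
    digest = hashlib.sha256(content_name.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_threads
```

Because the mapping is deterministic, no per-flow dispatch table is needed; the trade-off, as the abstract notes, is that a heavy-hitter name pins all of its traffic to a single thread.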

Highlights

  • Named Data Networking (NDN) [1] is a novel network architecture that provides useful in-network functionalities, such as caching [2] and stateful forwarding [3]

  • The experiments were performed on an NDN software router that we developed in our previous study [9]

  • The analysis here follows the model of the multi-threaded NDN router that we developed in our previous study [16]


Summary

INTRODUCTION

Named Data Networking (NDN) [1] is a novel network architecture that provides useful in-network functionalities, such as caching [2] and stateful forwarding [3]. Because one thread is assumed to run on each CPU core, we hereinafter use the terms “CPU core” and “thread” interchangeably, and refer to an NDN software router simply as an “NDN router.” Despite their success, most studies do not consider how a Network Interface Card (NIC) dispatches incoming packets to threads. Dispatching at the packet level does not cause load imbalance even if the skewness of the content object popularity distribution is significant. The frontend cache is designed based on the observation that the root cause of such load imbalance is a small number of highly popular content objects (i.e., heavy hitters) [10], [13]. We propose a popularity-based packet dispatching scheme that spreads heavy hitters across threads while dispatching other packets according to sharding.
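The popularity-based dispatching scheme above can be sketched as follows: packets for a small, known set of heavy-hitter names are spread round-robin across all threads (safe because those objects can be served from caches without rigorous flow-state consistency), while every other packet is sharded by name hash. The class name, constructor parameters, and the assumption that the heavy-hitter set is known in advance are all illustrative, not the paper's implementation:

```python
import hashlib
from itertools import count

class PopularityDispatcher:
    """Sketch of popularity-based packet dispatching.

    Heavy hitters are round-robined over all threads; other
    packets are sharded so their flow state stays on one thread.
    """

    def __init__(self, num_threads: int, heavy_hitters: set[str]):
        self.num_threads = num_threads
        self.heavy_hitters = set(heavy_hitters)
        self._rr = count()  # round-robin counter for heavy hitters

    def dispatch(self, content_name: str) -> int:
        """Return the index of the thread that should handle this packet."""
        if content_name in self.heavy_hitters:
            # Heavy hitters are likely cache hits, so per-flow state
            # consistency is not required and any thread may serve them.
            return next(self._rr) % self.num_threads
        # Everything else follows sharding: hash the name to a thread.
        digest = hashlib.sha256(content_name.encode()).digest()
        return int.from_bytes(digest[:8], "big") % self.num_threads
```

A real router would additionally need an online heavy-hitter detector; here the set is supplied up front to keep the sketch focused on the dispatch decision.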

NDN PACKET PROCESSING AND MUTUAL EXCLUSION
B6: CS Replacement
MUTUAL EXCLUSION ACCORDING TO COMPARE-AND-SWAP INSTRUCTION
ANALYSIS OF THREADS’ LOADS
PIT ENTRY HANDLING
CS ENTRY HANDLING
DISCUSSIONS
OVERVIEW
SHARDING SCHEME
MUTUAL EXCLUSION SCHEME
IDEAL SCHEME
POPULARITY-BASED SCHEME
ANALYSIS RESULTS
SCENARIO SETTINGS
PACKET FORWARDING RATES AND PACKET LOSS RATIOS
RELATED WORK
CONCLUSION

