Abstract

In the 5G-envisioned Internet of Vehicles (IoV), a significant volume of data is exchanged over networks between intelligent transport systems (ITS) and clouds or fogs. Software-Defined Networking (SDN) addresses this demand through high-speed, flow-based processing of data in network systems. Classifying packet flows in an SDN network requires high-throughput packet classification systems. Although software packet classifiers are cheaper and more flexible than hardware classifiers, they deliver only limited performance. A key idea for overcoming this limitation is to parallelize packet classification on graphics processing units (GPUs). In this paper, we study parallel forms of the Tuple Space Search and Pruned Tuple Space Search algorithms for flow classification on GPUs using CUDA (Compute Unified Device Architecture). The key idea behind the proposed methodology is to transfer the stream of packets from host memory to the global memory of the CUDA device and then assign each packet to a classifier thread. To evaluate the proposed method, GPU-based versions of the algorithms were implemented on two different CUDA devices, and two different CPU-based implementations of the algorithms were used as references. Experimental results showed that GPU computing improves the performance of Pruned Tuple Space Search considerably more than that of Tuple Space Search. Moreover, the results demonstrate the computational efficiency of the proposed method for parallelizing packet classification algorithms.
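To make the underlying algorithm concrete, the following is a minimal CPU-side sketch of Tuple Space Search, not the paper's implementation. It assumes simplified two-field rules (source/destination address prefixes); rules sharing the same prefix-length pair form one "tuple", stored as a hash table keyed by the masked field values, and a packet is classified by probing each tuple's table once. The struct and function names are illustrative.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <unordered_map>
#include <utility>

// One classification rule over (src, dst) address prefixes.
struct Rule { uint32_t src, dst; int src_len, dst_len; int priority; };

// Keep only the first `len` bits of a 32-bit value.
static uint32_t mask_bits(uint32_t v, int len) {
    return len == 0 ? 0 : v & (~0u << (32 - len));
}

struct TupleSpace {
    // (src_len, dst_len) -> hash table: masked-key -> best rule priority.
    std::map<std::pair<int,int>, std::unordered_map<uint64_t,int>> tuples;

    void insert(const Rule& r) {
        uint64_t key = (uint64_t)mask_bits(r.src, r.src_len) << 32
                     | mask_bits(r.dst, r.dst_len);
        auto& tbl = tuples[{r.src_len, r.dst_len}];
        auto it = tbl.find(key);
        if (it == tbl.end() || r.priority > it->second) tbl[key] = r.priority;
    }

    // Classify a packet header: one hash probe per tuple; return the
    // highest-priority match, or -1 if no rule matches.
    int classify(uint32_t src, uint32_t dst) const {
        int best = -1;
        for (const auto& [lens, tbl] : tuples) {
            uint64_t key = (uint64_t)mask_bits(src, lens.first) << 32
                         | mask_bits(dst, lens.second);
            auto it = tbl.find(key);
            if (it != tbl.end() && it->second > best) best = it->second;
        }
        return best;
    }
};
```

Pruned Tuple Space Search follows the same scheme but first narrows the set of tuples to probe using per-field lookups, which is why each probe saved matters; in the GPU setting described above, each thread would run `classify` independently on one packet from global memory.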
