Abstract

Solving large, dense linear systems over the finite field GF(2) forms the basis of several cryptanalytic techniques. Many cryptanalytic tasks, such as the Number Field Sieve for integer factorization, the discrete logarithm problem, cryptanalysis of ciphers, and algebraic attacks, require solving dense systems of linear equations. Gaussian Elimination is the natural and popular approach for solving such systems. However, its cubic time complexity makes it slow for large systems, so parallelization becomes essential. In this paper, we propose a GPU-accelerated linear solver over GF(2) based on Gaussian Elimination. Our parallel solver is implemented using NVIDIA CUDA and MPI to exploit the multi-level parallelism of multi-node, multi-GPU systems, which are becoming increasingly common. CUDA-aware MPI is used to leverage GPUDirect P2P and GPUDirect RDMA for optimized intra- and inter-node communication. Our experimental results show that the proposed solver effectively utilizes the memory bandwidth of a single Tesla P100 GPU and achieves a parallel efficiency of 95% on a node with 4 Tesla P100 GPUs. We observe 89% and 94% parallel efficiency on DGX systems with 8 Tesla P100 and 8 Tesla V100 GPUs, respectively.
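To illustrate the core technique the abstract refers to, the following is a minimal single-threaded sketch of Gaussian Elimination over GF(2). It is not the paper's CUDA/MPI solver; it only shows the standard bit-packing idea that makes GF(2) elimination efficient: a whole row lives in one machine word (here, a Python integer), so each row operation is a single XOR. The function name and the row layout are illustrative choices, not taken from the paper.

```python
def gf2_solve(A, b):
    """Solve A x = b over GF(2) by Gauss-Jordan elimination.

    A: list of n-bit integers; bit j of A[i] is the coefficient of x_j
       in equation i.  b: list of 0/1 right-hand sides.
    Returns a list of 0/1 values, or None if no unique solution exists.
    Bit packing turns every row operation into one XOR -- the same
    word-level trick a GPU solver applies per thread across the row.
    """
    n = len(A)
    # Append the RHS as bit 0 of each packed row (augmented matrix).
    rows = [(A[i] << 1) | b[i] for i in range(n)]
    for j in range(n):
        piv = j + 1  # bit position of x_j after the shift above
        # Find a row at or below j with a 1 in the pivot column.
        p = next((i for i in range(j, n) if (rows[i] >> piv) & 1), None)
        if p is None:
            return None  # singular system
        rows[j], rows[p] = rows[p], rows[j]
        # Clear the pivot bit from every other row with one XOR each.
        for i in range(n):
            if i != j and (rows[i] >> piv) & 1:
                rows[i] ^= rows[j]
    # After full reduction, bit 0 of row j holds the value of x_j.
    return [rows[j] & 1 for j in range(n)]
```

The cubic cost the abstract mentions is visible here as the triply nested work (n pivots, n rows, n-bit XORs); the paper's contribution is distributing exactly this work across GPUs and nodes.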
