Abstract

The security of the RSA cryptosystem rests on the difficulty of integer factorization. The General Number Field Sieve (GNFS) is one of the state-of-the-art algorithms for factoring integers of more than 110 decimal digits. Within the GNFS, the Montgomery Block Lanczos algorithm is often used to solve the large, sparse linear system over GF(2). Since Graphics Processing Units (GPUs) offer significantly higher floating-point throughput and memory bandwidth than conventional Central Processing Units (CPUs), performing the sparse matrix-vector multiplications on these co-processors can reduce the overall running time. In this paper, we first improve the initialization of the algorithm to avoid sudden breakdown at the very first stage. Because purely random initialization fails with high probability, we design a pseudo-random initialization that allows the algorithm to generate more solutions than the traditional Block Lanczos algorithm does. Based on an extensive study of existing sparse matrix storage formats, we then parallelize the improved Block Lanczos algorithm on GPUs using a newly designed hybrid sparse matrix format. Finally, we analyze the cost of our algorithm theoretically. The results, together with related experiments, show that a speedup can be achieved on GPUs.
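
The core kernel of Block Lanczos over GF(2) is a sparse matrix-vector multiplication in which arithmetic reduces to XOR, and a block of 64 vector columns can be packed into a single 64-bit word. The sketch below illustrates this idea with a plain CSR layout and one CUDA thread per row; it is not the paper's hybrid storage format, and all names and the toy matrix are illustrative assumptions.

// Illustrative GF(2) block SpMV on the GPU (CSR layout, not the paper's hybrid format).
// Each 64-bit word of x and y holds one row of a 64-column block vector, so a single
// XOR accumulates all 64 columns at once.
#include <cstdint>
#include <cstdio>
#include <cuda_runtime.h>

// One thread per matrix row: y[row] = XOR of x[col] over the nonzeros of that row.
__global__ void spmv_gf2_csr(int n_rows,
                             const int *row_ptr,    // CSR row offsets, length n_rows + 1
                             const int *col_idx,    // CSR column indices
                             const uint64_t *x,     // packed input block vector
                             uint64_t *y)           // packed output block vector
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= n_rows) return;

    uint64_t acc = 0;                                // addition over GF(2) is XOR
    for (int k = row_ptr[row]; k < row_ptr[row + 1]; ++k)
        acc ^= x[col_idx[k]];
    y[row] = acc;
}

int main() {
    // Toy 3x3 matrix over GF(2):  [1 1 0; 0 1 1; 1 0 1]
    const int n = 3;
    int h_row_ptr[] = {0, 2, 4, 6};
    int h_col_idx[] = {0, 1, 1, 2, 0, 2};
    uint64_t h_x[] = {0x1, 0x3, 0x2};                // three packed 64-column rows
    uint64_t h_y[n];

    int *d_row_ptr, *d_col_idx; uint64_t *d_x, *d_y;
    cudaMalloc(&d_row_ptr, sizeof(h_row_ptr));
    cudaMalloc(&d_col_idx, sizeof(h_col_idx));
    cudaMalloc(&d_x, sizeof(h_x));
    cudaMalloc(&d_y, sizeof(h_y));
    cudaMemcpy(d_row_ptr, h_row_ptr, sizeof(h_row_ptr), cudaMemcpyHostToDevice);
    cudaMemcpy(d_col_idx, h_col_idx, sizeof(h_col_idx), cudaMemcpyHostToDevice);
    cudaMemcpy(d_x, h_x, sizeof(h_x), cudaMemcpyHostToDevice);

    spmv_gf2_csr<<<(n + 255) / 256, 256>>>(n, d_row_ptr, d_col_idx, d_x, d_y);
    cudaMemcpy(h_y, d_y, sizeof(h_y), cudaMemcpyDeviceToHost);

    for (int i = 0; i < n; ++i)
        printf("y[%d] = 0x%llx\n", i, (unsigned long long)h_y[i]);

    cudaFree(d_row_ptr); cudaFree(d_col_idx); cudaFree(d_x); cudaFree(d_y);
    return 0;
}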
