Abstract

This paper considers theoretical and practical issues of implementing parallel data search algorithms on the CUDA platform for use in intelligent automated systems and Big Data technologies. The topic is relevant because the volume of processed information grows steadily and the range of automation tasks solved with artificial intelligence methods and Big Data technologies keeps expanding, which demands ever higher data-processing speed. The aim of the paper is to modify known internal (in-memory) data search algorithms for execution on a graphics processor and then to analyze the efficiency of the parallel versions of the developed software modules. The results show that a graphics processor can significantly accelerate applications that rely on data search algorithms, and that the achieved speedup depends on the size of the thread block used by the GPU in block-parallel mode. In addition, these parallel algorithms can optimize various procedures used in artificial intelligence and Big Data technologies.
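The abstract's central idea, partitioning the searched array into fixed-size blocks that are scanned concurrently, with the block size governing the speedup, can be sketched as follows. This is a hypothetical illustration, not the paper's code: CPU threads stand in for GPU thread blocks, and the names `block_parallel_search` and `block_size` are assumptions introduced here to mirror the CUDA block-parallel mode the abstract describes.

```python
# Hypothetical sketch of block-parallel search (CPU threads stand in
# for CUDA thread blocks; not the authors' implementation).
from concurrent.futures import ThreadPoolExecutor


def search_block(data, target, start, stop):
    """Linear search within one block; returns a global index or -1."""
    for i in range(start, min(stop, len(data))):
        if data[i] == target:
            return i
    return -1


def block_parallel_search(data, target, block_size=256):
    """Partition `data` into fixed-size blocks and search them concurrently.

    `block_size` plays the role of the CUDA thread-block size, whose
    choice, per the abstract, determines the achieved acceleration.
    """
    starts = range(0, len(data), block_size)
    with ThreadPoolExecutor() as pool:
        results = pool.map(
            lambda s: search_block(data, target, s, s + block_size), starts
        )
    hits = [r for r in results if r != -1]
    return min(hits) if hits else -1  # first (lowest-index) occurrence
```

On a real GPU each block would be scanned by many threads in lockstep rather than by a single loop, but the decomposition into independent blocks, and the block-size tuning knob, is the same.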

