Abstract

The use of parallel computing tools can significantly reduce the execution time of calculations in many engineering tasks. One of the main difficulties in developing multithreaded programs remains organizing simultaneous access to shared data from different threads. The most common solution to this problem is to use locking facilities when accessing shared data. There are, however, tasks in which no data needs to be shared, but access to a limited resource, such as a temporary buffer, must still be synchronized. In such tasks there is no data exchange between threads, but there is an object that at any given time can be used by the code of only one thread. One such task is calculating the value of a B-spline. A software implementation of B-spline evaluation that follows the classical algorithms requires locking objects when different threads access the common array of intermediate data. This reduces the degree of parallelism and lowers the efficiency of computational programs that use B-splines on multiprocessor computing systems. The article discusses a way to improve the efficiency of B-spline evaluation in parallel programming tasks by eliminating locks on shared mutable data. A software implementation is presented as a C++ class template that places the temporary array used for B-spline evaluation into a local buffer of a given size, with the possibility of enlarging it when necessary. Using the developed template together with the thread_local qualifier reduces the number of buffer-growth requests for B-splines of high degree (larger than the initially specified buffer size). The same scheme can also be implemented with the std::vector template of the C++ Standard Library (STL). Results of applying the developed class to B-spline evaluation in a multithreaded environment are presented, showing a reduction in calculation time proportional to the increase in the number of processors. The methods of specifying arrays for storing intermediate calculation results considered in this article can be used in other parallel programming tasks.
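
The sketch below illustrates one possible reading of the scheme described in the abstract: a small class template that keeps the intermediate array in a fixed-size local buffer and falls back to heap storage only for high-degree splines, declared thread_local so that each thread owns its own scratch space and no locking is required. The names ScratchBuffer, MAX_DEGREE, and evalBSpline are illustrative assumptions, not the authors' actual interface; the evaluation routine is a standard de Boor recurrence, shown only to demonstrate how the buffer would be used.

#include <cstddef>
#include <vector>

// Hypothetical scratch-buffer template: up to N elements live in a fixed
// in-place array; larger requests fall back to a heap-allocated vector.
template <typename T, std::size_t N>
class ScratchBuffer {
public:
    // Return a pointer to storage of at least `count` elements.
    T* acquire(std::size_t count) {
        if (count <= N)
            return local_;                 // no allocation, no locking
        if (heap_.size() < count)
            heap_.resize(count);           // grow only when the degree is high
        return heap_.data();
    }
private:
    T local_[N];                           // fixed-size in-place storage
    std::vector<T> heap_;                  // fallback for larger requests
};

// One buffer per thread: concurrent B-spline evaluations never contend.
// MAX_DEGREE is an assumed bound on the spline degree covered in place.
constexpr std::size_t MAX_DEGREE = 8;
thread_local ScratchBuffer<double, MAX_DEGREE + 1> bsplineScratch;

// De Boor evaluation of a B-spline of the given degree at x, where `span`
// is the knot interval containing x; intermediate coefficients are kept
// in the thread-local scratch array instead of a shared, locked buffer.
double evalBSpline(const std::vector<double>& knots,
                   const std::vector<double>& coeffs,
                   int degree, int span, double x) {
    double* d = bsplineScratch.acquire(static_cast<std::size_t>(degree) + 1);
    for (int j = 0; j <= degree; ++j)
        d[j] = coeffs[span - degree + j];
    for (int r = 1; r <= degree; ++r) {
        for (int j = degree; j >= r; --j) {
            double denom = knots[span + 1 + j - r] - knots[span - degree + j];
            double alpha = (denom != 0.0) ? (x - knots[span - degree + j]) / denom : 0.0;
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j];
        }
    }
    return d[degree];
}

A simpler variant of the same idea, mentioned in the abstract, is to declare a thread_local std::vector<double> directly and resize it on each call; this avoids the custom template at the cost of possible heap allocations even for low-degree splines.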
