Abstract

Dynamically detecting data races in multithreaded programs incurs significant slowdown and memory overhead. Many techniques have been proposed to reduce the slowdown along different dimensions, such as sampling, detection precision, and the data structures used to track happened-before relations among events in execution traces. Compiling a program's source code with different compiler optimization options, such as selecting reduced object code size as the optimization objective, may produce different versions of the object code. Does optimizing the object code with a standard optimization option improve the performance of precise online race detection? To study this question and a family of related questions, this paper reports a pilot study on four benchmarks from the PARSEC 3.0 suite compiled with six GCC compiler optimization options. The empirical data show that, in terms of performance slowdown, the standard optimization options behave comparably to the options for speed and for code size, but quite differently from the baseline option. Moreover, in terms of memory cost, the standard optimization options incur costs similar to those of the baseline option and the option for speed, and consume less memory than the option for code size.
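To make the setting concrete, the sketch below shows the kind of unsynchronized shared-counter access that a precise online happens-before race detector reports, together with compile lines for several GCC optimization levels. The abstract does not name the six options or the detector used in the study; the -O0/-O2/-Os/-Ofast flags and ThreadSanitizer (-fsanitize=thread) appear here only as illustrative assumptions, not as the paper's actual configuration.

    /* Minimal sketch of a data race: two threads update a shared
     * counter without synchronization, so a precise happens-before
     * detector (here ThreadSanitizer, for illustration only) flags
     * the conflicting accesses.
     *
     * Hypothetical compile lines, one per optimization option
     * (assumed flags; the paper's six GCC options are not named
     * in the abstract):
     *   gcc -O0    -g -fsanitize=thread race.c -o race_O0    -lpthread
     *   gcc -O2    -g -fsanitize=thread race.c -o race_O2    -lpthread
     *   gcc -Os    -g -fsanitize=thread race.c -o race_Os    -lpthread
     *   gcc -Ofast -g -fsanitize=thread race.c -o race_Ofast -lpthread
     */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;              /* shared, unsynchronized */

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++)
            counter++;                    /* racy read-modify-write */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);  /* value may vary across runs */
        return 0;
    }

Comparing the detector's slowdown and memory use across such differently optimized builds of the same source is the kind of measurement the pilot study performs on the PARSEC 3.0 benchmarks.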
