Purpose
The current work presents a parallel code, written with the open multi-processing (OpenMP) programming model, for an adaptive multi-resolution high-order finite difference scheme for solving 2D conservation laws, and compares the efficiencies obtained with those of a previous message passing interface (MPI) formulation of the same serial scheme applied to the same type of 2D conservation laws.

Design/methodology/approach
The serial version of the code is naturally suited to parallelization because the spatial operator is formulated as a splitting scheme per direction, in which the flux components are computed numerically by a Lax–Friedrichs factorization independently for each row or column. High-order approximations of the numerical fluxes are computed by third-order essentially non-oscillatory (ENO) and fifth-order weighted essentially non-oscillatory (WENO) interpolation schemes on sparse grids in each direction. Grid adaptivity is obtained by a cubic interpolating wavelet transform applied in each space dimension, combined with a threshold operator. Time is advanced by a third-order TVD Runge–Kutta method.

Findings
The parallel formulation is implemented automatically at compile time by the OpenMP library routines and is virtually transparent to the programmer. This greatly simplifies the management and updating of the adaptive grid compared with what is required by other parallel approaches. Numerical simulation results and the large speedups obtained for the Euler equations of gas dynamics highlight the efficiency of the OpenMP approach.
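To make concrete how the per-direction splitting exposes the row/column independence exploited by the OpenMP directives, the following C sketch shows a single parallel-for directive distributing per-row flux sweeps over the available threads. It is a minimal illustration, not the authors' implementation: the routine name flux_sweep_1d, the dense array shapes and the first-order Lax–Friedrichs body standing in for the ENO/WENO reconstruction are assumptions made only to keep the example self-contained.

```c
/* Minimal sketch (not the paper's code): per-direction splitting with OpenMP.
 * flux_sweep_1d, the array shapes and the first-order Lax-Friedrichs flux are
 * placeholder assumptions standing in for the adaptive ENO/WENO machinery. */
#include <omp.h>

#define NX 1024
#define NY 1024
#define ALPHA 1.0   /* placeholder for the maximum wave speed */

/* Burgers flux f(u) = u^2/2, used only to make the sketch self-contained. */
static double f(double u) { return 0.5 * u * u; }

/* One independent 1D sweep: Lax-Friedrichs-type split fluxes along a row.
 * In the adaptive scheme, this is where the ENO/WENO interpolation acts. */
static void flux_sweep_1d(const double *u, double *rhs, int n, double dx)
{
    for (int i = 1; i < n - 1; ++i) {
        double fp = 0.5 * (f(u[i + 1]) + f(u[i])) - 0.5 * ALPHA * (u[i + 1] - u[i]);
        double fm = 0.5 * (f(u[i]) + f(u[i - 1])) - 0.5 * ALPHA * (u[i] - u[i - 1]);
        rhs[i] = -(fp - fm) / dx;
    }
    rhs[0] = rhs[n - 1] = 0.0;  /* crude boundary treatment for the sketch */
}

/* x-direction contribution to the right-hand side: every row is an
 * independent 1D problem, so one directive distributes rows over threads. */
void rhs_x_direction(double u[NY][NX], double rhs[NY][NX], double dx)
{
    #pragma omp parallel for schedule(dynamic)
    for (int j = 0; j < NY; ++j)
        flux_sweep_1d(u[j], rhs[j], NX, dx);
}
```

An analogous loop over columns handles the y-direction sweep, which is the sense in which the parallelization is transparent to the programmer: the numerical kernels are untouched and only the outer loops receive directives.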
Research limitations/implications
The resulting speedups reflect the effectiveness of the OpenMP approach but are, to a large extent, limited by the hardware used (two Intel Xeon E5-2620 processors, six cores each, two threads per core, hyper-threading enabled). As the demand for OpenMP threads increases, the code starts to make explicit use of the second logical thread available in each E5-2620 core and efficiency drops. The speedup peak is reached at about 22–23 threads, near the possible maximum of 24. This peak reflects the hardware configuration, and the true software limit should lie well beyond this value.

Practical implications
So far, no attempt has been made to parallelize other possible code segments (for instance, the ENO/WENO and TVD code lines that process the different data components), which could push the speedup limit to even higher values. The fact that the speedup peak is located close to the present hardware limit reflects the scalability properties of the OpenMP programming model and of the splitting scheme. Consequently, it is likely that the speedup peak of the OpenMP approach for this kind of problem formulation will remain close to the physical (and/or logical) limit of the hardware used.

Social implications
This work is the result of a successful collaboration among researchers from two institutions, one internationally well known, with long-term experience in applied mathematics for industrial applications, and the other in the early stages of international academic insertion. In this way, the scientific partnership has the potential to promote further knowledge exchange, involving students and other collaborators.

Originality/value
The proposed methodology (the use of the OpenMP programming model for the wavelet adaptive splitting scheme) is original and contributes to a research area that has been very active in recent years, namely adaptive methods for conservation laws and their parallel formulations, which is of great interest to the entire scientific community.