The visual saliency-based just noticeable distortion (VS-JND) model has been widely used in quantization-based watermarking frameworks, but reflecting the characteristics of the human visual system (HVS) as faithfully as possible when detecting salient objects remains a challenge. In this paper, given that the HVS has different perceptual sensitivities to different orientations in an image, a compound orientation feature map, which contains the vertical, horizontal, and diagonal information obtained by directional patch extraction, is computed to improve the visual saliency map within the just noticeable distortion (JND) profile. First, the DC coefficient and three low-frequency coefficients of each block are used to calculate the luminance and texture feature maps, respectively. Then, the compound orientation feature map, which combines the horizontal, vertical, and diagonal information contained in the image, is calculated from the three low-frequency coefficients. Finally, the three visual feature maps are linearly fused to obtain the final VS map, and the JND model is improved according to the new VS map. The proposed VS-JND model is then employed in the quantization watermarking (Q-watermarking) framework. Simulation results show that the proposed watermarking scheme is effective and achieves superior robustness compared with existing monochrome image watermarking schemes.
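The pipeline described above (linear fusion of three feature maps into a VS map, which then modulates the quantization step of a quantization-index-modulation embedder) can be sketched as follows. This is a minimal illustration only: the fusion weights, the JND scaling, and the QIM lattice construction are all hypothetical assumptions and are not the parameters or formulas from the paper.

```python
import numpy as np

# Illustrative sketch, assuming: (1) three per-block feature maps
# (luminance, texture, orientation) already computed from block-DCT
# coefficients; (2) linear fusion with hypothetical weights; (3) a
# saliency-modulated step used by a standard QIM (dither-free) embedder.

def fuse_vs_map(luminance, texture, orientation, weights=(0.4, 0.3, 0.3)):
    """Linearly fuse three feature maps (min-max normalized) into a VS map."""
    maps = [luminance, texture, orientation]
    norm = [(m - m.min()) / (np.ptp(m) + 1e-12) for m in maps]
    return sum(w * m for w, m in zip(weights, norm))

def qim_embed(coeff, bit, step):
    """Embed one bit by snapping the coefficient to one of two lattices."""
    offset = step / 2.0 if bit else 0.0
    return np.round((coeff - offset) / step) * step + offset

def qim_extract(coeff, step):
    """Recover the bit by comparing distances to the two lattices."""
    d0 = abs(coeff - np.round(coeff / step) * step)
    d1 = abs(coeff - (np.round((coeff - step / 2) / step) * step + step / 2))
    return int(d1 < d0)

# Toy usage: a larger JND (less perceptually sensitive region) permits a
# coarser quantization step, hence a more robust embedding.
rng = np.random.default_rng(0)
lum, tex, ori = (rng.random((8, 8)) for _ in range(3))
vs = fuse_vs_map(lum, tex, ori)
jnd_step = 4.0 * (1.0 + vs)  # hypothetical saliency-to-step mapping
watermarked = qim_embed(37.3, 1, jnd_step[0, 0])
recovered = qim_extract(watermarked, jnd_step[0, 0])
```

The design choice illustrated here is the core of any VS-JND Q-watermarking scheme: perceptual headroom (the JND) sets the quantization step, so robustness is maximized exactly where the HVS is least sensitive.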