Depth estimation from a single RGB image (a.k.a. monocular depth estimation) is a challenging task, with applications in robotics, autonomous vehicles, and other areas. With the advances in deep learning, several monocular depth estimation approaches have achieved remarkable results. However, the depth maps generated by current techniques still exhibit deficiencies. We present a deep neural network that takes the depth map produced by an existing monocular depth estimation approach and enhances it by restoring the fine details required for a sharp result. We train the proposed Depth Enhancer Neural Network (DENN) with a new loss function that compares the input color image of the scene to the color image obtained by rendering the scene with the Phong reflection model. Our experiments show a clear visual improvement in the sharpness of the depth maps produced by DENN, with enhanced edges and more regular planar surfaces, without compromising non-planar objects. Quantitatively, the standard metrics show an average error reduction of 5%, which depends on the initial monocular depth estimation technique used to train DENN.
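
To make the rendering-based loss concrete, below is a minimal PyTorch sketch of one plausible formulation: surface normals are approximated from the predicted depth, a simplified Phong shading (ambient + diffuse terms only, no specular component) is rendered from them, and the rendering is compared to a grayscale version of the input image. The function names (`phong_render`, `phong_loss`), the fixed light direction, and the reflection coefficients are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F


def normals_from_depth(depth):
    """Approximate surface normals from a depth map (B, 1, H, W)
    using finite differences; assumes a simple orthographic setup."""
    dz_dx = depth[:, :, :, 1:] - depth[:, :, :, :-1]   # horizontal gradient
    dz_dy = depth[:, :, 1:, :] - depth[:, :, :-1, :]   # vertical gradient
    dz_dx = F.pad(dz_dx, (0, 1, 0, 0))                 # restore original H x W
    dz_dy = F.pad(dz_dy, (0, 0, 0, 1))
    normal = torch.cat([-dz_dx, -dz_dy, torch.ones_like(depth)], dim=1)
    return F.normalize(normal, dim=1)                  # unit normals (B, 3, H, W)


def phong_render(depth, light_dir=(0.0, 0.0, 1.0), k_ambient=0.3, k_diffuse=0.7):
    """Render a grayscale shading image from depth with a simplified
    Phong model (ambient + diffuse only); coefficients are assumptions."""
    n = normals_from_depth(depth)
    l = torch.tensor(light_dir, dtype=depth.dtype, device=depth.device)
    l = F.normalize(l, dim=0).view(1, 3, 1, 1)
    diffuse = (n * l).sum(dim=1, keepdim=True).clamp(min=0.0)  # max(0, N . L)
    return k_ambient + k_diffuse * diffuse                     # (B, 1, H, W)


def phong_loss(pred_depth, rgb):
    """Photometric discrepancy between the Phong rendering of the
    predicted depth and a crude luminance proxy of the input image."""
    gray = rgb.mean(dim=1, keepdim=True)
    return F.l1_loss(phong_render(pred_depth), gray)
```

In this sketch the loss is differentiable with respect to the predicted depth, so it can be added to a standard depth-regression objective; how the actual DENN combines it with other terms is not specified here.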