Depth completion, which augments monocular depth estimation with sparse depth measurements from range sensors, substantially improves depth accuracy, especially for deep-learning-based methods. However, these methods can hardly produce satisfactory results when the sensor configuration changes at test time, which matters for real-world applications. In this paper, we tackle this problem with a novel two-stage mechanism that decomposes depth completion into two subtasks: relative depth map estimation and scale recovery. The relative depth map is first estimated from a single color image with our designed scale-invariant loss function; the scale map is then recovered from the additional sparse depth. Experiments with different densities and patterns of the sparse depth input show that our model consistently produces satisfactory results. Moreover, our approach achieves state-of-the-art performance on the indoor NYUv2 dataset and performs competitively on the outdoor KITTI dataset, demonstrating the effectiveness of our method.
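The abstract does not specify the exact form of the scale-invariant loss. As a hedged illustration only, the following sketch shows a widely used scale-invariant log loss for relative depth (in the style of Eigen et al.); the function name, the weighting parameter `lam`, and the specific formulation are assumptions, not the paper's actual loss. With `lam = 1` the loss is invariant to any global scaling of the prediction, which is the property that lets the network estimate relative depth independently of absolute scale.

```python
import numpy as np

def scale_invariant_loss(pred, target, lam=1.0):
    """Scale-invariant log loss (illustrative sketch, not the paper's exact loss).

    pred, target: positive depth arrays of the same shape.
    lam: weight on the variance-cancelling term; lam=1 gives full
         invariance to a global scale factor on `pred`.
    """
    d = np.log(pred) - np.log(target)          # per-pixel log-depth error
    n = d.size
    return (d ** 2).mean() - lam * (d.sum() ** 2) / (n ** 2)

# With lam=1, multiplying the prediction by any constant leaves the loss
# unchanged, so only relative depth is penalized.
pred = np.array([1.0, 2.0, 4.0])
target = np.array([1.1, 1.9, 4.2])
print(scale_invariant_loss(pred, target))
print(scale_invariant_loss(3.0 * pred, target))  # same value for lam=1
```

A separate scale-recovery stage would then fit the global (or spatially varying) scale using the sparse depth measurements at the pixels where they are available.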