Traditional monocular depth estimation assumes that all objects are reliably visible in the RGB color domain. However, this is not always the case, as more and more buildings are decorated with transparent glass walls. This problem has remained unexplored due to the difficulty of annotating the depth of glass walls, since commercial depth sensors cannot provide correct feedback on transparent objects. Furthermore, estimating the depth of transparent glass walls requires the aid of surrounding context, which has not been considered in prior works. To address this problem, we introduce the first Glass Walls Depth Dataset (GW-Depth dataset). We annotate the depth of transparent glass walls by propagating context depth values within neighboring flat areas, and we also provide glass segmentation masks and instance-level line segments of glass edges. In addition, we propose a tailored monocular depth estimation method that fully exploits contextual understanding of glass walls. First, we exploit the glass structure context by incorporating the structural prior knowledge embedded in glass boundary line segment detection. Second, to make our method adaptive to scenes without structure context, where the glass boundary is either absent from the image or too narrow to be recognized, we derive a reflection context by utilizing depth-reliable points sampled according to the variance between two depth estimations at different resolutions. High-resolution depth is then estimated as a weighted sum of the depths at those reliable points. Extensive experiments demonstrate the effectiveness of the proposed dual-context design, and the superior performance of our method is shown by comparison with state-of-the-art methods. We present the first feasible solution for monocular depth estimation in the presence of glass walls, which can be widely adopted in autonomous navigation.