Estimating the depth of transparent objects is a well-known challenge for RGB-D cameras due to reflection and refraction effects. Previous work corrects the depth of transparent objects using their estimated segmentation masks, since the internal depth of an object can be recovered from its boundary alone, as demonstrated by depth-from-silhouette methods. However, these algorithms use only the segmentation masks themselves and discard the internal structure information carried by the segmentation features, which we argue is more useful for transparent depth estimation. In this work, we demonstrate the effectiveness of segmentation features for transparent object depth estimation. We show that it is even possible to recover the depth map from segmentation features alone, without any RGB image or depth map as input. Based on this observation, we propose DualTransNet, which uses segmentation features for transparent depth completion: features from an auxiliary segmentation module are fed into the main network to improve depth completion quality. Extensive experiments demonstrate the superiority of segmentation features as well as the state-of-the-art performance of our network.
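The fusion idea described above can be illustrated with a minimal sketch. This is not the paper's actual architecture; all module names, channel widths, and the concatenation-based fusion are illustrative assumptions, showing only the general pattern of feeding intermediate segmentation features (rather than a binary mask) into a depth-completion branch.

```python
import torch
import torch.nn as nn

class SegBranch(nn.Module):
    """Hypothetical auxiliary segmentation module; its intermediate
    feature maps (not just the final mask) are passed downstream."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())

    def forward(self, rgb):
        return self.enc(rgb)  # segmentation features

class DepthCompletion(nn.Module):
    """Hypothetical main branch: encodes RGB + raw depth, then fuses
    segmentation features by channel concatenation before decoding."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(4, 16, 3, padding=1), nn.ReLU())
        self.dec = nn.Conv2d(32, 1, 3, padding=1)  # 16 depth + 16 seg channels

    def forward(self, rgb, raw_depth, seg_feats):
        x = self.enc(torch.cat([rgb, raw_depth], dim=1))
        fused = torch.cat([x, seg_feats], dim=1)  # feature-level fusion
        return self.dec(fused)  # completed depth map

rgb = torch.randn(1, 3, 64, 64)
raw_depth = torch.randn(1, 1, 64, 64)  # sensor depth with transparent-object holes
seg_feats = SegBranch()(rgb)
out = DepthCompletion()(rgb, raw_depth, seg_feats)
print(out.shape)  # torch.Size([1, 1, 64, 64])
```

The key design point is that the depth branch consumes the segmentation branch's feature maps, which retain internal structure information that a thresholded binary mask would lose.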