Despite the remarkable results achieved by deep convolutional neural networks (CNNs) in single image super-resolution (SISR), their computational cost increases rapidly as CNN models become deeper and wider. Consequently, depthwise separable convolutions (DSConvs) have emerged as a fundamental building block of many contemporary lightweight network architectures. However, depthwise convolution is sub-optimal for restoring missing high-frequency details. In this paper, we start from the vanilla depthwise convolution and progressively refine it to alleviate this problem. Specifically, we introduce an effective method, kernel attention injection, which integrates intra-kernel correlation into the depthwise convolution layer by injecting the learned kernel attention into the depthwise kernel. We find that the determinant captures the correlation between diagonal elements of the depthwise kernel by summing the products along its diagonals, and we therefore adopt the determinant as the kernel pooling method. Furthermore, we propose to remove the non-linear activation function following the depthwise convolution (i.e., linear depthwise convolution) to preserve informative features for reconstructing faithful high-resolution (HR) images. Building on this recipe, we introduce kernel-attentive linear depthwise separable convolutions (KDSConvs), which exhibit promising super-resolution performance. Our experimental results underline the potential of depthwise convolution in super-resolution tasks by demonstrating that the proposed KDSConvs significantly outperform DSConvs in terms of both quantitative measurements and visual quality. The code will be released at https://github.com/AwesomeHwang/KALDN.
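
To make the recipe concrete, the following is a minimal PyTorch sketch of one possible KDSConv block as described above; it is not the released KALDN code, and the gating function (`self.gate`, a linear layer followed by a sigmoid) and the initialization are assumptions made only for illustration. The sketch uses the determinant of each depthwise kernel as the kernel pooling operator, injects the resulting per-channel attention back into the depthwise kernel, applies the depthwise convolution without a subsequent activation (the linear depthwise convolution), and finishes with a 1x1 pointwise convolution.

```python
# Minimal sketch (not the authors' implementation) of a kernel-attentive
# linear depthwise separable convolution, assuming a square depthwise
# kernel, a hypothetical linear+sigmoid gate, and determinant-based
# kernel pooling as described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


class KDSConv(nn.Module):
    """Hypothetical KDSConv block sketched from the abstract's description."""

    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        self.kernel_size = kernel_size
        # Depthwise kernels: one k x k filter per input channel.
        self.dw_weight = nn.Parameter(
            torch.randn(in_channels, 1, kernel_size, kernel_size) * 0.1
        )
        # Gating function mapping per-channel pooled (determinant) values
        # to per-channel kernel attention; the exact form is an assumption.
        self.gate = nn.Sequential(nn.Linear(in_channels, in_channels), nn.Sigmoid())
        # Pointwise (1x1) convolution completes the separable convolution.
        self.pw = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        c, k = self.dw_weight.shape[0], self.kernel_size
        # Kernel pooling via the determinant of each k x k depthwise kernel.
        det = torch.linalg.det(self.dw_weight.view(c, k, k))  # shape (c,)
        # Kernel attention injection: rescale each channel's kernel by its attention.
        attn = self.gate(det).view(c, 1, 1, 1)
        weight = self.dw_weight * attn
        # Linear depthwise convolution: no non-linear activation after this stage.
        out = F.conv2d(x, weight, padding=k // 2, groups=c)
        return self.pw(out)


if __name__ == "__main__":
    block = KDSConv(16, 16)
    y = block(torch.randn(1, 16, 32, 32))
    print(y.shape)  # torch.Size([1, 16, 32, 32])
```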