Many cardiovascular diseases are closely related to the composition of epicardial adipose tissue (EAT). Accurate segmentation of EAT can provide a reliable reference for clinicians diagnosing these diseases. However, the distribution and composition of EAT vary considerably between individuals, and traditional segmentation methods perform poorly. In recent years, deep learning methods have gradually been introduced into the EAT segmentation task, but existing deep-learning-based EAT segmentation methods are computationally expensive, and their segmentation accuracy still needs improvement. The purpose of this paper is therefore to develop a lightweight EAT segmentation network that achieves higher segmentation accuracy with less computation and further alleviates the problem of false-positive segmentation. First, the acquired computed tomography (CT) images were preprocessed: based on prior knowledge, the threshold range of EAT was set to [-190, -30] HU, and non-adipose pixels were excluded by threshold segmentation to reduce the difficulty of training. Second, the thresholded images were fed into the lightweight RDU-Net network for training, validation, and testing. RDU-Net uses a residual multi-scale dilated convolution block to extract information over a wider receptive field without changing the current resolution. At the same time, residual connections are adopted to avoid the gradient vanishing or explosion problems caused by overly deep networks, which also makes learning easier. To optimize the training process, this paper proposes PNDiceLoss, which takes both positive and negative pixels as learning targets, fully accounts for the class imbalance problem, and appropriately emphasizes the positive pixels.
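The HU thresholding step described above can be sketched as follows. This is a minimal illustration only: the function name, the choice of fill value for excluded pixels, and the array handling are assumptions, not the paper's actual code.

```python
import numpy as np

# Adipose-tissue Hounsfield-unit range taken from the text; everything else
# in this snippet is a hypothetical illustration of the preprocessing step.
EAT_HU_MIN, EAT_HU_MAX = -190, -30

def threshold_adipose(ct_slice_hu: np.ndarray) -> np.ndarray:
    """Zero out non-adipose pixels of a CT slice given in Hounsfield units.

    Pixels inside [-190, -30] HU are kept; all others are set to 0
    (an assumed fill value) so the network only sees candidate fat voxels.
    """
    mask = (ct_slice_hu >= EAT_HU_MIN) & (ct_slice_hu <= EAT_HU_MAX)
    return np.where(mask, ct_slice_hu, 0).astype(ct_slice_hu.dtype)
```

Restricting the input to the adipose HU window this way removes most irrelevant anatomy before training, which is what the text means by "reducing the difficulty of training".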
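The idea behind PNDiceLoss, treating both positive and negative pixels as learning targets while giving extra weight to the positives, can be sketched as a weighted combination of a foreground and a background Dice term. The exact formulation and the weight `w_pos` are assumptions for illustration; the paper's definition may differ.

```python
import numpy as np

def soft_dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
    """Soft Dice coefficient between a probability map and a binary target."""
    inter = np.sum(pred * target)
    return (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def pn_dice_loss(pred: np.ndarray, target: np.ndarray, w_pos: float = 0.7) -> float:
    """Hypothetical positive/negative Dice loss.

    Combines a Dice term on the positive (EAT) class and one on the
    negative (background) class; w_pos > 0.5 emphasizes positives, in the
    spirit of the PNDiceLoss described in the text.
    """
    pos_loss = 1.0 - soft_dice(pred, target)          # foreground term
    neg_loss = 1.0 - soft_dice(1.0 - pred, 1.0 - target)  # background term
    return w_pos * pos_loss + (1.0 - w_pos) * neg_loss
```

Including the background term keeps the gradient informative for the dominant negative class, while the asymmetric weight addresses the class imbalance between sparse EAT pixels and background.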
In this paper, 50 coronary computed tomography angiography (CCTA) scans were randomly selected from the hospital, and the commonly used Dice similarity coefficient (DSC), Jaccard similarity, accuracy (ACC), specificity (SP), precision (PC), and Pearson correlation coefficient were used as evaluation metrics. Bland-Altman analysis shows that the extracted EAT volume is consistent with the actual volume. Compared with existing methods, the proposed method achieves better performance on these metrics, reaching a DSC of 0.9262, and the number of false-positive pixels is reduced by more than half. When measuring the obtained EAT volume, the Pearson correlation coefficient reached 0.992 and the linear regression coefficient reached 0.977. To further verify the effectiveness of the proposed method, experiments were also carried out on the cardiac fat database of VisualLab, where the method likewise achieved good results, with a DSC of 0.927 using only 878 slices. In summary, a new method for segmenting and quantifying EAT is proposed. Comprehensive experiments show that, compared with several classical segmentation algorithms, the proposed method runs faster, requires less memory, and achieves higher segmentation accuracy. The code is available at https://github.com/lvanlee/EAT_Seg/tree/main/EAT_seg.
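For reference, the two overlap metrics reported above, DSC and Jaccard similarity, can be computed on binary masks as follows. This is the standard textbook formulation, not code taken from the paper.

```python
import numpy as np

def dsc(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def jaccard(pred: np.ndarray, gt: np.ndarray) -> float:
    """Jaccard similarity (IoU): |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union
```

The two are monotonically related (DSC = 2J / (1 + J)), so they rank methods identically; reporting both mainly aids comparison with prior work.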