Automatically assessing the position and size of liver tumors is essential for radiologists, diagnosis, and the clinical workflow. Many U-Net-based variants have been proposed in recent years to improve medical image segmentation, but they cannot describe the global spatial and channel relationships among lesion regions. To overcome this issue, we propose a novel network called Multi-scale Attention UNet (MA-UNet), which incorporates a self-attention mechanism to adaptively combine local features with their global dependencies. This attention mechanism allows MA-UNet to capture rich contextual dependencies. We design two blocks: a Position-wise Attention Block and a Multi-scale Fusion Attention Block. The Position-wise Attention Block models feature interdependencies in the spatial dimension, representing the spatial dependencies between pixels from a global view. The Multi-scale Fusion Attention Block fuses multi-scale semantic features and captures the channel dependencies of any feature map. We evaluate our method on the MICCAI 2017 LiTS Challenge dataset. Compared with other state-of-the-art methods, the proposed method performs better: the Dice and VOE of liver tumor segmentation are 0.749 ± 0.08 and 0.21 ± 0.06, respectively.

KEYWORDS: Liver tumor segmentation, Attention mechanism, Deep learning.
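
To make the two attention mechanisms concrete, the following is a minimal PyTorch sketch of a position-wise (spatial) self-attention block and a channel self-attention block, not the authors' released implementation: the module names, the channel-reduction factor, and the learnable residual weight `gamma` are illustrative assumptions, and the multi-scale feature fusion of the second block is omitted.

```python
import torch
import torch.nn as nn


class PositionAttentionBlock(nn.Module):
    """Spatial self-attention: every pixel attends to every other pixel (sketch)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable weight of the attention branch (assumed)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)        # (B, HW, C')
        k = self.key(x).flatten(2)                           # (B, C', HW)
        attn = torch.softmax(q @ k, dim=-1)                  # (B, HW, HW) pixel-to-pixel affinities
        v = self.value(x).flatten(2)                          # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)    # aggregate global spatial context
        return self.gamma * out + x                           # residual connection


class ChannelAttentionBlock(nn.Module):
    """Channel self-attention: models inter-channel dependencies of a feature map (sketch)."""

    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        flat = x.flatten(2)                                             # (B, C, HW)
        attn = torch.softmax(flat @ flat.transpose(1, 2), dim=-1)       # (B, C, C) channel affinities
        out = (attn @ flat).view(b, c, h, w)                            # reweight channels by global context
        return self.gamma * out + x


if __name__ == "__main__":
    feat = torch.randn(1, 64, 32, 32)                 # e.g. a decoder feature map
    print(PositionAttentionBlock(64)(feat).shape)     # torch.Size([1, 64, 32, 32])
    print(ChannelAttentionBlock()(feat).shape)        # torch.Size([1, 64, 32, 32])
```

In a U-Net-style decoder, blocks like these would typically be applied to feature maps after the skip connections, so that local convolutional features are re-weighted by their global spatial and channel dependencies before upsampling.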