Semantic nuclei segmentation is a challenging problem in computer vision. Accurate, automatic nuclei segmentation can assist clinicians in diagnosing diseases such as cancer by enabling automated tissue analysis. Deep learning algorithms allow automatic feature extraction from medical images; however, hematoxylin and eosin (H&E) stained images remain challenging due to variability in staining and texture. Using pre-trained models as backbones speeds up development and improves segmentation performance. This paper compares the DeepLabV3+ and U-Net deep learning methods with the pre-trained models ResNet-50 and EfficientNetB4 embedded in their architectures. In addition, different regularization and dropout parameters are applied to prevent overfitting. The experiments were conducted on the PanNuke dataset, which consists of nearly 8,000 histological images with annotated nuclei. The ResNet-50-based DeepLabV3+ model with L2 regularization of 0.02 and dropout of 0.7 proved the most effective, achieving a Dice similarity coefficient (DSC) of 0.8356, an intersection over union (IoU) of 0.7280, and a loss of 0.3212 on the test set.
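As a point of reference for the reported metrics, the sketch below shows one common way to compute the Dice similarity coefficient and IoU from binary masks. It is a minimal illustration, not the paper's evaluation code; the function names, the NumPy-based implementation, and the smoothing term `eps` are assumptions introduced here for clarity.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # 2 * |A ∩ B| / (|A| + |B|), with eps to avoid division by zero
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over union (IoU, Jaccard index) between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # |A ∩ B| / |A ∪ B|, with eps to avoid division by zero
    return (intersection + eps) / (union + eps)
```

With masks of matching shape, calling `dice_coefficient(pred_mask, gt_mask)` and `iou(pred_mask, gt_mask)` yields values in [0, 1], where higher is better, matching the scale of the DSC and IoU figures reported above.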