Optical coherence tomography (OCT) retinal layer segmentation is a critical step in modern ophthalmic practice, supporting the diagnosis and treatment of diseases such as diabetic macular edema (DME) and multiple sclerosis (MS). Existing algorithms perform poorly due to low OCT image quality, highly similar inter-layer retinal morphology, and the uncertain presence, shape, and size of lesions. In this work, we design HDB-Net, a network for retinal layer segmentation in diseased OCT images that addresses these difficulties by combining global and detailed features. First, the proposed network uses a Swin transformer and ResNet-50 as parallel backbones, combined with the pyramid structure of UperNet, to extract global context and aggregate multi-scale information from images. Second, a feature aggregation module (FAM) introduces a mixed attention mechanism to combine the global context information from the Swin transformer with the local feature information from ResNet. Finally, a boundary awareness and feature enhancement module (BA-FEM) extracts retinal layer boundary information and topological order from low-resolution features in the shallow layers. Our approach is validated on two public datasets, achieving Dice scores of 87.61% and 92.44%, respectively, outperforming other state-of-the-art methods.
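To make the dual-branch fusion idea concrete, below is a minimal NumPy sketch of how a mixed-attention aggregation module might combine a global (transformer-branch) feature map with a local (CNN-branch) feature map via channel and spatial attention. All function names, the pooling choices, and the summation-based fusion are illustrative assumptions, not the paper's actual FAM implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # Global average pooling over spatial dims -> one gate per channel.
    # Emphasizes which channels carry useful context. (Assumed design.)
    w = sigmoid(feat.mean(axis=(1, 2)))          # shape (C,)
    return feat * w[:, None, None]

def spatial_attention(feat):
    # Mean over channels -> one gate per spatial location.
    # Emphasizes where local detail matters. (Assumed design.)
    w = sigmoid(feat.mean(axis=0))               # shape (H, W)
    return feat * w[None, :, :]

def feature_aggregation(global_feat, local_feat):
    # Apply channel attention to the transformer branch and spatial
    # attention to the CNN branch, then fuse by element-wise sum
    # (one plausible fusion choice among several).
    return channel_attention(global_feat) + spatial_attention(local_feat)

rng = np.random.default_rng(0)
g = rng.standard_normal((8, 16, 16))   # (C, H, W) transformer-branch features
l = rng.standard_normal((8, 16, 16))   # (C, H, W) CNN-branch features
fused = feature_aggregation(g, l)
print(fused.shape)  # (8, 16, 16)
```

The fused map has the same shape as each input, so it can be passed unchanged to the decoder stages of a pyramid-style segmentation head.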