Quickly and accurately obtaining street lamp post information has great application value in smart city construction and autonomous vehicle navigation. However, existing deep learning methods are affected by factors such as the perspective effect, spectral similarity between different objects, and occlusion, and their semantic segmentation results for street lamp posts can suffer from under-segmentation, misextraction, and discontinuity. In this paper, we present OSLPNet, a model for extracting street lamp posts from street view imagery. To handle the wide range of scales at which street lamp posts appear in the imagery, we propose a multi-scale phased controller (MPC) with multi-level receptive fields to reduce under-segmentation. To exploit the distinctive “elbow” structure of street lamp posts, we introduce deformable convolution to reduce misextraction. To exploit the topological relationships in the street lamp post context, we propose a lightweight spatial context (LSC) module that addresses the discontinuous detections caused by occlusion. We also present two street lamp post datasets; experimental results show that OSLPNet achieves F1-scores of 85.2% and 82.4% on the two datasets, outperforming existing state-of-the-art methods. The code and datasets are publicly available at https://github.com/ZzzTD/OSLPNet.