Image deraining aims to mitigate the adverse effects of rain streaks on image quality. Recently, convolutional neural networks (CNNs) and Vision Transformers (ViTs) have catalyzed substantial advances in this field. However, these methods struggle to balance model efficiency with deraining performance. In this paper, we propose an effective, locally enhanced visual state space model for image deraining, called DerainMamba. Specifically, we introduce a global-aware state space model that better captures long-range dependencies with linear complexity. In contrast to existing methods that rely on fixed unidirectional scan mechanisms, we propose a direction-aware symmetrical scanning module that better captures the directional characteristics of rain streaks. Furthermore, we integrate a local-aware mixture of experts into our framework to mitigate local pixel forgetting, thereby improving the quality of high-resolution image reconstruction. Experimental results validate that the proposed method surpasses state-of-the-art approaches on six benchmark datasets.
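To make the scanning idea concrete, the following is a minimal NumPy sketch of one plausible form of symmetrical multi-directional scanning: a feature map is flattened into four 1-D sequences (row-major forward/backward and column-major forward/backward) so that a sequence model can traverse rain-streak patterns along complementary directions, and the processed sequences are then mapped back and fused. The function names and the averaging fusion are illustrative assumptions, not the paper's actual module.

```python
import numpy as np

def symmetric_scans(x):
    """Flatten an HxW feature map into four 1-D scan sequences.

    Hypothetical sketch: row-major forward/backward plus
    column-major forward/backward traversals.
    """
    fwd = x.reshape(-1)        # left-to-right, top-to-bottom
    bwd = fwd[::-1]            # reversed row-major scan
    vfwd = x.T.reshape(-1)     # top-to-bottom, left-to-right
    vbwd = vfwd[::-1]          # reversed column-major scan
    return [fwd, bwd, vfwd, vbwd]

def merge_scans(seqs, h, w):
    """Map each (processed) sequence back to HxW and average,
    so every pixel aggregates context from all four directions."""
    fwd, bwd, vfwd, vbwd = seqs
    maps = [
        fwd.reshape(h, w),
        bwd[::-1].reshape(h, w),
        vfwd.reshape(w, h).T,
        vbwd[::-1].reshape(w, h).T,
    ]
    return np.mean(maps, axis=0)

# With identity "processing", merging the scans reconstructs the input,
# confirming the scan/merge pair is consistent.
x = np.arange(12, dtype=float).reshape(3, 4)
y = merge_scans(symmetric_scans(x), 3, 4)
assert np.allclose(x, y)
```

In a real model, each of the four sequences would be fed through a state space block before merging; the sketch only verifies that the scan and inverse-scan orderings are mutually consistent.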