Real-time power network dispatching and control (PDC) presents unique challenges that traditional methods cannot effectively address because temporal dynamics must be taken into account. Reinforcement learning (RL) has been introduced and proven effective. However, given the vast solution space, there is an urgent need to further enhance RL performance. Analyzing the characteristics of the power network to improve RL performance holds significant potential. This article presents a comprehensive analysis of power network characteristics, including the imbalanced state distribution and the imbalanced action operation frequency, along with their impact on applying RL to real-time PDC, to guide algorithm design. Guided by this analysis, the paper proposes the Balance Deep Q-Network (DQN) algorithm to mitigate the negative impact of these imbalance problems on algorithm performance. A K-means-based predefined curriculum learning (KPCL) module is proposed to address the state imbalance issue. It provides differentiated access to operation scenarios based on their hardness, ensuring balanced exploration of operation scenarios while maintaining diversity during training. An Option DQN module is proposed to tackle the action imbalance problem, utilizing a branching network structure and hierarchical action selection to reduce the coupling between actions with different usage frequencies, thereby improving the accuracy of action-value estimation. The effectiveness of the Balance DQN algorithm is demonstrated through extensive experiments conducted on the Grid2Op platform with 14-bus and 36-bus cases. The results indicate that Balance DQN can effectively alleviate the negative impacts of the imbalance problems on RL agents for real-time PDC and achieve good performance in both cases.
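To make the KPCL idea concrete, the following is a minimal Python sketch of clustering operation scenarios by hardness and sampling them in a staged, easy-to-hard order. The hardness features, the cluster count, the hardness proxy (cluster-center norm), and the stage-opening schedule are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of K-means-based predefined curriculum learning (KPCL).
# Assumptions: scenarios are described by a hardness feature vector (e.g.
# peak load or line-loading statistics), and cluster-center norm is used
# as a crude hardness proxy; the paper's actual criteria may differ.
import numpy as np
from sklearn.cluster import KMeans

def build_curriculum(scenario_features: np.ndarray, n_clusters: int = 4):
    """Cluster scenarios by hardness features and order clusters easy-to-hard.

    scenario_features: (n_scenarios, n_features) array.
    Returns a list of scenario-index arrays, one per curriculum stage.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(scenario_features)
    # Rank clusters by center norm as a hardness proxy (assumption).
    hardness = [np.linalg.norm(km.cluster_centers_[c]) for c in range(n_clusters)]
    order = np.argsort(hardness)  # easy clusters first
    return [np.where(labels == c)[0] for c in order]

def sample_scenario(stages, progress: float, rng=np.random):
    """Sample a scenario, opening harder stages as training progresses
    while keeping earlier stages available to preserve diversity."""
    n_open = max(1, int(np.ceil(progress * len(stages))))
    stage = stages[rng.randint(n_open)]
    return int(rng.choice(stage))
```

Sampling uniformly over all opened stages, rather than only the newest one, is one simple way to realize the balanced-yet-diverse exploration the module aims for.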
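Likewise, a minimal PyTorch sketch of the branching, two-level action selection behind the Option DQN module: a shared trunk feeds an upper head that scores options (action categories) and per-option heads that score concrete actions, so actions with different usage frequencies are estimated in separate branches. Layer sizes, the category split, and the greedy selection rule are assumptions for illustration.

```python
# Minimal sketch of a branching, hierarchical Q-network in the spirit of
# the Option DQN module. The architecture details are assumptions.
import torch
import torch.nn as nn

class OptionDQN(nn.Module):
    def __init__(self, obs_dim: int, actions_per_option: list, hidden: int = 256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Upper level: Q-values over options (action categories), so
        # frequent and rare actions live in separate branches.
        self.option_head = nn.Linear(hidden, len(actions_per_option))
        # Lower level: one Q-head per option over its own concrete actions.
        self.action_heads = nn.ModuleList(
            [nn.Linear(hidden, n) for n in actions_per_option]
        )

    def forward(self, obs: torch.Tensor):
        h = self.trunk(obs)
        return self.option_head(h), [head(h) for head in self.action_heads]

    @torch.no_grad()
    def act(self, obs: torch.Tensor):
        q_opt, q_acts = self.forward(obs.unsqueeze(0))
        opt = int(q_opt.argmax(dim=-1))        # pick an option first...
        act = int(q_acts[opt].argmax(dim=-1))  # ...then an action within it
        return opt, act
```

For instance, keeping the frequent do-nothing action as its own option would prevent it from dominating the value estimates of rare topology-changing actions, which is the decoupling effect the module is designed to achieve.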