Visual object navigation is an essential task in embodied AI, in which an agent navigates to goal objects according to a user's demands. Previous methods mostly focus on single-object navigation. In real life, however, human demands are generally continuous and multiple, requiring the agent to carry out a sequence of tasks. Such demands can be met by repeatedly running single-task methods, but splitting a multi-object demand into independent tasks forgoes global optimization across tasks: the agent's trajectories may overlap, reducing navigation efficiency. In this paper, we propose an efficient reinforcement learning framework with a hybrid policy for multi-object navigation, aiming to eliminate ineffective actions as much as possible. First, visual observations are embedded to detect semantic entities (such as objects). The detected objects are memorized and projected into semantic maps, which serve as a long-term memory of the observed environment. Then a hybrid policy combining exploration and long-term planning strategies is proposed to predict the potential target position. Specifically, when the target has been directly observed, the policy plans a long-term path to the target based on the semantic map, realized as a sequence of motion actions. Otherwise, when the target has not been observed, the policy estimates a potential target position by steering exploration toward the objects (positions) most closely related to the target. The relations between objects are obtained from prior knowledge and, integrated with the memorized semantic map, are used to predict the potential target position; the policy then plans a path to this potential target. We evaluate the proposed method on two large-scale realistic 3D environment datasets, Gibson and Matterport3D, and the experimental results demonstrate its effectiveness and generalization.
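
To make the hybrid policy concrete, the following is a minimal sketch in Python. All names here (SemanticMap, hybrid_policy, predict_potential_position, relation_prior, the grid resolution) are illustrative assumptions, not the authors' implementation; the sketch only shows the branching logic described above: plan directly to the target when it already appears in the semantic map, otherwise explore toward the memorized object with the strongest prior relation to the target.

```python
import numpy as np

GRID = 64  # assumed map resolution for illustration


class SemanticMap:
    """Long-term memory: one occupancy channel per object category."""

    def __init__(self, num_categories: int, size: int = GRID):
        self.grid = np.zeros((num_categories, size, size), dtype=np.float32)

    def update(self, category: int, position: tuple) -> None:
        # Project a detected object of this category into the map.
        self.grid[category][position] = 1.0

    def locate(self, category: int):
        # Return a memorized position of this category, or None if unseen.
        cells = np.argwhere(self.grid[category] > 0)
        return tuple(cells[0]) if len(cells) else None


def predict_potential_position(smap: SemanticMap, target: int,
                               relation_prior: np.ndarray):
    """Exploration branch: score memorized objects by their prior relation
    to the target and return the position of the most related one."""
    best_score, best_pos = -1.0, None
    for cat in range(relation_prior.shape[0]):
        pos = smap.locate(cat)
        if pos is not None and relation_prior[target, cat] > best_score:
            best_score, best_pos = relation_prior[target, cat], pos
    return best_pos


def hybrid_policy(smap: SemanticMap, target: int,
                  relation_prior: np.ndarray):
    """If the target has been observed, plan to it (long-term planning
    branch); otherwise explore toward the most related memorized object."""
    goal = smap.locate(target)
    if goal is None:  # target not yet observed: exploration branch
        goal = predict_potential_position(smap, target, relation_prior)
    return goal  # the goal cell would then be handed to a path planner
```

In this reading, the relation prior acts as a tie-breaker over the semantic map: rather than exploring uniformly, the agent moves toward regions where related objects were previously memorized, which is what allows trajectories across successive targets to share information instead of overlapping.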