Global localization is an integral part of indoor robot navigation. Traditional image-level methods still struggle in dynamic environments and under illumination variation and large viewpoint changes, while methods based on 3D point clouds suffer from low computational efficiency and poor localization reliability. To address the low localization stability in object-level navigation and the kidnapped robot problem, an efficient global localization framework based on graph matching is proposed. First, semantic information is extracted by an instance segmentation network, and an efficient 3D bounding box extraction algorithm is used to build the object-level map. The object-level map is then converted into a semantic topological graph and preprocessed, and the feature matrix of the map is obtained by tree generation and a graph kernel method. A matching method based on a voting mechanism establishes correspondences between the local graph and the global graph. Finally, the robot pose is recovered by a two-stage point cloud registration method. Experiments on the SceneNN dataset and in three real indoor environments show that the approach achieves high accuracy and robust global localization under large viewpoint changes.
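The tree-generation graph kernel and voting-based matching steps can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes nodes carry semantic class labels, builds per-node features by iterative neighborhood label aggregation (a Weisfeiler-Lehman-style subtree expansion standing in for "tree generation"), and lets each local-graph node vote for the global-graph node with the largest feature overlap. All function names, the graph encoding, and the toy labels are assumptions for illustration.

```python
from collections import Counter

def wl_features(adj, labels, depth=2):
    """Per-node feature multisets after `depth` rounds of label aggregation.

    adj: node -> list of neighbor nodes; labels: node -> semantic class.
    Each round relabels a node with (own label, sorted neighbor labels),
    so features encode subtree patterns rooted at the node.
    """
    feats = {n: Counter([lab]) for n, lab in labels.items()}
    cur = dict(labels)
    for _ in range(depth):
        nxt = {n: (cur[n], tuple(sorted(cur[m] for m in adj[n]))) for n in adj}
        cur = nxt
        for n in adj:
            feats[n][cur[n]] += 1
    return feats

def vote_matches(local_feats, global_feats):
    """Each local node votes; the global node with most shared features wins."""
    matches = {}
    for ln, lf in local_feats.items():
        votes = {gn: sum((lf & gf).values()) for gn, gf in global_feats.items()}
        best = max(votes, key=votes.get)
        if votes[best] > 0:
            matches[ln] = best
    return matches

# Toy example: a global semantic graph and a local subgraph seen by the robot.
g_adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
g_lab = {0: "chair", 1: "table", 2: "sofa", 3: "tv"}
l_adj = {0: [1], 1: [0]}
l_lab = {0: "sofa", 1: "tv"}

m = vote_matches(wl_features(l_adj, l_lab), wl_features(g_adj, g_lab))
# The observed sofa-tv pair is matched to the sofa-tv nodes of the global map.
```

In the full pipeline, the matched object correspondences would then seed the coarse stage of the two-stage point cloud registration that recovers the robot pose.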