Ego-motion estimation, as one of the core technologies of unmanned systems, is widely used in autonomous robot navigation, self-driving vehicles, augmented reality, and other fields. With the development of computer vision, ego-motion estimation through visual navigation has attracted considerable interest. One of the core techniques in visual navigation is estimating pose from matched feature points between consecutive image frames. Since feature-based methods operate under the assumption of a static environment, they are susceptible to dynamic targets, and visual navigation in dynamic environments has therefore become an important research issue. This paper proposes a practical and robust feature-selection algorithm for visual navigation that avoids using feature points on dynamic objects. First, objects are classified into potentially dynamic and static categories according to the instance segmentation produced by a deep neural network. Next, the matched features on each potentially moving object are used to update the vehicle state separately, and the corresponding reprojection errors of the remaining background feature points are computed. Finally, whether a target is moving is judged from these reprojection errors, and the features on dynamic targets are removed. To illustrate the effectiveness of the feature-selection method in dynamic environments, the proposed algorithm is integrated into an MSCKF based on trifocal tensor geometry and evaluated on a public dataset. Experimental results demonstrate the effectiveness of the proposed method.
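The moving-object test described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function names, the median statistic, and the 2-pixel threshold are assumptions. The idea is that if a pose update computed from features on a candidate object is inconsistent with the static background, the background points will exhibit large reprojection errors under that pose, and the object is judged dynamic.

```python
import numpy as np

def reprojection_errors(points_3d, observations, R, t, K):
    """Project 3-D background points with pose (R, t) and intrinsics K,
    then return per-point pixel reprojection errors against the
    observed image coordinates."""
    cam = R @ points_3d.T + t.reshape(3, 1)   # world -> camera frame
    proj = K @ cam                            # pinhole projection
    pix = (proj[:2] / proj[2]).T              # normalize to pixel coords
    return np.linalg.norm(pix - observations, axis=1)

def object_is_dynamic(points_3d, observations, R_obj, t_obj, K, thresh=2.0):
    """Judge an object as dynamic when the pose estimated from its own
    matched features makes the static background inconsistent, i.e. the
    median reprojection error of background points exceeds a threshold
    (2 px here is an illustrative value, not taken from the paper)."""
    err = reprojection_errors(points_3d, observations, R_obj, t_obj, K)
    return np.median(err) > thresh
```

In the full pipeline, `points_3d` and `observations` would come from tracked background landmarks, while `R_obj`, `t_obj` would be the state update driven by features on one instance-segmented object; a static object yields a pose consistent with the background and passes the test.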