The use of line features to improve the localization accuracy of point-based visual-inertial SLAM (VINS) has attracted increasing interest because of the additional constraints they impose on scene structure. However, although line features improve relative-pose estimation in some scenes thanks to these additional constraints, the point and line constraints merely balance each other in the estimated relative pose; in some environments, combining the two can therefore reduce accuracy below that of a single-feature algorithm and further strain the system's real-time performance. To address these issues, we design a generalized point-line SLAM system applicable to multiple metaverse scenarios. We first enhance the image frames used for feature extraction by discarding motion-blurred frames, identified with a blur metric together with models of abrupt changes in velocity and rotation. We then improve the traditional line-detection model through short-line fusion, uniform distribution of line features, and refinement of edge features, yielding high-quality line features for SLAM. Finally, based on the suitability of point and line features for different scenes, we propose a point-line feature separation-union model; in addition, we design a line-to-point feature transformation model to enhance point-line feature processing. Experimental results on EuRoC, TUM VI, KITTI, PennCOSYVIO, and our own recorded dataset show that the proposed method realizes a generalized point-line SLAM that performs well across multiple scenes.