Abstract

Chinese pre-training is an essential direction in Chinese natural language processing, and the vocabulary constitutes the foundation of a pre-trained model. Existing methods for constructing vocabularies for Chinese pre-trained models typically treat each Chinese character as an indivisible token, overlooking the information embedded in the intrinsic structure of Chinese characters and its impact on model performance. To exploit this information, we propose training with a rule-based fine-grained vocabulary so that a Chinese pre-trained model directly learns the sequences of intrinsic structural units of Chinese characters. Specifically, we first construct a rule-based fine-grained vocabulary from decomposition rules that map each Chinese character to its glyph and radical components. We then train Chinese pre-trained models on this new vocabulary with a whole-character masking strategy. Experimental results show that, compared with the BERT baseline, our model achieves promising performance on multiple downstream tasks while using fewer parameters and exhibiting stronger robustness. The proposed pre-training method exploits the intrinsic structural information of Chinese characters and offers a new direction for subsequent research in Chinese natural language processing.
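To make the vocabulary construction concrete, the following is a minimal Python sketch of the character-to-component mapping idea. The decomposition table, the "##" continuation marker (borrowed from WordPiece), and all identifiers are illustrative assumptions, not the paper's actual rules or implementation.

```python
# Hypothetical glyph/radical decomposition rules: each Chinese character
# maps to the sequence of components it splits into. Real rule tables
# would cover the full character inventory.
DECOMPOSITION_RULES = {
    "好": ["女", "子"],   # "good" = woman + child
    "明": ["日", "月"],   # "bright" = sun + moon
    "休": ["亻", "木"],   # "rest" = person + tree
}

def build_fine_grained_vocab(rules):
    """Collect every component produced by the rules into a vocabulary.

    A "##" continuation marker tags non-initial components so that a
    character can later be reassembled from its pieces.
    """
    vocab = {"[PAD]": 0, "[UNK]": 1, "[CLS]": 2, "[SEP]": 3, "[MASK]": 4}
    for components in rules.values():
        for i, comp in enumerate(components):
            token = comp if i == 0 else "##" + comp
            if token not in vocab:
                vocab[token] = len(vocab)
    return vocab

def tokenize_char(char, rules):
    """Split one character into its fine-grained component tokens."""
    if char not in rules:
        return [char]  # characters without a rule stay whole
    comps = rules[char]
    return [comps[0]] + ["##" + c for c in comps[1:]]

vocab = build_fine_grained_vocab(DECOMPOSITION_RULES)
print(tokenize_char("明", DECOMPOSITION_RULES))  # ['日', '##月']
```

The whole-character masking step can be sketched in the same spirit: analogous to BERT-style whole word masking, all component tokens belonging to one character are masked together so the model must predict the full character structure. The grouping heuristic and masking probability below are assumptions for illustration, not the paper's exact procedure.

```python
import random

def whole_char_masking(tokens, mask_prob=0.15, mask_token="[MASK]"):
    """Mask at whole-character granularity (illustrative sketch).

    Tokens prefixed with "##" continue the preceding component, so a
    character spans a non-"##" token plus any "##" tokens that follow.
    When a character is selected, every component token in its span is
    replaced by the mask token.
    """
    # Group token indices by the character they belong to.
    char_spans = []
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and char_spans:
            char_spans[-1].append(i)
        else:
            char_spans.append([i])

    labels = [None] * len(tokens)  # a real pipeline would use token ids
    masked = list(tokens)
    for span in char_spans:
        if random.random() < mask_prob:
            for i in span:
                labels[i] = tokens[i]
                masked[i] = mask_token
    return masked, labels

tokens = ["日", "##月", "天"]  # fine-grained tokens for "明天"
print(whole_char_masking(tokens, mask_prob=1.0))
```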