Abstract We propose a new strategy based on self-supervised learning of the node and edge information in a crystal graph neural network (GNN) model, which can yield physical insights by tuning the depth of the multi-scale representations. Compared with popular manually constructed material descriptors, the self-supervised atomic representation can be fed into various machine learning models and achieves better prediction performance on material properties. Applying the self-supervised atomic representation to magnetic moment datasets, we show how it extracts rules and information from magnetic materials. We further develop a node-embedding graph neural network (NEGNN) framework that incorporates rich physical information into the GNN model, yielding significant improvements in prediction performance. The self-supervised material representation and the NEGNN framework can probe in-depth information in materials and can be applied to small datasets with increased prediction accuracy.