As concerns about privacy leakage in data-driven smart services have grown, federated learning (FL) has been introduced to collaboratively learn an effective model across multiple participants without sharing raw data. However, existing FL schemes require participating nodes to perform intensive on-device training and network communication, which imposes a significant burden on energy-constrained mobile devices. In this work, we present the computational layered FL (CLFL) framework, which enables resource-constrained devices to perform computation-efficient on-device training and lightweight message transmission. We first introduce the network structure and the key aspects of the entities in the framework. We then describe the implementation principles of CLFL and present two instance schemes that allow devices to participate in joint training without direct gradient computation or continuous data transmission. To demonstrate the performance and efficiency of the proposed methods more intuitively, we carry out a preliminary implementation and compare it with traditional FL. Finally, to guide future exploration, we present four research challenges related to CLFL and offer possible solutions.