Federated Learning (FL) is an emerging distributed machine learning paradigm that addresses data privacy and security concerns in the Internet of Things (IoT). When training a federated learning model for smart homes, data owners must continually update their cached data, which consumes resources and introduces service latency. Moreover, the update cost incurred by data owners changes over time as the model owner requests data updates. Therefore, this paper first proposes an incentive scheme based on two-period dynamic contract theory under information asymmetry. The scheme balances the model owner's weighted preference between age of information (AoI) and service latency, and encourages more data owners to participate in model training, thereby increasing the model owner's utility. We then formally prove the feasibility of the proposed dynamic contract by showing that it satisfies the individual rationality and intertemporal incentive compatibility constraints. Experimental results on the MNIST dataset show that the proposed dynamic contract improves accuracy by at least 4% over existing contracts. Additionally, compared with a traditional contract and a uniform pricing strategy, the model owner obtains higher profit under the proposed contract.