The metaverse, widely regarded as the successor to the internet, is a digital world with its own values and economy that is closely linked to the physical world. Accessed through digital avatars using VR equipment, it involves the collection of extensive personal data, raising serious concerns about data security and privacy. Federated learning is a distributed machine learning technique that preserves privacy by allowing metaverse avatars to share knowledge without revealing users' raw data. However, recent research shows that federated learning still faces privacy threats, such as Source Inference Attacks (SIAs) in Horizontal Federated Learning (HFL) and Label Inference Attacks (LIAs) in Vertical Federated Learning (VFL). To address these problems, in this paper we propose ISPPFL, the first Incentive Scheme based Privacy-Preserving Federated Learning framework for avatars in the metaverse. The framework consists of a privacy risk auditor, a perturbation generation mechanism, and an adaptive incentive mechanism, which together effectively defend against privacy risks. We conducted comprehensive experiments on two distinct datasets across various scenarios. The results demonstrate that ISPPFL can effectively defend against privacy attacks while maintaining model accuracy. For SIAs under HFL, compared with the baseline, introducing the perturbation and incentive mechanisms reduces the privacy risk indicator (PRI) to around 20% while preserving model performance. For LIAs under VFL, the PRI of the model with the perturbation generation mechanism decreases by approximately 10% compared with the model trained without defenses; moreover, with the adaptive incentive mechanism, the PRI drops from 90% to 69%. Finally, we summarize the completed work and propose directions for future research.