Multiparty learning enables the training of joint models on decentralized data under legal and practical constraints. However, traditional multiparty learning approaches face obstacles such as system heterogeneity, statistical heterogeneity, and incentive design, and addressing these challenges while further improving efficiency and performance has become an urgent problem. In this article, we propose a novel contrastive multiparty learning framework for knowledge refinement and sharing with an accountable incentive mechanism. Because the existing parameter-averaging method conflicts with the learning paradigm of neural networks, we simulate the process of human cognition and communication and cast multiparty learning as a many-to-one knowledge-sharing problem. The approach integrates the explicit knowledge acquired by each client in a transparent manner without privacy disclosure, and it reduces dependence on data distribution and communication environments. Experiments on several real-world datasets demonstrate that the proposed scheme achieves significant improvements in model performance across a variety of scenarios.
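To make the contrast concrete, the following is a minimal sketch, not the paper's actual method: a FedAvg-style weighted parameter average (the aggregation paradigm the abstract critiques) next to a simple many-to-one knowledge-sharing step in which clients exchange predictions on shared inputs rather than model weights. All function names, the toy weights, and the toy logits are illustrative assumptions.

```python
# Illustrative contrast (hypothetical toy example, not the proposed framework):
# parameter averaging exchanges model weights, while knowledge sharing
# exchanges each client's predictions ("explicit knowledge") on shared inputs.

def fedavg(client_weights, client_sizes):
    """Weighted parameter averaging: baseline aggregation over client models."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

def share_knowledge(client_logits):
    """Many-to-one knowledge sharing: aggregate client predictions on a
    common sample instead of exchanging raw parameters or private data."""
    k = len(client_logits[0])
    return [sum(logits[i] for logits in client_logits) / len(client_logits)
            for i in range(k)]

# Two clients with toy 3-parameter models and 3-class prediction logits.
avg_w = fedavg([[1.0, 2.0, 3.0], [3.0, 2.0, 1.0]], [100, 300])
# → [2.5, 2.0, 1.5]
teacher = share_knowledge([[2.0, 0.0, 1.0], [0.0, 2.0, 1.0]])
# → [1.0, 1.0, 1.0]
```

The aggregated predictions can then serve as a distillation target for a student model, which is one way such prediction-level sharing avoids transmitting raw parameters or data.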