Federated learning, as a privacy-preserving computation technique, is a promising distributed machine learning (ML) approach for the Internet of Medical Things (IoMT), owing to its ability to train a regression model without collecting the raw data of data owners (DOs). However, traditional interactive federated regression training (IFRT) schemes rely on multiple rounds of communication to train a global model and remain exposed to various privacy and security threats. To overcome these problems, several noninteractive federated regression training (NFRT) schemes have been proposed and applied in a variety of scenarios. Nevertheless, several challenges remain: 1) how to protect the privacy of DOs' local datasets; 2) how to realize highly scalable regression training whose cost does not depend linearly on the sample dimension; 3) how to tolerate DOs dropping out; and 4) how to enable DOs to verify the correctness of the aggregated results returned by the cloud service provider (CSP). In this article, we propose two practical privacy-preserving noninteractive federated learning schemes for IoMT, named homomorphic-encryption-based NFRT (HE-NFRT) and double-masking-protocol-based NFRT (Mask-NFRT), which comprehensively address noninteractive training, privacy, efficiency, robustness, and verifiability. The security analysis shows that the proposed schemes protect the privacy of DOs' local training data, resist collusion attacks, and support strong verification for each DO. The performance evaluation results demonstrate that the proposed HE-NFRT scheme is well suited to high-dimensional, high-security IoMT applications, whereas the Mask-NFRT scheme is well suited to high-dimensional, large-scale IoMT applications.
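To make the masking-based aggregation idea behind schemes such as Mask-NFRT concrete, the following is a minimal, illustrative Python sketch of generic pairwise double masking, not the paper's exact protocol: each DO adds pseudorandom masks derived from pairwise seeds to its local statistic, and the masks cancel when the CSP sums all contributions. Seed agreement, dropout recovery, and verification are omitted, and all names (e.g., prg, mask_update) and parameters are assumptions introduced here for illustration only.

```python
# Illustrative sketch of pairwise-mask secure aggregation (assumptions only;
# not the paper's Mask-NFRT protocol). Masks cancel in the CSP's sum.
import numpy as np

DIM = 8          # dimension of each DO's local statistic (assumed)
N_DOS = 4        # number of data owners (assumed)
MOD = 2**32      # arithmetic modulo 2^32, as in masking-style protocols

def prg(seed: int, dim: int) -> np.ndarray:
    """Expand a shared pairwise seed into a pseudorandom mask vector."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, MOD, size=dim, dtype=np.uint64)

def mask_update(i: int, update: np.ndarray, seeds: dict) -> np.ndarray:
    """DO i adds +PRG(seed_ij) for j > i and subtracts it for j < i."""
    masked = update.astype(np.uint64) % MOD
    for j in range(N_DOS):
        if j == i:
            continue
        m = prg(seeds[(min(i, j), max(i, j))], DIM)
        masked = (masked + m) % MOD if i < j else (masked - m) % MOD
    return masked

# Pairwise seeds each pair of DOs is assumed to have agreed on beforehand
# (e.g., via a key-agreement step in a real deployment).
seeds = {(i, j): 12345 + i * N_DOS + j
         for i in range(N_DOS) for j in range(i + 1, N_DOS)}

# Each DO's local statistic, e.g., a quantized summary of its regression data.
updates = [np.arange(DIM, dtype=np.uint64) + i for i in range(N_DOS)]

# The CSP only sees masked vectors, yet their sum equals the true aggregate.
aggregate = sum(mask_update(i, u, seeds) for i, u in enumerate(updates)) % MOD
assert np.array_equal(aggregate, sum(updates) % MOD)
print("aggregate:", aggregate)
```

In this sketch the CSP never observes an individual DO's unmasked statistic; an HE-based variant would instead have DOs encrypt their statistics under an additively homomorphic scheme and let the CSP aggregate ciphertexts.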