Artificial Intelligence (AI) is becoming ubiquitous throughout our lives. As our reliance on this technology increases, ensuring that human operators maintain an adequate level of trust is essential to safe and effective operations. To facilitate an appropriate level of operator trust in AI, a mechanism to continuously evaluate and calibrate human-AI trust is required. Such a Trust Management System (TMS) will be integral to developing trustworthy AI systems and thus to enabling collaborative and effective Human-AI Teaming (HAT) in future operations. This paper begins with a review of the current state of the art in trust research applicable to HAT, then summarizes the development of, and presents, the IMPACTS (intention, measurability, performance, adaptivity, communication, transparency, security) homeostasis TMS. The IMPACTS TMS is based on a dynamic and transactional trust framework, enabling continuous trust monitoring, management, and behavior adjustment to ensure that operator trust remains calibrated.