Introduction: In an era where technological advancements in large language models and generative artificial intelligence (AI) platforms such as ChatGPT continually redefine the boundaries of medicine, the launch of Amazon Web Services (AWS) HealthScribe on July 26, 2023, marks a transformative moment in healthcare. This HIPAA-eligible generative AI service, capable of transcribing patient-provider conversations and automatically entering them into an electronic health record (EHR) system, represents a profound intersection of technology and medical practice. As this technology permeates clinical settings, addressing patients' concerns about AI becomes paramount. Although providers may grasp the intricacies of such technology quickly, patients may be unfamiliar with it or question its safety, and clear, precise communication will be essential for physicians to ease those concerns. Our team interviewed forty-eight subjects to survey their understanding of, concerns about, and opinions on artificial intelligence, then classified and visually charted their responses. Based on these data, we created a framework for ethical communication that physicians can follow when talking to patients about the use of artificial intelligence in clinical settings.

Method: A multidisciplinary team of physicians, advanced practice providers specializing in hematology and oncology, and bioethicists engaged with forty-eight subjects. The study population represented a diverse cross-section of society, differing in age, sex assigned at birth, political orientation, education, income, ethnicity, occupation, and religious affiliation. Our investigative process used a structured interview of twenty-five foundational questions. No personal health information was collected during the questionnaire, and all questions sought only to gather subject opinion.
Questions were designed to probe: (1) the subject's general feelings about AI in healthcare, (2) their familiarity with AI technologies, (3) the existence of any specific concerns, and (4) a deeper exploration of those concerns. Responses were collected, categorized, and analyzed to discern emergent trends.

Results: Based on our findings, our team developed the TRUST Framework by identifying and addressing the three primary concerns with AI in healthcare: transparency, confidentiality, and consent. The data presented a mosaic of varied opinions, with minimal discernible trends correlating with specific demographic attributes. One of the subtle patterns identified pertained to age. Subjects aged 12-35 demonstrated familiarity with artificial intelligence, generally expressing indifference rather than concern or enthusiasm. Those aged 35-65 exhibited more pronounced indifference than any other age demographic and revealed an unfamiliarity with existing artificial intelligence tools. Conversely, respondents aged 65 and older expressed the highest level of concern and a prevalent unfamiliarity with the range of available artificial intelligence applications.

Conclusion: Effective communication when introducing the use of artificial intelligence in healthcare settings is imperative. This conclusion is rooted not in abstract reasoning alone but in real-world feedback from a diverse subject population. Our research revealed that subjects hold varying feelings and concerns about artificial intelligence, irrespective of their background. Therefore, avoiding unconscious bias and having a framework for communicating how AI will be used is essential. As the frontier of AI continues to expand, the need for ethical and transparent communication will only grow.
This study serves as a call to action for healthcare providers to commit to clear, honest, and empathetic communication about AI's role in patient care. Doing so will promote the broader acceptance of AI in healthcare, thereby enhancing patient outcomes and alleviating patient concerns by building trust.