Abstract
Generative AI, particularly large language models (LLMs), holds great potential for improving patient care and operational efficiency in healthcare. However, the use of LLMs is complicated by regulatory concerns around data security and patient privacy. This study aimed to develop and evaluate a secure infrastructure that allows researchers to safely leverage LLMs in healthcare while ensuring HIPAA compliance and promoting equitable AI. We implemented a private Azure OpenAI Studio deployment with secure, API-enabled endpoints for researchers. Two use cases were explored: detecting falls from electronic health record (EHR) notes and evaluating bias in mental health prediction using fairness-aware prompts. The framework provided secure, HIPAA-compliant API access to LLMs, allowing researchers to handle sensitive data safely. Both use cases highlighted the secure infrastructure's capacity to protect sensitive patient data while supporting innovation. This centralized platform presents a scalable, secure, and HIPAA-compliant solution for healthcare institutions aiming to integrate LLMs into clinical research.
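As a rough illustration of the API-enabled access described above, the sketch below shows how a researcher might query a private Azure OpenAI deployment through the official openai Python client. The environment variables, API version, deployment name, and prompt are hypothetical placeholders for this sketch, not values taken from the study's infrastructure.

```python
# Minimal sketch of querying a private Azure OpenAI deployment from within a
# secured research environment. Endpoint, credentials, deployment name, and
# prompt are illustrative assumptions, not the study's actual configuration.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # private, network-restricted endpoint (assumed)
    api_key=os.environ["AZURE_OPENAI_API_KEY"],          # researcher credential issued by the platform (assumed)
    api_version="2024-02-01",                             # example API version
)

# Example note text; in practice this would be retrieved inside the secure environment.
note_text = "Patient found on the floor next to the bed this morning; reports slipping while standing."

response = client.chat.completions.create(
    model="gpt-4o-clinical",  # hypothetical deployment name behind the secure endpoint
    messages=[
        {"role": "system", "content": "Identify any fall event documented in the clinical note."},
        {"role": "user", "content": note_text},
    ],
)
print(response.choices[0].message.content)
```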