Background: The potential of generative artificial intelligence (genAI) tools, such as ChatGPT, is being increasingly explored in healthcare settings. However, the same tools also introduce significant cybersecurity risks that could compromise patient safety, data integrity, and institutional trust. This study aimed to examine real-world security breaches involving genAI and extrapolate their potential implications for healthcare settings. Methods: Using a systematic Google News search and a consensus-based approach among the authors, five high-profile genAI breaches were identified and analyzed. These cases included: (1) Data exposure in ChatGPT (OpenAI) due to an open-source library bug (March 2023); (2) Unauthorized data disclosure via Samsung’s (Samsung Group) use of ChatGPT (2023); (3) Logical vulnerabilities in Chevrolet (General Motors) AI-powered chatbot resulting in pricing errors (December 2023); (4) Prompt injection vulnerability in Vanna AI (Vanna AI, Inc.) which enabled remote code execution (2024); and (5) the deepfake technology used in a scam targeting the engineering firm Arup (Arup Group Limited), leading to fraudulent transactions (February 2024). Hypothetical healthcare scenarios were constructed based on the five cases, mapping their mechanisms to vulnerabilities in electronic health records (EHRs), clinical decision support systems (CDSS), and patient engagement platforms. Each case was analyzed using the Confidentiality, Integrity, and Availability (CIA) triad of information security to systematically identify vulnerabilities and propose actionable safeguards. Results: The analyzed cases of AI security breaches revealed significant risks to healthcare systems. Confidentiality violations included the potential exposure of sensitive patient records and billing information, extrapolated from incidents such as the ChatGPT data exposure and Samsung’s cases. These identified security breaches raised concerns about privacy violations, identity theft, and non-compliance with regulations such as Health Insurance Portability and Accountability Act (HIPAA). Integrity vulnerabilities were highlighted in Vanna AI's prompt injection flaw incident, with risks of altering patient records, compromising diagnostic algorithms, and misleading CDSS with erroneous recommendations. Similarly, logic errors identified in the Chevrolet case exposed potential risks of inaccurate billing, double-booked appointments, and flawed treatment plans within healthcare contexts. Availability disruptions, observed through system outages and operational suspensions following breaches like the ChatGPT and deepfake cases, can delay access to EHR systems or AI-driven CDSS. Such interruptions would directly impact patient care and create inefficiencies in administrative workflows. Conclusions: Generative AI presents a double-edged sword in healthcare, with transformative potential accompanied by substantial risks. Extrapolation of security breach cases in this study highlighted the urgent need for robust safeguards if genAI is implemented in healthcare settings. To address these vulnerabilities, healthcare institutions must implement strong security protocols, enforce strict data governance, and create AI-specific incident response plans. The balance between genAI-enabled innovation and protection of patient safety and data integrity trust requires proactive safety measures.
Read full abstract