Abstract

The study investigates data governance challenges within AI-enabled healthcare systems, using Project Nightingale as a case study to elucidate the complexities of balancing technological advancement with patient privacy and trust. Using a survey methodology, data were collected from 843 healthcare service users via a structured questionnaire designed to measure perceptions of AI in healthcare, trust in healthcare providers, concerns about data privacy, and the impact of regulatory frameworks on the adoption of AI technologies. The reliability of the survey instrument was confirmed with a Cronbach's Alpha of 0.81, indicating high internal consistency. Multiple regression analysis revealed significant findings: awareness of technological projects was positively related to trust in healthcare providers, while privacy concerns negatively affected trust. Additionally, familiarity with and perceived effectiveness of regulatory frameworks were positively correlated with trust in data, while perceptions of regulatory constraints and data governance issues emerged as significant barriers to the effective adoption of AI technologies in healthcare. The study highlights the critical need for enhanced transparency, public awareness, and robust data governance frameworks to navigate the ethical and privacy concerns associated with AI in healthcare. It recommends adopting flexible, principle-based regulatory approaches and fostering multi-stakeholder collaboration to ensure the ethical deployment of AI technologies that prioritize patient welfare and trust.
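The reported Cronbach's Alpha of 0.81 follows the standard internal-consistency formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch of that computation is below; the function name and the response matrix shape (respondents by items) are illustrative assumptions, not details from the study's instrument.

```python
import numpy as np

def cronbach_alpha(responses):
    """Cronbach's alpha for a (n_respondents, n_items) matrix of
    questionnaire responses. Hypothetical helper, not the study's code."""
    items = np.asarray(responses, dtype=float)
    k = items.shape[1]                           # number of items
    item_vars = items.var(axis=0, ddof=1)        # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

A value of 0.81, as reported, exceeds the conventional 0.70 threshold for acceptable internal consistency.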
