As decision-making algorithms become more prevalent in society, their efficiency and problem-solving abilities alone come into question when predictions impact individuals’ lives. High-risk applications require trusted AI systems designed with fairness and accountability; such trust and consideration are essential for public acceptance and successful deployment. Despite growing advocacy for ethical and trustworthy AI, along with the emergence of regulations such as the EU AI Act, controversies surrounding AI persist in the media. Public sector AI systems are being implemented haphazardly, whether in judicial decision-making, healthcare diagnostics, or social welfare distribution. These high-risk applications directly affect citizens’ quality of life, highlighting the need for a critical assessment of how AI systems are designed and deployed in the public sector. My thesis explores the integration of fairness, accountability, and uncertainty in public sector AI to assess whether these systems are appropriately designed, effectively adapted, and capable of enhancing societal well-being. The research aims to provide actionable insights for designing AI systems that align with public sector needs and maximize societal benefits.