Abstract

Artificial intelligence (AI) has proven superior to human decision-making in certain areas, particularly where advanced strategic reasoning and the analysis of vast amounts of data are needed to solve complex problems. Few human activities fit this description better than politics. In politics, we deal with some of the most complex issues humans face: short-term and long-term consequences must be balanced, and we make decisions knowing that we do not fully understand their consequences. I examine an extreme case of the application of AI in the domain of government, and use this case to examine a subset of the potential harms associated with algorithmic governance. I focus on five objections based on political theoretical considerations and the potential political harms of an AI technocracy: objections based on the idea of ‘political man’, participation as a prerequisite for legitimacy, the non-morality of machines, and the values of transparency and accountability. I conclude that these objections do not successfully derail AI technocracy, provided that mechanisms for control and backup are in place, and that we design a system in which humans retain control over the direction and fundamental goals of society. Such a technocracy, if the AI capabilities of policy formation assumed here become reality, may in theory provide better means of participation, greater legitimacy, and more efficient government.
