Abstract

This article provides a comprehensive overview of the current state of academic research on the benefits and concerns of artificial intelligence (AI) in everyday life. The findings from the literature presented here offer useful guidance for stakeholders seeking to establish governance practices for responsible AI, so that the future of our smart societies is safe, inclusive, and sustainable. The synthesis connects insights from various academic disciplines and concludes with a model theoretical framework for responsible AI in a multi-stakeholder arrangement. Based on the findings of the literature review, two discussions are presented, both reflecting the inevitably multidisciplinary complexity of the topic of AI and society. Discussion 1 breaks down complex concepts and offers clear explanations of why ethical considerations have become a focal concern of AI governance. Discussion 2 highlights the need for a multi-stakeholder approach to the responsible adoption of AI. Overall, this article contributes to ongoing discussions and debates about responsible AI by systematically building the argument that a multi-stakeholder approach is vital to addressing the societal concerns raised by AI.
