Abstract
The significance of artificial intelligence (AI) trustworthiness lies in its potential impacts on society. AI revolutionizes various industries and improves social life, but it also brings ethical harm. However, the factors that challenge AI trustworthiness are still being debated. This research explores these challenging factors and their priorities for consideration in the software process improvement (SPI) manifesto for developing trustworthy AI systems. A multivocal literature review (MLR) and a questionnaire-based survey are used to identify the challenging factors from state-of-the-art literature and industry. A prioritization-based taxonomy of the challenges is developed, which reveals that lack of responsible and accountable ethical AI leaders, lack of ethics audits, moral deskilling and debility, lack of inclusivity in AI multistakeholder governance, and lack of at-scale training programs to sensitize the workforce to ethical issues are the top-ranked challenging factors to be considered in the SPI manifesto. This study's findings suggest revising AI-based development techniques and strategies, with a particular focus on trustworthiness. In addition, the results encourage further research to support the development and quality assessment of ethics-aware AI systems.