Abstract

Confidence in artificial intelligence (AI) is necessary, given its growing integration into every aspect of life and livelihood. Citizens share both good and unpleasant experiences, fuelling views of AI as an emerging, advantageous capability while also raising an abundance of concerns that must be addressed. Scientific discoveries in clean energy, autonomous vehicles that operate with zero carbon emissions, the rapid identification of chemicals or anomalies of medicinal value, and the integration of AI into human resource processes for accelerated efficiency are examples of AI use cases that can save lives, and do so with urgency. The challenge is to ensure that models, algorithms, data and humans (the whole of AI) are secure, responsible and ethical, and that there is accountability for safety, civil equity and inclusion across the entire AI life cycle. With these factors in place, risks are managed and AI is trustworthy. This paper reviews existing policy directives relevant to managing risks across the AI life cycle and offers further perspectives and practices to advance implementation.
