Abstract
The astonishing pace of development in Artificial Intelligence (AI) has been welcomed for the benefits it brings to society and businesses. However, the malicious use of AI has raised alarms in academia and government, and the unbridled growth of AI technology has been portrayed as a risk to humankind [7]. The initial impact of AI misuse is already visible in the form of deepfakes and sophisticated phishing attacks. Because the pace of AI development is unprecedented, the issues that will emerge next remain unknown. In this paper, I assess how the "known unknown" risks of artificial intelligence should be handled and propose constructive ways for the government to become involved so that these risks can be mitigated. Drawing lessons from emergency preparedness, I establish the need for government involvement at the federal level to address the growing risks from AI. Such an organization would bring other government agencies, businesses, and citizens together to take steps to deal with these risks. Countering deepfakes, facilitating the learning of AI models, and handling biases in AI models are three important goals for this government agency.