Abstract

Artificial intelligence has become a common element of our times, pervading nearly every aspect of our lives. Its mass application began with video games, but it is now available to everyone and can assist with many tasks that, until a few years ago, only humans could perform. Discussions about artificial intelligence began long before it existed: much of science-fiction literature imagined its many possible forms and the consequences, both good and evil, of its use. Now, however, artificial intelligence is a real, concrete technology, and its widespread use must be subject to a risk evaluation and mitigation process to make it safe. This paper introduces such a risk assessment and defines its main guidelines. These guidelines can be used by researchers, designers, developers, and even users to validate an AI-based application before delivering it to the public. The paper reviews the basic concepts of risk and tailors them to provide effective support for risk analysis in the specific area of artificial intelligence. A set of typical risks is then defined, together with methods to detect and minimize them. The paper concludes with a call for stricter regulation of AI and high-performance processing.
