Abstract

Under the headline “AI safety”, a wide-reaching issue is being discussed: whether some future “superhuman artificial intelligence” or “superintelligence” could pose a threat to humanity. The late Stephen Hawking, for instance, warned that the rise of such machines could be disastrous for mankind. A major concern is that even a benevolent superhuman artificial intelligence (AI) may become seriously harmful if its given goals are not exactly aligned with ours, or if we cannot specify its objective function precisely. Metaphorically, this is compared to King Midas in Greek mythology, who wished that everything he touched would turn to gold; obviously, this wish was not specified precisely enough. In our view, this sounds like a requirements problem and the challenge of its precise formulation. (To the best of our knowledge, this connection has not been pointed out yet.) As usual in requirements engineering (RE), ambiguity or incompleteness may cause problems. Moreover, the overall issue calls for a major RE endeavor: figuring out the wishes and the needs with regard to a superintelligence, which will in our opinion most likely be a very complex software-intensive system based on AI. This may even entail theoretically defining an extended requirements problem.
