Abstract

The launch of various AI systems has been one of the industry's main highlights. Alongside enormous and revolutionary benefits, AI can cause numerous problems, usually resulting from poor design, and researchers have recently begun to take AI safety seriously. Many AI safety concerns sound like science fiction: problems that might arise with very powerful AI systems still years away, which makes them difficult to investigate. We do not know what such future systems would look like, but similar issues already exist in AI systems under development or deployed in the real world. In this article, the author addresses these possible implications and outlines important approaches, in terms of software development methodology and philosophy, that we can start working on now to support current AI systems and, hopefully, future ones.
