Abstract

In prior publications relating to this presentation, we have looked at the general problems associated with artificial intelligence. In this article, we focus on one particular issue: the assumption by non-technology management that AI systems work as intended. While that may sometimes be true, assuming it to be true is, at best, ill-advised and, at worst, dangerous. There are multiple examples of artificial intelligence systems failing, ranging from bias built (one hopes unintentionally) into the algorithms, known as “implicit bias,” to issues arising from not clearly understanding the code that makes up the artificial intelligence application, including many years of open-source code and embedded libraries. This knowledge (sometimes referred to as a “software bill of materials,” or “SBOM”) is now being recognized as vital. Yet despite the evidence that artificial intelligence systems can and do fail, senior executives in some instances operate as if these failure factors did not exist, or simply assume that they have been factored into the project, albeit without evidence of that fact. Ultimately, the authors believe that standards for such systems should include a full risk assessment.
