Abstract

There is a core problem with modern Artificial Intelligence (AI) technologies based on the current wave of Artificial Neural Networks (ANNs). Whether deployed in healthcare or for exploring Mars, the programmers who build these systems do not fully understand why they make some decisions over others. Many are therefore questioning the aura of AI objectivity and infallibility; we, instead, trace a key source of AI errors and bias to an insufficient human ability to determine the limits of the context in which the ANNs will have to operate. While the rational side of the human mind can master a remarkably broad range of situations, machine intelligence has limited capacity to learn in completely unknown scenarios: an inaccurate or incomplete codification of the context may simply result in AI failures. We present a simple ANN-based cognification case study, in an underwater scenario, where the difficulty of identifying and then codifying all the relevant contextual features led to a partial failure. This paper reports on our reflections and the subsequent technical actions taken to recover from this situation.
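To make the abstract's central claim concrete, the following is a minimal sketch of how an incomplete codification of context can produce this kind of failure. It is not the paper's actual system: the data, the turbidity parameter, and all function names are hypothetical. A simple classifier is trained on synthetic "underwater" features gathered only in clear water; because turbidity never varies during training, it is effectively an uncodified contextual feature, and accuracy degrades once the model operates in turbid conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, turbidity):
    """Synthetic stand-in for underwater image features (hypothetical).
    Two informative features; turbidity attenuates the signal and adds
    noise, mimicking a contextual factor absent from the codified inputs.
    """
    labels = rng.integers(0, 2, n)
    signal = np.where(labels[:, None] == 1, 1.0, -1.0)
    x = signal * (1.0 - turbidity) + rng.normal(0.0, 0.3 + turbidity, (n, 2))
    return x, labels

def train_logreg(x, y, lr=0.1, steps=500):
    """Minimal logistic regression trained by gradient descent."""
    w = np.zeros(x.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # predicted probabilities
        grad = p - y                             # gradient of log loss
        w -= lr * (x.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def accuracy(w, b, x, y):
    return (((x @ w + b) > 0).astype(int) == y).mean()

# Train only on clear-water data: turbidity never varies, so the model
# has no chance to learn that this piece of context matters.
x_tr, y_tr = make_data(2000, turbidity=0.0)
w, b = train_logreg(x_tr, y_tr)

# Evaluate in-context (clear water) vs. out-of-context (turbid water).
x_clear, y_clear = make_data(1000, turbidity=0.0)
x_turbid, y_turbid = make_data(1000, turbidity=0.8)
print(f"clear water accuracy:  {accuracy(w, b, x_clear, y_clear):.2f}")
print(f"turbid water accuracy: {accuracy(w, b, x_turbid, y_turbid):.2f}")
```

Under these assumptions the clear-water accuracy is near perfect while the turbid-water accuracy collapses toward chance, illustrating the abstract's point that the failure lies not in the learning algorithm itself but in the human framing of the operating context.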
