Abstract

Artificial Intelligence (AI) is experiencing a crisis [1]. Several reports [1-3] have shown that the breakthrough of modern AI has not yet been able to improve on existing diversity challenges regarding gender, race, geography, and other factors, neither for the end users of those products nor for the companies and organizations building them. Plenty of examples have surfaced in which biased data engineering practices or existing data sets led to incorrect, painful, or sometimes even harmful consequences for unsuspecting end users [4]. The problem is that ruling out such biases is not straightforward because of the sheer number of different bias types [5]. To have a chance of eliminating as many biases as possible, most experts agree that the teams and organizations building AI products should be made more diverse [1-3]. This harkens back to Linus's law [6] for open-source development (given enough eyeballs, all bugs are shallow), but applied to the development process of AI products.
