Abstract

Artificial Intelligence (AI) is supporting decisions in ways that increasingly affect humans in many aspects of their lives. Both autonomous and decision-support systems applying AI algorithms and data-driven models are used for decisions about justice, education, and physical and psychological health, and to grant or deny access to credit, healthcare, and other essential resources, in increasingly ubiquitous and sometimes ambiguous ways. Too often, such systems are built without considering the human factors associated with their use, such as gender bias. Clarity about the correct way to employ these systems is therefore an increasingly critical aspect of their design, implementation, and presentation. Models and systems often produce results that are difficult to interpret and are themselves blamed for being good or bad, when in fact it is the design of such tools, and the training required to integrate them with human values, that can be good or bad. This chapter discusses the most evident issues concerning gender bias in AI and explores possible solutions to the impact of AI and decision-support algorithms on humans, with a focus on how to integrate gender-balance principles into data sets, AI agents, and scientific research in general.
