Abstract

Humans are social beings, and emotions, like thoughts, play an essential role in their decision-making. Today, with technological advancements, artificial intelligence (AI) raises expectations of faster, more accurate, more rational, and fairer decisions. As a result, AI systems have often been seen as an ideal decision-making mechanism. But what if these systems decide against you based on gender, race, or other characteristics? Biased or unbiased AI, that's the question! The motivation of this study is to raise awareness among researchers about bias in AI and to contribute to the advancement of AI research and systems. As the primary purpose of this study is to examine bias in the decision-making process of AI systems, this paper focuses on (1) bias in humans and AI, (2) the factors that lead to bias in AI systems, (3) current examples of bias in AI systems, and (4) various methods and recommendations to mitigate bias in AI systems.
