Abstract

Humans are social beings. Emotions, like thoughts, play an essential role in their decision-making. With ongoing technological advances, artificial intelligence (AI) raises expectations of faster, more accurate, more rational, and fairer decisions. As a result, AI systems have often been seen as an ideal decision-making mechanism. But what if these systems decide against you based on gender, race, or other characteristics? Biased or unbiased AI, that's the question! The motivation of this study is to raise awareness among researchers about bias in AI and to contribute to the advancement of AI studies and systems. As the primary purpose of this study is to examine bias in the decision-making process of AI systems, this paper focuses on (1) bias in humans and AI, (2) the factors that lead to bias in AI systems, (3) current examples of bias in AI systems, and (4) various methods and recommendations to mitigate bias in AI systems.

