Abstract
The use of AI in a growing range of applications has raised concerns due to discriminatory biases identified in the technology. This paper aims to identify and analyze some of the main measures proposed by Bill No. 2338/23 of the Federative Republic of Brazil to combat discriminatory bias, measures that companies should adopt in order to provide and/or operate fair and non-discriminatory AI systems. To that end, it first seeks to measure and analyze people's perceptions of whether AI systems can be discriminatory. For this purpose, a qualitative, descriptive, and exploratory study was conducted using the inhabitants of the Southeast region of Brazil as a reference sample. The survey results suggest that people are increasingly aware that AIs are not neutral and that they may incorporate and reproduce prejudices and forms of discrimination present in society. The incorporation of such biases results from issues related to the quality and diversity of the data used, inaccuracies in the algorithms employed, and biases on the part of both developers and operators. This work therefore sought to reduce this gap and, at the same time, to break down the barrier of insufficient dialogue with the public, thereby contributing to a democratic debate with society.