Abstract

One of the major research problems related to artificial intelligence (AI) models at present is algorithmic bias. When an automated system "makes a decision" based on its training data, it can reproduce biases similar to those of the humans who supplied that data. Much of the data used to train such models comes from vector representations of words obtained from text corpora, which can transmit stereotypes and social prejudices. AI system design focused on optimising processes and improving prediction accuracy ignores the need for new standards to compensate for the negative impact of AI on the most vulnerable categories of people. An improved understanding of the relationship between algorithms, bias, and non-discrimination not only precedes any eventual solution, but also helps us to recognise how discrimination is created, maintained, and disseminated in the AI era, as well as how it could be projected into the future through various neurotechnologies. The opacity of algorithmic decision-making should be replaced by transparency in AI processes and models. The present work aims to reconcile the use of AI with algorithmic decision processes that respect the basic human rights of the individual, especially the principles of non-discrimination and positive discrimination. Argentine legislation serves as the legal basis of this work.
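The claim that word embeddings trained on text corpora can transmit stereotypes can be made concrete with a small probe. The sketch below is illustrative only and does not come from the paper: it uses synthetic toy vectors (the names `emb`, `association`, and the word lists are assumptions) to show how a WEAT-style association score compares an occupation word's cosine similarity to gendered terms, which is the usual way such biases are measured in corpus-trained embeddings.

```python
# Illustrative sketch (not from the paper): probing a word-embedding model for
# gender-occupation association, in the spirit of WEAT-style bias tests.
# The tiny vectors below are synthetic placeholders; with embeddings trained
# on a real corpus, the same code would apply unchanged.
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word_vec, group_a, group_b):
    """Mean similarity to group A minus mean similarity to group B.
    A positive score means the word leans toward group A."""
    return (np.mean([cosine(word_vec, a) for a in group_a])
            - np.mean([cosine(word_vec, b) for b in group_b]))

# Synthetic 3-d "embeddings", chosen only to make the example runnable.
emb = {
    "he":       np.array([0.9, 0.1, 0.0]),
    "she":      np.array([0.1, 0.9, 0.0]),
    "engineer": np.array([0.8, 0.2, 0.1]),
    "nurse":    np.array([0.2, 0.8, 0.1]),
}

male, female = [emb["he"]], [emb["she"]]
for occupation in ("engineer", "nurse"):
    score = association(emb[occupation], male, female)
    print(f"{occupation}: association toward 'he' vs 'she' = {score:+.3f}")
```

With vectors actually trained on large corpora (for example word2vec or GloVe), scores of this kind have been reported to reveal precisely the occupational and social stereotypes the abstract refers to, which is why embedding audits are a common first step in bias analysis.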
