Abstract
In this project I have worked towards a method for critical, socially aligned research in Artificial Intelligence by merging the analysis of conceptual commitments in technical work, discourse analysis, and critical technical practice. While the goal of critical technical practice as proposed by [1] is to overcome technical impasses, I explore an alternative use case: ensuring that technical research is aligned with social values. In the design of AI systems, we generally start with a technical formulation of a problem and then attempt to build a system that addresses it. Critical technical practice tells us that this technical formulation is always founded upon the discipline's core discourse and ontology, and that difficulty in solving a technical problem may simply result from inconsistencies and faults in those foundations. What I hope to show with this project is that, even when a technical problem seems solvable, critical technical practice can and should be used to ensure the human-compatibility of the technical research.