Abstract

Large language models have recently received widespread attention with the release of ChatGPT. One use of these artificial intelligence (AI) systems today is to power virtual companions that can pose as friends, mentors, therapists, or romantic partners. While presenting some potential benefits, these new relationships can also produce significant harms, such as hurting users emotionally, damaging their relationships with others, giving them dangerous advice, or perpetuating biases and problematic dynamics such as sexism or racism. This case study uses the example of harms caused by virtual companions to give an overview of AI law within the European Union. It surveys AI safety law (the AI Act), data privacy law (the General Data Protection Regulation), liability law (the Product Liability Directive), and consumer protection law (the Unfair Commercial Practices Directive). The reader is invited to reflect on concepts such as vulnerability, rationality, and individual freedom.

