The European Union's Artificial Intelligence Act (EU AI Act) defines a "deep fake" as "AI-generated or manipulated image, audio, or video content that mimics existing persons, objects, places, or other entities or events, creating a false appearance of authenticity or truthfulness to viewers." In other words, machine learning algorithms are used to transform fabricated footage into realistic-looking videos. Several AI-related catastrophes have made headlines, including automated car accidents resulting in physical injuries, robots harming labourers, AI-assisted attacks on online privacy, AI-assisted fraud involving face, voice, or signature imitation, AI-assisted digital fingerprinting that falsely flags innocent people as criminals at airports, and AI-assisted election fraud. To confirm and reconfirm that a video came from its original source, two techniques are commonly used: cryptographically signing the video and hashing it into a fingerprint. The rising threat of deepfake videos has driven the development of many detection strategies over the last decade; the fundamental problem with such techniques, however, is that they are inaccurate and time-consuming.

Beyond misinformation and disinformation, deepfakes raise concerns about digital identity theft. Identity theft, a criminal offence under the laws of most countries, including India, is addressed in the digital context by the Information Technology Act. Because deepfakes are generated using artificial intelligence, however, the question of fixing liability becomes increasingly complex. Commentators have argued that existing concepts of legal liability may fail to address future conflicts involving AI systems. The challenge to the current legal system is posed primarily by AI systems that function without human interaction, but also by those that operate with minimal human assistance.
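The hash-and-sign verification mentioned above can be illustrated with a minimal Python sketch. This is an assumption-laden simplification, not any standard's actual protocol: it hashes the raw video bytes into a SHA-256 fingerprint and uses a keyed HMAC as a stand-in for a cryptographic signature (real provenance schemes, such as those based on asymmetric signatures, let anyone verify without holding the publisher's secret key). The `video` bytes and the key are placeholders.

```python
import hashlib
import hmac

def fingerprint(data: bytes) -> str:
    # Hash the video bytes into a fixed-length fingerprint (SHA-256 hex digest).
    return hashlib.sha256(data).hexdigest()

def sign(fp: str, key: bytes) -> str:
    # Stand-in for cryptographic signing: a keyed HMAC over the fingerprint.
    # Real systems would use an asymmetric signature so verifiers need no secret.
    return hmac.new(key, fp.encode(), hashlib.sha256).hexdigest()

def verify(data: bytes, key: bytes, tag: str) -> bool:
    # Recompute the fingerprint and compare tags in constant time.
    return hmac.compare_digest(sign(fingerprint(data), key), tag)

# Hypothetical publisher workflow (placeholder data).
video = b"\x00\x01\x02 raw video bytes"
key = b"publisher-secret-key"
tag = sign(fingerprint(video), key)

assert verify(video, key, tag)             # untampered content verifies
assert not verify(video + b"x", key, tag)  # any modification breaks verification
```

Because even a one-byte change to the video produces a completely different fingerprint, a deepfake derived from the original footage would fail this check against the publisher's signed fingerprint.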
The current legal framework does not clearly define whether artificial intelligence can be regarded as a legal person, which complicates the issue of determining the extent of AI liability for the creation and dissemination of deepfakes. This ambiguity presents significant risks to the protection of digital identities, highlighting the need for a thorough review of existing laws to ensure they are adequately equipped to tackle the unique challenges posed by AI-generated content.