Abstract

Generative artificial intelligence (A.I.), most prominently ChatGPT, has generated massive hype around the world, including in litigation. While this technology may be beneficial in certain respects, its use also poses significant risks: misinformation and fabricated information, breaches of legal professional privilege, data collection and retention, damage to judicial integrity, and ethical concerns. This paper set out to (1) review the risks that the use of generative A.I. poses in litigation and (2) suggest regulations to address those risks. The findings indicate that generative A.I., in its current form, should be prohibited altogether in litigation. If its use is permitted in the future, it should be strictly regulated. Whether generative A.I. should be involved in litigation at all remains an open societal question that urgently demands consideration.

Keywords: generative artificial intelligence, litigation, regulation, ChatGPT, literature review
