Abstract
Problem setting. Artificial intelligence is rapidly transforming the financial sector, with countless potential benefits for improving financial services and compliance. Financial institutions already trust artificial intelligence algorithms to account for transactions, detect fraudulent schemes, assess customer creditworthiness, and support resource planning and reporting. The introduction of such technologies, however, entails new risks.

Analysis of recent research and publications. The following scholars have studied this question: D. W. Arner, J. Barberis, R. P. Buckley, Jon Truby, Rafael Brown, Andrew Dahdal, O. A. Baranov, O. V. Vinnyk, I. V. Yakovyuk, A. P. Voloshin, A. O. Shovkun, and N. B. Patsuriia.

Target of research. The aim of the article is to identify the key strategic issues in developing mechanisms that ensure the effective implementation and use of artificial intelligence in the financial services market.

Article’s main body. The paper investigates an important scientific and practical problem: the legal regulation of artificial intelligence used by financial services market participants. It examines the legal risks associated with the use of artificial intelligence programs in this area. The most pressing risks that targeted AI regulation should address concern fundamental rights, data confidentiality, security and effective performance, and accountability. This article argues that the best way to encourage a sustainable future for AI innovation in the financial sector is a proactive regulatory approach adopted before any financial harm occurs, and that it would be optimal for policymakers to intervene early with targeted, proactive but balanced regulatory approaches to AI technology in the financial sector, consistent with emerging internationally accepted principles of AI governance.

Conclusions and prospects for the development.
The adoption of rational regulations that encourage innovation whilst ensuring adherence to international principles will significantly reduce the likelihood that AI-related risks develop into systemic problems. Leaving the financial sector with only voluntary codes of practice may encourage experimentation that yields innovative benefits, but it will also leave customers vulnerable, institutions exposed, and the entire financial system weakened.