Abstract

The article presents specific recommendations, developed by the authors, for the ethical examination of AI systems in medicine. The recommendations are based on the problems, risks, and limitations of AI use identified in scientific and philosophical publications from 2019-2022. The authors propose carrying out ethical review of medical AI projects by analogy with the review of experimental research projects in biomedicine; conducting an ethical review of AI systems at the stage of preparation for their development, followed by monitoring of the testing of the created system; and focusing on bioethical principles during the examination of AI systems intended for medical purposes.

