Abstract

Background: The inclusion of neural networks in the work of healthcare institutions and in medical education is an urgent problem for bioethics, a discipline concerned with questions of personal choice between benefit and harm, between good and evil, and between the volume and quality of information processing. The introduction of neural networks into the practice of healing is inevitable; they have become "the most commonly used analytical tool". The pros and cons of the digitalization of medicine are described in detail in the literature, such as acquiring a digital assistant for diagnosis, determining optimal treatment plans, and monitoring patients' health status.

Aim: To consider the possibility of improving clinical thinking in partnership with neural networks, using the analysis of clinical situations as an example.

Materials and methods: An analytical review of the literature on the problem of integrating artificial intelligence into medical practice was carried out. The empirical base consists of materials from qualitative sociological research (case study method).

Results: Based on the analysis of cases, it is shown that the recommendations for correcting errors are formulated both abstractly (the neural network bears no responsibility; human intelligence must exceed the intelligence of the machine) and concretely (the neural network's initial answer to a posed question is superficial and requires clarification through questions the network does not anticipate and through a specific configuration of terms that artificial intelligence does not recognize as keywords). The risks of introducing artificial intelligence into the work of medical institutions are identified: on the one hand, when doctors comply closely with the recommendations of neural networks, the doctor remains responsible for the networks' errors and the patient suffers; on the other hand, when the AI is highly compliant with user requests, training neural networks through dialogue risks multiplying dubious recommendations from undifferentiated or incompetent users. The doctor's competence in the dialogues that train the neural network is invisible, unverified, and essentially virtual.

Conclusion: The study demonstrates the possibility of improving neural networks by adapting them to regional paradigms of healing and to value systems rooted in the archetypes of domestic healthcare.
