Among the legal issues raised by medical artificial intelligence, the duty to explain the use of medical artificial intelligence urgently requires normative interpretation. Medical artificial intelligence cannot be subsumed under the current framework of the duty of explanation, because medical devices are not objects regulated by the informed consent system; this gap, however, fails to address the new technological risks that artificial intelligence brings. The open framework of the duty of explanation creates flexible space for disclosing the use of medical artificial intelligence, and the multiple foundations of the state of the technology, legal norms, and medical practice further establish the necessity and importance of such disclosure. At the current stage, the duty to explain medical artificial intelligence should adopt a mixed standard combining the reasonable-patient and specific-patient standards while excluding the reasonable-physician standard, and in degree it should favor concision, taking into account the public's level of awareness of medical artificial intelligence. The content of the duty to explain medical artificial intelligence derives from the "standard disclosure of risks and benefits" and, combined with an examination of who holds authority over medical decision-making, should comprise four basic items of information: the degree of artificial intelligence's involvement, its inherent risks, its expected benefits, and alternative measures.