Abstract

The rapid advancement of generative artificial intelligence (AI) systems such as Midjourney has opened the door to producing computer-generated images for medical training. However, despite clear disclosures stating that these images are not intended for medical consultation, their accuracy and realism have yet to be thoroughly examined. A series of prompts was submitted to Midjourney, a widely used generative AI application, requesting depictions of systemic anatomy and of aesthetic surgery operations. A blinded panel of four experts, each with years of experience in anatomy and aesthetic surgery, then assessed the images on three parameters: accuracy, anatomical correctness, and visual impact, each scored on a scale of 1 to 5. All of the images produced by Midjourney exhibited significant inaccuracies and lacked correct anatomical representation. Although they displayed high visual impact, their unsuitability for medical training and scientific publications was evident. The implications of these findings are multifaceted. Primarily, the images' inaccuracies render them ineffective for training and could foster misconceptions. In addition, their lack of anatomical correctness limits their applicability in scientific articles. Although the study examines only a single AI tool, it underscores the need for collaboration between AI developers and medical professionals; integrating accurate medical databases could refine the precision of such tools in the future. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
