Abstract

Aim
To provide a comprehensive overview of quality assurance processes implemented within published simulated surgical training programmes, identifying areas of deficit.

Method
A systematic review was conducted according to PRISMA guidelines. Databases were searched from 1947 to April 2023 to identify studies reporting structured simulation programmes undertaking qualitative assessment of laparoscopic technical skills. Assessments had to be procedural (e.g., appendicectomy) rather than task-based (e.g., bean drop), and simulation had to use physical modalities (synthetic or cadaveric); virtual and augmented reality modalities were excluded. Data were extracted against an existing quality assurance standards framework for simulation-based education, developed by the Association for Simulated Practice in Healthcare (ASPiH) in partnership with Health Education England. Study quality was appraised using the Medical Education Research Study Quality Instrument (MERSQI).

Results
Thirty-nine studies were included, 17 from USA and 3 from UK simulation programmes. Participant numbers ranged from 6 to 256, with 7 studies reporting more than 50 participants.
Faculty: 40% (n=15) of studies provided no description of faculty expertise or selection; 60% (n=24) implemented no assessor training; and 90% (n=35) described no faculty evaluation or feedback.
Activity: Assessment methods varied, including in the number of stations, assessors, and assessment tools used. 60% (n=24) of studies sought no feedback on their programme, and 90% (n=35) of programmes were not aligned to curricula or learning needs analyses.

Conclusions
This review highlights deficits in the implementation of quality assurance processes, including faculty selection, training, and programme feedback. It also highlights the variability of assessment methods, reiterating the need for individualised, robust validation processes.