Background
Access to health care is an essential health policy issue. In several countries, waiting time guarantees mandate set time limits for assessment and treatment. High-quality waiting time data are necessary to evaluate and improve waiting times. This study's aim was to investigate health care providers' and administrative management professionals' perceptions of the validity and usefulness of waiting time reporting in specialist care.

Methods
Semi-structured interviews (n = 28) were conducted with administrative management and care professionals (line managers and care providers) in specialized clinics in the Stockholm Region, Sweden. Clinic-specific data from the waiting time registry were used in the care provider interviews to assess face validity. Clinics were purposefully sampled for maximum variation in complexity of care, volume of production, geographical location, private or public ownership, and local waiting times. Thematic analysis was used.

Results
The waiting time registry was perceived to have low validity and usefulness. Perceived validity and usefulness were interconnected, with mechanisms that reinforced the connection. Structural and cognitive barriers to validity included technical and procedural errors, errors caused by role division, misinterpretation of guidelines, diverging interpretations of nonregulated cases, and extensive willful manipulation of data.

Conclusions
We identify four misconceptions underpinning the current waiting time reporting system: that passive dissemination of guidelines is sufficient for implementation, that the cognitive load on care providers reporting waiting times is negligible, that soft-law regulation and the presentation of outcome data are sufficient to drive improvement, and that self-reported data linked to incentives pose a low risk of data corruption. To counter low validity and usefulness, we propose the following for policy makers and administrative management when developing and implementing waiting time monitoring: communicate guidelines with instructions for operationalization, address barriers to implementation, ensure quality through monitoring of implementation and adherence to guidelines, develop the IT ontology together with professionals, avoid parallel measurement infrastructures, ensure waiting times are presented to suit management needs, provide timely waiting time data, enable the study of single cases, minimize manual data entry, and perform spot-checks or external validity checks. Several of these strategies should be transferable to waiting time monitoring in other contexts.