Background: Electronic order-sets increasingly ask clinicians to answer questions or follow algorithms. Cooperation with such requests has not been studied.

Setting: Internal Medicine service of an academic medical center.

Objective: We studied the accuracy of clinician responses to questions embedded in electronic admission and discharge order-sets. Embedded questions asked whether any of three "core" diagnoses was present; a response was required to submit orders. Endorsement of any diagnosis made available best-practice ordering screens for that diagnosis.

Design: Three reviewers examined 180 electronic records (8% of discharges), drawn equally (for each core diagnosis) from possible combinations of Yes/No responses on admission and discharge. In addition to noting responses, we identified whether the core diagnosis was coded, determined from notes whether the admitting clinician believed that diagnosis present, and sought clinical evidence of disease on admission. We also surveyed participating clinicians anonymously about practices in answering embedded questions.

Measurements: We measured occurrence of six admission and five discharge scenarios relating medical record evidence of disease to clinician responses about its presence.

Results: The commonest discordant pattern between response and evidence was a negative response to disease presence on admission despite both early clinical evidence and documentation. Survey of study clinicians found that 75% endorsed some intentional inaccuracy; the commonest reason given was that questions were sometimes irrelevant to the clinical situation at the point asked.

Conclusion: Through faults in order-set design, limitations of software, and/or an inherent tendency to resist directed behavior, clinicians may often ignore questions embedded in order-sets.