Critical care nurses who care for postoperative cardiac surgery patients require specialty knowledge, such as the interpretation of atrial electrograms (AEGs). However, the psychometric performance of instruments that measure knowledge of AEGs is inadequately documented. The aim of this study was to revise a previously tested instrument and assess evidence for content validity (content validity index), internal consistency reliability (Cronbach α), and stability reliability (test-retest correlation coefficient, r) against an a priori criterion of 0.80. The multiple-choice, self-administered, paper-and-pencil instrument was revised to 20 items and named the Drake Atrial Electrogram Assessment Survey (DAEGAS). A panel of 6 AEG experts reviewed the DAEGAS for content validity evidence. The instrument was further revised to 19 items (13 knowledge and 6 AEG interpretation) and tested with 76 critical care nurses from the greater Houston metropolitan area. The content validity index was 0.93, Cronbach α was 0.51, and test-retest r was 0.74. After removal of 3 items (2 items with a negative item-total correlation and 1 item converted to a sample question), Cronbach α increased to 0.60 and r was 0.73. Content validity evidence exceeded the a priori criterion. Internal consistency and stability reliability estimates did not meet the criterion, although the stability estimate met the threshold recommended by psychometricians for a new instrument. Recommendations include further development of the DAEGAS to improve internal consistency estimates and testing for evidence of other forms of validity. Reliable and valid assessment of critical care nurses' knowledge of AEGs will require improved psychometric performance of the DAEGAS.
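For context only (this formulation is the standard definition of coefficient α, not a result of this study): α depends on the number of items and on the share of total-score variance contributed by inter-item covariance. Because the total-score variance includes the inter-item covariances, items that correlate negatively with the total score depress α, which is consistent with the modest increase observed here after such items were removed.

\[
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),
\qquad
\sigma^{2}_{X} = \sum_{i=1}^{k}\sigma^{2}_{Y_i} + 2\sum_{i<j}\operatorname{cov}(Y_i, Y_j),
\]

where k is the number of items, \(\sigma^{2}_{Y_i}\) is the variance of item i, and \(\sigma^{2}_{X}\) is the variance of the total score.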