Background: A recent National Highway Traffic Safety Administration (NHTSA) report states that 10% of fatal crashes and 18% of injury crashes were reported as distraction-affected crashes. In that same year, 3,179 people were killed and an estimated 431,000 injured in motor vehicle crashes involving distracted drivers, many of which involved secondary visual displays (NHTSA, 2016). Augmented reality (AR) head-up displays (HUDs) promise to be less distracting than traditional in-vehicle displays because they do not take drivers’ eyes off the road (Gabbard, Fitch, & Kim, 2014). However, empirical studies have reported possible negative consequences of AR HUDs, attributed in part to AR graphics’ salience (Sharfi & Shinar, 2014), frequent changes (Wolffsohn, McBrien, Edgar, & Stout, 1998), and visual clutter (Burnett & Donkor, 2012). Moreover, current in-vehicle display assessment methods, which are based on eyes-off-road time measures (NHTSA, 2012), cannot capture this unique challenge. Objective: This work proposes a new method for assessing AR HUDs that quantifies both the positive (informing drivers) and negative (distracting drivers) consequences of AR HUDs, which may not be captured by current in-vehicle display assessment methods. Method: We proposed a new way of quantifying the distraction potential of AR HUDs by measuring driver situation awareness, with operational improvements to the situation awareness global assessment technique (Endsley, 2012) to suit AR usability evaluations. A human-subject experiment was conducted in a driving simulator to apply the proposed method and to evaluate two AR HUD interfaces for pedestrian collision warning. The AR warning interfaces were prototyped using the augmented video technique (Soro, Rakotonirainy, Schroeter, & Wollstädter, 2014). Twenty-four participants drove while interacting with different types of AR pedestrian collision warning interfaces (no warning, bounding box, and virtual shadow).
Drivers’ situation awareness, confidence, and workload were measured and compared with the no-warning condition. Results: Only one of the warning interface designs, the virtual shadow (Kim, Isleib, & Gabbard, 2016), improved driver situation awareness of pedestrians that were cued by the AR HUD, without affecting situation awareness of other environmental elements that were not augmented by the HUD. The experiment also revealed drivers’ overconfidence bias while interacting with the bounding box, the other warning interface design. The empirical user study did not provide evidence of reduced driver workload when AR warnings were given. Conclusion: Our initial human-subject study demonstrated the potential of the proposed method to quantify both positive and negative consequences of AR HUDs on driver cognitive processes. More importantly, the experiment showed that AR interfaces can have both positive and negative consequences for driver situation awareness depending on how the perceptual forms of graphical elements are designed. Application: The proposed assessment method for AR HUDs can inform not only comparative evaluation among design alternatives but can also help incrementally improve design iterations to better support drivers’ information needs, situation awareness, and in turn, performance and safety.