Increases in cyber incidents have required substantial investments in cyber defense for national security. However, adversaries have begun moving away from traditional cyber tactics in order to escape detection by network defenders. The aim of some of these new types of attacks is not to steal information, but rather to create subtle inefficiencies that, when aggregated across a whole system, result in decreased system effectiveness. The aim of such attacks is to evade detection for long durations, allowing them to cause as much harm as possible. As a result, such attacks are sometimes referred to as “low and slow” (e.g., Mancuso et al., 2013). It is unknown how effective operators are likely to be at detecting and correctly diagnosing the symptoms of low and slow cyber attacks. Recent research by Hirshfield and colleagues (2015) suggests that the symptoms of the attack may need to be extreme in order to gain operator recognition. This calls into question the utility of relying on operators for detection altogether. Therefore, one goal for this research was to provide an initial exploration of the effects of attack deception and magnitude on operator behavior, performance, and potential detection of the attack. Operators in these systems are not passive observers, however, but active agents attempting to further their task goals. As a result, operators may alter their behavior in response to degraded system capabilities. This suggests that changes in the pattern and frequency of operator behavior following the inception of a cyber attack could potentially be used to detect its onset, even without the operator being fully aware of those changes (Mancuso et al., 2014). Similarly, since low and slow attacks are designed to degrade overall system effectiveness, performance measures of system efficiency, such as frequency and duration of tasks completed, may provide additional means to detect an ongoing cyber attack.
As such, a second goal for the present research was to determine whether changes in operator behavior and system efficiency metrics could act as indicators of an active low and slow cyber attack. Participants in this experiment performed a multi-unmanned aerial vehicle (UAV) supervisory control task. During the task, participant control over their UAVs was disrupted by a simulated cyber attack that caused affected UAVs to stop flying toward participant-selected destinations and enter an idle state. Aside from halting along their designated flight path, idled UAVs displayed no other indication of the cyber attack. The frequency of cyber attacks increased with time-on-task, such that attacks were relatively infrequent at the beginning of the task, occurring once in every five destination assignments made, and were ubiquitous by the end of the task, occurring after each destination assignment. Attack deception was manipulated with regard to participants’ approximate screen gaze location at the time of a cyber attack. In the overt condition, UAVs entered the idle state near the participant’s current focal area (indexed by the location of operator mouse interactions with the simulation), thereby providing some opportunity for operators to directly observe the effects of the cyber attack. In the covert condition, the attack occurred outside the operator’s current focal area, forcing operators to rely on memory to detect the cyber attack. In the control condition, no cyber attacks occurred during the experiment. Following the UAV supervisory control task, participants were asked a series of debriefing questions to determine if they had noticed the UAV manipulation during the task. Most participants (approximately 64%) reported noticing the manipulation, but only after a series of questions prompting them to think of any problems they encountered during the task. The remaining participants reported noticing no errors during the task.
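The attack schedule and deception manipulation described above can be illustrated in code. The following Python fragment is a minimal sketch, not the authors' implementation: the linear ramp from a 1-in-5 to a 1-in-1 attack rate, the function names, and the focal-area radius are all assumptions introduced here for illustration.

```python
def attack_schedule(assignment_index, total_assignments):
    """Hypothetical reconstruction: probability that a destination
    assignment triggers a simulated cyber attack, ramping linearly
    from 1-in-5 (0.2) at the start of the trial to every assignment
    (1.0) by the end."""
    progress = assignment_index / max(total_assignments - 1, 1)
    return 0.2 + 0.8 * progress

def is_overt(attack_xy, focal_xy, radius=150):
    """Hypothetical classification: an attack counts as 'overt' if the
    idled UAV lies within some radius (in pixels, value assumed) of the
    operator's current focal area, indexed here by the location of the
    last mouse interaction with the simulation; otherwise 'covert'."""
    dx = attack_xy[0] - focal_xy[0]
    dy = attack_xy[1] - focal_xy[1]
    return dx * dx + dy * dy <= radius * radius
```

In this sketch, the overt and covert conditions differ only in where the idled UAV falls relative to the operator's focal area; the idle behavior itself is identical, consistent with the description that idled UAVs displayed no other indication of the attack.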
Results regarding measures of performance and system efficiency indicated that performance decreased as the magnitude of the cyber attack increased. Measures of efficiency were calculated using fan-out (Olsen & Goodrich, 2003), which provided information regarding how many UAVs operators were able to control and how long UAVs were in an idle state during the trial. Operators controlled fewer vehicles, and vehicles sat idle for longer durations, as the magnitude of the cyber attack increased. However, these differences in efficiency did not reach statistical significance until relatively late in the trial. Overall, operators seemed insensitive to the presence of the cyber attack, only disclosing the problem after being prompted several times through guided questions by the experimenter. However, significant changes in operator behavior and system efficiency were observed as the magnitude of the cyber attack increased. These results demonstrate that subtle cyber attacks designed to slowly degrade human performance were measurable, but these changes were not apparent until late in the experiment, when the attack was at its midpoint in magnitude. This experiment suggests that even though measurable changes in operator behavior may not occur until late in an attack, these metrics are more effective than reliance on operator detection.
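The efficiency measures above can be sketched computationally. The following Python fragment is a hedged illustration, not the authors' analysis code: it derives per-UAV idle time and a time-weighted count of simultaneously active UAVs (a crude stand-in for fan-out in the sense of Olsen & Goodrich, 2003) from an assumed event log of `(time, uav_id, state)` tuples.

```python
from collections import defaultdict

def efficiency_metrics(events, trial_end):
    """Illustrative sketch: given events of the form (time, uav_id,
    state) with state 'active' (flying to a destination) or 'idle',
    return (total idle time per UAV, time-weighted average number of
    simultaneously active UAVs over the trial)."""
    events = sorted(events)
    state = {}                      # last known state per UAV
    last_t = {}                     # time of last state change per UAV
    idle_time = defaultdict(float)  # accumulated idle duration per UAV
    active_area = 0.0               # integral of (#active UAVs) over time
    prev_t = events[0][0] if events else 0.0
    active = 0
    for t, uav, s in events:
        active_area += active * (t - prev_t)
        prev_t = t
        if state.get(uav) == 'idle':
            idle_time[uav] += t - last_t[uav]
        if state.get(uav) == 'active' and s == 'idle':
            active -= 1
        elif state.get(uav) != 'active' and s == 'active':
            active += 1
        state[uav] = s
        last_t[uav] = t
    # Close out intervals that run to the end of the trial.
    active_area += active * (trial_end - prev_t)
    for uav, s in state.items():
        if s == 'idle':
            idle_time[uav] += trial_end - last_t[uav]
    avg_active = active_area / trial_end if trial_end else 0.0
    return dict(idle_time), avg_active
```

Under this framing, the reported pattern (fewer vehicles controlled and longer idle durations as attack magnitude grew) would appear as a falling average active count and rising idle totals over successive windows of the trial.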