The concept of deactivation in functional MRI (fMRI) has recently gained greater acceptance as a physiological process rather than an artifact of image analysis or shunting of blood flow. It is likely that recent studies demonstrating decreases in the rate of oxygen metabolism (Shmuel, Yacoub, et al., 2002) and local field potentials (Shmuel, Augath, et al., 2003) in regions of deactivation have contributed to this greater acceptance. However, the signal change in fMRI is relative, and the concept of a deactivation suggests that there is some state that can be used as a standard baseline (Newman, Twieg, & Carpenter, 2001; Raichle et al., 2001; Stark & Squire, 2001). Deactivations are BOLD signal decreases that occur during a task relative to this standard baseline. Several recent studies have identified specific brain regions (inferior parietal, posterior cingulate, medial temporal/parahippocampal, and medial prefrontal cortices) that are active while subjects rest quietly and are deactivated during attention-demanding tasks (McKiernan, Kaufman, Kucera-Thompson, & Binder, 2003; Gusnard & Raichle, 2001; Raichle et al., 2001; Binder et al., 1999).

The article published in this issue of the Journal of Cognitive Neuroscience by Michael D. Greicius and Vinod Menon (Default-Mode Activity during a Passive Sensory Task: Uncoupled from Deactivation but Impacting Activation) uses independent component analysis (ICA) to investigate this "default mode" of brain function in a dataset from the fMRI Data Center archive (Laurienti et al., 2002). The primary hypothesis presented in the manuscript is that the default-mode network will not be deactivated during passive stimulation. This hypothesis is based on the idea that passive sensory stimulation does not demand attentional resources and therefore will not suppress activity in this network. The investigators suggest that it will be possible to identify the default-mode network using ICA even in the absence of stimulus-induced deactivation. The data clearly demonstrate that the analysis methodology is able to identify a predefined network of brain regions, providing further evidence that there is, in fact, a default-mode brain network.

In addition to the main findings, this study again raises the important issue of global normalization that often arises in studies evaluating deactivations (Macey, Macey, Kumar, & Harper, 2004; Gavrilescu et al., 2002; Desjardins, Kiehl, & Liddle, 2001; Aguirre, Zarahn, & D'Esposito, 1998). The process of global normalization is designed to remove whole-brain signal changes that can act as confounds in studies designed to evaluate regional signal changes (Aguirre et al., 1998). Several methods have been developed to remove unwanted global signal changes (Macey et al., 2004; Gavrilescu et al., 2002; Andersson, Ashburner, & Friston, 2001; Desjardins et al., 2001; Andersson, 1997). If the global signal is correlated with the time course of the stimulation paradigm, artifacts can occur when commonly used normalization procedures, such as proportional scaling, are applied. Specifically, if the global signal is positively correlated with the stimulus time course, the global normalization procedure can induce extensive regions of artifactual deactivations (Gavrilescu et al., 2002; Desjardins et al., 2001; Aguirre et al., 1998), which can include white matter, as noted by Greicius and Menon in their article.
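To make the proportional-scaling artifact concrete, the following minimal numerical sketch (not drawn from either article; the setup, variable names, and parameter values are all hypothetical) shows how a purely regional activation can elevate the whole-brain mean signal, so that dividing each voxel by that mean induces negative stimulus correlations in voxels that have no task response:

```python
# Hypothetical simulation of the proportional-scaling artifact; nothing
# here comes from the Greicius and Menon article or its data.
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_voxels = 100, 1000

# Boxcar paradigm: alternating 10-scan rest/stimulation blocks.
stimulus = np.tile(np.r_[np.zeros(10), np.ones(10)], 5)

# All voxels are baseline 100 plus noise; the first 200 voxels are given a
# true regional response (+2 units during stimulation), which is the only
# source of task-related change in the whole-brain mean.
data = 100 + rng.normal(0, 1, (n_scans, n_voxels))
data[:, :200] += 2 * stimulus[:, None]

global_signal = data.mean(axis=1)             # rises ~0.4 units per block
scaled = data / global_signal[:, None] * 100  # proportional scaling

def corr_with_stimulus(x):
    """Pearson correlation of each voxel time course with the paradigm."""
    xc = x - x.mean(axis=0)
    sc = stimulus - stimulus.mean()
    return xc.T @ sc / (np.linalg.norm(xc, axis=0) * np.linalg.norm(sc))

# Task-unrelated voxels (columns 200+, a stand-in for white matter) are
# uncorrelated with the paradigm before scaling but clearly negatively
# correlated afterward: artifactual deactivations.
print(corr_with_stimulus(data)[200:].mean())    # ~0.0
print(corr_with_stimulus(scaled)[200:].mean())  # ~-0.2
```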
It is important to recognize that although the global normalization procedure can induce artifacts in the results, the global signal change that is correlated with the paradigm may not be an artifact itself. It has been reasoned that global signal changes can be an "artifact" when the whole-brain mean signal increases due to large regional signal increases (Aguirre et al., 1998). Such large "regional" increases in signal can elevate the "global" signal even if the increases were localized to the activated brain areas. In such a case, the global normalization procedure will artificially decrease the signal (artifactual deactivations) in regions that exhibit no correlation with the stimulation paradigm, including the white matter. However, it is also possible that there are true global signal changes in which total brain blood flow changes in concert with the stimulation paradigm. In fact, it has been shown that global cerebral blood flow (CBF) can actually decrease during painful stimulation (Coghill, Sang, Berman, Bennett, & Iadarola, 1998). This finding of global CBF decreases in the presence of significant activity increases suggests that large regional areas of activity may not account for the global signal change in all studies. It is interesting to note that when the global signal is negatively correlated with the stimulation time course, the global normalization procedure can instead induce artifactual activations in regions unrelated to the task.
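Under the same hypothetical setup as the earlier sketch, this converse artifact can also be illustrated: if a large set of voxels genuinely decreases during stimulation, the whole-brain mean becomes negatively correlated with the paradigm, and proportional scaling then shifts flat, task-unrelated voxels upward during the task:

```python
# Companion sketch to the one above (same hypothetical conventions): a
# negatively task-correlated global signal induces artifactual activations.
import numpy as np

rng = np.random.default_rng(1)
stimulus = np.tile(np.r_[np.zeros(10), np.ones(10)], 5)

data = 100 + rng.normal(0, 1, (100, 1000))
# 300 voxels genuinely decrease by 2 units during stimulation, pulling the
# whole-brain mean down by ~0.6 units during the "on" blocks.
data[:, :300] -= 2 * stimulus[:, None]

global_signal = data.mean(axis=1)             # falls during stimulation
scaled = data / global_signal[:, None] * 100  # proportional scaling

# The remaining 700 voxels have no task response, yet after scaling they
# sit ~0.6 units higher during stimulation than at rest.
unrelated = slice(300, None)
on, off = stimulus == 1, stimulus == 0
print(data[on][:, unrelated].mean() - data[off][:, unrelated].mean())      # ~0.0
print(scaled[on][:, unrelated].mean() - scaled[off][:, unrelated].mean())  # ~+0.6
```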