A spirited debate is occurring in the popular press on social science and the scientific method. Jonah Lehrer (2010b) published an article in The New Yorker whose primary topic was the decline effect, in which initial estimates of interventions' effectiveness weaken when replicated. Lehrer's examples came from pharmacology, medicine, psychology, zoology, and more. A debate was ignited, largely on the Internet (for example, Neurological blog, Respectful Insolence, Science Based Medicine, Psychology Today, ABC News). In the next issue of The New Yorker, Lehrer (2011) responded to letters and e-mails in another article, as well as in an article in Wired (Lehrer, 2010a). The issues raised by the debate deserve consideration by social work researchers. This editorial explains the decline effect and presents comments on replications, failure to submit, publication bias, and a fundamental cognitive flaw (Lehrer, 2010b). The term decline effect has not appeared in the social work literature, nor have estimates of the number of replications, the extent of failure to report research, or evidence of journals' publication bias, with the exception of Dickersin (1997). I base my comments on almost 30 years in academic social work at four universities, extensive reading of our research literature, teaching research, reviewing manuscripts, and publishing.

THE DECLINE EFFECT

The decline effect is a phrase attributed to Rhine (1938), whose research on extrasensory perception described a situation in which a research participant guessed hidden cards at a rate beyond chance. After repeated testing, however, the effect dramatically diminished (Rhine, 1938). Lehrer (2010b) reported a similar pattern in research on antipsychotic medications. Early clinical trials documented the medications' positive impact on psychiatric symptoms, but recent research indicates that the therapeutic power of the drugs appeared to be steadily waning (Lehrer, 2010b, para. 2). Another researcher, Jonathon Schooler, attempted to replicate Rhine's finding of a decline by studying precognition.
At first, Schooler found higher than expected precognition, 'but then, as we kept on running subjects, the effect size'--a standard statistical measure--'kept getting smaller and smaller' (Lehrer, 2010b, pp. 6-7). Lehrer (2010b) described similar results in research on language and memory, zoology, biology, and epidemiology. He described a study in which a researcher identified the 49 most frequently cited clinical research studies and reviewed the subsequent research literature to see if there might be a decline (Ioannidis, 2005). The sampled studies had to have 1,000 or more citations. In 45 of the sampled studies, the intervention in question was found to be effective. (It is interesting to note that the four studies in the sample that showed no efficacy for the intervention being studied were contradicting earlier claims of efficacy.) Thirty-four of the studies in the sample had been replicated: 21% were contradicted by subsequent research, and 21% had weaker effects. One should not conclude, Ioannidis (2005) pointed out, that the original studies were totally wrong and the newer ones correct simply because they are larger or better controlled (p. 224).

To what degree has the decline effect been observed in social work? A literature search reveals no reference to it as such. The prior question, however, and one of central concern here, is this: How frequently are replication studies done in social work? Without replications, decline effects cannot be observed. Replication studies seem to be fairly rare in social work; when they occur, it is likely to be with more heavily funded and broadly implemented interventions. Failure to replicate intervention studies is costly for reasons far beyond documenting decline effects: Replication, widely acknowledged as a cornerstone of science, brings many benefits to a discipline that embraces it. …