Abstract

The nocebo effect has been described as the flip side of the placebo effect, whereby an adverse reaction is experienced by someone who receives an inert exposure (Kennedy, 1961). An inert exposure in this context is a substance or procedure with no active medicinal or physiological properties that could directly influence the symptom experience of the receiver. Nocebo effects seem to occur primarily because of negative expectations (Webster, Weinman, & Rubin, 2016) and are commonly encountered in clinical practice (Colloca & Miller, 2011).

Experimental nocebo research commonly involves some element of deception, misleading participants about the nature of the experimental stimulus. Such procedures are often essential: informing participants that the stimulus is inert may dramatically change their expectations of side effects and hence their subsequent symptom experience. This reliance on deception raises a number of ethical concerns, not least regarding informed consent. Because of these concerns, deception is not always received favourably by institutional ethics review boards, making nocebo effects notoriously difficult to study. Whilst the adverse ethical implications of deceptive studies are a concern, however, the costs of not conducting deceptive research should also be considered. Nocebo effects are one of the mechanisms underlying the development of non-specific side effects to medications (Barsky, Saintfort, Rogers, & Borus, 2002). In their recent overview, Benedetti and Shaibani (2018) note the importance of understanding nocebo effects as distinct from placebo effects, explaining that more research is needed to understand nocebo-induced side effects, which may prove crucial in improving treatment adherence and therefore patient outcomes.
In addition, many nocebo researchers still claim that their participants provide informed consent, when logically this cannot be the case if participants are deceived as to the nature of the exposure. For example, in a systematic review conducted by our team, the majority of studies incorrectly stated that participants had given informed consent (Webster, Weinman, & Rubin, 2016). Further discussion of these issues is therefore clearly warranted. This editorial provides an overview of what deception is, the current guidelines for its use, and its effects. We include recommendations for deceptive research and draw upon the example of a recent study conducted by our team. Although these suggestions will not resolve all ethical issues relating to nocebo research, they may help researchers navigate some of the key issues in this field.

There is no one agreed definition of deception. Hey (1998) distinguishes between withholding information from participants and telling them the wrong thing; it is the latter that counts as deception. More recently, however, others have taken a broader view and include violations of participants’ default assumptions in their definition. For example, according to Pierce (2008), withholding information can result in participants forming false beliefs about a study. Perhaps a more appropriate definition of deception in research, therefore, is anything that intentionally allows participants to form, or maintain, a belief that the investigator knows is not true.

Although discouraged, deception is not ‘banned’ in social science research. The British Psychological Society (BPS) allows information to be withheld from participants in exceptional circumstances to preserve research integrity (BPS, 2009). This is not carte blanche for researchers, however.
Additional requirements stipulate that deception should only be used if: (1) there are no other effective procedures to obtain the desired results; (2) the research has strong scientific merit; (3) there is an appropriate risk management and harm alleviation strategy; and (4) when the deception is revealed, it is unlikely to lead to discomfort, anger, or objections from participants (BPS, 2014). The code also requires that participants can withdraw at any time and are debriefed as soon, and as sensitively, as possible after the study. In addition, deceptive studies should be designed to protect the dignity and autonomy of participants, and any withholding of information should be clearly specified in the protocol that is subjected to ethical review (BPS, 2014).

Although permitted by the ethical guidelines, there is a contested trade-off between the need for deception and the consequences it could have for participants. The evidence on its effects, however, is mixed. In a review of studies assessing participants’ reactions to being involved in deception experiments, Christensen (1988) concluded that participants do not perceive themselves to have been harmed and do not mind having been misled. Instead, participants enjoyed the experience more and felt they received more educational benefit than those who took part in non-deceptive experiments (Christensen, 1988). Kimmel (1998) also concludes that deception has minimal negative effects on participants and that they do not become resentful about being deceived. However, a more recent review by Hertwig and Ortmann (2008) noted that this is not always the case. For example, deceived participants have reported annoyance (Allen, 1983), and confederates have noted angry reactions from participants once they found out they had been deceived (Oliansky, 1991). There are also broader effects of deception to consider.
Deception has been suggested to affect the reputation of research teams and of the discipline of psychology as a whole (Lawson, 2001). Indeed, studies have shown that deceived participants tend to be more suspicious of the truthfulness of experimenters, although this does not seem to affect their beliefs about psychologists’ trustworthiness in general (Cook et al., 1970). Similarly, no negative effects have been found on deceived participants’ attitudes towards psychological research (Kimmel, 1996; Sharpe, Adair, & Roese, 1992).

The main reason the evidence is so mixed is that the type of deception varies between studies (Hertwig & Ortmann, 2008). Unless a study is a direct replication, no two studies deceive their participants in the same way, and it seems logical that different types and degrees of deception will have different effects. For example, the BPS notes that a problem is more likely if the deception implies a more benign topic of study than is actually being carried out. In reality, however, it is hard to predict the effect any given type of deception will have, as many studies fail to report how the deception was received by participants. Deceptive research is therefore not risk-free, and researchers should be wary of any potential deleterious effects.

There are various approaches that researchers can take as precautions. Although widely discussed in the context of reducing nocebo effects in clinical practice, authorized concealment (Wells & Kaptchuk, 2012), which would involve deciding what to tell research participants based on their characteristics and previous experiences, may not be a viable way to inform them about a study. Instead, some of our suggested approaches originate from placebo research but can still be applied in the context of nocebo research, as the element of deception is the same: participants cannot be informed of the true nature of the exposure (i.e., that it is inert).
There are concerns that informing participants about the presence of deception may compromise a study's validity almost as much as revealing its true nature, and may also hinder recruitment. However, Martin and Katz (2010) found that authorized deception does not affect the magnitude of placebo effects, recruitment, or retention of participants compared with non-authorized deception, and is preferred by participants to not being alerted to the presence of deception. This suggests that authorized deception is a viable and ethically preferable consent process for deceptive studies. It is also worth noting that authorized deception is the process currently used in double-blind clinical trials, often branded the gold standard research design: participants are told that they will receive either the experimental drug or the placebo, but are openly informed that the information about which they are receiving will be withheld until the end of the trial.

In a recent study by our team, we used authorized deception whilst making sure that the information given to participants was as truthful as possible (Webster, Weinman, & Rubin, 2018). Our study was a randomized controlled trial altering patient information leaflets (PILs) to reduce symptom attribution to a sham medicine (an inert tablet). We openly informed participants that information would be withheld from them. For example, they were told that we could not reveal the type of tablet, to avoid biasing their views about it, and that the differences between the PILs comprised slight changes to the wording, to see whether this influenced their thoughts about the tablet, but that we could not reveal what the changes were. In addition, we correctly described the tablet as ‘a well-known tablet available without prescription’, and the leaflet was truthful for an inert tablet.
For example, the section about taking too many tablets explained that this can cause more severe side effects, as noted by Webster, Weinman, and Rubin (2016), whilst the list of potential side effects comprised the common non-specific side effects reported during a nocebo response (Wells & Kaptchuk, 2012). Planning of the study was discussed with a patient and public involvement (PPI) panel, to get their input on whether our approach was appropriate and how to minimize any remaining ethical issues. In addition, all participants were debriefed at the end of data collection and informed of the purpose of the study and what the tablet was. Participants had the opportunity to withdraw at this point, upholding their autonomy, and any feedback received following the debrief was collated. No negative effects were found.

In summary, current guidelines count withholding information as deception and allow deception in psychological research under a strict set of conditions. Evidence from the literature suggests that criticisms of the potential negative effects of deception are often unfounded. Nonetheless, maintaining high ethical standards in such a controversial area is important. We propose a strategy that includes deception by omission whilst still being truthful where possible, authorized deception, and debriefing, together with input from a PPI panel.

Rebecca Webster, John Weinman, and James Rubin are affiliated with the National Institute for Health Research Health Protection Research Unit (NIHR HPRU) in Emergency Preparedness and Response at King's College London, in partnership with Public Health England (PHE), in collaboration with the University of East Anglia and Newcastle University. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, the Department of Health, or Public Health England. All authors declare no conflict of interest.
