Abstract

In the United Kingdom, the Animals (Scientific Procedures) Act 1986 (HMSO, 1986) regulates experimental work on vertebrate animals. The European Convention for the Protection of Vertebrate Animals Used for Experimental and Other Scientific Purposes (Council of Europe, 1986) covers the same ground. Both the British law and the European Convention use a closely similar set of criteria to judge the status of a scientific procedure. The British law states that “a regulated procedure on a protected animal under the 1986 Act is one that may have the effect of causing that animal pain, suffering, distress or lasting harm.” The same words are used in the European Convention to define a procedure as being regulated. More recently in the UK, the Animal Welfare (Sentience) Act 2022 (HMSO, 2022) takes the topic further still: it is “an Act to make provision for an Animal Sentience Committee with functions relating to the effect of government policy on the welfare of animals as sentient beings.” It goes on to say that in the Act “animal” means (a) any vertebrate other than Homo sapiens, (b) any cephalopod mollusc and (c) any decapod crustacean. For scientists working with fish, the existence of these acts means that there is a need to understand whether fish can or cannot suffer from pain. To reach a sufficient understanding we need to ask whether fish are conscious, can feel pain and can suffer. At present we do not have certain knowledge about how fish experience the world. More broadly, it is probably safe to claim that we do not know how all but a handful of non-human animals experience the world (Birch et al., 2020). In the current debate about pain and suffering in fish, the evaluation of evidence is biased by philosophical confusions and the prejudices of the protagonists.
In many papers dealing with phenomena that are not directly observable in non-humans, such as consciousness, pain or fear, authors use terms derived from human usage and assume that these terms can be applied to non-humans with the same meaning. Consequently it is essential to cut back the undergrowth of philosophical opaqueness and to identify the assumptions made, so that the important features of the debate are left clear and distinct for further analysis and discussion. The starting point for this article is the hypothesis that without some form of self-aware consciousness, there is no possibility of experiencing pain as we humans know it. Pain in humans is a subjective mental phenomenon that a robot responding to external stimuli would be unlikely to experience. The first task is to establish the justification for proposing this hypothesis. Before going further it is important to deal with three assumptions that can get in the way of rational discussion of whether or not fish experience pain (see also Hart, 2016): (a) that consciousness is more influential than it really is in deciding how we behave; (b) that complex behaviour must imply consciousness; and (c) that language used by humans to describe and discuss their consciousness can be used in the same way for non-humans. Each will be discussed in turn. A little introspection shows that much of our behaviour is accomplished with little or no participation of conscious processes (Gray, 2004). You may tell yourself repeatedly that it is time to get out of bed at 06.00 hours, but is the final decision that propels you from your warm bed a conscious one? Recent evidence suggests not (Custers & Aarts, 2010). For many motor tasks, such as returning a tennis ball, the conscious experience of the event lags behind the action by about one fifth of a second (Gray, 2004). Consciousness reviews rather than directs.
Evolution has favoured the myth of a conscious self by which our lives are directed. Humphrey (2006, 2007, 2022) argues that through this myth we create a sense of self to which events can happen, which further gives us the ability to create goals and give life meaning. For most humans, the myth of conscious control of their lives is so powerful and persuasive that action without it, even in non-humans, seems impossible. If, instead, one takes the view that many of the complex behaviours that we employ are monitored and informed by consciousness but are not directed by it, then we are freer to assess non-human behaviour for what it is. If humans can drive a car, walk, give birth to a baby, play the violin and produce sentences in speech, all without the aid of conscious direction, then surely non-human animals can also do complex things without conscious direction. The most startling illustration of the relationship between consciousness, awareness and complex behaviour is what has been called blindsight (Humphrey, 2022; Humphrey & Weiskrantz, 1967; Isa & Yoshida, 2021). It was first discovered in rhesus macaque monkeys (Macaca mulatta), where adolescent monkeys whose visual cortex had been removed proved able to avoid or pick up objects as if they could see them; it was subsequently also found in humans with damage to the visual cortex. Such humans report that they cannot see, yet behave in a way that shows that they can. In both monkeys and humans, the subjects are at first hesitant about employing this skill of which they are not consciously aware. There is a widespread tendency to assume that because an animal shows complex behaviour it must be conscious: if you believe that consciousness is of over-riding importance to complex human behaviour, then this must be true of non-human animals too. Otherwise one has to acknowledge that unconscious processes can direct complex behaviour.
It is increasingly the case that scientists studying animal behaviour adopt language that assumes the animals have conscious feelings and emotions because they show complex behaviour. For example, Verheijen and Flight (1997) conclude unequivocally that fish have subjective experiences and can suffer, yet they present no convincing evidence for this sweeping conclusion. Yue et al. (2004) report a study in which they investigated the response of a rainbow trout (Oncorhynchus mykiss) to a simulated predator, and argue that the fish feels fear as a result of the experience. Like pain, fear is a human experience in which the agent needs to be self-aware to suffer from it, so claiming that fish experience fear attributes a demanding level of mental capacity to them. In a similar vein, Bateson and Matheson (2007) and Bateson et al. (2011) attribute feelings of pessimism to starlings (Sturnus vulgaris) and honeybees (Apis mellifera) and, more generally, Wiepkema and Koolhaas (1992) consider that all vertebrates employ emotions to deal with their environment. Baars (2005) and Edelman et al. (2005) also discuss this approach. Because of the human meanings that they carry, the use of the language of feelings and emotions by these authors is dangerous, and we need to adopt a more cautious terminology. The modern resurgence of research that considers mental states as legitimate targets of investigation reverses the practice of early ethologists. For example, Tinbergen (1951, p. 4) wrote that “… to ascribe a causal function to something that is not objectively observable (e.g., consciousness or feelings) often leads to false conclusions.” Although there are ways of exploring the mental states of non-humans, as demonstrated by Cheney and Seyfarth (1990, 2007), many present-day researchers are abandoning the evidence-based approach to understanding animal minds and making unjustified assumptions.
We use the words “consciousness” and “pain” in speaking about other humans because we have good reason to believe that others share our own mental experiences (Figure 1). We even have neural processes that mirror the actions of others, giving us a good insight into what they are thinking (e.g., Frith & Frith, 2006). We also know that fellow humans have brains and behaviours that are closely similar to our own, so we feel justified in assuming that others think and feel as we do. As Baron-Cohen (1995, p. 21) has written, “Attributing mental states to complex systems (such as a human being) is by far the easiest way of understanding it.” Words such as “pain,” “fear,” “suffering” and “consciousness” are therefore words we use to describe and express our own and others' mental states. When it comes to non-humans, and this is particularly true of fish, we cannot use the words “consciousness” or “pain” in the same way, as we have no idea whether a fish or any other non-human can think and, if it can, what it is thinking. This is the most important point of this paper. Without a clear view of how a word can be used and what we refer to when we employ it, we cannot make sense of the world. The way meaning is to be understood has been discussed by many philosophers. Wittgenstein (1922), in a difficult but influential book, proposed that language is used to create a picture of reality, with a one-to-one correspondence between words and objects and with grammar mirroring the logical structure of the world. Wittgenstein claimed that the words we use to discuss abstract concepts of the mind have no meaning, and as a result he finished the book with “What we cannot speak about we must pass over in silence.” Analogously, as we do not know the meaning of the word “pain” when applied to a fish, we cannot talk about pain in fish.
In his later philosophy, Wittgenstein (1953) considered that the meaning of words is defined by their use in a process he called a “language game,” the rules of which we learn in early life. In Wittgenstein (1953, paragraph 244), he describes how a child might learn to use the word “pain” to label the unpleasant sensation it experiences when it has hurt itself: “A child has hurt himself and he cries; and then adults talk to him and teach him exclamations and, later, sentences. They teach the child new pain-behaviour.” In this view of the meaning of words, we can only agree on how to use a word if we share a common view of the world (see Figure 1). As we do not share a common view of the world with a fish, can we then use words from human experience to describe the inner experiences of a fish? In the animal welfare literature, scientists discussing consciousness, pain and suffering in non-humans use these concepts as if they were well understood. As I have tried to show, in the human context the use of these words is mutually understood, but I have argued that we are not justified in using the same words when talking about non-humans. There is a strong case for a much more rigorous approach to how concepts such as “pain” and “fear” are used when referring to non-humans. With these constraints in mind, we need to be much more precise about what consciousness is. The impression is given in some of the animal welfare literature that consciousness is as easily recognised by all as a limb or a fin, with clearly visible properties (e.g., Braithwaite, 2010; Huntingford et al., 2006). It is vital that we are clear on what consciousness is and what its properties are in humans. Only then can we begin to think what it means to say that a non-human animal is conscious.
Searle (2004, p. 134), a philosopher of mind and someone who takes on board the biological nature of consciousness, asks, “What are the features of consciousness that any philosophical-scientific theory should hope to explain?” He suggests that the best way forward is to list several features of human and, he presumes, animal consciousness (Table 1). The average human has a strong awareness of being a unified entity. “I” is very real to all of us and is almost seen as a being separate from the body (see Nagel, 1986). Humphrey (2006, 2007, 2022) develops an hypothesis as to how a sense of self has evolved and how important this sense is to a person's life. A sense of self is essential for our interactions with other people and with the material world around us. It is also essential for us to experience suffering. Human suffering occurs when a person is subjected to an unpleasant experience for a prolonged period with no relief in sight. The families of kulaks (land-owning peasants), broken up and sent to different corners of the Soviet Union during Stalin's First Five Year Plan, suffered the mental torture of not knowing what had happened to their husbands or children (Figes, 2007). I would make a claim for self-consciousness as the most important attribute of consciousness in humans, and perhaps the reason consciousness evolved in the first place. By being self-conscious in a social situation, we can see how we are doing in relation to others and be aware of the consequences of our actions. Searle's (2004) characterisation of consciousness is not the only one, and other more detailed systems have been proposed, such as those by Seth et al. (2005, table 1) and Birch et al. (2020). There are also simpler definitions or characterisations. For example, MacPhail (1998) considers that there are two components to human consciousness: self-consciousness and feeling-consciousness.
He considers that the former is likely to be found in humans and a very small sub-set of non-humans, whereas the latter is likely to be found in a wider range of animals. MacPhail (1998) proposes that feeling-consciousness might be the basis for non-humans experiencing pain, but demonstrates that some of the behavioural criteria that we humans use to identify feelings in our fellow humans are unlikely to be interpretable in the same way in non-humans. Further, Cheney and Seyfarth (2007, p. 201) consider that consciousness is “… the ability to introspect explicitly about at least some of our knowledge and beliefs and ‘to know what we know.’” This is equivalent to MacPhail's (1998) self-consciousness. The essential conclusion to be drawn here is that consciousness is a complex and elusive phenomenon which is hard to define for humans and therefore even harder to characterise in non-humans. We might expect some animals, such as some mammals and birds, to have a sub-set of Searle's 11 features, and feeling-consciousness as defined by MacPhail (1998) is likely to be more prevalent in animals than is self-consciousness. The term “pain” carries with it a range of meanings, often defined by the context in which it is used, and when several people use the word it is not obviously the case that they will all be using it with the same meaning (Hardcastle, 1999, figure 5.1; see also my discussion of Wittgenstein's thoughts on how we learn to use a word). Pain has been taken as a prime exemplar of qualia, the subjective experiences that only the individual can have. You might see me howl when you step on my big toe, but you do not know how it feels to me. You cannot see or experience my qualia.
Wittgenstein's (1953) description of how a child comes to realise that the feeling it has when it has been hurt is labelled “pain,” introduced earlier, does not allow the child to learn anything about the nature of pain in general, just how to use the word to report on its sensations. When writing about how non-humans respond to harmful stimuli, some scientists use the word “pain” to describe only the response of an animal to an unpleasant stimulus, without assuming that the animal also necessarily has a subjective feeling of pain. Others use the word to include the subjective feelings that humans have when experiencing an unpleasant or harmful stimulus (Gibbons et al., 2022). As discussed earlier, this second route has little support from the evidence and is another example of our using our own intuitions to interpret animal behaviour (see again Figure 1). The problem with the word “pain” and how we use it in relation to non-humans is therefore that our habits are firmly formed by the uses to which we put the word in interactions with our fellow humans. As other humans have many similarities with ourselves, it is a reasonable hypothesis that if another person shows behaviour that we associate with the feelings of pain, then they must also be feeling pain. We are up against a problem aptly captured again by Wittgenstein (1958) when giving his view of what philosophy is: “Philosophy, as we use the word, is a fight against the fascination which forms of expression exert upon us.” Although we are not talking philosophy in the true sense of the word in this article, the way we use words such as “pain” is bound by the way in which we use words in our everyday lives.
A good approach to understanding pain in a more objective way is to follow Hardcastle (1999; Bechtel et al., 2001), who “… believe[s] that we have a system that uses incoming nociceptive information to figure out which events in the world are noxious, just as we have systems that use impinging photons to compute the size and orientation of things in our environment or that use air compression waves to compute the location of an object” (Bechtel et al., 2001, p. 295). Does the input have to be perceived consciously by the human agent for the outcome of this nociceptive detection system to be acted upon? If it does not, can we then use the word “pain” to describe the detection of nociceptive events in all animals? I argue that we cannot, as pain as we know it is a subjective experience and we do not know whether other animals experience the same. This issue is illustrated by the way we react to touching a very hot surface (Gray, 2004, p. 76). We remove our hand very quickly but unconsciously, and only then do we begin to experience pain. This decoupling of the reaction from the feeling makes it possible that in organisms with less computing power the feeling of pain may not be experienced, as it is not essential for the quick removal of the limb from the noxious stimulus. It might be argued that the use of the word “pain” for non-humans is not like the use of the word “pride” or “deceit,” because pain has well-defined physiological correlates (Hardcastle, 1999). This may be so, but the existence of physiological correlates does not guarantee that the animal is experiencing, or is aware of, pain. One of the main points made by Bechtel et al. (2001) is that we are happy to accept that vision and smell involve several systems in the brain, at various levels of organisation (e.g., for vision see Gray, 2004), but when it comes to pain, we imagine it as just a single process with one brain location.
Hardcastle's thesis (Bechtel et al., 2001) is that, just like vision, the nociceptive system involves many components covering all levels of brain organisation. She writes, “Our brain houses lots of individual processors; knowing all the pieces does not tell us which component piece belongs to which larger system” (p. 297). There is no doubt that elements of the pain detection system are also found in fish. They have C-fibres that have been shown to detect noxious stimuli (Braithwaite, 2010; Sneddon et al., 2003), and they have a spinal column with a dorsal horn. Some have also argued that parts of the human brain associated with pain detection have homologues in the fish brain (see Chandroo et al., 2004, p. 231). Homologies between parts of brains, however, tell us very little about the functional roles that those parts play. Environmental detection systems in living animals depend on a sequence of structures which process and extract meaning from the incoming signals (Gray, 2004). Through evolutionary time, new components have been added to the system, allowing it to extract increasingly sophisticated generalities from the input (Pearson & Pearson, 1976). Consequently, the fact that fish have parts of the system that humans have does not necessarily mean that they process incoming information in the same sophisticated way. For example, fish do not have a neocortex as possessed by humans, and as Gray (2004, p. 313) writes, “… Human neuroimaging research show that conscious experience of pain [ … ] is associated with activity, not in the phylogenetically ancient brain stem, but in the more recently evolved neocortex [… ].” If this is correct, then for fish to experience pain consciously they would have to generate the conscious experience by means of a different mechanism.
Although this is not impossible, the conservatism of the evolutionary process, which mostly builds on what is available, makes it equally likely that fish simply have not evolved the capacity to be aware and conscious. This issue is discussed more fully in the next section. For fish, we would need to be able to determine how many of Searle's (2004) 11 features are a property of their conscious experience. Alternatively, we could apply Birch et al.'s (2020) multidimensional set of criteria. We can only really do this once we understand how consciousness is generated in ourselves. Only then will we know the neurobiology involved: which brain areas are parts of the consciousness-generating system, how they are linked and how the whole functions to generate awareness. Once human consciousness is understood, we can start to look for a corresponding system in the brains of other animals, including fish. A preliminary approach to this is found in Walling (2020), who explores the dimensionality of electroencephalogram (EEG) recordings from a range of species. The EEG data are plotted as an attractor characteristic of dynamical systems (Mitchell, 2009, p. 282), and the attractor's dimensionality is then determined. The higher the dimensionality, the more complex the EEG, which possibly reflects the complexity of the neural network producing it (Kauffman, 1993). Walling (2020) then plots attractor dimensionality against the approximate time at which a species arose in the fossil record (Figure 2). This approach is based on the hypothesis that consciousness is an emergent property of a complex adaptive system (Holland, 1995, 1998). The trouble is that we are not very far down the route to understanding how human consciousness is produced. As Gray (2004) writes, “We started this book lacking a viable theory of consciousness. We lack it still.” As time passes, we understand more about how consciousness is produced in humans (see, e.g., Velmans, 2009), but we have to accept that we may never be able to understand how first-person experiences are produced and how they influence our behaviour. This conclusion has been reached by McGinn (1993, 2004), who argues that as evolved creatures, designed by natural selection for a particular purpose, we cannot expect our brains to be able to understand everything about the world. McGinn considers that consciousness, free will and ethics are all aspects of human experience that we will never be able to understand because of our conceptual limitations. Time and further research will determine whether or not McGinn is right, but until then his views remain a possibility. Even if we understood human consciousness, would we expect fish to demonstrate all 11 properties named by Searle (2004)? If not, to what degree would we expect these properties to be fulfilled? Many of them are interconnected, so that it would not be possible for a fish to show one but not another. For example, in social situations a sense of self is essential if the subject is to see itself as an agent which has effects on other selves. The sense of self is so central to many of our other conscious experiences as adults that it would be difficult to experience them without it. This is probably true of pain in particular: without a sense of self as an agent, to whom would we attribute a feeling of pain? Despite the difficulties in understanding consciousness in humans, most scientists have not waited for a full understanding before asking whether fish have the same property.
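Walling (2020) does not publish the dimensionality calculation as code, but the underlying idea of estimating an attractor's dimensionality from a single recorded signal is a standard one in dynamical systems. The sketch below is a minimal, hypothetical illustration using the Grassberger–Procaccia correlation dimension on a synthetic signal; the function names, parameter choices and test signal are mine, not Walling's, and a real EEG analysis would need careful choices of embedding dimension, delay and scaling region.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Reconstruct attractor points from a 1-D signal by time-delay embedding."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def correlation_dimension(x, dim=4, tau=5):
    """Grassberger-Procaccia estimate of an attractor's correlation dimension."""
    pts = delay_embed(x, dim, tau)
    # Pairwise distances between embedded points (upper triangle only).
    diff = pts[:, None, :] - pts[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    d = d[np.triu_indices_from(d, k=1)]
    # Probe radii chosen inside the small-distance scaling region.
    radii = np.logspace(np.log10(np.percentile(d, 2)),
                        np.log10(np.percentile(d, 20)), 10)
    # Correlation sum C(r): fraction of point pairs closer than r.
    c = np.array([np.mean(d < r) for r in radii])
    # The slope of log C(r) versus log r estimates the dimension.
    return np.polyfit(np.log(radii), np.log(c), 1)[0]

# A pure sine wave traces a one-dimensional closed loop in embedding
# space, so its estimated dimension should come out close to 1; a
# complex EEG signal would yield a higher value.
t = np.linspace(0, 40 * np.pi, 800)
dim_est = correlation_dimension(np.sin(t))
```

On this view, the dimensionality Walling (2020) plots against a species' appearance in the fossil record is just such a slope, computed from each species' EEG rather than from a synthetic signal.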
Two broad approaches have been taken by people looking at fish consciousness: (a) looking for brain structures that are homologous with structures in the human brain that are known or thought to be involved in the generation of aspects of consciousness and (b) studying the learning and behavioural capacities of fish and then asking whether those capacities require consciousness for a proper explanation. These two approaches can be translated into hypotheses about consciousness. The first is based on the hypothesis that consciousness is a property of particular brain structures and that, if they exist in a brain, the owner will experience consciousness (see the earlier discussion of Bechtel et al., 2001). There are two problems with this hypothesis. The first is that we do not know with certainty that the presence of certain types of brain structure means that consciousness can be produced. The second is that we would need a way of determining consciousness in a non-human before we could correlate structure with the phenomenon, which returns us to the old problem of how to determine whether a non-human is conscious or not. The second approach is based on the hypothesis that complex learning and behavioural capacities require the animal to be conscious for them to work. This was tackled earlier, where I argued that the faith we have as humans in the power of conscious processes seduces us into thinking that complex behaviour cannot exist without consciousness. As Cheney and Seyfarth (2007) show, evidence from chacma baboons (Papio cynocephalus ursinus) indicates that they have a good knowledge of the rank system of the group in which they live. It is tempting to conclude that this is good evidence for the animals having some sort of theory of mind and an appreciation of what other baboons are thinking.
What must be realised is that all the social behaviours observed by Cheney and Seyfarth (2007) can also be explained by assuming that the animals are just good at learning the contingencies of behaviour. For example, a baboon might think “when another individual grunts in a certain way, I must give way,” rather than “this grunt is coming from an animal that is dominant to me in the hierarchy, and dominants always get the upper hand.” Cheney and Seyfarth (2007) illustrate how important it is to evaluate the evidence for a particular hypothesis. It is unfortunate that within the literature dealing with the possibility of consciousness in fish and other non-humans, many abandon the strict requirements of evidence evaluation and push a particular viewpoint (see also Browman et al., 2019). The literature on consciousness in fish and other non-humans can be assigned to three groups: (a) the sceptics, who judge that fish brains are just not up to producing conscious experience; Rose (2002, 2007), Arlinghaus et al. (2007), Key (2015, 2016) and Humphrey (2022) argue for this approach. (b) The hopefuls, who recognise that the evidence is not available to decide whether fish are or are not conscious but assume for the purposes of discussion that they are; significant works within this paradigm are those by Broom (2001), Sneddon et al. (2003), Chandroo et al. (2004a, 2004b), Braithwaite and Huntingford (2004), Huntingford et al. (2006), Emery et al. (2007), Braithwaite (2010) and Brown (2022). (c) The agnostics, who carefully evaluate the evidence and always apply Occam's razor; they adopt the stance that the jury is still out on animal consciousness. Proponents of this approach, although none work on fish, are Dawkins (1993, 2006), Cheney and Seyfarth (1990, 2007), Hauser (2000), MacPhail (1998), Wynne (2001, 2004) and Browman et al. (2019).
Rose (2002) argues that fish do not have the cortical brain structures that are involved in conscious experience in humans and therefore cannot have consciousness. Humphrey (2022) proposes an hypothesis as to how consciousness is produced and argues that cold-blooded organisms are unlikely to be able to sustain the high metabolic costs of the mechanism he proposes. As a result he adopts the view that only mammals and birds are candidates for non-human consciousness. In contrast, Chandroo et al. (2004a, 2004b) argue that consciousness can perhaps be generated by different brain structures in different organisms, so fish might have consciousness. They also spend much time trying to determine homologies between human and fish brains. Sneddon et al. (2003) have demonstrated the existence of nociceptors in rainbow trout (O. mykiss). These results do not show that fish are able to feel pain. The centres common to fish and mammals may both be involved in registering the input from nociceptors, but extra brain areas are then required to turn these signals into conscious states (Bechtel et al., 2001). We do not yet know whether fish have the extra centres needed to generate consciousness. It is argued by some that because a fish administered analgesics no longer responds to a noxious stimulus, the fish must have been feeling pain before the administration of the drug (Sneddon, 2003). It is equally possible that the analgesic simply shuts off the response to the noxious stimulus, so that the nervous system does not react with avoidance. No consciousness is required for this to happen. The proponents of consciousness in fish argue that complex behaviour is a sign of complex mental processes and that consciousness cannot be ruled out. An example illustrates how this assumption can be misleading when behaviour is being interpreted.
Utne-Palm and Hart (2000) studied competitive interactions between pairs of three-spined sticklebacks (Gasterosteus aculeatus) offered one prey item at a time. The results led to the conclusion that the fish could recognise each other and could distinguish between familiar individuals and strangers. The assumption was that this was achieved using vision, because sticklebacks have big eyes and the neural
