Abstract

Donald McLaren might be best known for instigating a sea change in the history of our understanding and treatment of Severe Acute Malnutrition (SAM). In a series of pointed articles beginning in the mid-1960s, McLaren helped initiate a shift away from a colonial-era obsession regarding protein and those generalized African and Asian diets that, relative to Western regimens, contained scant animal product (McLaren 1966). Despite concerted attempts to redress the shortcomings of earlier approaches, and not overlooking some significant successes in the management of severe deficiency, malnutrition remains a pressing concern. In 2011, for instance, it was estimated that inadequate or inappropriate feeding was responsible for 3.1 million, or 45 percent, of child deaths globally (Black et al. 2013). Contemporary and historical populations across Africa have, in particular, been shaped by histories of deficiency (Curtin 1985). While ecology and economics may be the root cause of malnutrition in African contexts (for discussion of this economic history, see Watts 1983; Iliffe 1990), the medicalization and treatment of severe deficiency remain pertinent to its lingering presence on the continent. The role of biomedical interventions against malnutrition—and the relationship between politics and nutritional science—has been the subject of a number of fine histories published in recent decades (see, for example, Wylie 2001; Brantley 2002; Tappan 2017). These works have, however, largely concentrated on colonial contexts. Nearly 30 years after his initial broadsides against the nutrition establishment, McLaren (1994) continued to deride the “little progress” seen in nutritional medicine. While not intending to provide an authoritative history of nutrition since the end of imperial rule, this article surveys subsequent progress against severe malnutrition and explores the shifting political economy of nutritional science in the particular political environment of postcolonial Africa. Perhaps the most remarkable thing about the postcolonial history of nutrition is how long it took for anything to change. Progress was delayed by an outlook so heavily biased by the politics and personnel of colonial rule that it could not easily break from them. The internationalist form of medicine that stepped into the space left by retreating colonial regimes continued to look backward and appropriated many of the ideas that had informed or misinformed its predecessor. At the same time that nutritionists were building on modernist, reductive, technical ideas regarding deficiency, social scientists and policy advisors were developing technical approaches to African hunger informed by the intellectual developments of nutritional science as well as evolving ideas regarding demography and “overpopulation.” What were ostensibly new approaches to nutrition and hunger were broadly rooted in the same classical liberal economics that had underpinned colonial development. The economic depression that spread across the continent in the 1970s and 1980s coincided with the waning of Soviet influence, the return of Malthusianism (particularly in the first of those decades), and a renewed faith in free-market solutions to poverty and hunger (particularly in the second). It was in this very particular environment that treatments for malnutrition were eventually reappraised and ultimately revolutionized, with the biology of deficiency more keenly understood and mortality rates dramatically reduced.
However, in the same way that colonial nutrition reflected colonial preoccupations, neoliberal treatments for malnutrition reflected the political and economic environment of neoliberal Africa. Echoing approaches that largely ignored the structural causes of malnutrition—the politics and economics of deficiency—nutritional science in postcolonial Africa remained subordinate to the political economy in which it evolved.

Nutritional medicine came of age under imperial rule; the sheer variety of ecologies, food economies, and diets outside of Europe offered new insight into the relationship between food and health. The early history of nutritional medicine often reflected the politics of colonial government (Worboys 1988; Arnold 1994). In 1939, the British government released the results of its pioneering, Empire-wide nutrition survey, which explained that “diseases resulting from malnutrition … prevail almost everywhere among tribal races … excess of carbohydrate, deficient of fat and first class protein, and uncertain or negligible supplies of milk and green vegetable are the outstanding features.” These broad conclusions could never be representative of the diverse patchwork of ethnicities and economies that made up a vast and truly global empire. However, a fundamental element of European imperialism was the construction of a colonized “other” and its distancing from a metropolitan norm. Differences in diet provided valuable distance between Europeans and their colonial subjects (on “otherness,” see Said 1978). As part of an imperialized form of “tropical medicine,” nutrition helped establish a stark contrast between illness in the African periphery and the vigor and well-being of the metropole. By emphasizing that “the native food problem is not so much one of quantity as one of quality,” the British colonial government pushed a narrative that emphasized ignorance and minimized metropolitan responsibility for colonial malnutrition.1

In this environment, protein deficiency came to dominate discourse around nutrition in the colonies. Research into kwashiorkor embodied this concern. Appearing at one extreme of a deficiency spectrum now understood as Protein-Energy or Protein-Calorie Malnutrition (PEM or PCM), kwashiorkor usually presents in the period after weaning but before the age of five or six years. The current, “classical” theory of kwashiorkor causation argues that it results from a diet moderately adequate in calories but grossly deficient in protein (Truswell 2012). Kwashiorkor often occurs when children are weaned too early onto low-protein staples, perhaps in anticipation of the arrival of another baby. Kwashiorkor's relative rarity in twentieth-century Europe, alongside its complex social and biological pathology, allowed kwashiorkor research to emerge as both scientifically rewarding and politically palatable, speaking as it did to ideas of African otherness or the “exceptionalism” of African poverty and the African health environment (Watts 1991; Comaroff 1993). In southern Ghana, where kwashiorkor was first described in medical literature and where the word has its origins, it had been understood by the Ga population as a psycho-social condition: “In general conversation, if a child is crying, one might say to it, ‘What, is your mother pregnant, are you getting kwashiorkor?’”2 Emerging in the early twentieth century, the “nutritionist” dietetic paradigm, or the reductive concentration on individual nutrients, downplayed any such social interactions (Scrinis 2013).
Foucauldian histories have explained the discovery of individual nutrients and the definition of individual deficiencies as “biopolitical” tools to be understood in the context of “biopower,” the shift from repressive rule to paternalistic authority over the body of the individual and the collective bodies of the wider populace (Foucault 1979: 135–145; Smith 2009). In Africa, this emerged as a particular concentration on protein, something which was both heavily informed by the culture and politics of imperialism and which largely ignored the often adverse relationship between colonization and economic, domestic, and dietary change (Moore and Vaughan 1994; Nott 2016).

By the 1950s, however, the first cracks in the British construction of African nutrition were starting to show. Given imperial involvement in the conceptualization of kwashiorkor as a simple protein deficiency, it is little surprise that the protein-deficiency hypothesis garnered its earliest criticisms from the postcolonial world. Speaking against WHO's conclusions, the pioneering American food scientist Nevin Scrimshaw explained to attendees of an FAO/WHO-sponsored nutrition conference in 1953 that “in INCAP [the Institute of Nutrition for Central America and Panama] we are far from convinced that the biochemical changes encountered are specific to protein malnutrition; rather they seem to be non-specific” (Waterlow 1955: 4). However, Western medical consensus overrode these anxieties, pursuing the argument that kwashiorkor was simply the most extreme form of a widespread protein deficit. The “impending protein crisis” led to the formation of the UN's Protein Advisory Group (PAG), created in 1955 to “fight to close the protein gap” that was seen to exist between rich and poor (Carpenter 1994: 162). By 1962, Marcel Autret, then director of FAO's nutrition division, explained that, in the organization's view, “The number one problem … for national agricultural departments is the production of protein foods of good quality” (Autret 1961: 537). Although the winds of change were beginning to sweep away colonial governments, they did little to change the direction of medical thinking in Africa, and former colonial officers continued to dictate the course of nutrition research long into independence. As Ghanaian medical pioneer Fred Sai explained with reference to Ghana's nutrition program, “you can overthrow a government very much more readily than you can change members of the civil service” (Sai 1978: 100).

In this environment, and in the shadow of the “impending protein crisis,” international agencies and Western governments concentrated on the high-tech production of protein foods. In what Tom Scott-Smith has described as the “high-modernist” approach to nutrition, researchers around the world continued a colonial-era trend that promoted culturally alien, high-protein “foodstuffs” that played into the continuing dislocation of nutrition from food. British Petroleum, for instance, created Single-Cell Proteins grown on oil; others created Leaf-Protein Concentrate, a green jelly-like substance produced by putting inedible leaves through a centrifuge. Fish-Protein Concentrate, made from the offal left over from filleting or from the whole of a “junk fish,” proved fairly popular. Chlorella, a single-cell form of algae that was grown on sewage, was less so.
Nonetheless, as food grown entirely on waste, Chlorella did represent the crowning achievement of modernist nutrition and is still available, marketed as a health food (Scott-Smith 2014a: 165–191). The modernist obsession seeped into burgeoning African academies too, with a number of reports detailing technical solutions to protein deficit in a time of rapid population growth (Sai 1960; MacGregor 1972). Slowly, though, the relative position of protein and calories in global nutrition was reconsidered. Dietary histories of kwashiorkor patients were reassessed and found to be generally deficient in calories as well as protein and, following a series of publications questioning the degree of protein needed to prevent and treat kwashiorkor in infants, the UN system slashed its recommended protein allowances (Carpenter 1994: 180–203). Estimated requirements of protein for a one-year-old were, for instance, dropped to 1.1 g/kg a day in 1965, down markedly from the 2.0 g/kg a day recommended by the 1957 report (FAO Committee on Protein Requirements 1957; FAO/WHO Expert Group 1965). Despite this, even in 1970 it was reported that “the FAO as part of its Indicative World Plan for Agricultural Development (IWP), continue to be pessimistic [about the supply of protein] for most parts of the developing world” (FAO/WHO/UNICEF Protein Advisory Group 1970: 41).

“The future is fraught with danger and at present we are ill-prepared to meet it. Marasmus is already underestimated, and all the indications are that it is rapidly on the increase as an epiphenomenon of the half-assimilated modernising process engulfing the developing regions of the world. On the other hand, kwashiorkor, under the same influences, is dying out.”

In 1974, McLaren famously suggested that the overt concentration on kwashiorkor should be considered a “great protein fiasco” that had undervalued non-Western diets while simultaneously ignoring localized patterns of endemic marasmus (McLaren 1974: 93–96). McLaren's criticism of this trajectory was, at the 1966 International Congress of Nutrition, immediately met with “public rejoinders from ‘the establishment’ defending the party line.” However, “In private I was told by many delegates that they agreed but were afraid to say so aloud for fear of having their support cut off” (McLaren 1974: 94).

Epidemic marasmus continued to spread into the postcolonial world as breastmilk substitutes, promoted overtly at first by retreating imperial governments, were given the tacit approval of those internationalist organizations that had grown to occupy their space in the developing world (Palmer 2009: 238–259). Poorly regulated markets and poorly equipped medical systems opened the door for European manufacturers of baby foods. Nestlé employees dressed as nurses plied maternity wards to push breastmilk substitutes onto new mothers, while the budding mass media similarly advised them: “Choose Cow and Gate milk food for your baby and watch him thrive from the very first bottle” (Muller 1974: 10). Despite their internationalist mandates, UNICEF, FAO, and WHO were loath to speak out against predatory marketing strategies. Vigorous criticism from such organizations could be seen as economic interventionism and an attack on those free-market doctrines pursued by their main sponsors. Instead, as late as 1973, the United Nations' Protein Advisory Group advised that “in any country lacking breastmilk substitutes, it is urgent that infant formulas be developed and introduced” (Chetley 1986: 41).
In the mid-1970s, following the increasingly barbed arguments of McLaren and others, the protein bubble eventually burst. The United Nations Organization, previously the principal scaremonger for “a world protein problem,” made no mention of any global protein deficit during its 1974 World Food Conference (United Nations 1975; Carpenter 2003: 3337). The next year, in a word-for-word reversal of the conclusions made in Nutrition in the Colonial Empire, Waterlow and Payne (1975) published a Nature article that explained that “the problem is mainly one of quantity rather than quality of food,” clarifying that “the protein gap is a myth, and that what really exists, even for vulnerable groups, is a food gap and an energy gap.” The epidemiology of marasmus was soon re-evaluated and the pathology of kwashiorkor soon reconsidered, with one 1980 study finding that energy deficiency was a necessary precursor to kwashiorkor and as important as the absence of protein (Landman and Jackson 1980). By the mid-1980s, the pursuit of expensive, technical solutions to protein deficiency in resource-poor settings was receiving unreserved criticism in the medical press. One project pursuing the development of the “winged bean” was denounced by London School of Hygiene and Tropical Medicine (LSHTM) faculty members as constituting part of “a continuing process of justifying scientific enthusiasms by the drawing of facile and tenuous links between research that is intellectually exciting to the investigator and problems that are of sufficient public concern to make it politically attractive to devote funds to them” (Henry, Donachie, and Rivers 1985; Carpenter 2003: 3337).

The biology of kwashiorkor nonetheless remained scientifically interesting, primarily because of the complicated pathology of the edema seen in the condition. Studies into the presentation of adult “famine edema” had long discounted the exclusive influence of protein since swelling also appeared in patients with no history of low-protein diets (Keys et al. 1950; McCance 1951). Drawing from these studies, Golden (1982: 1264) concluded that “no independent effect of protein intake on either loss or accumulation of oedema could be demonstrated.” The explanation that kwashiorkor results simply from inadequate protein intake was apparently an oversimplification, and considerable contradictory evidence led Golden, among others, to conclude that “oedematous malnutrition in the child or adult is not caused by protein deficiency; such a concept can lead to fatal therapeutic error in oedematous malnutrition treatment.” Golden went on to suggest that a shortage of antioxidants was the cause of edematous malnutrition, something disputed by a 2005 study that found that the administration of antioxidants failed to prevent the onset of kwashiorkor in a sample of 2,000 Malawian preschool children (Golden 1998: 433; Ciliberto et al. 2005). Other researchers doubting the position of simple protein deficiency in the etiology of kwashiorkor have put forward “dysadaptation,” mycotoxins, or other free radical damage as the ultimate cause (Truswell 2012: 304). In any case, the place of protein in the etiology of kwashiorkor is far from simple and far from settled. Sadly, very little of this later kwashiorkor research was considered in the context of public health.
Though the protein bubble may have burst, the earlier fetishization of kwashiorkor, and related assumptions regarding the alterity or otherness of African illness, at least as compared with Western well-being, remained an important part of medical discourse. In a short reply to this renewed debate, Ghanaian physician F.I.D. Konotey-Ahulu (1994: 548) reminded the international community to look beyond diet and to consider the social etiology of malnutrition, explaining, in a 1994 letter to The Lancet, that “those of us who grew up in the kwashiorkor belt and who have also had the benefit of an excellent medical education cannot help but caution our ministries of health and of social welfare about the danger of missing the social pathology wood for the trees of free radicals and leukotrienes.” With a broad base in imperial medicine, the trajectory of nutrition research can, even into the 1970s, be seen as a continuation of the same paternalistic biopolitics that had distorted the “discovery” of malnutrition in the colonial world. The perceptions of African dietary deficiencies established under colonial rule and in the context of colonial power extended well beyond the period of direct imperial intervention as part of a scientific tradition informed by the imperialized invention of a specific form of malnutrition. The preoccupation with protein in Africa was a politically reactive scientific construction and, by continuing to codify malnutrition as a problem of ecology or understanding into independence, this formulation understated the role of social and economic change and undermined the development of preventative measures. The reductive concentration on protein in postcolonial Africa should, therefore, be seen as part of a linear intellectual history beginning with the 1939 publication of Nutrition in the Colonial Empire. The “protein fiasco” can be considered a continuation of the same thought process that allowed the Colonial Office to ignore the possibility that malnutrition was a structural, social problem resulting from the partial integration of Africa into the globalized economy and instead reframe it as an ecological, technical problem that could only be tackled by globalized science (Worboys 1988: 221). Nutritional science in the twentieth century is then inseparable from the earlier politics of colonialism.

In Bruno Latour's philosophy of science, scientific discovery and understanding are often muddied by the establishment of their own worth. The further science progresses away from each initial discovery, the harder it is to understand the science of that discovery and the processes that led to it. Even into the postcolonial period, nutritional intervention was built on insufficiently criticized conclusions about nutrition that were largely formed in the skewed scientific environment of tropical medicine and colonial rule. Latour further explains that scientific fact, driven by competition and informed by the sociopolitical environment, does not simply exist in the natural world but is instead created by consensus and maintained by a network of social, cultural, and political alliances (Latour 1987). The consensus regarding protein malnutrition was formed in the particular cultural context of imperial Africa and was strengthened by the networks of Western researchers that continue to underpin persistent ideas regarding African exceptionalism and European authority over African development.
During the colonial period, kwashiorkor research was pursued because it was scientifically interesting and politically anodyne—an inevitable and apolitical consequence of the generalized African food environment. However, Western approaches to nutrition in Africa changed quickly in the postcolonial climate, where responsibility for African hunger could be less readily ascribed to the actions of European colonial governments. As with the construction of endemic protein deficiency in earlier years, understandings of African malnutrition developed in conjunction with postcolonial political discourse. The refiguring of African deficiency into a problem of calories cannot be divorced from these external political conditions. In the same way that protein was a biopolitical object manipulated both consciously and subconsciously by colonial administrators, calories, food aid, and famine relief were utilized as biopolitical agents for the control and coercion of postcolonial African governments. The political ascendancy of neo-Malthusian thinking in the mid-to-later twentieth century coincided with postcolonial conflict, economic decline, and the African food crisis, coloring the subsequent science of nutrition. It is, perhaps more than anything else, the politics of population that has determined the modern history of nutrition in Africa.

Population has long been an important part of the dialogue concerning African hunger. During the early years of colonial rule it was generally assumed, in the absence of credible census data, that Africa's population was stagnating and that it would grow only through Western intervention. In extremely invasive regimes, as in the Belgian Congo, colonial governments and expatriate companies explicitly promoted high birth rates and short birth spacing through financial incentives (Hunt 1988). However, as evidence of Africa's population explosion under the relative stability of colonial rule emerged, demographers began to construct a view of Africa's population growth as unsustainable and overreaching the ecological constraints of a poorly endowed continent. Driven by the increasing incidence of drought and famine, something of a consensus was forming along these lines by the 1970s (see, for example, Steel 1970). “Overpopulation” has since become a recurring motif in modern discourses regarding Africa, while “population pressure” is often used to explain high burdens of poverty, famine, and disease (see, for example, Iliffe 1987: 253–254). Narrowly framed as a ratio of food production to population, and given Africa's limited industrial output, African hunger, in the neo-Malthusian conceptualization, could be addressed either through the expansion of food production or through the slowing of population growth, often referred to as “population control.” In practice, this meant either a South Asia–style Green Revolution to swell agricultural production through the introduction of new crop varieties (with the accompanying need for fertilizers and irrigation), or the widespread adoption of contraception. In either case, and in much the same way as colonial governments approached “endemic” kwashiorkor, postcolonial hunger could only be solved through technical fixes peddled by Western governments, whose authority was leveraged by Western commercial enterprise (Ittmann 2010; see also Bashford 2014; Ittmann 2014; Merchant 2015).
In the 1940s and 1950s some agencies, including the FAO, favored addressing problems of food production and distribution, assuming that the deployment of Western science could establish a world where equitable food supply and consequent mortality reduction would improve standards of living and precede a natural decline in fertility (Bashford 2014). Others, such as the increasingly influential Rockefeller-funded Population Council, sought to curb population growth through fertility control (Merchant 2015). These same proponents of population engineering also helped develop and promote demography as a science sympathetic to such an approach (Merchant 2017). Using the “demographic transition” model, demographers initially assumed that fertility decline would follow the historical European pattern, with fertility falling in response to improved socioeconomic conditions and declining mortality rates. However, the social precursors to European mortality and fertility declines were often absent in postcolonial Africa. As many African countries began to record economic growth with little indication of fertility decline, demographers began to believe that the causality was reversed: that population growth might undo the benefits of development and that fertility first had to be reduced to make room for development (Szreter 1993). This negated a modernist or welfarist approach to food reform. What instead emerged was a view of African hunger that problematized fertility rather than social and economic inequality or the inequitable distribution of food. This narrative suited Western agendas, emphasizing the extension of Western medicine and the positive effects of mortality reduction (via its effects on fertility), while simultaneously ignoring the historical, social, and economic factors affecting diet, food production, and food security.

Given the growing consensus regarding Africa's rapidly growing population, it did not take long for Western agencies to intervene in African population policy. However, the new conceptualization of African hunger was founded on problematic scientific principles (Ittmann 2014). Eugenicist and often explicitly racialized approaches to contraception had formed the basis of many earlier family planning initiatives, especially in states and regions like South Africa or the American South, where racial disparities in fertility threatened white supremacy (see, for instance, Klausen 2000). Following the Second World War, however, eugenics fell out of vogue, not only because research had weakened its scientific foundations, but because of revulsion over its policy application in Nazi Germany. Nonetheless, ecology and environmentalism picked up neo-Malthusian critiques of high-fertility populations and have run with them ever since. Hugely popular works such as Stanford ecologist Paul Ehrlich's The Population Bomb startled audiences, highlighting the ecological impact of overpopulation and making somber predictions regarding upcoming global catastrophe. Published in 1968, The Population Bomb sold over three million copies in the following ten years. University of California biologist Garrett Hardin's influential 1968 Science article “The Tragedy of the Commons” gave a philosophical framework to these new concerns.
Hardin suggested that welfarism and access to commonly held property or land were at the root of the population problem, going on to ask “how shall we deal with the family, the religion, the race, or the class … that adopts overbreeding as a policy to secure its own aggrandizement?” (Hardin 1968: 1246). At their most controversial, such ideas provided the background for utilitarian arguments against food aid and poverty relief (Hardin 1974). More fundamentally, “The Tragedy of the Commons” provided justification for private property, capital accumulation, and the moral authority of class stratification. The spreading neo-Malthusianism expressed by Hardin was already enshrined in the Cold War containment policies pursued by Western governments and their agents, who saw overpopulation as the source of the conditions that attracted the poor to communism. Population control became part of foreign policy and national security planning.

In the 1950s, and with the tacit approval of Western governments, private groups such as the International Planned Parenthood Federation (IPPF) and the Pathfinder Fund began to establish birth control clinics across Africa. By the 1960s, USAID and the UK's Ministry of Overseas Development began to directly fund birth control programs, often explicitly tying the provision of food aid to the implementation of family planning (Ittmann 2010, 2014). Here Western financing of fertility manipulation plays into the debate concerning “soft power,” NGO involvement in Africa, and the erosion of state sovereignty by wealthy nonstate actors (Manji and O'Coill 2002). More generally, the coercive implementation of family planning programs can be seen as a continuation of the domestic engineering first attempted by colonial governments (Riedmann 1993). Host countries, however, were not entirely convinced of the necessity of fertility management. Sai (1988: 270) explains that, at the 1974 Bucharest Population Conference, “African countries attended in some strength; but they attended in some strength to present the view that population did not fit into African aspirations or African development.” Despite this, by the 1980s virtually every nation in sub-Saharan Africa had implemented family planning programs, largely at the behest of Western donors. The enduring absence of fertility decline was long understood as an inconsistency between the preference for a smaller family and the knowledge needed to limit family size. Largely unquestioned until the late 1980s, the “KAP-gap”—standing for knowledge, attitude, practice—emphasiz
