According to the “pathogen stress theory of values,” the evolutionary case that Thornhill and his colleagues have put forward, our behavioral immune systems—our group responses to local disease threats—play a decisive role in shaping our various political systems, religions, and shared moral views.
If they are right, Thornhill and his colleagues may be on their way to unlocking some of the most stubborn mysteries of human behavior. Their theory may help explain why authoritarian governments tend to persist in certain latitudes while democracies rise in others; why some cultures are xenophobic and others are relatively open to strangers; why certain peoples value equality and individuality while others prize hierarchical structures and strict adherence to tradition. What’s more, their work may offer a clear insight into how societies change.
This is a reasonable view, and something I’ve long observed from working on infectious diseases in developing countries. The developmental trajectory of a country is influenced by the deliberate avoidance of illness. An example can be seen in the locations of African cities. Many African administrative capitals are located on isolated, cool hilltops, far away from rivers and lakes. Colonialists would intentionally set up shop in areas where they were unlikely to encounter malaria.
Developmentally, this has had major implications for trade within Africa. European cities are often placed along waterways amenable to domestic European trade. The lack of trade between African countries is one of the reasons the continent has developed so poorly. This is the direct result not only of colonial priorities of resource extraction to Europe, but also of the unfortunate placement of economic centers in response to malaria.
Certainly, the nature of cities themselves has much to do with the control of infectious diseases. Public works often involve the management of sewage and the delivery of clean water. Thornhill might suggest that the development of democracy, citizen involvement and taxation to pay for urban improvements are direct responses to enteric diseases.
However, while it is interesting to try to apply this view, it can be taken to the extreme:
Fincher (a graduate student of Thornhill) suspected that many behaviors in collectivist cultures might be masks for behavioral immune responses. To take one key example, collectivist cultures tend to be both more xenophobic and more ethnocentric than individualist cultures. Keeping strangers away might be a valuable defense against foreign pathogens, Fincher thought. And a strong preference for in-group mating might help maintain a community’s hereditary immunities to local disease strains. To test his hypothesis, Fincher set out to see whether places with heavier disease loads also tended toward these sorts of collectivist values.
I’m not sure it’s that easy to boil down political differences between Asia and Europe to a need to manage infectious disease. Certainly, Sweden is more collectivist than England, but I wouldn’t say that their infectious disease profiles are all that different.
Worse yet, if taken to the extreme, this “hunt for significance” will provide one with evidence to support any crazy theory at all. Pathogens exist wherever humans do. Moreover, we risk attributing the contribution of pathogens to human development based on current conditions, assuming that the present was deterministically preordained centuries ago. Until very recently, nearly the entire world was at risk for malaria, yet despite this, societies embarked on very different social and political trajectories.
The biggest problem I have with the theory is its basis in rational choice theory. It assumes that humans make rational choices based on pathogen threats, when we know, particularly those of us who work in the tropics, that humans often have poor conceptions of disease transmission and the causes of illness. At times, despite very obvious threats, humans will act in ways that exacerbate those threats. The history of enteric disease is filled with tales of ignorance and folly.
If we are going to subscribe to a rational model of political and social development which includes pathogens, then we must also address, first, the ability of pathogens to hijack human behavior to create new opportunities for replication and survival, and second, the fact that social changes can exacerbate the worst effects of infection. For the first point, I would look to the development of international trade systems, which allow pathogens such as influenza to move around the world quickly, increasing opportunities for mutations that evade immune responses. For the second, I would point to polio, a disease which becomes a problem only after the introduction of water sanitation practices.
Thornhill’s ideas are interesting, and certainly provide good material for the popular press and BBQ conversation, but they require that the reader suspend too much consideration of the details of the complex history of human social and political development. Taken with restraint, as in the example of the locations of African cities, they can provide interesting insights into how current conditions are impacted by past pathogenic threats.
Every once in a while, you run across something that just gives you the chills.
A report presented to the World Health Organization (WHO) in 1948 states: “It is not enough to quote that about 3,000,000 deaths are caused yearly by malaria in the world, or that every year about 300,000,000 cases of malaria occur … that malaria is prevalent in tropical and subtropical areas where food production and agricultural resources are potentially very high, and that, by affecting the mass of rural workers, it decreases their vitality and reduces their working capacity and thus hampers the exploitation of the natural resources of the country. At a time when the world is poor, it seems that control of malaria should be the first aim to achieve in order to increase agricultural output” (WHO, 1948).
Snow RW, Amratia P, Kabaria CW, Noor AM, Marsh K: The changing limits and incidence of malaria in Africa: 1939-2009. Adv Parasitol 2012, 78:169-262.
Today, April 7, is World Health Day, an annual event sponsored by the World Health Organization to help bring attention to pressing public health issues.
This year’s event focuses on vector-borne diseases like dengue fever and Chagas disease, which are transmitted through a third-party host such as Aedes mosquitoes or triatomines (kissing bugs).
Both of these diseases are becoming increasingly relevant as the world urbanizes. Dengue and malaria form a complementary nexus of diseases: malaria is largely associated with rural areas and rarely found in cities, whereas dengue fever is almost exclusively urban. Generally speaking, dengue is a disease of development, whereas malaria is a disease of underdevelopment.
While known to be distributed widely through Latin America and Southeast Asia, dengue has yet to make it onto Africa’s radar, simply (in my opinion) because not enough people are looking hard enough. Africa, as the most rapidly urbanizing region of the world, will eventually face a double burden of dengue and malaria, and health facilities aren’t yet prepared to deal with it.
It’s a reasonable question to which no one really has an answer. I work in a field site located on Lake Victoria, the office of which is based out of the International Centre for Insect Physiology and Ecology (ICIPE) station on Mbita Point.
We do malaria field surveys and have a large health and demographic surveillance system that has monitored births, deaths, migration and health events of nearly 50,000 people over the past six years.
The goals of the project are to monitor changes in demographics, outbreaks and changes in the dynamics of the transmission of infectious diseases and gauge the effectiveness of interventions.
While I view those as scientifically important, I don’t think that people on the ground experience any immediate benefit from scientific research activities. In fact, I’m pretty sure that, unless they’re getting a free bednet, it’s mostly an annoyance. Of course, we appreciate their cooperation and they are free to tell us to bugger off at anytime.
We are seeing rapid declines in malaria incidence, infant mortality and fertility in the communities we study. This is, of course, cause for celebration: fewer kids are dying, and people are having fewer of them.
In fact, the shift in the age distribution from 2011 to 2012 was so dramatic that we thought it an aberration of the data: the mean age of 12,000 people rose nearly two years from the beginning of 2011 to the latter part of 2012. Old people died, and fewer babies were there to replace them, resulting in an upward shift in the age distribution. Cause for celebration in an area where women normally have anywhere from 5 to 10 children, who often end up malnourished, poorly housed and uneducated.
But we have to ask ourselves, how much of this is representative of trends in communities similar to the ones we study and how much is directly influenced by the presence of the research station itself?
A recent article in Malaria Journal documents the positive impacts that a research facility had on the local community:
To make the community a real partner in the centre’s activities, a tacit agreement was made that priority would be given to local people, in a competitive manner, for all non-professional jobs (construction workers, drivers, cleaners, field workers, data clerks, and others). Of the 254 people employed at the CRUN, about one-third come from Nanoro. This has strengthened the sense of ownership of the centre’s activities by the community. Through the modest creation of new jobs, CRUN makes a substantial contribution to reducing poverty in the community. In addition, staff members residing in Nanoro contribute to the micro-economy there.
Another crucial benefit for Nanoro and CRUN stemming from their productive engagement was electrification for the area. This was made possible by the mayor of Nanoro leading the negotiations for extending the national electrical grid to the CRUN, and with it, to the village of Nanoro. Electrification spurred a lot of economic activity and social amenities that enhance the wellbeing of the community, such as: (1) improved water supply through use electricity instead of generator; (2) ability to use electrical devices, such as fans during the hot season (when temperatures can reach 45-47°C), lighting so students can study at night, the use of refrigeration to safely store food and the extension of business hours past sunset.
Health care services have been improved through CRUN’s new microbiology laboratory. Before this laboratory was established, local patients had to travel about 100 km to the capital city, Ouagadougou, for the service.
This agrees with my experience on Lake Victoria. The presence of the research facility (built originally in the 1960s) and the subsequent scale-up of research activities has been transformative for the area. As more and more people have moved to the area, a bridge to Rusinga Island has been built, two new ferry routes have been installed, the existing ferries have been upgraded, power has been extended to the area and finally, after years of waiting, a paved road has been built from Kisumu to Mbita Point.
…which brings me back to my initial question. It is clear that the building of research facilities can be a major spur for economic development and economic activity in a previously desolate and marginalized area. In the case of Mbita Point, it is possible that these gains can be sustained even after an eventual cessation of research activities and strangled funding. In this sense, field research projects are doing the world at least some good.
However, the gains these communities are experiencing really have little to do with the research projects themselves and more to do with the influx of employment and infrastructure that comes with research stations and research projects. This is uncontroversial, and I’m sure that the locals appreciate it.
But the quality and goals of research need to be assessed. Are the results we are seeing truly representative of communities that resemble the Mbita Point of the past? Are we unnecessarily influencing the outcomes of the research and then perhaps inappropriately generalizing them to contexts that bear little resemblance to our target communities? From a scientific perspective, this is troubling.
Of greater concern, however: are we claiming that gains against malaria are being made when, in fact, morbidity and mortality in communities we haven’t looked at are increasing? This could prompt a dangerous shift away from scaled-up ITN distributions or even a total reduction in international funding. If this happens, kids will die.
A new study which just appeared in Malaria Journal, however, calls this optimism into question.
This review presents two central arguments: (i) that empirical studies measuring change are biased towards low transmission settings and not necessarily representative of high-endemic Africa where declines will be hardest-won; and (ii) that current modelled estimates of broad scale intervention impact are inadequate and now need to be augmented by detailed measurements of change across the diversity of African transmission settings.
So, our ability to accurately determine whether transmission intensity has declined is hampered by the fact that most studies of the disease occur in areas of low transmission. This makes sense: it is much easier to evaluate the malaria situation in the Kenyan context than in the Democratic Republic of Congo, due to the availability of surveillance infrastructure, official mechanisms that allow research projects to move forward, and security issues.
The obvious problem with this is the relationship of governance, economy and instability to malaria itself. People in the poorest countries are at the highest risk for malaria, and people in the poorest parts of the poorest countries are at the highest risk of all. The trouble is, despite being the populations we are most concerned about, they are the hardest to reach and the hardest to help.
Worse yet, the estimates of malaria prevalence found in a number of studies were considerably lower than estimates for the entire African continent.
The combined study area represented by measurements of change was 3.6 million km2 (Figure 1), approximately 16% of the area of Africa at any risk of malaria. The level of endemicity within these studied areas (mean PfPR2-10 = 16%) was systematically lower than across the continent as a whole (mean PfPR2-10 = 31%) (Figure 2). While 40% of endemic Africa experienced ‘high-endemic’ transmission in 2010 (PfPR2-10 in excess of 40%), only 9% of the studied areas were from these high transmission settings.
This is a huge issue, and one that shouldn’t be limited to malaria. While it is helpful to hear good news of malaria declines in formerly afflicted areas, we need to be careful about overstating the impact of interventions. Funding for malaria projects such as the distribution of insecticide-treated bed nets was incredibly high throughout the 2000s, but it is unlikely that trend will continue. Offering a positive picture can show that our efforts are valuable, but it might also lead policy makers and donors to suggest that money be put toward other goals. If Sri Lanka, where malaria was nearly eliminated at one time but experienced a rapid and devastating resurgence, is any indication, even a brief relaxation of malaria control efforts could erase current gains completely.
It’s an old paper, but I just came across The Colonial Origins of Comparative Development: An Empirical Investigation by Daron Acemoglu, Simon Johnson and James A. Robinson, originally published in The American Economic Review back in 2001.
They take rough data of settler deaths back in the seventeenth and eighteenth centuries and plot them against the GDP of several countries from 1995. I’ve included the plot on the right. What they found was that a higher number of European settler deaths was associated with a long term decline in economic output.
Settling in the seventeenth and eighteenth centuries was a dangerous business, particularly in Sub-Saharan Africa, and less so in what are now the United States, New Zealand and Australia. Malaria and yellow fever were responsible for killing up to 100% of groups brave enough to attempt the journey.
Acemoglu, et al.’s argument is as follows:
1. There were different types of colonization policies which created different sets of institutions. At one extreme, European powers set up “extractive states,” exemplified by the Belgian colonization of the Congo. These institutions did not introduce much protection for private property, nor did they provide checks and balances against government expropriation. In fact, the main purpose of the extractive state was to transfer as much of the resources of the colony as possible to the colonizer. At the other extreme, many Europeans migrated and settled in a number of colonies, creating what the historian Alfred Crosby (1986) calls “Neo-Europes.” The settlers tried to replicate European institutions, with strong emphasis on private property and checks against government power. Primary examples of this include Australia, New Zealand, Canada, and the United States.
2. The colonization strategy was influenced by the feasibility of settlements. In places where the disease environment was not favorable to European settlement, the cards were stacked against the creation of Neo-Europes, and the formation of the extractive state was more likely.
3. The colonial state and institutions persisted even after independence.
They argue that the disease environment determined the nature of settlements, which determined the nature of institutions, which, in turn, determined the economic trajectory of a country.
Interestingly, they control for all of the things one might control for, such as distance from the equator, the percentage of inhabitants who were European, being landlocked, and the ruling power, ruling out some obvious potential influences. Property rights, a solid judiciary and limits on political power, both in the colonies and upon independence, had a greater effect on long-term GDP, they argue, and the development of those institutions was enabled or inhibited by early settler mortality.
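The basic shape of their analysis, regressing log GDP on log settler mortality while holding controls fixed, can be sketched with ordinary least squares. The sketch below uses entirely synthetic data; the variable names, the coefficient of -0.6, and the latitude control are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Synthetic stand-in for the settler-mortality regression.
# All numbers here are invented for illustration.
rng = np.random.default_rng(0)
n = 64

log_mortality = rng.normal(4.5, 1.0, n)   # hypothetical log settler deaths per 1,000
latitude = rng.uniform(0.0, 0.6, n)       # |latitude|/90, one example control

# Assume institutions, and hence 1995 log GDP per capita, decline with mortality.
log_gdp = 10.0 - 0.6 * log_mortality + 1.5 * latitude + rng.normal(0.0, 0.3, n)

# OLS with an intercept and the latitude control, via least squares.
X = np.column_stack([np.ones(n), log_mortality, latitude])
beta, *_ = np.linalg.lstsq(X, log_gdp, rcond=None)

print(f"estimated effect of log settler mortality on log GDP: {beta[1]:.2f}")
```

The point of the controls is visible here: because latitude enters the design matrix, the mortality coefficient is estimated net of latitude’s effect, which is exactly the move Acemoglu et al. make against the "it’s just the tropics" objection.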
It’s a fairly compelling argument, though not without its critics.
A few gems from the paper interested me. One, the return on investment in the British colonies during the nineteenth century was a whopping 25%, far more than one could have expected domestically. In the late 19th and early 20th centuries, this dropped so that returns on colonial and domestic investments were the same.
I found (finally!) a reference indicating the willful choosing of high-altitude, and thus less malarious, areas for colonial settlements. Note that in Europe and the US, cities are often located along rivers and seasides, whereas in Africa large cities tend to be placed inland (with some exceptions). There has been no industrial revolution in Africa and little regional trade (a condition which persists to this day), so cities along water-based shipping routes were not necessary. Extraction in Africa was largely done by rail, further alleviating the need to be close to rivers.
Not that the Economist has ever made a habit of ignoring tropical diseases. Far from it: the Economist, as a British magazine, is quite good at reporting on the Isles’ former colonies.
Here they’ve written on the issues of mass drug administration as a tool in malaria eradication. Specifically, they focus on a Chinese group seeking to ramp up efforts to create a successful regimen of artemisinin and piperaquine to eliminate the disease by prophylactically preventing infection and interrupting the cycle of transmission long enough to eliminate the parasite entirely.
Dr Li’s approach is to attack not the mosquito, but the disease-causing parasite itself. This parasite’s life cycle alternates between its insect host (the mosquito) and its vertebrate one (human beings). Crucially, as far as is known, humans are its only vertebrate host. Deny it them and it will, perforce, wither away—an approach that worked for the smallpox virus, which had a similarly picky appetite. In the case of smallpox, a vaccine was used to make humans hostile territory for the pathogen. Since there is no vaccine against malaria, Dr Li is instead using drugs.
To date, the group has been running trials in the Comoros islands off the coast of Mozambique and has had some success, but hasn’t come close to full elimination. Elimination on islands surrounded by salt water (the mosquitoes which transmit malaria breed in fresh water) should be a fairly easy proposition, but human mobility from the African continent guarantees reintroduction.
I’m personally involved in an island malaria elimination project in Kenya, but am under no illusions that results from an island are in the least bit generalizable to the continent. Falciparum malaria is far too efficient, and the lack of a winter renders transmission far too consistent, to allow easy elimination. Add the issue of the intense mobility of Africans and one can’t help but be discouraged.
Dr. Li of the Guangzhou group seems to be under the optimistic but mistaken impression that all it will take to eradicate malaria is the right combination of magic pills. The only thing that will consistently control malaria on the continent is a full-on, sustained assault using every tool known, along with intense economic development. The continent only saw gains in malaria control during the 2000s, when incredible amounts of money and effort were thrown at the disease and, not coincidentally, when African economies finally started to take off. Eradicating malaria won’t be about a few pills.
More troubling to me are the ethical issues. Mass drug administrations require near-universal participation; if even a small group of people refuses the medication, the entire effort might be for naught. Obtaining full, informed consent, however, is nearly impossible in these areas. While most people are willing to participate once the benefits are explained to them, the risks are often glossed over. Moreover, as communities will often follow the behavior of their neighbors or community leaders, it is difficult to judge whether people participate of their own volition or are merely bowing to community pressure. Educational barriers might also compromise the ability to obtain truly informed consent.
Further, I don’t doubt the intent of the Guangzhou group, but I do wonder whether Chinese institutions truly have the same level of ethical review and monitoring that US institutions have (which isn’t perfect, and is sometimes ill suited to developing countries). I’m sure that China would love to claim a success like malaria elimination, but I worry that a zeal for victory might lead to a violation of basic ethics and even a masking of failures, complicating the issue in the long term. I hope that I’m wrong.
Every year, Bill and Melinda Gates release a letter on the state of the Gates Foundation and the current situation of global development and health. This time Gates set out to dispel three common myths on development, namely that poor countries are doomed to be poor forever, foreign aid is a total waste and that development will just lead to overpopulation.
The first is the most cynical, but even for us development/public health folks it’s easy to be discouraged. Pessimism aside, the data don’t bear out the assumption that developing countries are entrenched in poverty. Just about all Sub-Saharan African countries experienced consistent economic growth throughout the 2000s and have seen rapid improvements in just about all of the common health indicators. People are living longer, fewer kids are dying, and people are making more money to pay for school and health care.
Over the past five years that I’ve been going to Sub-Saharan Africa, I’ve seen this change on the ground. Cars are in better shape, there are more goods on the shelves, kids are better nourished and security has vastly improved. Does this mean that all of the problems are magically going away? No, there are still vast challenges to infrastructure development, access to health care and affordable medications, educational quality, gender issues and basic business development. However, these improvements do signal that Sub-Saharan African countries are reaching a point where sustained development is possible.
I have a hard time disagreeing with Gates here, but I did find his “before” and “after” pictures of Nairobi a bit bizarre. Though Nairobi is currently going through a construction boom, I fail to see how it would look any different in 2014 than it did in 1969 after more than three decades of stagnation.
Gates’ second point, and the hardest myth to dispel, is the alleged ineffectiveness of aid. Bill Easterly has made a career out of aid bashing and, unfortunately, has given cynical politicians looking for policy scapegoats a point to scream about to their angry constituents. In a broader sense, the screaming over aid is really a questioning of development policies themselves. Certainly, there are development failures. The neoclassically informed structural adjustment policies of the World Bank and the IMF during the 1980s and 1990s were, on the surface, colossal failures (read Beyond the World Bank Agenda: An Institutional Approach to Development by Howard Stein for a great analysis). On a smaller scale, we can easily cherry-pick misguided but well-meaning development projects, or plans that simply went awry for any number of unforeseen reasons. The recent takedown of Jeff Sachs (The Idealist: Jeffrey Sachs and the Quest to End Poverty) and the massive problems of the Millennium Village in North Eastern Province in Kenya are a great example of the challenges a development project can face.
However, in an ever more insular, post-Iraq America, the questions most often asked are why we should even care, and whether our presence merely serves to make things worse. The truth, and the point most often overlooked, is that most development projects are international collaborations. Many projects are conducted with partners in target countries and, more often than not, projects make up for shortfalls that hobbled governments are unable (or sometimes unwilling) to cover. Health care is one example.
Jeff Sachs wrote a nice article this morning on how effective free insecticide treated nets have been in reducing malaria incidence and mortality in Sub-Saharan Africa. Nearly half a billion free nets have been given out worldwide as of 2014 and a lot of kids are alive today who would have been dead had they been born ten years earlier. Malaria is 100% associated with poverty. Wealthy people do not get malaria, even in malaria endemic countries. Though some of the decline in malaria incidence has been due to increased affluence and urbanization of African countries, a major percentage of this decline has been due to aid programs which provide bed nets and have expanded access to life-saving malaria medications. Certainly, not all aid works, but nothing works 100% of the time, particularly when humans are involved.
Which brings us to the most cynical and offensive of Gates’ three myths. Some people truly believe that saving African kids is a bad thing: one day there will be too many of them, and they will overwhelm the world’s ability to sustain life. Honestly, this view couldn’t be more wrong.
The poorest parts of the world are the areas seeing the most rapid population growth. The average Malawian woman has 8 children in her lifetime, often starting before she is even 15 years old. It has been said that if Malawi continues on its current trajectory, it will have a population equivalent to Japan’s by 2050. Women in water- and food-constrained pastoralist communities can have ten or more children. The most affluent areas of Africa are the places with the slowest population growth.
Even more incorrect is the assumption that poverty is less harmful to the environment than development. Malawi is almost entirely deforested due to extensive use of charcoal for heating and tobacco cultivation. Deforestation not only robs the earth of potential carbon sinks, but also reduces biodiversity and directly impacts precious water resources. Africa burns unclean fuels such as charcoal and coal for heating, and the poor condition of its vehicles makes the continent a major potential source of greenhouse gases. The air in Nairobi on any given weekday is so filled with exhaust that one can become dizzy just walking around town. It is, of course, unreasonable (and stupid) to deny Africans transportation and cooking fuel, but well-meaning though poorly informed armchair environmentalists in the United States would happily suggest doing just that.
Which brings me to my final point. The case against development assumes that the status quo is somehow preferable to anything that might come after: that Africans were just fine without Europeans and their planet-destroying ways. There is, of course, little data on what Africa was like before Europeans started extracting resources from the continent. We do, however, know a lot about underdeveloped areas of Africa. There is evidence to suggest that some do fine. There is, however, much evidence to suggest that others simply do not. The worst parts of Africa are the parts which are the least developed. They are the areas where the market doesn’t function, where there is little education, no access to health care, no roads, no economy, where kids regularly die, where old people are venerated because they are so rare, where there is violence and instability, and where people are entirely marginalized from any level of political participation. While development will likely never solve the worst problems (like those in Somalia), there is no case to be made that the current state of the ultra-poor is acceptable on any measure, even to the poor themselves!
Alright, off to bed.