According to the “pathogen stress theory of values,” the evolutionary case that Thornhill and his colleagues have put forward, our behavioral immune systems—our group responses to local disease threats—play a decisive role in shaping our various political systems, religions, and shared moral views.
If they are right, Thornhill and his colleagues may be on their way to unlocking some of the most stubborn mysteries of human behavior. Their theory may help explain why authoritarian governments tend to persist in certain latitudes while democracies rise in others; why some cultures are xenophobic and others are relatively open to strangers; why certain peoples value equality and individuality while others prize hierarchical structures and strict adherence to tradition. What’s more, their work may offer a clear insight into how societies change.
This is a reasonable view, and something I’ve long observed from working on infectious diseases in developing countries. The developmental trajectory of a country is influenced by the deliberate avoidance of illness. An example can be seen in the locations of African cities. Many African administrative capitals are located on isolated, cool hilltops, far away from rivers and lakes. Colonialists would intentionally set up shop in areas where they were unlikely to encounter malaria.
Developmentally, this has had major implications for trade within Africa. European cities are often placed along waterways amenable to trade within Europe. The lack of trade between African countries is one of the reasons the continent has developed so poorly, a direct result not only of colonial priorities of resource extraction to Europe, but also of the unfortunate placement of economic centers in response to malaria.
Certainly, the nature of cities themselves has much to do with the control of infectious diseases. Public works often involve the management of sewage and the delivery of clean water. Thornhill might suggest that the development of democracy, citizen involvement and taxation to pay for urban improvements came in direct response to enteric diseases.
However, while it is interesting to try to apply this view, it can be taken to the extreme:
Fincher (a graduate student of Thornhill) suspected that many behaviors in collectivist cultures might be masks for behavioral immune responses. To take one key example, collectivist cultures tend to be both more xenophobic and more ethnocentric than individualist cultures. Keeping strangers away might be a valuable defense against foreign pathogens, Fincher thought. And a strong preference for in-group mating might help maintain a community’s hereditary immunities to local disease strains. To test his hypothesis, Fincher set out to see whether places with heavier disease loads also tended toward these sorts of collectivist values.
I’m not sure it’s that easy to boil down political differences between Asia and Europe to a need to manage infectious disease. Certainly, Sweden is more collectivist than England, but I wouldn’t say that their infectious disease profiles are all that different.
Worse yet, if taken to the extreme, this “hunt for significance” will provide one with evidence to support any crazy theory at all. Pathogens exist wherever humans do. Moreover, we risk attributing the contribution of pathogens to human development based on current conditions, assuming that the present is deterministically preordained centuries ago. Until very recently, nearly the entire world was at risk for malaria, but despite this, various societies have embarked on different social and political trajectories.
The biggest problem I have with the theory is its basis in rational choice theory. It assumes that humans make rational choices based on pathogen threats, when we know (particularly those of us who work in the tropics) that humans often have poor conceptions of disease transmission and the causes of illness. At times, despite very obvious threats, humans will act in ways that exacerbate the threat. The history of enteric disease is filled with tales of ignorance and folly.
If we are going to subscribe to a rational model of political and social development which includes pathogens, then we also have to address, first, the ability of pathogens to hijack human behavior to create new opportunities for replication and survival, and second, the fact that social changes can exacerbate the worst effects of infection. For the first point, I would look to the development of international trade systems which allow pathogens such as influenza to move around the world quickly, increasing opportunities for mutation to avoid immune responses. For the second, I would point to polio, a disease which becomes a problem only after the introduction of water sanitation practices.
Thornhill’s ideas are interesting, and certainly provide good material for the popular press and BBQ conversation, but they require that the reader suspend too much consideration of the details of the complex history of human social and political development. Taken with restraint, as in the example of the locations of African cities, they can provide interesting insights into how current conditions are impacted by past pathogenic threats.
Every once in a while, you run across something that just gives you the chills.
A report presented to the World Health Organization (WHO) in 1948 states: “It is not enough to quote that about 3,000,000 deaths are caused yearly by malaria in the world, or that every year about 300,000,000 cases of malaria occur … that malaria is prevalent in tropical and subtropical areas where food production and agricultural resources are potentially very high, and that, by affecting the mass of rural workers, it decreases their vitality and reduces their working capacity and thus hampers the exploitation of the natural resources of the country. At a time when the world is poor, it seems that control of malaria should be the first aim to achieve in order to increase agricultural output” (WHO, 1948).
Snow RW, Amratia P, Kabaria CW, Noor AM, Marsh K: The changing limits and incidence of malaria in Africa: 1939-2009. Adv Parasitol 2012, 78:169-262.
But Ron Paul did.
I’m checking out the graphic below and, first, wondering why anyone ever thought that gold was the only investment to make, given its bubblish nature, and second, wondering what it must have been like to have investments in the 19th century. Granted, most people didn’t, and some people were even the targets of investment themselves. The wide volatility in the inflation rate must have driven people nuts.
If you owed money, one year you’d make out like gangbusters, watching inflation obliterate your debt obligations; the next year, you’d watch your world crumble as the currency became worthless. If people owed you money, you’d be in the opposite pinch. Either way, you were screwed and had little ability to plan for the future. By the time you rode out the constant rough spots, though, you’d end up with the same amount of money you started with decades earlier. I’ll take steady inflation and reasonable economic certainty over crazyland, but Ron Paul might be into it, I guess.
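To make the arithmetic concrete, here is a minimal sketch of how a volatile inflation sequence can wash out to roughly the same real value as a steady one. The inflation figures are hypothetical placeholders, not the historical rates from the graphic.

```python
# Illustrative only: hypothetical inflation rates, not historical data.
# Deflate a fixed nominal sum by a sequence of annual inflation rates.

def real_value(nominal, inflation_rates):
    """Real purchasing power of a fixed nominal sum after the given years."""
    value = nominal
    for r in inflation_rates:
        value /= (1 + r)
    return value

volatile = [0.15, -0.12, 0.20, -0.14]  # hypothetical boom/bust years
steady = [0.02, 0.02, 0.02, 0.02]      # modern-style steady inflation

print(round(real_value(100, volatile), 2))
print(round(real_value(100, steady), 2))
```

Both sequences land within a few dollars of each other after four years; the difference is that the volatile path is impossible to plan around year to year.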
The blog Uneasymoney posted an article this morning claiming that policies encouraging the production of biofuels were responsible for the crazy run in commodity prices throughout the 2000s and ultimately for the 2007/2008 crash.
The post refers to an article in the Journal of Economic Perspectives, which I am reading now but the results of which are summed up here:
the research of Wright et al. shows definitively that the runup in commodities prices after 2005 was driven by a concerted policy of intervention in commodities markets, with the fervent support of many faux free-market conservatives serving the interests of big donors, aimed at substituting biofuels for fossil fuels by mandating the use of biofuels like ethanol.
What does this have to do with the financial crisis of 2008? Simple. ..the Federal Open Market Committee, after reducing its Fed Funds target rates to 2% in March 2008 in the early stages of the downturn that started in December 2007, refused for seven months to further reduce the Fed Funds target because the Fed, disregarding or unaware of a rapidly worsening contraction in output and employment in the third quarter of 2008. Why did the Fed ignore or overlook a rapidly worsening economy for most of 2008 — even for three full weeks after the Lehman debacle? Because the Fed was focused like a laser on rapidly rising commodities prices, fearing that inflation expectations were about to become unanchored – even as inflation expectations were collapsing in the summer of 2008. But now, thanks to Wright et al., we know that rising commodities prices had nothing to do with monetary policy, but were caused by an ethanol mandate that enjoyed the bipartisan support of the Bush administration, Congressional Democrats and Congressional Republicans. Ah the joy of bipartisanship.
So then, what I’m gathering here is that the Fed was obsessive about commodity prices fearing inflation, despite the fact that the Fed was in no position to influence commodities markets. This distracted the Fed from focusing on the real causes of the crash and the Lehman disaster, making a bad situation worse.
I’m not sure that this correctly connects the dots, given that there is little evidence that the run in commodity prices had anything to do with biofuels. Even as biofuel consumption increased throughout the 2000s, overall production of corn and yield per acre also increased. Assuming that commodity prices are in part dictated by supply, I would (from an armchair-economist perspective) assume that prices should have remained somewhat constant.
I’m interested to see that the article dismisses the financialization of commodities, which followed a loosening of rules on speculation in agricultural products in the ’90s and the move into commodities after the equity bust of 2000, as a major factor in the rise in corn prices. This is particularly strange when we consider that non-energy commodities also exhibited rapid price increases and violent fluctuations throughout the 2000s. I fail to see how energy policy could result in increases and volatility in, for example, copper.
It’s a tempting thesis, made more tempting by the explicit identification of individuals who suggested and implemented such policy, but not one borne out by the data, in my limited, amateurish opinion. The list of potential factors which influenced the run in commodities is long and confusing (climate change, increased demand from China and India, global instability, etc.), but I don’t think that the effect of Wall Street greed can be discounted as a major determinant. Interestingly, despite the overall themes of the paper, the author does a poor job of discounting the effect of financialization in the creation of commodity price bubbles.
In reading this paper now, I’m somewhat confused. On the one hand, he confirms many of my initial suspicions that the rising price of food was unrelated to supply and demand factors, as growth of both supply and demand was more or less constant despite localized climate shocks. On the other, he seems to blame the rise in prices during the crash on a shift in energy policy toward biofuels, while overlooking that commodities were already volatile and rising, beginning with the crash of the tech bubble in 2000. I am thinking that much of the rise in commodities during 2007/8 was due to panicky speculation as real estate markets tumbled, not to any change in energy policy. Certainly, it may be the case that the policy influenced traders to try to exploit potential areas of growth, but it’s hard, then, to discount the effect of financial speculation in commodities outright.
I can at least agree with this:
The rises in food prices since 2004 have generated huge wealth transfers to global landholders, agricultural input suppliers, and biofuels producers. The losers have been net consumers of food, including large numbers of the world’s poorest peoples.
It’s an old paper, but I just came across The Colonial Origins of Comparative Development: An Empirical Investigation
by Daron Acemoglu, Simon Johnson and James A. Robinson, originally published in The American Economic Review back in 2001.
They take rough data of settler deaths back in the seventeenth and eighteenth centuries and plot them against the GDP of several countries from 1995. I’ve included the plot on the right. What they found was that a higher number of European settler deaths was associated with a long term decline in economic output.
Settling in the seventeenth and eighteenth centuries was a dangerous business, particularly in Sub-Saharan Africa and less so in what is now the United States, New Zealand and Australia. Malaria and yellow fever were responsible for killing up to 100% of groups brave enough to attempt the journey.
Acemoglu, et al.’s argument is as follows:
1. There were different types of colonization policies which created different sets of institutions. At one extreme, European powers set up “extractive states,” exemplified by the Belgian colonization of the Congo. These institutions did not introduce much protection for private property, nor did they provide checks and balances against government expropriation. In fact, the main purpose of the extractive state was to transfer as much of the resources of the colony to the colonizer. At the other extreme, many Europeans migrated and settled in a number of colonies, creating what the historian Alfred Crosby (1986) calls “Neo-Europes.” The settlers tried to replicate European institutions, with strong emphasis on private property and checks against government power. Primary examples of this include Australia, New Zealand, Canada, and the United States.
2. The colonization strategy was influenced by the feasibility of settlements. In places where the disease environment was not favorable to European settlement, the cards were stacked against the creation of Neo-Europes, and the formation of the extractive state was more likely.
3. The colonial state and institutions persisted even after independence.
They argue that the disease environment determined the nature of settlements, which determined the nature of institutions, which, in turn, determined the economic trajectory of a country.
Interestingly, they control for all of the things that one might control for, such as distance from the equator and the percentage of inhabitants that were European, being landlocked and the ruling power, ruling out the effect of some obvious potential influences. Property rights, a solid judiciary and limits on political power in the colonies and upon independence, they argue, had a greater effect on long term GDP, and the development of those institutions was enabled or inhibited by early settler mortality.
It’s a fairly compelling argument, though not without its critics.
A few gems from the paper interested me. One, the return on investment in the British colonies during the nineteenth century was a whopping 25%, far more than one could have expected domestically. In the late 19th and early 20th centuries, this dropped so that returns on colonial and domestic investments were the same.
I found (finally!) a reference to indicate the willful choosing of high-altitude, and thus less malarious, areas for colonial settlements. Note that in Europe and the US, cities are often located along rivers and seasides, whereas in Africa large cities tend to be placed inland (with some exceptions). There has been no industrial revolution in Africa and little regional trade (a condition which persists to this day), so cities along water-based shipping routes were not necessary. Extraction in Africa was largely done by rail, further alleviating the need to be close to rivers.
Though I’ve ripped this off the Big Picture blog (which my good friend Chris introduced me to), I’ll repeat it again here (since it was ripped off the Fed of New York anyway).
I never considered the problem of having to physically move money (read: metal coins) around to make foreign investments. Moving it would be an incredible risk, as it would likely be stolen along the way. Turns out, you could just pay the money to the central bank in Rome, and the Romans would just deduct the amount of money you wanted to transfer from their tax collection in whatever region it was going to.
This is worth the read. I promise I’ll write something of substance after I’m done dissertating.
Historical Echoes: Cash or Credit? Payments and Finance in Ancient Rome
Marco Del Negro and Mary Tao
Imagine yourself a Roman citizen in the 1st Century B.C. You’ve gone shopping with your partner, who’s trying to convince you to buy a particular item. The thing’s pretty expensive, and you demur because you’re short of cash. You may think that back then such an excuse would get you off scot-free. What else can you possibly do: Write a check? Well, yes, writes the poet Ovid in his “Ars Amatoria, Book I.” And since your partner knows it, you have no way out (the example below shows some gender bias on Ovid’s part. Fortunately, a few things have changed over the past 2,000 years):
But when she has her purchase in her eye,
She hugs thee close, and kisses thee to buy;
“Tis what I want, and ‘tis a pen’orth too;
In many years I will not trouble you.”
If you complain you have no ready coin,
No matter, ‘tis but writing of a line;
A little bill, not to be paid at sight:
(Now curse the time when thou wert taught to write.)
In a previous Historical Echoes post, we describe some of the characters in early Roman high and low finance. Here, we look at their modus operandi.
Large sums of money changed hands in Roman times. People bought real estate, financed trade, and invested in the provinces occupied by the Roman legions. How did that happen? Cicero writes, in Epistulae ad Familiares 5.6 and Epistulae ad Atticum 13.31, respectively: “I have bought that very house for 3.5 million sesterces” and “Gaius Albanius is the nearest neighbor: he bought 1,000 iugera [625 acres] of M. Pilius, as far as I can remember, for 11.5 million sesterces.” How? asks historian H. W. Harris (in “The Nature of Roman Money”)–“mechanically speaking, did Cicero pay three and half million sesterces he laid out for his famous house in the Palatine . . . . That would have meant packing and carrying some three and half tons of coins through the streets of Rome. When C. Albanius bought an estate from C. Pilius for eleven and half million sesterces, did he physically send the sum in silver coins?” Harris’ answer is: “Without much doubt, these were at least for the most part documentary [i.e., paper] transactions. The commonest procedure for large property purchases in this period was the one casually alluded to by Cicero [De Officiis 3.59] . . . ‘nomina facit, negotium conficit’ . . . provides the credit [or ‘bonds’–nomina], completes the purchase.”
What exactly are these nomina?–from which, by the way, comes the term “nominal,” so commonly used in economics. In his Ph.D. dissertation “Bankers, Moneylenders, and Interest Rates in the Roman Republic,” C. T. Barlow writes (pp. 156-7): “An entry in an account book was called a nomen. Originally the word meant just that–a name with some numbers attached. By Cicero’s day . . . [n]omen could also mean “debt,” referring to the entries in the creditor’s and the debtor’s account books.” And this “debt was in fact the lifeblood of the Roman economy, at all levels . . . nomina were a completely standard part of the lives of people of property, as well as being an everyday fact of life for a great number of others” (Harris, p. 184). Pliny the Younger writes, for example, (in Epistulae 3.19): “Perhaps you will ask whether I can raise these three millions without difficulty. Well, nearly all my capital is invested in land, but I have some money out at interest and I can borrow without any trouble.”
For concreteness, say that some fellow, Sempronius, owes you one million sesterces. You–or in case you’re a wealthy senator, or eques, your financial advisor (procurator–Titus Pomponius Atticus was Cicero’s)–would record the debt in the ledger. What if you suddenly needed the money to buy some property? Do you have to wait for Sempronius to bring you a bag with 1 million sesterces? No! As long as Sempronius is a worthy creditor (a bonum nomen [see Barlow, p. 156]; in the modern parlance of credit rating agencies, a triple-A creditor), you’d do what Cicero says: transfer the nomina, strike the deal. For example, Cicero writes to his financial advisor Atticus (Ad Atticum 12.31): “If I were to sell my claim on Faberius, I don’t doubt my being able to settle for the grounds of Silius even by a ready money payment.” As Harris (p. 192) observes: “Nomina were transferable, and by the second century B.C., if not earlier, were routinely used as a means of payment for other assets . . . . The Latin term for the procedure by which the payer transferred a nomen that was owed to him to the seller was delegatio.”
So, we’ve seen that Romans could settle payments by transferring nomina. But was there a market for nomina, just like there’s one today in, say, mortgage-backed securities? According to both Barlow and Harris, the answer is yes. They claim that the Romans took the transferability one step further and essentially turned “mere entries in account books” into “negotiable notes” (see Barlow, p. 159, and Harris, p. 192). Not everyone agrees. The economic historian P. Temin (“Financial Intermediation in the Early Roman Empire”) also reports evidence of assignability of loans, opening the possibility of “wider negotiability, but,” he adds, “we do not have any evidence that it happened” (p. 721). Yet some indirect evidence is there. For instance, the idea of negotiable notes appears to be well understood by Roman jurists, such as Ulpian (The Digest of Justinian XXX.I.44): “A party who bequeaths a note bequeaths the claim and not merely the material on which the writing appears. This is proved by a sale, for when a note is sold, the debt by which it is evidenced is also considered to be sold.”
What if you had to transfer money to somebody in a different part of the globe? As the Roman dominions expanded into Greece, Spain, North Africa, and Asia, Roman finance actually faced this logistical problem. If you’re in Rome and want to, say, finance Caius’ mines in Thapsus, North Africa, how do you get him the money? He needs the silver to buy material, slaves, and other things, but you’re naturally very reluctant to see your money sail away for Africa, as the chances of it getting there aren’t that high (see pirates, shipwrecks, etc.). “Permutatio, the transfer of funds from place to place through paper transactions, was Rome’s great contribution to ancient banking” (Barlow, p. 168). It worked as follows: The publicani were private companies in charge of tax collection in the provinces (as well as many other tasks; see “Publicani,” by U. Malmendier). They had a branch in Rome and one in Thapsus. So, you’d give them the silver in Rome (or transfer them some nomina) and they’d divert some of their tax collection in North Africa to Caius. This is also how the Republic would finance its public spending overseas. Since taxes were collected throughout the provinces, by trading claims on taxes Romans could transfer funds across the globe–or at least to the part of the globe they had conquered.
Interestingly, some historians measure the sophistication of Roman finance “by the extent banks were present” (Temin, p. 719). While it is true that we have no evidence of a 1st Century B.C. Wells Fargo, this may not necessarily imply lack of sophistication. Prior to the Great Recession in the United States, a large chunk of financial intermediation didn’t involve banks–it went through the “shadow banking system.” Roman high finance “functioned primarily on the basis of brokerage” (K. Verboven, “Faeneratores, Negotiatores and Financial Intermediation in the Roman World,” p. 12), and hence was a bit like a proto-shadow banking system, as we suggest in our prior post. Like the shadow-banking system in the United States, it was fragile. Going back to our earlier example, we note that if whomever you want to buy property from starts wondering about the creditworthiness of Sempronius, she will not accept his nomina in payment and will want cash. That’ll force you to call in the loan to Sempronius, who in order to pay you will call in his loan to Titus, and so on. But financial crises in ancient Rome are the subject of a future post.
We are grateful to Cameron Hawkins of the University of Chicago for help navigating the literature.
A search for all articles with “malaria” in the text yields an amazing 33,800 results. Browsing through the headlines is like reading a brief history of the disease as seen through an American lens.
The oldest article is from 1889, a report on a malaria outbreak on the upper Hudson in New York: “An epidemic of a malarial nature is reported from towns along the upper Hudson, one physician in Newburg reporting more than seventy cases under his care. Newburg is famous for its breakneck streets.”
The article is notable because in 1889, very little was known about the disease. Of course, in 2012, we know much, much more, but the challenges (problems in diagnosis, complex and often contradictory observations on ecological factors and socio-economic infection gradients) are the same now as they were then.
“30 INSANE PARETICS CURED BY MALARIA; Long Island College Hospital Reports Marked Success With New Treatment. Thirty patients regarded as hopelessly insane are back at work and leading normal lives after being artificially inoculated with malaria, allowed to suffer chills and fever for two weeks or so and then treated with drugs, according to an announcement yesterday by the Long Island College Hospital.”
I don’t think that anyone really knew what the “paretics” were suffering from, but it was likely syphilis. Malaria was used briefly to treat a variety of neurological disorders caused by infectious agents, with varying degrees of success and failure.
Vaccines have long been “just around the corner,” only to die in sad failure. The most overly optimistic claim came in 1984 from then head of USAID, M. Peter McPherson (who later became President of Michigan State University):
M. Peter McPherson, administrator of the Agency for International Development, said he expected that a vaccine would be ready for trial in humans within 12 to 18 months and widely available throughout the world within five years. ”We think this is a practical schedule,” he told a news conference at the State Department today.
A classic case of overstatement; I’m sure he regretted it to the end of his days. No wonder scientists have to be wishy-washy with their predictions. Statements like this live on in sad perpetuity. We still don’t have a vaccine, and the outlook for having one any time soon isn’t much better now than it was in 1984.
1889 North River Malaria
1925 30 INSANE PARETICS CURED BY MALARIA
1925 WAR ON MALARIA BEGUN BY LEAGUE
1938 MALARIA SCOURGE FOUGHT BY THE TVA
1943 Malaria Problem; Our Knowledge Is Still in an Unsatisfactory State
1944 US HEALTH SERVICE COMBATS MALARIA
1945 New Drugs to Combat Malaria Are Tested in Prisons for Army
1946 CURE FOR MALARIA BARED BY CHEMISTS
1948 NEW DRUGS TO END MALARIA SCOURGE
1951 Army Tests Drug as Malaria Cure; Doses Given Troops
1952 UN GAINS GROUND AGAINST MALARIA
1957 World-Wide Battle On Malaria Mapped
1961 New Malaria Threat Is Studied At Infectious Diseases Center
1965 A ‘NEW’ MALARIA RAGES IN VIETNAM
1966 Leprosy Drug Reduces Malaria Among GI’s
1970 Malaria Up Sharply in Nation; Most Cases Traced to Vietnam
1971 Drug Users Spur Malaria Revival
1974 Prison Official in Illinois Halts Malaria Research on Inmates
1977 Malaria Spreading in Central America as Resistance to Sprays Grows
1984 MALARIA VACCINE IS NEAR, U.S. HEALTH OFFICIALS SAY
1987 Drug Combinations Offer New Hope in Fighting Malaria
1988 Scientists Report Advances In Vaccine Against Malaria
1991 Outwitted by Malaria, Desperate Doctors Seek New Remedies
1991 Hope of Human Malaria Vaccine Is Offered
1993 Mefloquine Is Found Best Against Malaria
1994 Vaccine Cuts Malaria Cases In Africa Test
1995 Vaccine for Malaria Failed in New Test
1996 Tests of Malaria Drug From China Bring Hope and Cautionary Tales
Clearly, it was well known that cigarettes caused cancer and strokes, even back in 1915, despite the tobacco industry’s fight against scientific claims, a fight that carried on well into the ’60s and ’70s.
Zion, apparently, was founded as a Christian oasis in a country fraught with sin by a Mr. John Alexander Dowie. In addition to regular (and popular) faith healings, he was also known for waging a “Prayer Duel” with self-appointed Muslim prophet, Hadhrat Mirza Ghulam Ahmad. Ahmad was a complicated figure himself.
It was said that whoever died first during the duel would be exposed as a fraud. Dowie died of alcoholism a year before Ahmad.
Afghanistan’s health profile could be considered the worst in the entire world. Infant mortality (1.65/10 births) and maternal mortality (1.4/100 births) are high, and life expectancy is short (46 years) (World Bank). After years of warfare and an anti-woman Taliban regime, even the most basic health needs have gone unattended, largely ignored and out of the public discourse.
In 2002, post invasion, the Afghan Ministry of Public Health along with the World Health Organization, UNICEF and United Nations Population Fund established a framework of basic services, which included essential mother-child health care, basic vaccinations, control of TB and malaria, nutrition and basic mental health services. Tuberculosis and malaria (largely vivax) run rampant throughout Afghanistan. Through the proactive efforts of Rural Expansion of Afghanistan’s Community Based Health Care, health care access in Afghanistan has gone from 40 to 77 percent in the past 8 years, but that still leaves more than 7 million people without any access to even the most basic of care. To put it in perspective, this would be equivalent to the entire population of Michigan having no access to any type of health care at all.
While the pictures of Afghanistan we see here are largely from the large population center of Kabul, it is forgotten that Afghanistan is roughly the size of Texas and home to nearly 30 million people. Afghan residents are spread across nearly every quarter of the country and largely lack access to basics such as electricity and schools. As one of the poorest and most inaccessible places on the planet, it is no surprise that the country faces massive internal challenges.
Afghan Health Services
The Afghan MoPH maintains a listing of all Basic Package of Health Services facilities throughout the country and has made the database freely available online. There are nearly 800 facilities spread throughout Afghanistan, consisting of District Hospitals, Basic Health Centers and Mother Child Health Clinics. Kabul has the largest number of facilities at 79. Assuming the 115 District Hospitals accept any Afghan seeking care, the average catchment of an Afghan District Hospital would include nearly 270,000 people. To put this into perspective, Michigan, with a population of approximately 10 million people, has nearly 1,320 hospitals; that’s one hospital per 8,000 people. Accounting for population and potential catchment areas, there are hospitals (likely understaffed and underfunded) which serve more than 1.7 million potential patients (Chahar Burjak Hospital), whereas hospitals near Kabul and Kandahar serve fewer than 200,000 people, still an incredible number when set against the United States.
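The back-of-the-envelope catchment arithmetic above is simple enough to sketch directly, using the population and facility counts quoted in this post:

```python
# Average catchment size per hospital, from the figures quoted in the post.
afghan_population = 30_000_000    # approximate national population
district_hospitals = 115          # District Hospitals in the MoPH listing

michigan_population = 10_000_000  # approximate
michigan_hospitals = 1_320        # count as quoted in the post

afghan_catchment = afghan_population / district_hospitals
michigan_catchment = michigan_population / michigan_hospitals

print(f"Afghan district hospital: ~{afghan_catchment:,.0f} people each")
print(f"Michigan hospital: ~{michigan_catchment:,.0f} people each")
```

The naive average comes out near 261,000 people per Afghan district hospital, consistent with the “nearly 270,000” figure above once uneven population distribution is considered.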
Hospital Catchment Areas and Civilian Casualties
It is doubtful that small local health facilities are equipped to handle seriously injured individuals. Civilians wounded in conflict events must therefore either make their way to a district hospital, hope for the best from the local facility, or do nothing and potentially die. It is thus of interest which facilities potentially serve the largest number of civilian casualties and where they might be located. The map on the right shows the number of civilian casualties as a function of the underlying catchment population. The units in the legend are odd because the catchments are expressed in millions, but the relative color scale is unaffected. Facilities in the southern districts are disproportionately overloaded due to the high number of civilian casualties within their respective catchments.
Geographic Access to Health Services
Hospitals, in addition to being overburdened by the sheer numbers of the surrounding population, are mostly inaccessible to the Afghan population, as the figure on the left confirms. Accounting for elevation, slope, and the rudimentary road system, most of Afghanistan has no access to health services. Much of the country lies 300 or more kilometers from the nearest hospital; in developing country contexts, a distance of 5 km or more is considered a marker of lack of access to health services. As in all developing countries, facility utilization is strongly related to proximity to services (O’Donnell 2007).
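Straight-line (great-circle) distance to the nearest facility is the crudest version of this access measure, and a lower bound on real travel distance once terrain and roads are accounted for. A minimal sketch; the facility names and coordinates are illustrative, not taken from the MoPH database:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_facility(point, facilities):
    """Return (name, distance_km) of the facility closest to a (lat, lon) point."""
    return min(((name, haversine_km(*point, lat, lon))
                for name, (lat, lon) in facilities.items()),
               key=lambda t: t[1])

# Hypothetical facility coordinates for illustration only
facilities = {"Hospital A": (34.52, 69.18), "Hospital B": (31.61, 65.71)}
print(nearest_facility((33.0, 66.0), facilities))
```

A full access surface would repeat this for every populated grid cell, then adjust for slope and road travel time rather than straight-line distance.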
Conflict Events at Health Facilities
Although completely reprehensible, conflict events do occur at health facilities, particularly those located in urban areas. The recent “Afghan War Diary”, unfortunately, confirms that such events not only have occurred, but are fairly commonplace. For the purpose of this analysis, I considered any event within 100 meters of a health facility to be at the facility itself. Summing all the events within the 100 m buffer, I found that anywhere from none to as many as 25 events occurred at a single facility, the maximum at the Hilmand District Hospital. As many as 31 people have died in attacks on health facilities, and as many as 13 have been wounded in events on or directly proximal to a hospital or clinic. It is well worth noting that the largest number of attacks on health facilities occurs not within crowded Kabul, but rather in the rural northern areas.
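The 100 m buffer tally can be sketched as a simple point-in-radius count. This sketch uses planar coordinates in meters (i.e., already-projected GIS data); the facility names, event locations, and counts are made up for illustration:

```python
from collections import Counter

BUFFER_M = 100.0  # events within 100 m are attributed to the facility

def events_at_facilities(facilities, events):
    """Count conflict events falling within BUFFER_M of each facility.

    facilities: {name: (x, y)}, events: [(x, y)], both in meters in a
    common projected coordinate system.
    """
    counts = Counter()
    for ex, ey in events:
        for name, (fx, fy) in facilities.items():
            if ((ex - fx) ** 2 + (ey - fy) ** 2) ** 0.5 <= BUFFER_M:
                counts[name] += 1
    return counts

# Hypothetical example: two facilities 5 km apart, four events
facilities = {"District Hospital": (0.0, 0.0), "Clinic": (5000.0, 0.0)}
events = [(30.0, 40.0), (90.0, 0.0), (5050.0, 0.0), (2500.0, 2500.0)]
print(events_at_facilities(facilities, events))
```

With hundreds of facilities and tens of thousands of events, a spatial index would replace the brute-force inner loop, but the logic is the same.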
Relationship of Distance from Health Facility to Civilian Casualties
Calculating the mean number of casualties per facility by deciles of distance from the conflict event to the health facility, I found that the most casualties occur near facilities. Facilities are often located near infrastructure and market centers, raising the likelihood of civilian casualties should a conflict event occur. Yet this calculation is restricted to actual events. Although the mean difference is only slight, the pattern of decreasing death and injury with distance is striking. However, without data on the distribution of households in relation to health facilities, true effects are difficult to determine.
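The decile calculation amounts to sorting events by distance, cutting them into ten equal-sized groups, and averaging casualties within each group. A minimal sketch on synthetic data that mimics the pattern described above (casualties falling off with distance):

```python
from statistics import mean

def decile_means(records, k=10):
    """Mean casualties by distance decile.

    records: list of (distance_km, casualties) pairs; events are sorted
    by distance and split into k roughly equal groups.
    """
    ordered = sorted(records)
    n = len(ordered)
    out = []
    for i in range(k):
        chunk = ordered[i * n // k:(i + 1) * n // k]
        if chunk:
            out.append(mean(c for _, c in chunk))
    return out

# Synthetic events: casualties decline to zero beyond 10 km
records = [(d, max(0, 10 - d)) for d in range(100)]
print(decile_means(records))
```

On real data, equal-count deciles (rather than equal-width bins) keep each group's mean from being dominated by a handful of events.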
Environmental Determinants of Civilian Casualties
Using available data from GIS sources, such as elevation, distance to water, distance to roadways, and distance to the nearest health facility, I was able to relate the number of wounded civilians in a conflict event to environmental variables. Using a negative binomial model to assess the statistical significance of possible predictive covariates, I found that the best model included only distance to road and distance to the nearest health facility. In fact, both variables required a quadratic term, and both were associated with a sharp decrease in civilian casualties as distance increased. Analyzing the estimated coefficients, I found that civilian deaths were at a minimum roughly 20 km from the nearest health facility and 7 kilometers from the nearest road. Both were at a maximum directly at the facility and at the road. This result is, of course, hardly surprising, as people most often reside close to roads and close to infrastructure. Still, the pattern of these two variables was interesting, and more interesting still was that both retained significance even when included in the same model.
glm.nb(formula = CivilianCasu ~ DisttoHF + DisttoHF2 + DisttoRoad + DisttoRoad2,
data = subx, init.theta = 0.05989177641, link = log)
                                  Estimate  Std. Error  z value   Pr(>|z|)
(Intercept)                     -0.8944109   0.0708122  -12.631    < 2e-16 ***
Distance to health facility     -0.0806837   0.0116815   -6.907   4.95e-12 ***
Distance to health facility^2    0.0023833   0.0003205    7.437   1.03e-13 ***
Distance to road                -7.7975250   2.0432720   -3.816   0.000136 ***
Distance to road^2              24.6081933   9.8147377    2.507   0.012167 *
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
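Because each distance enters the log-linear predictor as a quadratic, b1·d + b2·d², the turning point of predicted casualties can be read directly off the coefficients as -b1/(2·b2). A minimal sketch with illustrative coefficients sharing the fitted model's sign pattern (negative linear term, positive quadratic term, giving an interior minimum); this is not a rerun of the model:

```python
def quadratic_turning_point(b1, b2):
    """Distance at which a quadratic term b1*d + b2*d**2 turns over."""
    return -b1 / (2 * b2)

# Illustrative coefficients only: a negative linear term and a small
# positive quadratic term place the minimum at an interior distance.
print(quadratic_turning_point(-0.08, 0.002))
```

On the scale of the actual fitted coefficients, the same formula yields the minima reported above, subject to the units in which each distance covariate was measured.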
The Afghan health system, already strained by immense public health challenges, suffers under a lack of public funds in a nearly non-existent economy, warfare, a dearth of trained physicians, and the massive populations it must serve. That conflict events resulting in civilian casualties occur in proximity to health facilities is unforgivable, and all parties in this senseless conflict would do well to respect the safety of the Afghan civilian population. While the outlook under the present government is miles above that which existed (or didn’t) under the Taliban, there is still much to do.
O’Donnell, Owen. 2007. Access to health care in developing countries: breaking down demand side barriers. Cadernos de Saúde Pública 23(12): 2820-2834. ISSN 0102-311X. doi:10.1590/S0102-311X2007001200003.
Zwarenstein, M., D. Krige, and B. Wolff. 1991. The use of a geographical information system for hospital catchment area research in Natal/KwaZulu. South African Medical Journal 80: 497-500.