The blog Uneasy Money posted an article this morning claiming that policies which encouraged the production of biofuels were responsible for the wild run-up in commodity prices throughout the 2000s and were ultimately responsible for the 2007/2008 crash.
The post refers to an article in the Journal of Economic Perspectives, which I am reading now but the results of which are summed up here:
the research of Wright et al. shows definitively that the runup in commodities prices after 2005 was driven by a concerted policy of intervention in commodities markets, with the fervent support of many faux free-market conservatives serving the interests of big donors, aimed at substituting biofuels for fossil fuels by mandating the use of biofuels like ethanol.
What does this have to do with the financial crisis of 2008? Simple. The Federal Open Market Committee, after reducing its Fed Funds target rate to 2% in March 2008 in the early stages of the downturn that started in December 2007, refused for seven months to further reduce the Fed Funds target because the Fed was disregarding, or unaware of, a rapidly worsening contraction in output and employment in the third quarter of 2008. Why did the Fed ignore or overlook a rapidly worsening economy for most of 2008 — even for three full weeks after the Lehman debacle? Because the Fed was focused like a laser on rapidly rising commodities prices, fearing that inflation expectations were about to become unanchored – even as inflation expectations were collapsing in the summer of 2008. But now, thanks to Wright et al., we know that rising commodities prices had nothing to do with monetary policy, but were caused by an ethanol mandate that enjoyed the bipartisan support of the Bush administration, Congressional Democrats and Congressional Republicans. Ah the joy of bipartisanship.
So then, what I’m gathering here is that the Fed was obsessed with commodity prices, fearing inflation, despite being in no position to influence commodities markets. This distracted the Fed from the real causes of the crash and the Lehman disaster, making a bad situation worse.
I’m not sure that this correctly connects the dots, given that there is little evidence that the run-up in commodity prices had anything to do with biofuels. Even as biofuel consumption increased throughout the 2000s, overall production of corn and yield per acre also increased. Assuming that commodity prices are in part dictated by supply, I would (from an armchair-economist perspective) expect prices to have remained somewhat constant.
I’m interested to see that the article dismisses the financialization of commodities (following a loosening of rules on speculation in agricultural products in the 1990s and the move toward commodities after the equity bust of 2000) as a major factor in the rise in corn prices. This is particularly strange when we consider that non-energy commodities also exhibited rapid price increases and violent fluctuations throughout the 2000s. I fail to see how energy policy could produce increases and volatility in, for example, copper.
It’s a tempting thesis, and made more tempting by the explicit identification of individuals who suggested and implemented such policy, but not one borne out by the data, in my limited, amateurish opinion. The list of potential factors influencing the run-up in commodities is a long and confusing one (climate change, increased demand from China and India, global instability, etc.), but I don’t think that the effect of Wall Street greed can be discounted as a major determinant. Interestingly, despite the overall themes of the paper, the author does a poor job of discounting the effect of financialization in the creation of commodity price bubbles.
In reading this paper now, I’m somewhat confused. On the one hand, the author confirms many of my initial suspicions that the rising price of food was unrelated to supply and demand factors, as growth of both supply and demand was more or less constant, despite localized climate shocks. On the other, he seems to attribute the rise in prices during the crash to a shift in energy policy toward biofuels, while overlooking that commodities were already volatile and rising, beginning with the crash of the tech bubble in 2000. I am thinking that much of the rise in commodities during 2007/8 was due to panicky speculation as real estate markets tumbled, not to any change in energy policy. Certainly, it may be the case that the policy influenced traders to try to exploit potential areas of growth, but it’s hard, then, to discount the effect of financial speculation in commodities outright.
I can at least agree with this:
The rises in food prices since 2004 have generated huge wealth transfers to global landholders, agricultural input suppliers, and biofuels producers. The losers have been net consumers of food, including large numbers of the world’s poorest peoples.
It’s an old paper, but I just came across The Colonial Origins of Comparative Development: An Empirical Investigation by Daron Acemoglu, Simon Johnson and James A. Robinson, originally published in The American Economic Review back in 2001.
They take rough data on settler deaths in the seventeenth and eighteenth centuries and plot them against the 1995 GDP of several countries. I’ve included the plot on the right. What they found was that a higher number of European settler deaths was associated with a long-term decline in economic output.
Settling in the seventeenth and eighteenth centuries was a dangerous business, particularly in Sub-Saharan Africa and less so in what is now the United States, New Zealand and Australia. Malaria and yellow fever were responsible for killing up to 100% of groups brave enough to attempt the journey.
Acemoglu, et al.’s argument is as follows:
1. There were different types of colonization policies which created different sets of institutions. At one extreme, European powers set up “extractive states,” exemplified by the Belgian colonization of the Congo. These institutions did not introduce much protection for private property, nor did they provide checks and balances against government expropriation. In fact, the main purpose of the extractive state was to transfer as much of the resources of the colony as possible to the colonizer. At the other extreme, many Europeans migrated and settled in a number of colonies, creating what the historian Alfred Crosby (1986) calls “Neo-Europes.” The settlers tried to replicate European institutions, with strong emphasis on private property and checks against government power. Primary examples of this include Australia, New Zealand, Canada, and the United States.
2. The colonization strategy was influenced by the feasibility of settlements. In places where the disease environment was not favorable to European settlement, the cards were stacked against the creation of Neo-Europes, and the formation of the extractive state was more likely.
3. The colonial state and institutions persisted even after independence.
They argue that the disease environment determined the nature of settlements, which determined the nature of institutions, which, in turn, determined the economic trajectory of a country.
Interestingly, they control for all of the things that one might control for, such as distance from the equator, the percentage of inhabitants who were European, whether the country is landlocked, and the identity of the ruling power, ruling out the effect of some obvious potential influences. Property rights, a solid judiciary and limits on political power in the colonies and upon independence, they argue, had a greater effect on long-term GDP, and the development of those institutions was enabled or inhibited by early settler mortality.
It’s a fairly compelling argument, though not without its critics.
A few gems from the paper interested me. One: the return on investment in the British colonies during the nineteenth century was a whopping 25%, far more than one could have expected domestically. By the late 19th and early 20th centuries, this had dropped until returns on colonial and domestic investments were the same.
I found (finally!) a reference to indicate the willful choosing of high-altitude, and thus less malarious, areas for colonial settlements. Note that in Europe and the US, cities are often located along rivers and seasides, whereas in Africa large cities tend to be placed inland (with some exceptions). There has been no industrial revolution in Africa and little regional trade (a condition which persists to this day), so cities along water-based shipping routes were not necessary. Extraction in Africa was largely done by rail, further alleviating the need to be close to rivers.
Though I’ve ripped this off the Big Picture blog (which my good friend Chris introduced me to), I’ll repeat it again here (since it was ripped off the Fed of New York anyway).
I never considered the problem of having to physically move money (read: metal coins) around to make foreign investments. Moving it would be an incredible risk, as it would likely be stolen along the way. Turns out, you could just pay the money to the publicani in Rome, and they would deduct the amount you wanted to transfer from their tax collection in whatever region it was going to.
This is worth the read. I promise I’ll write something of substance after I’m done dissertating.
Historical Echoes: Cash or Credit? Payments and Finance in Ancient Rome
Marco Del Negro and Mary Tao
Imagine yourself a Roman citizen in the 1st Century B.C. You’ve gone shopping with your partner, who’s trying to convince you to buy a particular item. The thing’s pretty expensive, and you demur because you’re short of cash. You may think that back then such an excuse would get you off scot-free. What else can you possibly do: Write a check? Well, yes, writes the poet Ovid in his “Ars Amatoria, Book I.” And since your partner knows it, you have no way out (the example below shows some gender bias on Ovid’s part. Fortunately, a few things have changed over the past 2,000 years):
But when she has her purchase in her eye,
She hugs thee close, and kisses thee to buy;
“’Tis what I want, and ‘tis a pen’orth too;
In many years I will not trouble you.”
If you complain you have no ready coin,
No matter, ‘tis but writing of a line;
A little bill, not to be paid at sight:
(Now curse the time when thou wert taught to write.)
In a previous Historical Echoes post, we describe some of the characters in early Roman high and low finance. Here, we look at their modus operandi.
Large sums of money changed hands in Roman times. People bought real estate, financed trade, and invested in the provinces occupied by the Roman legions. How did that happen? Cicero writes, in Epistulae ad Familiares 5.6 and Epistulae ad Atticum 13.31, respectively: “I have bought that very house for 3.5 million sesterces” and “Gaius Albanius is the nearest neighbor: he bought 1,000 iugera [625 acres] of M. Pilius, as far as I can remember, for 11.5 million sesterces.” How? asks historian W. V. Harris (in “The Nature of Roman Money”)–“mechanically speaking, did Cicero pay the three and a half million sesterces he laid out for his famous house in the Palatine . . . . That would have meant packing and carrying some three and a half tons of coins through the streets of Rome. When C. Albanius bought an estate from C. Pilius for eleven and a half million sesterces, did he physically send the sum in silver coins?” Harris’ answer is: “Without much doubt, these were at least for the most part documentary [i.e., paper] transactions. The commonest procedure for large property purchases in this period was the one casually alluded to by Cicero [De Officiis 3.59] . . . ‘nomina facit, negotium conficit’ . . . provides the credit [or ‘bonds’–nomina], completes the purchase.”
What exactly are these nomina?–from which, by the way, comes the term “nominal,” so commonly used in economics. In his Ph.D. dissertation “Bankers, Moneylenders, and Interest Rates in the Roman Republic,” C. T. Barlow writes (pp. 156-7): “An entry in an account book was called a nomen. Originally the word meant just that–a name with some numbers attached. By Cicero’s day . . . [n]omen could also mean “debt,” referring to the entries in the creditor’s and the debtor’s account books.” And this “debt was in fact the lifeblood of the Roman economy, at all levels . . . nomina were a completely standard part of the lives of people of property, as well as being an everyday fact of life for a great number of others” (Harris, p. 184). Pliny the Younger writes, for example, (in Epistulae 3.19): “Perhaps you will ask whether I can raise these three millions without difficulty. Well, nearly all my capital is invested in land, but I have some money out at interest and I can borrow without any trouble.”
For concreteness, say that some fellow, Sempronius, owes you one million sesterces. You–or in case you’re a wealthy senator, or eques, your financial advisor (procurator–Titus Pomponius Atticus was Cicero’s)–would record the debt in the ledger. What if you suddenly needed the money to buy some property? Do you have to wait for Sempronius to bring you a bag with 1 million sesterces? No! As long as Sempronius is a worthy creditor (a bonum nomen [see Barlow, p. 156]; in the modern parlance of credit rating agencies, a triple-A creditor), you’d do what Cicero says: transfer the nomina, strike the deal. For example, Cicero writes to his financial advisor Atticus (Ad Atticum 12.31): “If I were to sell my claim on Faberius, I don’t doubt my being able to settle for the grounds of Silius even by a ready money payment.” As Harris (p. 192) observes: “Nomina were transferable, and by the second century B.C., if not earlier, were routinely used as a means of payment for other assets . . . . The Latin term for the procedure by which the payer transferred a nomen that was owed to him to the seller was delegatio.”
So, we’ve seen that Romans could settle payments by transferring nomina. But was there a market for nomina, just like there’s one today in, say, mortgage-backed securities? According to both Barlow and Harris, the answer is yes. They claim that the Romans took the transferability one step further and essentially turned “mere entries in account books” into “negotiable notes” (see Barlow, p. 159, and Harris, p. 192). Not everyone agrees. The economic historian P. Temin (“Financial Intermediation in the Early Roman Empire”) also reports evidence of assignability of loans, opening the possibility of “wider negotiability, but,” he adds, “we do not have any evidence that it happened” (p. 721). Yet some indirect evidence is there. For instance, the idea of negotiable notes appears to be well understood by Roman jurists, such as Ulpian (The Digest of Justinian XXX.I.44): “A party who bequeaths a note bequeaths the claim and not merely the material on which the writing appears. This is proved by a sale, for when a note is sold, the debt by which it is evidenced is also considered to be sold.”
What if you had to transfer money to somebody in a different part of the globe? As the Roman dominions expanded into Greece, Spain, North Africa, and Asia, Roman finance actually faced this logistical problem. If you’re in Rome and want to, say, finance Caius’ mines in Thapsus, North Africa, how do you get him the money? He needs the silver to buy material, slaves, and other things, but you’re naturally very reluctant to see your money sail away for Africa, as the chances of it getting there aren’t that high (see pirates, shipwrecks, etc.). “Permutatio, the transfer of funds from place to place through paper transactions, was Rome’s great contribution to ancient banking” (Barlow, p. 168). It worked as follows: The publicani were private companies in charge of tax collection in the provinces (as well as many other tasks; see “Publicani,” by U. Malmendier). They had a branch in Rome and one in Thapsus. So, you’d give them the silver in Rome (or transfer them some nomina) and they’d divert some of their tax collection in North Africa to Caius. This is also how the Republic would finance its public spending overseas. Since taxes were collected throughout the provinces, by trading claims on taxes Romans could transfer funds across the globe–or at least to the part of the globe they had conquered.
Interestingly, some historians measure the sophistication of Roman finance “by the extent banks were present” (Temin, p. 719). While it is true that we have no evidence of a 1st Century B.C. Wells Fargo, this may not necessarily imply lack of sophistication. Prior to the Great Recession in the United States, a large chunk of financial intermediation didn’t involve banks–it went through the “shadow banking system.” Roman high finance “functioned primarily on the basis of brokerage” (K. Verboven, “Faeneratores, Negotiatores and Financial Intermediation in the Roman World,” p. 12), and hence was a bit like a proto-shadow banking system, as we suggest in our prior post. Like the shadow-banking system in the United States, it was fragile. Going back to our earlier example, we note that if whoever you want to buy property from starts wondering about the creditworthiness of Sempronius, she will not accept his nomina in payment and will want cash. That’ll force you to call in the loan to Sempronius, who in order to pay you will call in his loan to Titus, and so on. But financial crises in ancient Rome are the subject of a future post.
We are grateful to Cameron Hawkins of the University of Chicago for help navigating the literature.
A search of the New York Times archive for all articles with “malaria” in the text yields an amazing 33,800 results. Browsing through the headlines is like reading a brief history of the disease as seen through an American lens.
The oldest article is from 1889, a report on a malaria outbreak on the upper Hudson in New York: “An epidemic of a malarial nature is reported from towns along the upper Hudson, one physician in Newburg reporting more than seventy cases under his care. Newburg is famous for its breakneck streets.”
The article is notable because in 1889, very little was known about the disease. Of course, in 2012, we know much, much more, but the challenges (problems in diagnosis, complex and often contradictory observations on ecological factors and socio-economic infection gradients) are the same now as they were then.
“30 INSANE PARETICS CURED BY MALARIA; Long Island College Hospital Reports Marked Success With New Treatment. Thirty patients regarded as hopelessly insane are back at work and leading normal lives after being artificially inoculated with malaria, allowed to suffer chills and fever for two weeks or so and then treated with drugs, according to an announcement yesterday by the Long Island College Hospital.”
I don’t think that anyone really knew what the “paretics” were suffering from, but it was likely syphilis. Malaria was used briefly to treat a variety of neurological disorders caused by infectious agents, with varying degrees of success and failure.
Vaccines have long been “just around the corner,” only to die in sad failure. The most overly optimistic claim came in 1984 from then head of USAID, M. Peter McPherson (who later became President of Michigan State University):
M. Peter McPherson, administrator of the Agency for International Development, said he expected that a vaccine would be ready for trial in humans within 12 to 18 months and widely available throughout the world within five years. ”We think this is a practical schedule,” he told a news conference at the State Department today.
A classic case of overstatement; I’m sure that he regrets this event to this day. No wonder scientists have to be wishy-washy with their predictions. Statements like this live in sad perpetuity. We still don’t have a vaccine, and the outlook for having one any time soon isn’t much better now than it was in 1984.
1889 North River Malaria
1925 30 INSANE PARETICS CURED BY MALARIA
1925 WAR ON MALARIA BEGUN BY LEAGUE
1938 MALARIA SCOURGE FOUGHT BY THE TVA
1943 Malaria Problem; Our Knowledge Is Still in an Unsatisfactory State
1944 U.S. HEALTH SERVICE COMBATS MALARIA
1945 New Drugs to Combat Malaria Are Tested in Prisons for Army
1946 CURE FOR MALARIA BARED BY CHEMISTS
1948 NEW DRUGS TO END MALARIA SCOURGE
1951 Army Tests Drug as Malaria Cure; Doses Given Troops
1952 U.N. GAINS GROUND AGAINST MALARIA
1957 World-Wide Battle On Malaria Mapped
1961 New Malaria Threat Is Studied At Infectious Diseases Center
1965 A ‘NEW’ MALARIA RAGES IN VIETNAM
1966 Leprosy Drug Reduces Malaria Among GI’s
1970 Malaria Up Sharply in Nation; Most Cases Traced to Vietnam
1971 Drug Users Spur Malaria Revival
1974 Prison Official in Illinois Halts Malaria Research on Inmates
1977 Malaria Spreading in Central America as Resistance to Sprays Grows
1984 MALARIA VACCINE IS NEAR, U.S. HEALTH OFFICIALS SAY
1987 Drug Combinations Offer New Hope in Fighting Malaria
1988 Scientists Report Advances In Vaccine Against Malaria
1991 Outwitted by Malaria, Desperate Doctors Seek New Remedies
1991 Hope of Human Malaria Vaccine Is Offered
1993 Mefloquine Is Found Best Against Malaria
1994 Vaccine Cuts Malaria Cases In Africa Test
1995 Vaccine for Malaria Failed in New Test
1996 Tests of Malaria Drug From China Bring Hope and Cautionary Tales
Clearly, it was well known that cigarettes caused cancer and strokes even back in 1915, despite the tobacco industry’s fight against scientific claims that carried well into the ’60s and ’70s.
Zion, apparently, was founded by a Mr. John Alexander Dowie as a Christian oasis in a country fraught with sin. In addition to regular (and popular) faith healings, he was also known for waging a “Prayer Duel” with the self-appointed Muslim prophet Hadhrat Mirza Ghulam Ahmad. Ahmad was a complicated figure himself.
It was said that whichever of the two died first during the duel would be exposed as a fraud. Dowie died of alcoholism a year before Ahmad.
Afghanistan’s health profile could be considered the worst in the entire world. Infant mortality (roughly 165 per 1,000 live births) and maternal mortality (roughly 1,400 per 100,000 live births) are high, and life expectancy is short (46 years) (World Bank). After years of warfare and an anti-woman Taliban regime, even the most basic health needs have gone unattended, largely ignored and out of the public discourse.
In 2002, post invasion, the Afghan Ministry of Public Health along with the World Health Organization, UNICEF and United Nations Population Fund established a framework of basic services, which included essential mother-child health care, basic vaccinations, control of TB and malaria, nutrition and basic mental health services. Tuberculosis and malaria (largely vivax) run rampant throughout Afghanistan. Through the proactive efforts of Rural Expansion of Afghanistan’s Community Based Health Care, health care access in Afghanistan has gone from 40 to 77 percent in the past 8 years, but that still leaves more than 7 million people without any access to even the most basic of care. To put it in perspective, this would be equivalent to the entire population of Michigan having no access to any type of health care at all.
While the pictures we see of Afghanistan here are largely from the large population center of Kabul, it is forgotten that Afghanistan is roughly the size of Texas and is home to nearly 30 million people. Afghan residents are spread through nearly every quarter of the country and largely have little access to basics such as electricity and schools. As one of the poorest and most inaccessible places on the planet, it is no surprise that the country has massive internal challenges to surpass.
Afghan Health Services
The Afghan MoPH maintains a listing of all Basic Package of Health Services facilities throughout the country and has made the database freely available online. There are nearly 800 facilities spread throughout Afghanistan, consisting of District Hospitals, Basic Health Centers and Mother Child Health Clinics. Kabul has the largest number of facilities at 79. Assuming the 115 District Hospitals accept any Afghan seeking care, the average catchment of an Afghan District Hospital would include nearly 270,000 people. To put this into perspective, Michigan, with a population of approximately 10 million people, has nearly 1,320 hospitals; that’s one hospital for every 8,000 people. Accounting for population and potential catchment areas, there are hospitals (likely understaffed and underfunded) which serve more than 1.7 million potential patients (Chahar Burjak Hospital), whereas hospitals near Kabul and Kandahar serve fewer than 200,000 people, still an incredible number when placed against the United States.
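The catchment arithmetic above is back-of-envelope division; it can be reproduced in a few lines (all figures are the approximations quoted in this post, not official statistics):

```python
# Approximate figures as quoted in the post, not official statistics.
afghan_population = 30_000_000
district_hospitals = 115
michigan_population = 10_000_000
michigan_hospitals = 1_320

# Average catchment, assuming any Afghan may seek care at any
# District Hospital: roughly 261,000, i.e. "nearly 270,000".
afghan_catchment = afghan_population / district_hospitals

# Michigan comparison: roughly 7,600, i.e. "one hospital for 8,000".
michigan_catchment = michigan_population / michigan_hospitals
```

The real catchments are of course far more uneven than the average suggests, which is the point of the Chahar Burjak versus Kabul comparison.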
Hospital Catchment Areas and Civilian Casualties
It is doubtful that small local health facilities are equipped to handle seriously injured individuals. Thus, civilians wounded in conflict events must either make their way to a district hospital, hope for the best from the local facility, or do nothing and potentially die. It would therefore be of interest to know which facilities potentially serve the largest number of civilian casualties and where they might be located. The map on the right shows the number of civilian casualties as a function of the underlying catchment population. The units in the legend are odd because the catchments are measured in millions, but the relative color scale is unaffected. Facilities in the southern districts are disproportionately overloaded due to the high number of civilian casualties within their respective catchments.
Geographic Access to Health Services
Hospitals, in addition to being overburdened by the sheer size of the surrounding population, are mostly inaccessible to the Afghan population, as the figure on the left confirms. Accounting for elevation, slope and the rudimentary road system, most of Afghanistan has no access to health services; many areas lie 300 or more kilometers from the nearest hospital. In developing-country contexts, a distance of 5 km or more is considered a marker of lack of access to health services. As in all developing countries, facility utilization is strongly related to proximity to services (O’Donnell 2007).
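Travel-cost surfaces of the kind described here (terrain and roads folded into a per-cell traversal cost, then a shortest path to the nearest facility) are typically computed with a Dijkstra pass over a cost raster. A minimal sketch, with a made-up cost grid standing in for the real elevation/slope/road data:

```python
import heapq

def cost_distance(cost, sources):
    """Dijkstra over a raster of per-cell traversal costs.

    cost    -- 2D list of positive per-cell costs (in practice derived
               from slope and road coverage; values here are invented)
    sources -- list of (row, col) cells containing health facilities
    Returns a grid of minimum cumulative cost to the nearest facility.
    """
    rows, cols = len(cost), len(cost[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    pq = []
    for r, c in sources:
        dist[r][c] = 0.0
        heapq.heappush(pq, (0.0, r, c))
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist[r][c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]  # cost to step into neighbor
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(pq, (nd, nr, nc))
    return dist
```

A real analysis would run this over a country-sized raster in a GIS package rather than pure Python, but the accumulated-cost idea is the same.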
Conflict Events at Health Facilities
Although completely reprehensible, conflict events do occur at health facilities, particularly those located in urban areas. The recent “Afghan War Diary”, unfortunately, confirms that they not only have occurred, but are fairly commonplace. For the purpose of this analysis, I considered any event within 100 meters of a health facility to be at the facility itself. Summing over all the events within the 100m buffer, I found that facilities experienced anywhere from zero to as many as 25 events, the maximum occurring at the Hilmand District Hospital. As many as 31 people have died in attacks on health facilities, and as many as 13 have been wounded in events on or directly proximal to a hospital or clinic. It is well worth noting that the largest numbers of attacks on health facilities occur not within crowded Kabul, but rather in the rural northern areas.
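The 100-meter buffer rule can be implemented with a plain great-circle distance check. A sketch, with invented coordinates (the real facility and event locations come from the MoPH database and the War Diary records):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000 * asin(sqrt(a))

def events_at_facilities(facilities, events, buffer_m=100):
    """Count conflict events falling within buffer_m of each facility.

    facilities -- dict name -> (lat, lon)
    events     -- list of (lat, lon) conflict-event locations
    """
    counts = {name: 0 for name in facilities}
    for ev_lat, ev_lon in events:
        for name, (f_lat, f_lon) in facilities.items():
            if haversine_m(ev_lat, ev_lon, f_lat, f_lon) <= buffer_m:
                counts[name] += 1
    return counts
```

For thousands of events a spatial index would replace the all-pairs loop, but the buffer logic is the same.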
Relationship of Distance from Health Facility to Civilian Casualties
Calculating the mean number of casualties per facility by deciles of distance from the conflict event to the nearest health facility, I found that the most casualties occur near facilities. Facilities are often located near infrastructure and market centers, raising the likelihood of civilian casualties should a conflict event occur. Yet this calculation is restricted to actual events. Although the mean difference is only slight, the pattern of decreasing death and injury with distance is striking. However, without data on the distribution of households in relation to health facilities, true effects are difficult to determine.
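The decile calculation is straightforward to sketch; the parallel lists of distances and casualty counts would come from the conflict-event records (everything below is illustrative):

```python
def mean_by_decile(distances, casualties):
    """Mean casualties per event, binned by deciles of distance
    to the nearest health facility.

    distances, casualties -- parallel lists, one entry per event
    Returns a list of 10 means, nearest decile first.
    """
    order = sorted(range(len(distances)), key=lambda i: distances[i])
    n = len(order)
    means = []
    for d in range(10):
        bin_idx = order[d * n // 10:(d + 1) * n // 10]
        if bin_idx:
            means.append(sum(casualties[i] for i in bin_idx) / len(bin_idx))
        else:
            means.append(float("nan"))
    return means
```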
Environmental Determinants of Civilian Casualties
Using available GIS data, such as elevation, distance to water, distance to roadways and distance to the nearest health facility, I was able to relate the number of wounded civilians in a conflict event to environmental variables. Using a negative binomial model to determine the statistical significance of possible predictive covariates, I found that the best model included only distance to road and distance to the nearest health facility. In fact, both variables required a quadratic term, and both distance to road and distance to health facility were associated with a sharp decrease in civilian casualties as distance increased. Analyzing the estimated coefficients, I found that civilian deaths were at a minimum at 20 km from the nearest health facility and 7 kilometers from the nearest road. Both were at a maximum directly at the facility and directly at the road. This result is, of course, hardly surprising, as people most often reside close to roads and close to infrastructure. Still, the pattern of these two variables was interesting, and more interesting was that both retained significance even when included in the same model.
glm.nb(formula = CivilianCasu ~ DisttoHF + DisttoHF2 + DisttoRoad + DisttoRoad2,
data = subx, init.theta = 0.05989177641, link = log)
                                  Estimate   Std. Error   z value   Pr(>|z|)
(Intercept)                     -0.8944109    0.0708122   -12.631    < 2e-16 ***
Distance to health facility     -0.0806837    0.0116815    -6.907   4.95e-12 ***
Distance to health facility^2    0.0023833    0.0003205     7.437   1.03e-13 ***
Distance to Road                -7.7975250    2.0432720    -3.816   0.000136 ***
Distance to Road^2              24.6081933    9.8147377     2.507   0.012167 *

Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
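Since the fitted linear predictor contains b1*d + b2*d^2 for each distance, the turning point sits at d = -b1/(2*b2). A quick check with the facility coefficients from the fitted model (assuming DisttoHF is measured in kilometers) lands in the same neighborhood as the minimum quoted above:

```python
# Coefficients from the fitted negative binomial model above.
b1 = -0.0806837   # distance to health facility
b2 = 0.0023833    # distance to health facility, squared

# Vertex of the quadratic b1*d + b2*d^2: the distance at which
# the fitted casualty count bottoms out (roughly 17 km here).
vertex = -b1 / (2 * b2)
```

The road coefficients can be treated the same way, though their scale depends on the units DisttoRoad was recorded in.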
The Afghan health system, already strained to the gills with immense public health challenges, suffers under the brunt of a lack of public funds in a non-existent economy, warfare, a dearth of trained physicians and the massive populations which it must serve. That conflict events which result in civilian casualties occur in proximity to health facilities is unforgivable, and all parties in this senseless conflict would do well to respect the safety of the Afghan civilian population. While the outlook under the present government is miles above that which existed (or didn’t) under the Taliban, there is still much to do.
O’Donnell, Owen. Access to health care in developing countries: breaking down demand side barriers. Cad. Saúde Pública 2007, vol. 23, n. 12, pp. 2820-2834. ISSN 0102-311X. doi:10.1590/S0102-311X2007001200003.
Zwarenstein, M., D. Krige, and B. Wolff. 1991. The use of a geographical information system for hospital catchment area research in Natal/KwaZulu. South African Medical Journal 80: 497-500.
Using STIS (Space Time Intelligence System) from TerraSeer, I was able to make this animated movie of all US bombing events in Laos from 1965-1973. Dots are sized proportional to the total pounds of explosives dropped.
Note what happens when you get to about 1970.
During the Vietnam War, the US spread combat operations to neighboring Laos, secretly waging widespread bombing runs on nearly every corner of the country, as illustrated by the map on the left. Laos experienced more than 30,000 casualties during the bombings, more than 20,000 people have died since bombing ceased in 1974 due to leftover unexploded munitions, and many more tens of thousands were needlessly displaced. A UN report notes that Laos is, per capita, the most bombed country on the planet, with 0.84 tons of explosives dropped per person from 1965 to 1974.
The true extent of the carnage was not known until Clinton declassified military records for the entire Vietnam War. The US military keeps meticulous records of all combat operations, recording the date, precise location, type and number of aircraft and total pounds of explosives dropped. The Defense Security Cooperation Agency’s Office of Humanitarian Demining has been working with the Laotian government to assist in the clean up of leftover landmines and unexploded ordnance. It is estimated that it may take up to 3000 years to clean up all unexploded ordnance in Laos alone.
The U.S. government spent nearly 17 million dollars every single day to bomb Laos. What it has spent on the cleanup is, as of yet, a pittance (2.7 million a year), and the State Department has reduced even this amount for 2011. Over 280 million bombs were dropped on Laos; it’s estimated that up to 80 million of them never exploded.
It is through a Laotian demining group that I was able to get hold of this data set.
The Pattern of Bombing
The United States bombed Laos, a country it was not even at war with, almost daily for nine years. Out of 2,858 total days, the United States Air Force bombed Laos on 2,290. Even the Air Force gets weekends and holidays off. Things got really intense in 1968-70, the era of Nixon’s secret bombing campaigns (Operation Menu in Cambodia and parallel operations over Laos), and then spiked again just before the Vietnam War ended.
The military, as in Afghanistan and Iraq, followed seasonal bombing patterns, peaking in summer and falling back during the Christmas season. A time series decomposition confirms an overall peak in 1969-70, but while the number of bombing runs peaked then, the intensity per run only grew afterward. As larger and larger planes came into the fold (such as the B-52) and smaller craft such as the A-1s were phased out in favor of the F-4s, the US military became more efficient in its bombing runs, able to drop more tonnage of explosives using fewer aircraft. (It’s incredible what you can learn from data.)
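The decomposition itself is routine. A pared-down version of the classical additive approach (trend as a centered 12-month moving average, seasonal as the average deviation from trend per calendar month), run here on made-up monthly sortie counts with a built-in summer peak rather than the real series:

```python
import numpy as np
import pandas as pd

# Made-up monthly sortie counts standing in for the real series:
# a slow upward trend plus a seasonal swing peaking mid-year (July).
idx = pd.date_range("1965-01", periods=108, freq="MS")
counts = pd.Series(
    100 + np.arange(108) * 0.5
    + 30 * np.sin(2 * np.pi * (idx.month - 4) / 12),
    index=idx,
)

# Classical additive decomposition:
# trend = centered 12-month moving average,
# seasonal = average deviation from trend for each calendar month.
trend = counts.rolling(12, center=True).mean()
detrended = counts - trend
seasonal = detrended.groupby(detrended.index.month).mean()

peak_month = int(seasonal.idxmax())  # lands on the built-in July peak
```

On the real bombing data, the `seasonal` series is what exposes the summer peak and Christmas lull, while `trend` shows the 1969-70 crest.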
The Spatial Distribution of Bombing
The United States bombed nearly every quarter of Laos, but some areas were hit worse than others, in particular the eastern end of southern Laos and the area around the province of Xieng Khouang. Areas along the Thai and Cambodian borders suffered less bombing but probably experienced the largest influx of refugees.
Relative to its population, Xieng Khouang had the largest tonnage of explosives per person dropped on it, followed by the southernmost province, Attapu. Bombing runs were not uniformly spread across provinces but appear to have targeted specific areas in terms of overall tonnage dropped. There appear to be specific hot spots in the south, which could represent any number of things, none of which are in this data set.
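The per-capita ranking is a one-liner once tonnage and population share a province index. A sketch with pandas (all figures here are illustrative placeholders, not the real declassified totals or Lao census numbers):

```python
import pandas as pd

# Illustrative numbers only; real values come from the declassified
# bombing records aggregated by province and from Lao census data.
tonnage = pd.DataFrame({
    "province": ["Xieng Khouang", "Attapu", "Vientiane"],
    "tons": [450_000, 120_000, 30_000],
})
population = pd.Series(
    {"Xieng Khouang": 200_000, "Attapu": 90_000, "Vientiane": 600_000},
    name="pop",
)

# Align on province and divide: tons of explosives per resident.
per_capita = (
    tonnage.set_index("province")["tons"] / population
).sort_values(ascending=False)
```

The division aligns the two series on their province index, so provinces missing from either table would simply come out as NaN rather than silently mismatching.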
The Vietnam War is widely perceived as an incredible policy blunder. That the American government was unwilling to cut its losses and stop early was not only a sign of incredible American arrogance; it resulted in decades of ruined economies, loss of life, and a series of disastrous Southeast Asian governments, not the least of which was the brutal regime of the Khmer Rouge. This data set, while historically important, should also serve as a reminder of things to come as the aftermath of the invasion of Iraq comes to the fore. It’s unfortunate that while the Vietnam War is a part of the daily lives of all Laotians, it rarely registers on the radar of the average American, and when it does, it’s considered a problem exclusive to those who served. While the effects of the war on those who fought in Vietnam cannot be overstated, the incredible burden that generations of Laotians will bear cannot be forgotten.
Knowing that we were not at war with Laos, the most troubling part of this data set is the incredible monetary expense of the operation: 17 million dollars per day. More than 4 million tons of explosives were dropped on Laos, all of them supplied by private contractors such as McDonnell Douglas. I could imagine (although I have no evidence) that the bombing campaigns were less strategic than corrupt, a dangerous collusion of profit and policy. The secrecy surrounding the bombings makes me all the more suspicious. The connections between defense contractors and actions in the Vietnam War, and the possibility that the war was extended by those with monetary interests, are well worth pursuing. Investigations into the mistakes of Vietnam could go far to inform present-day discussions of the merits and demerits of entering long-term conflicts. Of course, in the case of Iraq, the milk has already been spilled.
War is devastating for the US economy in the long term: government spending that could be invested in infrastructure and social development projects is diverted to support an endless war effort. In the short term, however, defense contractors and those involved in defense manufacturing profit. It has been suggested that so many workers during the Vietnam War depended on defense-related manufacturing that Reagan’s promises of expanded defense spending helped usher him into office. While other manufacturing jobs may trickle overseas, defense manufacturing must remain in the United States. This creates an internal economy dependent on endless war around the world, supported by people who don’t have to fight it. Remember the incredible uproar over the cancellation of the F-22?
I don’t know where I stand on Chomsky beyond thinking that he has interesting opinions, but I found this clip intriguing. It would be worthwhile to know whether his claims can be verified:
NRA, “National Survey of UXO Victims and Accidents, Phase 1,” Vientiane, undated but 2009, p. 39.
Between 1513 and 1867, more than 10 million people were brought to the Americas as slaves. It’s a miserable chapter in human history, one that nonetheless played a disgustingly key role in the creation of the United States. Researchers at Emory University have gathered records from more than 35,000 slavery voyages and created slavevoyages.org, a research tool that allows you not only to download their incredible data set but also to create your own reports and visualizations.
Of course, it is estimated that between 12 and 27 million people live in slavery in 2010. In absolute numbers, that’s more than at any time in human history. More than 1.3 million children are trafficked every year, and more than 300 million people, mostly women and children, while not considered slaves in the traditional sense, work under conditions of forced labor. We have much to do.