I was just reading this column in a special section of the NYT by Maano Ramutsindela, a geographer at the University of Cape Town.
The partitioning of Africa by European empires has had devastating social, economic, political and psychological impacts, and millions of lives have been lost in post-independence Africa defending colonial borders. We are overdue for an African renaissance, completing the decolonization – which remains unfinished business until boundaries are changed.
His piece is mostly about the issue of parks, but the following came to mind.
1. Perhaps the author does not realize that millions of European lives have also been lost over the issue of borders. Historically, countries in Europe also haven’t fully represented linguistic groups (what is a language anyway?).
2. While from afar it may seem self-evident to create states based on language, I wonder how that would play out in a country like Kenya, where more than 40 languages are spoken and where, since ethnic groups often double as occupational groups, linguistic regions overlap. The distinctions between languages and cultures are often artificial anyway. Though the Maasai and the Samburu speak mutually intelligible languages and share nearly identical cultural practices, they are bitter enemies and have been at war with one another for centuries.
3. Perhaps we might hope that African states worry more about how to keep themselves together and how to mend their internal divisions, rather than arbitrarily creating more. It’s bad enough that the Kenyan government is weak and unresponsive to the needs of its citizenry; the local governments haven’t shown themselves to be much more effective.
4. Perhaps, instead of dividing Africa even further, we might hope that African states learn to trade among one another. One of the main impediments to development is the fact that most African countries don’t trade with one another. There is no domestic trade economy. Could one imagine a world where European countries like Switzerland and Germany traded only with China and not with each other? Because that’s what’s happening in Africa.
5. Worse yet, it assumes that there is such a thing as a “natural” political unit. There is no such thing. All countries are artificial and have been created through mostly undemocratic means.
Worrying about colonial borders is a sideshow. While the colonial borders certainly shaped the ways in which modern Africa formed, in the end focusing on the issue is a convenient way of not having to dig more deeply into the complexity of present-day facts. Present-day Kenya is not a basket case simply because of misplaced borders. I think we should give Kenyans much more credit. These narratives often do too little to take African countries themselves to task for their own failings.
As much as we’d like to believe it, babies aren’t a blank slate. Babies not only bear the social and economic legacies of the families which produce them, but also the scars of a lifetime of immunological insults.
This week, a paper, “Does in utero Exposure to Illness Matter? The 1918 Influenza Epidemic in Taiwan as a Natural Experiment,” appeared in the working paper series of the National Bureau of Economic Research, tracking the long-term effects of the 1918-1920 worldwide influenza pandemic.
Turns out that babies carried by mothers during that period were, on average, shorter than people born in other years, had more developmental problems and, possibly, suffered long-term problems of chronic disease.
This paper tests whether in utero conditions affect long-run developmental outcomes using the 1918 influenza pandemic in Taiwan as a natural experiment. Combining several historical and current datasets, we find that cohorts in utero during the pandemic are shorter as children/adolescents and less educated compared to other birth cohorts. We also find that they are more likely to have serious health problems including kidney disease, circulatory and respiratory problems, and diabetes in old age. Despite possible positive selection on health outcomes due to high infant mortality rates during this period (18 percent), our paper finds a strong negative impact of in utero exposure to influenza.
It’s interesting to me, in that it’s a study of health in one of Japan’s former colonies, but also because Taiwan’s indicators in 1918 were atrocious. More than a fifth of babies didn’t live to see their fifth birthday, deaths in childbirth were common and life was short. In other words, it’s a lot like many African contexts today.
The long-term outcomes of common developing-world diseases have mostly been ignored. There is every reason to believe that one of the reasons African countries suffer economically is that people’s developmental trajectory is set before they even exit the womb. So we’re fighting not only a bleak economic past, but also a constant legacy of infectious insults.
And to moms in the developed world…. get your flu shots.
According to the “pathogen stress theory of values,” the evolutionary case that Thornhill and his colleagues have put forward, our behavioral immune systems—our group responses to local disease threats—play a decisive role in shaping our various political systems, religions, and shared moral views.
If they are right, Thornhill and his colleagues may be on their way to unlocking some of the most stubborn mysteries of human behavior. Their theory may help explain why authoritarian governments tend to persist in certain latitudes while democracies rise in others; why some cultures are xenophobic and others are relatively open to strangers; why certain peoples value equality and individuality while others prize hierarchical structures and strict adherence to tradition. What’s more, their work may offer a clear insight into how societies change.
This is a reasonable view, and something I’ve long observed from working on infectious diseases in developing countries. The developmental trajectory of a country is influenced by the deliberate avoidance of illness. An example can be seen in the locations of African cities. Many African administrative capitals are located on isolated, cool hilltops, far away from rivers and lakes. Colonialists would intentionally set up shop in areas where they were unlikely to encounter malaria.
Developmentally, this has had major implications for trade within Africa. European cities are often placed along waterways amenable to domestic European trade. The lack of trade between African countries is one of the reasons the continent has developed so poorly. This is the direct result not only of colonial priorities of resource extraction to Europe, but also of the unfortunate placement of economic centers in response to malaria.
Certainly, the nature of cities themselves has much to do with the control of infectious diseases. Public works often involve the management of sewage waste and the delivery of clean water. Thornhill might suggest that the development of democracy, citizen involvement and taxation to pay for urban improvements came in direct response to enteric diseases.
However, while it is interesting to try to apply this view, it can be taken to the extreme:
Fincher (a graduate student of Thornhill) suspected that many behaviors in collectivist cultures might be masks for behavioral immune responses. To take one key example, collectivist cultures tend to be both more xenophobic and more ethnocentric than individualist cultures. Keeping strangers away might be a valuable defense against foreign pathogens, Fincher thought. And a strong preference for in-group mating might help maintain a community’s hereditary immunities to local disease strains. To test his hypothesis, Fincher set out to see whether places with heavier disease loads also tended toward these sorts of collectivist values.
I’m not sure it’s that easy to boil down political differences between Asia and Europe to a need to manage infectious disease. Certainly, Sweden is more collectivist than England, but I wouldn’t say that their infectious disease profiles are all that different.
Worse yet, if taken to the extreme, this “hunt for significance” will provide one with evidence to support any crazy theory at all. Pathogens exist wherever humans do. Moreover, we risk attributing the contribution of pathogens to human development based on current conditions, assuming that the present was deterministically preordained centuries ago. Until very recently, nearly the entire world was at risk for malaria, but despite this, various societies have embarked on different social and political trajectories.
The biggest problem I have with the theory is its basis in rational choice theory. It assumes that humans make rational choices based on pathogen threats, when we know, particularly those of us who work in the tropics, that humans often have poor conceptions of disease transmission and the causes of illness. At times, despite very obvious threats, humans will act in ways which exacerbate that threat. The history of enteric disease is filled with tales of ignorance and folly.
If we are going to subscribe to a rational model of political and social development which includes pathogens, then we also have to address, first, the ability of pathogens to hijack human behavior to create new opportunities for replication and survival and, second, the fact that social changes can exacerbate the worst effects of infection. For the first point, I would look to the development of international trade systems which allow pathogens such as influenza to move around the world quickly, increasing opportunities for mutation to avoid immune responses. For the second, I would point to polio, a disease which becomes a problem only after the introduction of water sanitation practices.
Thornhill’s ideas are interesting, and certainly provide good material for the popular press and BBQ conversation, but they require that the reader suspend too much consideration of the details of the complex history of human social and political development. Taken with restraint, as in the example of the locations of African cities, they can provide interesting insights into how current conditions are impacted by past pathogenic threats.
Every once in a while, you run across something that just gives you the chills.
“A report presented to the World Health Organization (WHO) in 1948 states: “It is not enough to quote that about 3,000,000 deaths are caused yearly by malaria in the world, or that every year about 300,000,000 cases of malaria occur …… that malaria is prevalent in tropical and subtropical areas where food production and agricultural resources are potentially very high, and that, by affecting the mass of rural workers, it decreases their vitality and reduces their working capacity and thus hampers the exploitation of the natural resources of the country. At a time when the world is poor, it seems that control of malaria should be the first aim to achieve in order to increase agricultural output” (WHO, 1948).
Snow RW, Amratia P, Kabaria CW, Noor AM, Marsh K: The changing limits and incidence of malaria in Africa: 1939-2009. Adv Parasitol 2012, 78:169-262.
But Ron Paul did.
I’m checking out the graphic below, and, first, wondering why anyone ever thought that gold was the only investment to make given its bubblish nature, and second, wondering what it must have been like to have investments in the 19th century. Granted, most people didn’t, and some people were even the targets of investment themselves. The wide volatility in the inflation rate must have driven people nuts.
If you owed money, one year you’d make out like gangbusters, watching inflation obliterate your debt obligations; the next, you’d watch your world crumble as the currency became worthless. If people owed you money, you’d be in the opposite pinch. Either way, you were screwed and had little ability to plan for the future. By the time you rode out the constant rough spots, though, you’d end up with the same amount of money you started with decades earlier. I’ll take steady inflation and reasonable economic certainty over crazyland, but Ron Paul might be into it, I guess.
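The arithmetic behind that whipsaw is easy to sketch. Here's a toy calculation (all numbers invented for illustration) of how the real burden of a fixed nominal debt behaves under volatile versus steady inflation:

```python
def real_debt_path(nominal_debt, inflation_rates):
    """Real value of a fixed nominal debt as the price level moves year by year."""
    price_level = 1.0
    path = []
    for pi in inflation_rates:
        price_level *= (1 + pi)
        path.append(nominal_debt / price_level)
    return path

# 19th-century-style whipsaw: big inflation, then deflation, and so on.
volatile = real_debt_path(100, [0.15, -0.12, 0.20, -0.15])
# Modern-style steady 2% inflation over the same four years.
steady = real_debt_path(100, [0.02] * 4)

print(volatile)  # real burden lurches up and down year to year
print(steady)    # real burden erodes smoothly and predictably
```

Under the volatile path the debtor's real burden swings by more than ten points between adjacent years, which is exactly the planning problem described above; under steady inflation it shrinks a predictable couple of percent a year.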
The blog Uneasymoney posted an article this morning claiming that policies which encouraged the production of biofuels were responsible for the crazy run in commodity prices throughout the 2000s and ultimately for the 2007/2008 crash.
The post refers to an article in the Journal of Economic Perspectives, which I am reading now but the results of which are summed up here:
the research of Wright et al. shows definitively that the runup in commodities prices after 2005 was driven by a concerted policy of intervention in commodities markets, with the fervent support of many faux free-market conservatives serving the interests of big donors, aimed at substituting biofuels for fossil fuels by mandating the use of biofuels like ethanol.
What does this have to do with the financial crisis of 2008? Simple. …the Federal Open Market Committee, after reducing its Fed Funds target rate to 2% in March 2008 in the early stages of the downturn that started in December 2007, refused for seven months to further reduce the Fed Funds target, disregarding or unaware of a rapidly worsening contraction in output and employment in the third quarter of 2008. Why did the Fed ignore or overlook a rapidly worsening economy for most of 2008 — even for three full weeks after the Lehman debacle? Because the Fed was focused like a laser on rapidly rising commodities prices, fearing that inflation expectations were about to become unanchored – even as inflation expectations were collapsing in the summer of 2008. But now, thanks to Wright et al., we know that rising commodities prices had nothing to do with monetary policy, but were caused by an ethanol mandate that enjoyed the bipartisan support of the Bush administration, Congressional Democrats and Congressional Republicans. Ah the joy of bipartisanship.
So then, what I’m gathering here is that the Fed was obsessive about commodity prices fearing inflation, despite the fact that the Fed was in no position to influence commodities markets. This distracted the Fed from focusing on the real causes of the crash and the Lehman disaster, making a bad situation worse.
I’m not sure that this correctly connects the dots, given that there is little evidence that the run in commodity prices had anything to do with biofuels. Even as biofuel consumption increased throughout the 00’s, overall production of corn and yield per acre also increased. Assuming that commodity prices are in part dictated by supply, I would (from an armchair economist perspective) assume that prices should remain somewhat constant.
I’m interested to see that the article disregards financialization of commodities, following a loosening of rules of speculation on ag products in the 90’s and the move toward commodities following the equity bust of 2000 as not being a major factor in the rise in corn prices. This is particularly strange when we consider that non-energy commodities also exhibited rapid price increases and violent fluctuations throughout the 00’s. I fail to see how energy policy could result in increases and volatility in, for example, copper.
It’s a tempting thesis, made more tempting by the explicit identification of individuals who suggested and implemented such policy, but not one borne out by the data, in my limited, amateurish opinion. The list of potential factors which influenced the run in commodities is a long and confusing one (climate change, increased demand from China and India, global instability, etc.), but I don’t think that the effect of Wall Street greed can be discounted as a major determinant. Interestingly, despite the overall themes of the paper, the author does a poor job of discounting the effect of financialization in the creation of commodity price bubbles.
In reading this paper now, I’m somewhat confused. On the one hand, he confirms many of my initial suspicions that the rising price of food was unrelated to supply and demand factors, as growth of both supply and demand was more or less constant, despite localized climate shocks. On the other, he seems to blame the rise in prices during the crash on a shift in energy policy toward biofuels, while overlooking that commodities were already volatile and rising, beginning with the crash of the tech bubble in 2000. I am thinking that much of the rise in commodities during 2007/8 was due to panicky speculation as real estate markets tumbled, not to any change in energy policy. Certainly, it may be the case that the policy influenced traders to try to exploit potential areas of growth, but it’s hard, then, to discount the effect of financial speculation in commodities outright.
I can at least agree with this:
The rises in food prices since 2004 have generated huge wealth transfers to global landholders, agricultural input suppliers, and biofuels producers. The losers have been net consumers of food, including large numbers of the world’s poorest peoples.
It’s an old paper, but I just came across The Colonial Origins of Comparative Development: An Empirical Investigation by Daron Acemoglu, Simon Johnson and James A. Robinson, originally published in The American Economic Review back in 2001.
They take rough data on settler deaths in the seventeenth and eighteenth centuries and plot them against the 1995 GDP of several countries. I’ve included the plot on the right. What they found was that higher European settler mortality was associated with a long-term decline in economic output.
Settling in the seventeenth and eighteenth centuries was a dangerous business, particularly in Sub-Saharan Africa and less so in what is now the United States, New Zealand and Australia. Malaria and yellow fever were responsible for killing up to 100% of groups brave enough to attempt the journey.
Acemoglu, et al.’s argument is as follows:
1. There were different types of colonization policies which created different sets of institutions. At one extreme, European powers set up “extractive states,” exemplified by the Belgian colonization of the Congo. These institutions did not introduce much protection for private property, nor did they provide checks and balances against government expropriation. In fact, the main purpose of the extractive state was to transfer as much of the resources of the colony to the colonizer. At the other extreme, many Europeans migrated and settled in a number of colonies, creating what the historian Alfred Crosby (1986) calls “Neo-Europes.” The settlers tried to replicate European institutions, with strong emphasis on private property and checks against government power. Primary examples of this include Australia, New Zealand, Canada, and the United States.
2. The colonization strategy was influenced by the feasibility of settlements. In places where the disease environment was not favorable to European settlement, the cards were stacked against the creation of Neo-Europes, and the formation of the extractive state was more likely.
3. The colonial state and institutions persisted even after independence.
They argue that the disease environment determined the nature of settlements, which determined the nature of institutions, which, in turn, determined the economic trajectory of a country.
Interestingly, they control for all of the things that one might control for, such as distance from the equator and the percentage of inhabitants that were European, being landlocked and the ruling power, ruling out the effect of some obvious potential influences. Property rights, a solid judiciary and limits on political power in the colonies and upon independence, they argue, had a greater effect on long term GDP, and the development of those institutions was enabled or inhibited by early settler mortality.
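The headline relationship is, at its core, a regression of log income on log settler mortality. Here's a minimal sketch of that kind of fit, on synthetic data I've invented purely for illustration (neither the numbers nor the slope come from the paper):

```python
import numpy as np

# Synthetic illustration of an AJR-style bivariate regression:
# log(GDP per capita, 1995) against log(early settler mortality).
# All numbers below are invented; only the shape of the exercise is real.
rng = np.random.default_rng(0)
log_mortality = rng.uniform(2.0, 7.5, size=60)  # hypothetical log deaths per 1,000 settlers
true_slope = -0.6                               # hypothetical negative effect
log_gdp = 10.0 + true_slope * log_mortality + rng.normal(0, 0.5, size=60)

# Ordinary least squares via a degree-1 polynomial fit;
# np.polyfit returns coefficients highest-degree first.
slope, intercept = np.polyfit(log_mortality, log_gdp, 1)
print(f"estimated slope: {slope:.2f}")  # negative: deadlier colonies, poorer today
```

The recovered slope is negative, mirroring the downward-sloping scatter the paper reports; the real work of the paper is in the controls and the instrumental-variables strategy layered on top of this simple picture.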
It’s a fairly compelling argument, though not without its critics.
A few gems from the paper interested me. One, the return on investment in the British colonies during the nineteenth century was a whopping 25%, far more than one could have expected domestically. In the late 19th and early 20th centuries, this dropped so that returns on colonial and domestic investments were the same.
I found (finally!) a reference to indicate the willful choosing of high-altitude, and thus less malarious, areas for colonial settlements. Note that in Europe and the US, cities are often located along riverways and seasides, whereas in Africa large cities tend to be placed inland (with some exceptions). There has been no industrial revolution in Africa and little regional trade (a condition which persists to this day), so cities along water-based shipping routes were not necessary. Extraction in Africa was largely done by rail, further alleviating the need to be close to rivers.
Though I’ve ripped this off the Big Picture blog (which my good friend Chris introduced me to), I’ll repeat it again here (since it was ripped off the Fed of New York anyway).
I never considered the problem of having to physically move money (read: metal coins) around to make foreign investments. Moving it would be an incredible risk, as it would likely be stolen along the way. Turns out, you could just pay the money to the tax-farming companies in Rome, and they would deduct the amount you wanted to transfer from their tax collection in whatever region it was going to.
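The mechanism, as I understand it, amounts to bookkeeping: no coins cross the sea, only ledger entries move. A toy model (names and amounts entirely invented) of this kind of transfer:

```python
# Toy sketch of a permutatio-style transfer: a firm collecting taxes in the
# provinces accepts your coin at its Rome branch and pays your correspondent
# abroad out of the taxes it was going to collect there anyway.

class TaxFarmer:
    """A tax-farming firm with branches holding local balances (in sesterces)."""

    def __init__(self, branch_funds):
        self.branch_funds = dict(branch_funds)  # city -> sesterces on hand

    def permutatio(self, amount, pay_in, pay_out):
        """Accept coin at one branch, disburse from another branch's receipts."""
        if self.branch_funds[pay_out] < amount:
            raise ValueError("branch cannot cover the transfer")
        self.branch_funds[pay_in] += amount   # your silver, deposited locally
        self.branch_funds[pay_out] -= amount  # paid out from distant tax receipts

firm = TaxFarmer({"Rome": 0, "Thapsus": 500_000})
firm.permutatio(200_000, pay_in="Rome", pay_out="Thapsus")
print(firm.branch_funds)  # {'Rome': 200000, 'Thapsus': 300000}
```

The total across branches never changes; the firm has simply swapped which city holds the money, which is the whole trick of settling a long-distance payment without shipping silver.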
This is worth the read. I promise I’ll write something of substance after I’m done dissertating.
Historical Echoes: Cash or Credit? Payments and Finance in Ancient Rome
Marco Del Negro and Mary Tao
Imagine yourself a Roman citizen in the 1st Century B.C. You’ve gone shopping with your partner, who’s trying to convince you to buy a particular item. The thing’s pretty expensive, and you demur because you’re short of cash. You may think that back then such an excuse would get you off scot-free. What else can you possibly do: Write a check? Well, yes, writes the poet Ovid in his “Ars Amatoria, Book I.” And since your partner knows it, you have no way out (the example below shows some gender bias on Ovid’s part. Fortunately, a few things have changed over the past 2,000 years):
But when she has her purchase in her eye,
She hugs thee close, and kisses thee to buy;
“Tis what I want, and ‘tis a pen’orth too;
In many years I will not trouble you.”
If you complain you have no ready coin,
No matter, ‘tis but writing of a line;
A little bill, not to be paid at sight:
(Now curse the time when thou wert taught to write.)
In a previous Historical Echoes post, we describe some of the characters in early Roman high and low finance. Here, we look at their modus operandi.
Large sums of money changed hands in Roman times. People bought real estate, financed trade, and invested in the provinces occupied by the Roman legions. How did that happen? Cicero writes, in Epistulae ad Familiares 5.6 and Epistulae ad Atticum 13.31, respectively: “I have bought that very house for 3.5 million sesterces” and “Gaius Albanius is the nearest neighbor: he bought 1,000 iugera [625 acres] of M. Pilius, as far as I can remember, for 11.5 million sesterces.” How? asks historian H. W. Harris (in “The Nature of Roman Money”)–“mechanically speaking, did Cicero pay three and half million sesterces he laid out for his famous house in the Palatine . . . . That would have meant packing and carrying some three and half tons of coins through the streets of Rome. When C. Albanius bought an estate from C. Pilius for eleven and half million sesterces, did he physically send the sum in silver coins?” Harris’ answer is: “Without much doubt, these were at least for the most part documentary [i.e., paper] transactions. The commonest procedure for large property purchases in this period was the one casually alluded to by Cicero [De Officiis 3.59] . . . ‘nomina facit, negotium conficit’ . . . provides the credit [or ‘bonds’–nomina], completes the purchase.”
What exactly are these nomina?–from which, by the way, comes the term “nominal,” so commonly used in economics. In his Ph.D. dissertation “Bankers, Moneylenders, and Interest Rates in the Roman Republic,” C. T. Barlow writes (pp. 156-7): “An entry in an account book was called a nomen. Originally the word meant just that–a name with some numbers attached. By Cicero’s day . . . [n]omen could also mean “debt,” referring to the entries in the creditor’s and the debtor’s account books.” And this “debt was in fact the lifeblood of the Roman economy, at all levels . . . nomina were a completely standard part of the lives of people of property, as well as being an everyday fact of life for a great number of others” (Harris, p. 184). Pliny the Younger writes, for example, (in Epistulae 3.19): “Perhaps you will ask whether I can raise these three millions without difficulty. Well, nearly all my capital is invested in land, but I have some money out at interest and I can borrow without any trouble.”
For concreteness, say that some fellow, Sempronius, owes you one million sesterces. You–or in case you’re a wealthy senator, or eques, your financial advisor (procurator–Titus Pomponius Atticus was Cicero’s)–would record the debt in the ledger. What if you suddenly needed the money to buy some property? Do you have to wait for Sempronius to bring you a bag with 1 million sesterces? No! As long as Sempronius is a worthy creditor (a bonum nomen [see Barlow, p. 156]; in the modern parlance of credit rating agencies, a triple-A creditor), you’d do what Cicero says: transfer the nomina, strike the deal. For example, Cicero writes to his financial advisor Atticus (Ad Atticum 12.31): “If I were to sell my claim on Faberius, I don’t doubt my being able to settle for the grounds of Silius even by a ready money payment.” As Harris (p. 192) observes: “Nomina were transferable, and by the second century B.C., if not earlier, were routinely used as a means of payment for other assets . . . . The Latin term for the procedure by which the payer transferred a nomen that was owed to him to the seller was delegatio.”
So, we’ve seen that Romans could settle payments by transferring nomina. But was there a market for nomina, just like there’s one today in, say, mortgage-backed securities? According to both Barlow and Harris, the answer is yes. They claim that the Romans took the transferability one step further and essentially turned “mere entries in account books” into “negotiable notes” (see Barlow, p. 159, and Harris, p. 192). Not everyone agrees. The economic historian P. Temin (“Financial Intermediation in the Early Roman Empire”) also reports evidence of assignability of loans, opening the possibility of “wider negotiability, but,” he adds, “we do not have any evidence that it happened” (p. 721). Yet some indirect evidence is there. For instance, the idea of negotiable notes appears to be well understood by Roman jurists, such as Ulpian (The Digest of Justinian XXX.I.44): “A party who bequeaths a note bequeaths the claim and not merely the material on which the writing appears. This is proved by a sale, for when a note is sold, the debt by which it is evidenced is also considered to be sold.”
What if you had to transfer money to somebody in a different part of the globe? As the Roman dominions expanded into Greece, Spain, North Africa, and Asia, Roman finance actually faced this logistical problem. If you’re in Rome and want to, say, finance Caius’ mines in Thapsus, North Africa, how do you get him the money? He needs the silver to buy material, slaves, and other things, but you’re naturally very reluctant to see your money sail away for Africa, as the chances of it getting there aren’t that high (see pirates, shipwrecks, etc.). “Permutatio, the transfer of funds from place to place through paper transactions, was Rome’s great contribution to ancient banking” (Barlow, p. 168). It worked as follows: The publicani were private companies in charge of tax collection in the provinces (as well as many other tasks; see “Publicani,” by U. Malmendier). They had a branch in Rome and one in Thapsus. So, you’d give them the silver in Rome (or transfer them some nomina) and they’d divert some of their tax collection in North Africa to Caius. This is also how the Republic would finance its public spending overseas. Since taxes were collected throughout the provinces, by trading claims on taxes Romans could transfer funds across the globe–or at least to the part of the globe they had conquered.
Interestingly, some historians measure the sophistication of Roman finance “by the extent banks were present” (Temin, p. 719). While it is true that we have no evidence of a 1st Century B.C. Wells Fargo, this may not necessarily imply lack of sophistication. Prior to the Great Recession in the United States, a large chunk of financial intermediation didn’t involve banks–it went through the “shadow banking system.” Roman high finance “functioned primarily on the basis of brokerage” (K. Verboven, “Faeneratores, Negotiatores and Financial Intermediation in the Roman World,” p. 12), and hence was a bit like a proto-shadow banking system, as we suggest in our prior post. Like the shadow-banking system in the United States, it was fragile. Going back to our earlier example, we note that if whomever you want to buy property from starts wondering about the creditworthiness of Sempronius, she will not accept his nomina in payment and will want cash. That’ll force you to call in the loan to Sempronius, who in order to pay you will call in his loan to Titus, and so on. But financial crises in ancient Rome are the subject of a future post.
We are grateful to Cameron Hawkins of the University of Chicago for help navigating the literature.
A search of the New York Times archive for all articles with “malaria” in the text yields an amazing 33,800 results. Browsing through the headlines is like reading a brief history of the disease as seen through an American lens.
The oldest article is from 1889, a report on a malaria outbreak on the upper Hudson in New York: “An epidemic of a malarial nature is reported from towns along the upper Hudson, one physician in Newburg reporting more than seventy cases under his care. Newburg is famous for its breakneck streets.”
The article is notable because in 1889, very little was known about the disease. Of course, in 2012, we know much, much more, but the challenges (problems in diagnosis, complex and often contradictory observations on ecological factors and socio-economic infection gradients) are the same now as they were then.
“30 INSANE PARETICS CURED BY MALARIA; Long Island College Hospital Reports Marked Success With New Treatment. Thirty patients regarded as hopelessly insane are back at work and leading normal lives after being artificially inoculated with malaria, allowed to suffer chills and fever for two weeks or so and then treated with drugs, according to an announcement yesterday by the Long Island College Hospital.”
I don’t think that anyone really knew what the “paretics” were suffering from, but it was likely syphilis. Malaria was used briefly to treat a variety of neurological disorders caused by infectious agents, with varying degrees of success.
Vaccines have long been “just around the corner,” only to die in sad failure. The most overly optimistic claim came in 1984 from then head of USAID, M. Peter McPherson (who later became President of Michigan State University):
M. Peter McPherson, administrator of the Agency for International Development, said he expected that a vaccine would be ready for trial in humans within 12 to 18 months and widely available throughout the world within five years. ”We think this is a practical schedule,” he told a news conference at the State Department today.
A classic case of overstatement; I’m sure that he regrets it to this day. No wonder scientists have to be wishy-washy with their predictions: statements like this live on in sad perpetuity. We still don’t have a vaccine, and the outlook for having one any time soon isn’t much better now than it was in 1984.
1889 North River Malaria
1925 30 INSANE PARETICS CURED BY MALARIA
1925 WAR ON MALARIA BEGUN BY LEAGUE
1938 MALARIA SCOURGE FOUGHT BY THE TVA
1943 Malaria Problem; Our Knowledge Is Still in an Unsatisfactory State
1944 U.S. HEALTH SERVICE COMBATS MALARIA
1945 New Drugs to Combat Malaria Are Tested in Prisons for Army
1946 CURE FOR MALARIA BARED BY CHEMISTS
1948 NEW DRUGS TO END MALARIA SCOURGE
1951 Army Tests Drug as Malaria Cure; Doses Given Troops
1952 U.N. GAINS GROUND AGAINST MALARIA
1957 World-Wide Battle On Malaria Mapped
1961 New Malaria Threat Is Studied At Infectious Diseases Center
1965 A ‘NEW’ MALARIA RAGES IN VIETNAM
1966 Leprosy Drug Reduces Malaria Among G.I.’s
1970 Malaria Up Sharply in Nation; Most Cases Traced to Vietnam
1971 Drug Users Spur Malaria Revival
1974 Prison Official in Illinois Halts Malaria Research on Inmates
1977 Malaria Spreading in Central America as Resistance to Sprays Grows
1984 MALARIA VACCINE IS NEAR, U.S. HEALTH OFFICIALS SAY
1987 Drug Combinations Offer New Hope in Fighting Malaria
1988 Scientists Report Advances In Vaccine Against Malaria
1991 Outwitted by Malaria, Desperate Doctors Seek New Remedies
1991 Hope of Human Malaria Vaccine Is Offered
1993 Mefloquine Is Found Best Against Malaria
1994 Vaccine Cuts Malaria Cases In Africa Test
1995 Vaccine for Malaria Failed in New Test
1996 Tests of Malaria Drug From China Bring Hope and Cautionary Tales