People often claim that we are “adapted” to do one thing or another, either to indict some crime of modernity (ignoring the fact that more babies survive, and people live longer and healthier, than at any time in human history) or to point out some example of the glaring perfection of our creation, with an implicit or explicit nod to divine design.
For example, obesity is attributed to fat- and protein-rich modern diets, since we aren’t “adapted” to eat these types of foods (despite our ancestors having found the food in East Africa so unpalatable that they had to learn to crush or cook it to digest it efficiently). Our bad dispositions are blamed on a lack of sleep, since we aren’t “adapted” to sleep as little as we do (this might be true). Most recently, a book writer blamed our problems with depression on a divorced relationship with nature, given that we are “adapted” to hunt and kill for food and then revel over the blood-stained corpse (of course, the writer doesn’t consider that people in antiquity might have been depressed, too).
There may be some truth to some of this. However, “adaptation” implies something about the individual, when evolution, in fact, is about reproduction. We aren’t “adapted” to anything. Rather, certain traits are selected for based on the survival of at least two generations of living things, at least for complex social animals like ourselves.
Simply surviving as an individual does not ensure the survival of a species. Living things must first survive long enough to reproduce and then, at least in humans, ensure that their children make it to reproductive age. Human babies are horribly weak in contrast to sharks, which are ready to go even before they leave the mother. Further, in the case of humans, a full three generations must live at once to ensure long-term survival.
Thus, we maintain a tenuous relationship with our environment, in which traits do not necessarily favor a single individual but rather an entire family unit, and these traits need not imply perfect harmony with an environment; they need only do the job at least satisfactorily.
Nature cares little for quality, as numerous examples throughout our physiology show. To claim that we are somehow “perfectly suited” to a specific environment is simply wrong. Rather, we have come to a brokered peace with wherever we live (after generations of brutal trial and error; what we eat today is owed to the deaths of millions, mostly children, who had to die to allow us to eat it) in order to allow a few of our kids and grandkids to survive.
This, of course, has deep implications for public health. Some public health problems are known to be passed down from parents to children, but in a multi-generational evolutionary framework, it is possible that certain public health problems are passed through three or more generations at a time, complicating interventions. The multi-generational health problems of the descendants of African slaves may be an example of this. How can we intervene to protect the public health over a full century?
OK, back to work.
The essence of the epidemiologic field trial is the RCT (randomized controlled trial). A random set of people gets some sort of treatment (like a new drug), another random set of people doesn’t, and we compare the results. It’s pretty simple stuff.
The trouble with RCTs is that they don’t necessarily work well when people from the two groups are able to influence each other’s outcomes. As a simple example, a trial of a vaccine which prevents people from getting infected with some pathogen might have impacts on people who don’t get the vaccine, since the number of opportunities for transmission is reduced. This is a welcome outcome (and may even be the point of the study), but it doesn’t help us to understand exactly how effective the vaccine is in the individuals who actually receive it.
Many RCTs make the (flawed) assumption that individuals are independent entities, following a long tradition in statistical analysis. This is a reasonable assumption in some cases, but entirely wrong in others (i.e., most public health outcomes).
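To make the independence problem concrete, here is a toy simulation of a vaccine trial in which vaccinating part of the population also lowers exposure for everyone else. Every number in it (coverage, baseline risk, efficacy, the herd-effect formula itself) is a hypothetical chosen for illustration, not a model of any real trial:

```python
import random

random.seed(42)

def simulate_trial(n=20_000, coverage=0.5, base_risk=0.10, efficacy=0.5):
    """Toy vaccine trial with spillover: everyone's infection risk scales
    with the fraction of the population left unprotected, so the control
    arm is partially shielded by the vaccinated arm. All parameters are
    hypothetical."""
    vaccinated = [random.random() < coverage for _ in range(n)]
    # Herd effect: community-wide risk falls as vaccine coverage rises.
    community_risk = base_risk * (1 - efficacy * sum(vaccinated) / n)
    infected = [random.random() < community_risk * ((1 - efficacy) if v else 1)
                for v in vaccinated]
    n_vax = sum(vaccinated)
    risk_vax = sum(i for i, v in zip(infected, vaccinated) if v) / n_vax
    risk_ctl = sum(i for i, v in zip(infected, vaccinated) if not v) / (n - n_vax)
    return risk_vax, risk_ctl

risk_vax, risk_ctl = simulate_trial()
# The control arm's attack rate sits well below the 10% no-vaccine
# baseline: the controls were protected by the vaccinated arm.
```

In this toy model the risk ratio between arms still recovers the per-person efficacy, but the absolute risk difference shrinks because the controls benefit too; that spillover is exactly what the independence assumption rules out.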
Development economists have recently adopted the RCT as a means of evaluating the effectiveness of programs intended to relieve poverty or improve human well-being. On the surface, there’s nothing wrong with adopting public health methods to deal with economic problems, as most public health problems have their roots in economics. Jeff Sachs, of course, would argue that many economic problems have their roots in public health problems.
The major problem with RCTs is that, while we do our best to control for all of the other factors that might impact outcomes given a particular treatment, without a trove of detailed data and prior knowledge of context and contingencies we really have no idea at all whether and how some public health intervention works. Epidemiology tends to fall back on the “reasonable suspicion” argument, backing up claims of effectiveness with potentially reasonable assumptions about causal pathways. This is comparatively easy when doing drug trials, where animal models and a century-plus of medical research have given us a reasonably clear picture of the pathophysiological pathways that might lie between drug and outcome.
But with issues of human behavior and economics (which is essentially a science that seeks to uncover the mysteries of human behavior), the causal pathways are much more difficult to assess, and the factors which lie between intervention and outcome are far more difficult to measure. For example, assessing the outcome of an education program on reproductive behavior is really, really difficult without monitoring all of the possible things that happened between the time that a woman attended an NGO-sponsored event at a clinic and the time when she chose to use or not use a condom. In fact, we can’t even really verify that she used the condom, since we weren’t around to observe it.
But we assume, and assume to the point of falling back on faith, that our efforts did what we intended them to do.
Lant Pritchett, a Harvard economist whose work on economic measurement in developing countries I’m a great fan of, penned an interesting article on the website of the Center for Global Development seemingly questioning the merit of the RCT as a rigorous and necessary evaluation tool for poverty-alleviation development programs.
First of all, the argument that RCTs had, until recently, been used sparingly, if at all, and yet are important in achieving good outcomes sits in kind of embarrassing counterpoint with the obvious fact that lots of countries have really good outcomes. That is whether one uses the Human Development Index or the OECD Better Life Index or any social indicator—from poverty to education to health to life satisfaction—there is a similar set of countries near the top. (In the HDI the top five are Norway, Australia, USA, Netherlands, and Germany. In the OECD Better Life Index they are Australia, Sweden, Canada, Norway, and Switzerland.) No one has ever made the arguments that these countries are developed and prosperous because they used rigorous evidence—much less RCTs—in formulating policy and programs. While one might have faith that RCTs can help along the path to development, RCTs didn’t help for those that are there now.
It is very true that development in the United States occurred without the help of RCTs. In fact, malaria elimination in the United States occurred without any of the complex set of interventions that we’re so desperately selling to malaria-endemic countries. It’s even true that, despite more than a decade of research on ITNs (insecticide-treated nets), we aren’t really sure whether the declines in malaria that we’ve seen all over Sub-Saharan Africa are due to ITNs or simply due to processes associated with urbanization and development (as in the US). Actually, a lot of research is telling us that the declines in malaria might be illusory and that we are simply suffering from a paucity of accurate measurement in malaria-endemic countries.
And this is where Pritchett comes in. He’s right. Research in developing countries is inherently challenging to the point where the conclusions we draw from research are somewhat contentious at best, and the result of blind faith at worst.
But coarse and incomplete data and loose assumptions shouldn’t discourage public health (or even economic) professionals from doing research in developing countries. While I have issues with the condescending, neo-classical nature of RCTs in economics (another discussion, but can a peasant lady’s behavior in Western Kenya really be reduced to that of Homo economicus?), the truth is that policy makers don’t care about data. They care that people are making the case for action in an impassioned and convincing way. While academics should strive to be as rigorous as possible, the sell won’t happen based on our complex data-collection strategies and statistical methodologies. Policy makers (and the public) are convinced through impassioned calls for action.
I was just reading an article in the NYT which described a new method that doctors can use to decide whether or not to put patients on statins.
Statins work to prevent cardiac events by reducing cholesterol levels. While widely used, they are controversial as a means of preventing heart attacks in people without cardiovascular disease.
As a public health guy, I’m interested in health diagnostics. So when I noticed that this hi-tech “calculator” was available for download from the NYT article, I immediately opened it up. I expected some sweet Java-based interface, with boxes to check a number of things like age, weight, ethnicity, dietary and exercise habits and family health history.
To my dismay, I found that it’s an Excel spreadsheet with space to enter ten items:
As you can see, I filled in my own information (as best I could, based on recall).
I was happy to see that my lifetime risk of CVD is a mere 5% and, as I resist taking medications of any kind, even more happy to see that a doctor would likely not prescribe me statins.
However, I’m not sure what this really tells me. Does this say that I don’t have to exercise anymore? Can I eat tons of fried crap and finally ignore my family’s host of health problems now?
Mostly, I’m struck by how arbitrary this is. Smoking, high blood pressure, advanced age, being male and being African American are all known predictors of CVD. The absence of potentially modifying factors such as diet and exercise just make the picture even more arbitrary. I may be old, black and male, but I may have made lifestyle changes to counter those factors, thus reducing my risk for disease.
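For what it’s worth, the guts of a calculator like this are tiny: a weighted sum of (mostly log-transformed) risk factors, converted to a ten-year risk through a baseline survival term. The sketch below mimics that general Cox-model shape, but every coefficient and constant in it is a made-up placeholder chosen only to produce plausible-looking output, not the published, validated values any real calculator uses:

```python
import math

# Sketch of a Cox-style ten-year risk score. All constants below are
# hypothetical placeholders, NOT published clinical coefficients.
COEFS = {
    "ln_age": 2.5,         # risk rises with log(age)
    "ln_total_chol": 1.0,  # rises with total cholesterol
    "ln_hdl": -0.9,        # falls with HDL ("good") cholesterol
    "ln_sbp": 1.8,         # rises with systolic blood pressure
    "smoker": 0.7,
    "diabetic": 0.6,
}
BASELINE_SURVIVAL = 0.95   # hypothetical 10-year event-free survival
MEAN_SCORE = 20.0          # hypothetical population-average score

def ten_year_risk(age, total_chol, hdl, sbp, smoker=False, diabetic=False):
    """Return a toy 10-year CVD risk in [0, 1] from six inputs."""
    score = (COEFS["ln_age"] * math.log(age)
             + COEFS["ln_total_chol"] * math.log(total_chol)
             + COEFS["ln_hdl"] * math.log(hdl)
             + COEFS["ln_sbp"] * math.log(sbp)
             + COEFS["smoker"] * smoker
             + COEFS["diabetic"] * diabetic)
    return 1 - BASELINE_SURVIVAL ** math.exp(score - MEAN_SCORE)
```

Note what’s missing even from this sketch: diet, exercise, family history. A score like this can only be as nuanced as the handful of inputs it accepts, which is exactly the arbitrariness complained about above.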
More frightening to me is how ad hoc medical care is. Doctors are praised for “shooting from the hip,” or making diagnoses based on gut feeling. In most cases, this is satisfactory. Sometimes health problems resolve themselves, or are too serious to effectively treat in the first place. Other times, this can lead to over-prescription of medications or uninformed (praise god for the internet!) “patient-directed medical care.” Though this calculator attempts to counter that behavior, I fear that its cheap simplicity makes a mockery of quantitative diagnosis of health risks, undermining the project’s goals.
The truth is, of course, that we really don’t know enough about complex diseases such as CVD. Though medicine is able to spot obvious candidates for serious disease (particularly after one develops it), the truth is that no one can say for certain whether, absent the obvious, one person is more likely to become ill than another. This calculator doesn’t change that.
It was a great epiphany for me when I realized that, though I am at low risk for CVD, given my long life expectancy I am still most likely to die of, you guessed it, heart disease.
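The arithmetic behind that epiphany is just compounding: a hazard that looks negligible over ten years piles up over fifty. A toy illustration, using a made-up constant annual hazard rather than anyone's real risk:

```python
# Hypothetical illustration: a constant 0.5%-per-year chance of a fatal
# cardiac event looks small over a decade but large over a long lifetime.
annual_hazard = 0.005  # made-up number, for illustration only

def cumulative_risk(years):
    """Probability of at least one event over `years`, assuming the
    hazard is constant and years are independent."""
    return 1 - (1 - annual_hazard) ** years

ten_year_risk = cumulative_risk(10)   # roughly 5%
lifetime_risk = cumulative_risk(50)   # over 20%
```

The constant-hazard assumption is of course too generous (cardiac risk grows with age), so the real lifetime figure would be steeper still; the point is only that a small short-horizon risk and a dominant lifetime cause of death are entirely compatible.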