The essence of epidemiologic field trials is the RCT (randomized controlled trial): one randomly selected group of people receives some sort of treatment (such as a new drug), another randomly selected group doesn’t, and we compare the results. It’s pretty simple stuff.
The trouble with RCTs is that they don’t necessarily work well when people in the two groups can influence each other’s outcomes. As a simple example, a trial of a vaccine that prevents infection with some pathogen might also affect people who don’t get the vaccine, since the number of opportunities for transmission is reduced. This is a welcome outcome (and may even be the point of the study), but it doesn’t help us understand exactly how effective the vaccine is in the individuals who actually receive it.
Many RCTs make the (flawed) assumption that individuals are independent entities, following a long tradition of statistical analysis. This is a reasonable assumption in some cases, but entirely wrong in others (including most public health outcomes).
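The vaccine example above can be sketched with a toy simulation. What follows is a minimal, deterministic Reed-Frost-style outbreak model with entirely made-up parameters (population size, per-contact transmission probability, a true per-exposure efficacy of 60%), not data from any real trial. It only illustrates the qualitative point: controls in a half-vaccinated community are indirectly protected, so the two arms are not independent and the trial’s naive risk-ratio estimate no longer recovers the true per-exposure efficacy.

```python
# Toy deterministic Reed-Frost-style outbreak model (expected values only).
# All parameter values are illustrative assumptions, not data from any real trial.

def attack_rates(n_vax, n_ctrl, p=0.0003, ve=0.6, seeds=10, generations=30):
    """Return (control attack rate, vaccinated attack rate or None).

    p  -- per-generation probability one infective infects one susceptible
    ve -- true per-exposure vaccine efficacy (reduction in p for the vaccinated)
    """
    s_ctrl, s_vax = n_ctrl - seeds, n_vax  # seed infections start among controls
    infectives = float(seeds)
    cum_ctrl, cum_vax = float(seeds), 0.0
    for _ in range(generations):
        # probability a susceptible escapes every current infective
        esc_ctrl = (1 - p) ** infectives
        esc_vax = (1 - p * (1 - ve)) ** infectives
        new_ctrl = s_ctrl * (1 - esc_ctrl)
        new_vax = s_vax * (1 - esc_vax)
        s_ctrl -= new_ctrl
        s_vax -= new_vax
        cum_ctrl += new_ctrl
        cum_vax += new_vax
        infectives = new_ctrl + new_vax
    return cum_ctrl / n_ctrl, (cum_vax / n_vax if n_vax else None)

# Trial community: half vaccinated, half controls.
trial_ctrl, trial_vax = attack_rates(n_vax=5000, n_ctrl=5000)
# Counterfactual community: nobody vaccinated at all.
no_vax_ctrl, _ = attack_rates(n_vax=0, n_ctrl=10000)

print(f"control attack rate, trial community:   {trial_ctrl:.2f}")
print(f"control attack rate, no vaccine at all: {no_vax_ctrl:.2f}")
print(f"naive VE estimate (1 - risk ratio):     {1 - trial_vax / trial_ctrl:.2f}")
```

In this toy model the trial’s control arm ends up with a noticeably lower attack rate than the fully unvaccinated counterfactual community (indirect protection), and the naive risk-ratio estimate of efficacy comes out below the true 60% per-exposure value. That gap is exactly the independence violation described above.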
Development economists have recently adopted the RCT as a means of evaluating the effectiveness of programs intended to relieve poverty or improve human well-being. On the surface, there’s nothing wrong with adopting public health methods to deal with economic problems, as most public health problems have their roots in economics. Jeff Sachs, of course, would argue that many economic problems have their roots in public health problems.
The major problem with RCTs is that while we do our best to control for all of the other factors that might impact outcomes given a particular treatment, without a trove of detailed data and prior knowledge of context and contingencies, we really have no idea whether or how some public health intervention works. Epidemiology tends to fall back on the “reasonable suspicion” argument, backing up claims of effectiveness with potentially reasonable assumptions about causal pathways. This is relatively easy when doing drug trials, where animal models and a century-plus of medical research have given us a reasonably clear picture of the pathophysiological pathways that might lie between drug and outcome.
But with issues of human behavior and economics (which is essentially a science that seeks to uncover the mysteries of human behavior), the causal pathways are much more difficult to assess and the factors that lie between intervention and outcome are far more difficult to measure. For example, assessing the effect of an education program on reproductive behavior is really, really difficult without monitoring everything that happened between the time a woman attended an NGO-sponsored event at a clinic and the time she chose to use or not use a condom. In fact, we can’t even verify that she used the condom, since we weren’t around to observe it.
But we assume, and assume to the point of faith, that our efforts did what we intended them to do.
Lant Pritchett, a Harvard economist whose work on economic measurement in developing countries I’m a great fan of, penned an interesting article on the website of the Center for Global Development seemingly questioning the merit of the RCT as a rigorous and necessary evaluation tool for poverty-alleviation development programs.
First of all, the argument that RCTs had, until recently, been used sparingly, if at all, and yet are important in achieving good outcomes sits in kind of embarrassing counterpoint with the obvious fact that lots of countries have really good outcomes. That is whether one uses the Human Development Index or the OECD Better Life Index or any social indicator—from poverty to education to health to life satisfaction—there is a similar set of countries near the top. (In the HDI the top five are Norway, Australia, USA, Netherlands, and Germany. In the OECD Better Life Index they are Australia, Sweden, Canada, Norway, and Switzerland.) No one has ever made the arguments that these countries are developed and prosperous because they used rigorous evidence—much less RCTs—in formulating policy and programs. While one might have faith that RCTs can help along the path to development, RCTs didn’t help for those that are there now.
It is very true that development in the United States occurred without the help of RCTs. In fact, malaria elimination in the United States occurred without any of the complex set of interventions that we’re so desperately selling to malaria-endemic countries. It’s even true that, despite more than a decade of research on insecticide-treated nets (ITNs), we aren’t really sure whether the declines in malaria that we’ve seen all over Sub-Saharan Africa are due to ITNs or simply due to processes associated with urbanization and development (as in the US). Actually, a lot of research is telling us that the declines in malaria might be illusory and that we are simply suffering from a paucity of accurate measurement in malaria-endemic countries.
And this is where Pritchett comes in. He’s right. Research in developing countries is inherently challenging to the point where the conclusions we draw from research are somewhat contentious at best, and the result of blind faith at worst.
But coarse and incomplete data and loose assumptions shouldn’t discourage public health (or even economic) professionals from doing research in developing countries. While I have issues with the condescending, neo-classical nature of RCTs in economics (another discussion, but can a peasant lady’s behavior in Western Kenya really be reduced to that of Homo economicus?), the truth is that policymakers don’t care about data. They care that people are making the case for action in an impassioned and convincing way. While academics should strive to be as rigorous as possible, the sell won’t happen on the strength of our complex data collection strategies and statistical methodologies. They (and the public) are convinced by impassioned calls for action.