New publication: An urban-to-rural continuum of malaria risk: new analytic approaches characterize patterns in Malawi
12 years in the making! Our new paper with partners at the University of Michigan and the Malawi College of Medicine on new approaches to defining urban and rural environments in the context of malaria risk is now out in Malaria Journal.
It was the last chapter in my dissertation to be published (all the rest were published while I was still in grad school). Short version: malaria is complicated and really local. Malaria transmits poorly in urban environments and well in rural environments. There are urban-like spaces in “rural” areas and rural-like spaces in “urban” areas, demanding a more nuanced view of what those terms really mean.
We know that malaria is a “rural” problem, but not all “rural” spaces are the same. There are “urban-like” spaces in the countryside, and “rural-like” spaces even in the largest cities in Sub-Saharan Africa. Could those spaces impact malaria risk? If so, shouldn’t we redefine what we mean by urban vs. rural to inform intervention strategies and better target resources?
Here, we combine GIS and statistical methods with a house-to-house malaria survey in Malawi to create and test a new composite index of urbanicity, and apply it to create a more nuanced risk map.
The urban–rural designation has been an important risk factor in infectious disease epidemiology. Many studies rely on a politically determined dichotomization of rural versus urban spaces, which fails to capture the complex mosaic of infrastructural, social and environmental factors driving risk. Such evaluation is especially important for Plasmodium transmission and malaria disease. To improve targeting of anti-malarial interventions, a continuous composite measure of urbanicity using spatially-referenced data was developed to evaluate household-level malaria risk from a house-to-house survey of children in Malawi.
Children from 7564 households from 8 districts in Malawi were tested for presence of Plasmodium parasites through finger-prick blood sampling and slide microscopy. A survey questionnaire was administered and latitude and longitude coordinates were recorded for each household. Distances from households to features associated with high and low levels of development (health facilities, roads, rivers, lakes) and population density were used to produce a principal component analysis (PCA)-based composite measure for all centroid locations of a fine geo-spatial grid covering Malawi. Regression methods were used to test associations of the urbanicity measure against Plasmodium infection status and to predict parasitaemia risk for all locations in Malawi.
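As a rough illustration of the approach described above (my own sketch, not the paper’s actual pipeline or data), a composite urbanicity index can be built by standardizing distance-based features for each grid cell and taking the first principal component as a continuous score. All feature values here are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated grid-cell features (one row per grid-cell centroid):
# distance (km) to the nearest road, health facility, and river/lake,
# plus population density (people per km^2). All values are made up.
n = 500
dist_road = rng.exponential(5.0, n)
dist_clinic = rng.exponential(8.0, n)
dist_water = rng.exponential(3.0, n)
pop_density = rng.lognormal(4.0, 1.0, n)

# Stack the features; log-transform the heavily skewed density variable.
X = np.column_stack([dist_road, dist_clinic, dist_water, np.log(pop_density)])

# Standardize each feature, then take the first principal component
# (via SVD) as the continuous urbanicity score for each cell.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
pc1 = Vt[0]
if pc1[3] < 0:        # orient so higher scores load positively on density
    pc1 = -pc1
urbanicity = Z @ pc1  # one score per grid cell, centred at zero
```

The score is continuous, so it can be mapped across the whole grid or entered directly into a regression against infection status, rather than forcing every cell into a political urban/rural dichotomy.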
Infection probability declined with increasing urbanicity. The new urbanicity metric was more predictive than either a governmentally defined rural/urban dichotomous variable or a population density variable. One reason for this was that 23% of cells within politically defined rural areas exhibited lower risk, more like those normally associated with “urban” locations.
Why malaria? Over-researched, over-funded, diminishing returns? Rambling on the need for student mentorship.
Last week I gave an informal lecture on survey sampling to a small group of graduate students from a number of countries. With only one exception, all of the students were working on various aspects of malaria, primarily in basic sciences. The lone non-malaria student was from Vietnam and is interested in Dengue fever.
I praised her for working on Dengue. Dengue presents a serious threat to human health in all countries where the vectors exist, but the burden of disease will be particularly felt in rapidly urbanizing areas of developing countries.
Developing countries are ill equipped to deal with Dengue: the antiquated nature of their health care systems, left over from the colonial era, means that diagnostics are mostly non-existent and drugs wholly unavailable. Any fever in most of Sub-Saharan Africa is simply diagnosed as malaria, drugs are administered and the patient is left on their own.
We have extensive experience, however, with malaria. While there are numerous challenges to reducing malaria incidence, preventing recrudescence and postponing drug resistance, the basic fact is that the best way to eliminate or control malaria is simply to make people less poor. Even in countries with holoendemic transmission, wealthier people get malaria less often than poor people, and poor people who live in wealthier areas get sick less than wealthier people in poor areas. This is known (in Game of Thrones parlance).
So, as we discussed the topic during lecture, I softly tried to encourage the students to look at other areas where they might better apply their skills. They were mostly unresponsive, which is fine. Someone has to tell them; it might as well be me.
One of the students, however, indicated that “malaria is where the money is.” I couldn’t disagree. The reason that we put so much money and effort into diseases like malaria and HIV is simply because they yield marketable products. Medications for diseases like tungiasis (jiggers) are so simple as to not be profitable, customers too poor to buy them, and governments and donors too distracted by big diseases like malaria, HIV and TB to be concerned with dumping money to provide them for free.
And this is where the problem lies. We have a self-propagating system of companies, researchers and donors, which simply float money between one another with little regard for the needs of the poorest of the poor. Breaking the cycle is difficult, but it starts with academics, who need to push students to work on neglected, overlooked or under-researched diseases. Even small grants can support small but meaningful projects.
We have reached a point where malaria funding for malaria research is yielding ever diminishing returns. Money needs to be put into programs that deliver the tools we have and make ITNs, ACTs and IRS available to the people who need them, who often have trouble getting them. Moreover, we need economic development to make people less poor in developing countries so that fewer of their babies die. Human resources in developed countries need to start focusing on emerging (or already emerged but ignored) threats like antibiotic resistance, Dengue fever, emerging zoonoses and others. That starts with us as mentors.
Infectious disease transmission dynamics and the ethics of intervention based public health research
I think a lot about ethics and ethical issues. Research in Sub-Saharan Africa presents unique risks for ethical breaches. Given income and power disparities between individuals and foreign researchers, and even between individuals and local political leaders, the possibility of coercive research is ever present. Pressure to produce can lead to unrealistic assumptions about the risks and benefits to very poor individuals. Inadequate knowledge or willful ignorance of local political issues can compromise future research activities, both by international and domestic groups.
Recently, though, an interesting situation came across my desk that included an intersection of ethics and the dynamics of infectious disease transmission.
As everyone knows, not all infectious diseases are the same. Some, like measles, impart full immunity upon exposure, whereas diseases such as malaria impart only partial immunity, requiring repeated exposures to acquire full or adequate immunity to prevent death or serious injury. Moreover, as immunity and immune reactions change over the life course, the time (age) of exposure is sometimes crucial to preventing serious disease. Polio is a great example. Exposure in infancy leads merely to diarrhea, whereas exposure at older ages can lead to debilitating paralysis.
I was thinking of a population-based intervention study which provides some sort of malaria medication to a small population in a holo-endemic area. Given the year-round nature of malaria transmission in this area, we would expect that even with a depression in symptomatic and asymptomatic cases, active transmission in the surrounding areas would lead to recrudescence within a very short time. Given the short time frame, we would assume very little interruption in the development of immunity in small children and might even see a short-term reduction of childhood mortality. Assuming that this medication presented little or no risk of serious side effects, I believe there is little reason to assume an ethical breach. A short-term reduction in malaria would suggest that the benefits far outweigh the risks.
However, conducting the same study on a very large population in the same area might have very different outcomes. Delivering a malaria medication to, say, an entire county surrounded by other areas of extremely high transmission would indicate that recrudescence is also inevitable but that the time required to return to pre-intervention levels is extended. Infectious disease transmission requires a chain of hosts. The longer that chain, the longer it will take for new hosts to become newly infected.
This could theoretically delay first infections in small children, and it is possible that we might even see a spike in childhood mortality, since the timing of initial malaria infection and the frequency of infections are crucial to preventing the worst outcomes.
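The chain-of-hosts intuition can be sketched with a toy simulation (my own illustration, not a real transmission model): infection re-invades a cleared one-dimensional strip of households from its borders, so the expected time to full recrudescence grows with the size of the treated area.

```python
import numpy as np

rng = np.random.default_rng(5)

def recrudescence_time(n_households, p_spread=0.5):
    """Toy model: a cleared 1-D strip of households flanked by infected
    neighbours at both borders. Each day, infection jumps to any
    susceptible household adjacent to an infected one with probability
    p_spread. Returns days until the whole strip is re-infected."""
    infected = np.zeros(n_households + 2, dtype=bool)
    infected[0] = infected[-1] = True  # ongoing transmission at the borders
    days = 0
    while not infected.all():
        neighbour = np.roll(infected, 1) | np.roll(infected, -1)
        frontier = ~infected & neighbour
        infected |= frontier & (rng.random(infected.size) < p_spread)
        days += 1
    return days

# Mean time to recrudescence grows with the size of the treated area,
# because the chain of hosts that must be re-infected is longer.
small_area = np.mean([recrudescence_time(10) for _ in range(50)])
large_area = np.mean([recrudescence_time(100) for _ in range(50)])
```

This is deliberately crude (no vectors, no immunity, no geography), but it captures the one point at issue: the larger the cleared area, the longer the window during which children go unexposed.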
Of course, I’m not suggesting that people should just get infected to induce immunity, but I am suggesting that a study which seeks to reduce transmission through pharmaceuticals given only intermittently (as opposed to prophylactically) consider all possible implications. Insecticide treated nets (ITNs) provide protection over time and are a form of vector control. A medication given at a single time point merely clears the parasite, but does nothing to prevent bites or kill mosquitoes.
Though I could be overthinking the issue, my worry is that ethical approvals approach mass distributions of pharmaceuticals as a one-size-fits-all issue without taking other factors such as population size and acquired immunity into account. Malaria, as a complex vector-borne disease, introduces complexities that, say, measles does not. Researchers, IRBs and ethics boards would do well to consider this complexity.
Doing research in developing countries is not easy. However, with a bit of care and planning, one can do quality work which can have an impact on how much we know about the public health in poor countries and provide quality data where data is sadly scarce.
The root of a survey, however, is sampling. A good sample does its best to represent a population of interest and can at least qualify all of the ways in which it does not. A bad sample either 1) does not represent the population (bias) and has no way to account for it, or 2) has no idea what it represents.
Without being a hater, my least favorite study design is the “school based survey.” Researchers like this design for a number of reasons.
First, it is logistically simple to conduct. If one is interested in kids, it helps to have a large number of them in one place. Visiting households individually is time consuming and expensive, and one has only a small window of opportunity to catch kids at home, since they are probably at school!
Second, since the time required to conduct a school based survey is short, researchers aren’t required to make extensive time commitments in developing countries. They can simply helicopter in for a couple of days and run away to the safety of wherever. Also, there is no need to manage large teams of survey workers over the long term. Data can be collected within a few days under the supervision of foreign researchers.
Third, school based surveys don’t require teams to lug around large diagnostic or sampling supplies (e.g. coolers for serum samples).
However, from a sampling perspective, assuming that one wishes to say something about the greater community, the “school based survey” is a TERRIBLE design.
The biases should be obvious. Schools tend to concentrate students who are similar to one another: students of similar socio-economic backgrounds, ethnicity or religion. Given the fee-based structure of most schools in most African countries, sampling from schools will necessarily exclude the absolute poorest of the poor. Moreover, if one does not go out of the way to select more privileged private schools, one will exclude the wealthy, an important control if one wants to draw conclusions about socio-economic status and health.
Further, school based surveys are terrible for studies of health since the sickest kids won’t attend school. School based surveys are biased in favor of healthy children.
So, after this long intro (assuming anyone has read this far) how does this work in practice?
I have a full dataset of socio-economic indicators for approximately 17,000 households in an area of western Kenya. We collect information on basic household assets such as possession of TVs, cars and radios, and the type of house construction (a la DHS). I boiled these down into a single continuous measure, where each household gets a wealth “score,” so that we can compare one or more households to others in the community (a la Filmer & Pritchett).
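A minimal sketch of a Filmer & Pritchett-style asset index, using simulated binary asset indicators rather than the actual Kenya data: each indicator is standardized and the first principal component serves as the continuous wealth score.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated DHS-style binary asset indicators (1 = household owns it):
# TV, car, radio, improved floor, with made-up ownership rates.
n = 1000
ownership_rates = (0.30, 0.05, 0.60, 0.40)
assets = np.column_stack(
    [rng.binomial(1, p, n) for p in ownership_rates]
).astype(float)

# Filmer & Pritchett-style index: standardize each indicator and use
# the first principal component as a continuous wealth score.
Z = (assets - assets.mean(axis=0)) / assets.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
w = eigvecs[:, -1]      # loadings of the first principal component
if w.sum() < 0:         # orient so owning assets tends to raise the score
    w = -w
wealth_score = Z @ w    # one score per household, centred at zero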
We also have a dataset of school based samples from a malaria survey comprising ~800 primary school kids. I compared the SES scores for the school based survey to the entire dataset to see whether the distribution of wealth for the school based sample matched the distribution of wealth for the entire community. If they are the same, we have no problem of socio-economic bias.
We can see, however, from the above plot that the distributions differ. The distribution of SES scores for the school based survey is far more bottom heavy than that of the greater community; the school based survey excludes wealthier households. The mean wealth score for the school based survey is well under that of the community as a whole (-.025 vs. -.004, t=-19.32, p<.0001).
Just from this, we can see that the school based survey is likely NOT representative of the community, and that the school based sample is far more homogeneous than the community from which the kids are drawn.
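The comparison can be run as follows on simulated scores (illustrative numbers, not the actual survey data); a Welch t-test compares means, while a Kolmogorov-Smirnov test compares the whole distributions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated wealth scores: the full community, and a school-based
# subsample drawn disproportionately from poorer households.
community = rng.normal(0.0, 1.0, 17000)
school = rng.normal(-0.4, 0.7, 800)

# Welch's t-test compares means without assuming equal variances.
t, p = stats.ttest_ind(school, community, equal_var=False)

# Kolmogorov-Smirnov compares the full shapes of the distributions,
# which matters when the subsample is "bottom heavy" rather than
# merely shifted.
ks_stat, ks_p = stats.ks_2samp(school, community)
```

The KS test is worth running alongside the t-test precisely because two samples can share a mean while differing badly in spread or shape, which is the kind of homogeneity problem a school based sample introduces.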
Researchers find working with continuous measures of SES unwieldy and difficult to present. To solve this problem, they will often place households into socio-economic “classes” by dividing the dataset into quantiles. These represent households ranging from “ultra poor” to “wealthy.” A problem with samples is that these classifications may not be the same over the range of samples, and only some of them will accurately reflect the true population-level classification.
In this case, when looking at a table of how these classes correspond to one another, we find the following:
Assuming that these SES “classes” are at all meaningful (another discussion), we can see that for all but the wealthiest households, more than 80% of households have been misclassified! Further, due to the sensitivity of the method (multiple correspondence analysis) used to create the composite, 17 of the households classified as “ultra poor” in the full survey have suddenly become “wealthy.”
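A hypothetical sketch of how such a misclassification table can be produced: compute quintile cut-points separately for the full sample and for a biased subsample of it, then cross-tabulate the two sets of class assignments (simulated data, not the survey described above).

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated scores: a full community of 17,000 households and a
# subsample of 800 drawn only from its poorer part.
full = rng.normal(0.0, 1.0, 17000)
sub_idx = rng.choice(np.where(full < 0.5)[0], size=800, replace=False)
sub = full[sub_idx]

def quintile_class(scores):
    """Assign classes 0 ("ultra poor") to 4 ("wealthy") using quintile
    cut-points computed from the scores themselves."""
    edges = np.quantile(scores, [0.2, 0.4, 0.6, 0.8])
    return np.searchsorted(edges, scores)

true_class = quintile_class(full)[sub_idx]  # classes from the full survey
sub_class = quintile_class(sub)             # classes from the subsample alone

# Cross-tabulate full-survey classes (rows) vs. subsample classes (cols).
crosstab = np.zeros((5, 5), dtype=int)
for a, b in zip(true_class, sub_class):
    crosstab[a, b] += 1
misclassified = np.mean(true_class != sub_class)
```

Because the subsample’s quintile cut-points are computed from a truncated distribution, they sit well below the community’s, so a large share of households land in a “wealthier” class than the full survey would assign them.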
Now, whether these misclassifications impact the results of the study remains to be seen. It may be that they do not. It also may be the case that investigators may not be interested in drawing conclusions about the community and may only want to say something about children who attend particular types of schools (though this distinction is often vague in practice). Regardless, sampling matters. A properly designed survey can improve data quality vastly.
I was just reading a post from development economist Ed Carr’s blog, where he reflects on a book he wrote almost five years ago. Reflection is a pretty depressing exercise for any academic, but Carr seems to remain positive about his book.
He sums it up in three points:
“1. Most of the time, we have no idea what the global poor are doing or why they are doing it.
2. Because of this, most of our projects are designed for what we think is going on, which rarely aligns with reality
3. This is why so many development projects fail, and if we keep doing this, the consequences will get dire”
Well, yeah. This is a huge problem. In academics, we filter the experiences of the poor through a lens of academic frameworks, which we haphazardly impose, often with no consultation with our subjects. Granted, this is likely inevitable, but when designing public health interventions, it helps to have some idea of what the poorest of the poor do and why, or our efforts are doomed to fail.
I remember a set of arguments a few years back about bed nets. Development and public health people were all upset because people were seen using nets for fishing. The reaction, particularly from in-country workers, was that poor people are stupid and will shoot themselves in the foot at any opportunity.
I couldn’t really understand the condescension and was rather fascinated that people were taking a new product and adapting it to their own needs. Business would see this as an opportunity and would seek to figure out why people were using nets for things other than malaria prevention and attempt to develop some new strategy to satisfy both needs (fishing and malaria prevention) at once. Academics simply weren’t interested.
To work with the poor, we have to understand them and understanding them requires that we respect their agency. If we don’t do this, we risk alienating the people we seek to help.
New Publication (from me): “Insecticide-treated net use before and after mass distribution in a fishing community along Lake Victoria, Kenya: successes and unavoidable pitfalls”
This was years in the making, but it is finally out in Malaria Journal and ready for the world’s perusal. Done.
Insecticide-treated net use before and after mass distribution in a fishing community along Lake Victoria, Kenya: successes and unavoidable pitfalls
Peter S Larson, Noboru Minakawa, Gabriel O Dida, Sammy M Njenga, Edward L Ionides and Mark L Wilson
Insecticide-treated nets (ITNs) have proven instrumental in the successful reduction of malaria incidence in holoendemic regions during the past decade. As distribution of ITNs throughout sub-Saharan Africa (SSA) is being scaled up, maintaining maximal levels of coverage will be necessary to sustain current gains. The effectiveness of mass distribution of ITNs requires careful analysis of successes and failures if impacts are to be sustained over the long term.
Mass distribution of ITNs to a rural Kenyan community along Lake Victoria was performed in early 2011. Surveyors collected data on ITN use both before and one year following this distribution. At both times, household representatives were asked to provide a complete accounting of ITNs within the dwelling, the location of each net, and the ages and genders of each person who slept under that net the previous night. Other data on household material possessions, education levels and occupations were recorded. Information on malaria preventative factors such as ceiling nets and indoor residual spraying was noted. Basic information on malaria knowledge and health-seeking behaviours was also collected. Patterns of ITN use before and one year following net distribution were compared using spatial and multi-variable statistical methods. Associations of ITN use with various individual, household, demographic and malaria related factors were tested using logistic regression.
After infancy (<1 year), ITN use sharply declined until the late teenage years then began to rise again, plateauing at 30 years of age. Males were less likely to use ITNs than females. Prior to distribution, socio-economic factors such as parental education and occupation were associated with ITN use. Following distribution, ITN use was similar across social groups. Household factors such as availability of nets and sleeping arrangements still reduced consistent net use, however.
Comprehensive, direct-to-household, mass distribution of ITNs was effective in rapidly scaling up coverage, with use being maintained at a high level at least one year following the intervention. Free distribution of ITNs through direct-to-household distribution method can eliminate important constraints in determining consistent ITN use, thus enhancing the sustainability of effective intervention campaigns.
In 2012, my friend Akira and I went hiking in the mountains outside Osaka. It was a pretty easy hike, but on the way down Akira twisted his ankle and sort of lumbered down the rest of the trail. After a few days, the pain got worse and he had to cancel an upcoming research trip to Vanuatu. He asked me to go in his place and offered to pay my expenses. I was due to go on a couple of other research trips that summer so I couldn’t commit, but the only other gringo on the trip begged me and at the last minute I decided to go.
Long story short: it was a crazy set of interpersonal dynamics, we suffered bacterial infections, got stuck on an island for ten days because a plane needed to be repaired, one of us didn’t eat or drink water for ten days, much fish was eaten (at least by those of us who were eating), much kava was drunk and stories were told. Our diet alternated between delicious seafood and fresh fruit, and ramen noodles over rice.
It was a surreal experience. I lost ~16 pounds, down from 175 to 159, came back with numerous skin infections and was a general physical wreck for months, more so than usual. It was challenging, but an experience I am unlikely to forget. I hope to go back one day.
The paper can be found here.
Pictures from Vanuatu (back when I took pictures) are here.
Insecticide-treated nets (ITNs) are an integral piece of any malaria elimination strategy, but compliance remains a challenge and determinants of use vary by location and context. The Health Belief Model (HBM) is a tool to explore perceptions and beliefs about malaria and ITN use. Insights from the model can be used to increase coverage to control malaria transmission in island contexts.
A mixed methods study consisting of a questionnaire and interviews was carried out in July 2012 on two islands of Vanuatu: Ambae Island where malaria transmission continues to occur at low levels, and Aneityum Island, where an elimination programme initiated in 1991 has halted transmission for several years.
For most HBM constructs, no significant difference was found in the findings between the two islands: the fear of malaria (99%), severity of malaria (55%), malaria-prevention benefits of ITN use (79%) and willingness to use ITNs (93%). ITN use the previous night on Aneityum (73%) was higher than that on Ambae (68%) though not statistically significant. Results from interviews and group discussions showed that participants on Ambae tended to believe that risk was low due to the perceived absence of malaria, while participants on Aneityum believed that they were still at risk despite the long absence of malaria. On both islands, seasonal variation in perceived risk, thermal discomfort, costs of replacing nets, a lack of money, a lack of nets, nets in poor condition and the inconvenience of hanging had negative influences, while free mass distribution with awareness campaigns and the malaria-prevention benefits had positive influences on ITN use.
The results on Ambae highlight the challenges of motivating communities to engage in elimination efforts when transmission continues to occur, while the results from Aneityum suggest the possibility of continued compliance to malaria elimination efforts given the threat of resurgence. Where a high degree of community engagement is possible, malaria elimination programmes may prove successful.
In my seminal paper, “Distance to health services influences insecticide-treated net possession and use among six to 59 month-old children in Malawi,” I indicated that Euclidean (straight line) measures of distance were just as good as more complicated, network based measures.
I didn’t include the graph showing how correlated the two were, but I wish I had, and I can’t find it here on my computer.
Every time I’ve done presentations of research of the association of distances to various things and health outcomes, someone inevitably asks why I didn’t use a more complex measure of actual travel paths. The idea is that no one walks in a straight line anywhere, but rather follows a road network, or even utilizes a number of transportation options which might be lost in a simple measure.
I always respond that a straight line distance is as good as any other when investigating relationships on a coarse scale. Inevitably, audiences are never convinced.
A new paper came out today, “Methods to measure potential spatial access to delivery care in low- and middle-income countries: a case study in rural Ghana” which compared the Euclidean measure with a number of more complex measurements.
The conclusion confirmed what I already knew: the Euclidean measure is just as good in most cases, and the pain and cost of producing sexy and complicated ways of calculating distance just isn’t worth it.
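The kind of agreement at issue can be illustrated with simulated distances (a toy under stated assumptions, not the paper’s data): if network distance is roughly the straight-line distance inflated by a bounded detour factor, the two measures rank households almost identically, which is all a coarse-scale association analysis needs.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Simulated household-to-facility distances: straight-line (Euclidean),
# and a "network" distance modelled as the straight line inflated by a
# household-specific detour factor (roads rarely run straight).
euclid = rng.exponential(4.0, 1000)    # km, made-up values
detour = rng.uniform(1.2, 1.6, 1000)   # assumed road-network detour range
network = euclid * detour

# Spearman's rho asks only whether the two measures rank households
# the same way, which is what matters for coarse-scale associations.
rho, p = stats.spearmanr(euclid, network)
```

When the rank correlation is this high, any regression on distance will produce essentially the same inference whichever measure is used; the measures only diverge where detour factors vary wildly, e.g. across rivers or mountain barriers.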
It’s a pretty decent paper, but I wish they had put some graphs in to illustrate their points. It would be good to see exactly where the measures disagree.
Access to skilled attendance at childbirth is crucial to reduce maternal and newborn mortality. Several different measures of geographic access are used concurrently in public health research, with the assumption that sophisticated methods are generally better. Most of the evidence for this assumption comes from methodological comparisons in high-income countries. We compare different measures of travel impedance in a case study in Ghana’s Brong Ahafo region to determine if straight-line distance can be an adequate proxy for access to delivery care in certain low- and middle-income country (LMIC) settings.
We created a geospatial database, mapping population location in both compounds and village centroids, service locations for all health facilities offering delivery care, land-cover and a detailed road network. Six different measures were used to calculate travel impedance to health facilities (straight-line distance, network distance, network travel time and raster travel time, the latter two both mechanized and non-mechanized). The measures were compared using Spearman rank correlation coefficients, absolute differences, and the percentage of the same facilities identified as closest. We used logistic regression with robust standard errors to model the association of the different measures with health facility use for delivery in 9,306 births.
Non-mechanized measures were highly correlated with each other, and identified the same facilities as closest for approximately 80% of villages. Measures calculated from compounds identified the same closest facility as measures from village centroids for over 85% of births. For 90% of births, the aggregation error from using village centroids instead of compound locations was less than 35 minutes and less than 1.12 km. All non-mechanized measures showed an inverse association with facility use of similar magnitude, an approximately 67% reduction in odds of facility delivery per standard deviation increase in each measure (OR = 0.33).
Different data models and population locations produced comparable results in our case study, thus demonstrating that straight-line distance can be reasonably used as a proxy for potential spatial access in certain LMIC settings. The cost of obtaining individually geocoded population location and sophisticated measures of travel impedance should be weighed against the gain in accuracy.
Was reading Chris Blattman’s list of books that development people should read but don’t and found this in the Amazon description of “The Anti-Politics Machine: Development, Depoliticization, and Bureaucratic Power in Lesotho.”
Development, it is generally assumed, is good and necessary, and in its name the West has intervened, implementing all manner of projects in the impoverished regions of the world. When these projects fail, as they do with astonishing regularity, they nonetheless produce a host of regular and unacknowledged effects, including the expansion of bureaucratic state power and the translation of the political realities of poverty and powerlessness into “technical” problems awaiting solution by “development” agencies and experts.
Note that I do not harbor any ill will toward development or even, as a general rule, “technical solutions.” Having been involved with bed net distributions and having watched the outcomes of reproductive health interventions, for example, I can say that there are many positive outcomes of development projects. In my area, fewer kids are dying and women are becoming pregnant a whole lot less, decreasing the risk of maternal mortality.
Disclaimers aside, there is no doubt that development projects often fail for a number of reasons, the first of which is that leaders have no interest in seeing them succeed. While leaders are indifferent to the outcomes, they happily take on the power that comes with the projects, embracing bureaucratic reforms which are mostly just expansions of power at all levels of government.
This wouldn’t necessarily be a bad thing, except that African countries never embraced many of the protections of individual rights which restrict the powers of the state. Independence movements in much of Africa were predicated on an eventual return of power to the majority. Not many (none?) of these movements sought to protect the rights of the minority, much less the individual. Thus, there is little restriction on the types of rules which may be created, and since many of these development projects influence policy, they unwittingly feed the autocracy machine.
In the past, surveys were done on paper, either through a designed questionnaire or by someone frantically writing down interview responses. When computers came around, people would be hired to type in responses for later analysis.
Nowadays, with the advent of cheap and portable computing, research projects are rapidly moving toward fully digital methods of data collection. Tablet computers are easy to operate, can be cheaply replaced, and now can access the internet for easy uploading of data from the field.
Surveyors like them because large teams can be spread out over a wide space, data can be completely standardized and the tedious process of data entry can be avoided.
Of interest to me, however, is whether the technology is influencing the nature of the responses given. That is, will someone provide the same set of responses in a survey using digital data collection methods as in a paper survey?
Recently, we tried tablet-based software for a small project on livestock possession and management at Mbita Point in Western Kenya. I intended it as a test to see if a particular software package might be a good fit for another project I’m working on (the one that’s paying the bills).
We had only limited success. The survey workers found the tablets clunky, and a number of problems with the Android operating system made them more trouble than the survey was actually worth. Of interest, though, was how the technology distracted the enumerators from their principal task, which was to collect data.
Enumerators would become so wrapped up in trying to navigate the various buttons and options of the software that they couldn’t effectively concentrate on performing the survey. Often they appeared to skip questions out of frustration, or would frantically select one of the many options in the hope of moving on to the next one.
In a survey of more than 100 questions, the process started taking far more time than households were willing to give. We eventually had to abandon the software and revert to a paper based method.
Surveys went from lasting more than an hour to taking under 30 minutes. Workers were more confident and had more time to interact with the respondents. Respondents had more of an opportunity to ask questions and consider the meaning of what they were being asked. They offered far more information than we expected and felt that they were participating in the survey as a partner and not just as a passive victim.
One of our enumerators noted that people react differently to a surveyor collecting data on the tablets than with paper. She described collecting data with technology as being “self absorbed” and alienating to the respondents. Collecting data on paper, however, was seen as a plus. “They can see me writing down what they say and feel like their words are important.”
I’m thinking that the nature of the responses themselves might be different as well. Particularly with complex questions of health and disease, surveyors often have to explain the question and give the respondent a chance to ask for further clarification. Technology appears to inhibit this process, perhaps compromising the chance for a truly reasoned response.
While I am absolutely not opposed to the use of technology in surveys, I think that the survey strategy has to be properly thought through and the challenges considered. At the same time, however, data collection is a team effort and requires a proper rapport between community members and surveyors who often know each other.
Is technology restricting our ability to gather good data? Could the use of technology even impact the nature of the responses by pushing them in ways which really only tell us what we want to believe rather than what actually exists?