Tunga penetrans is native to South America and was introduced to West Africa through the slave trade. In the mid-19th century it arrived aboard an English shipping vessel, spread along trade routes, and is now found throughout the continent.
Bacteria opportunistically invade the bite site, and super-infections (multiple pathogens) are common. Victims suffer from itching and pain, and infestations with multiple fleas are common. Because of the location of the bite, people often have trouble walking, and because of the disgusting nature of the infection, victims are stigmatized and marginalized. Worse yet, the site can become gangrenous; auto-amputation of digits and feet, and eventually death, are not uncommon.
Legislators in both Kenya and Uganda have introduced bills in the past calling for the arrest of people suffering from jiggers. Of course, these ridiculous bills come with no public health actions to control the disease.
Jiggers are entirely preventable and treatable, either through surgical excision or through various medications, but the risk factors are mostly unknown and the data contradictory and largely inconclusive.
It sometimes occurs in travelers, where it is easily treated in a clinic on an outpatient basis, but in poor communities it is a debilitating infection. Thus, it is not taken seriously by international public health groups, who choose to focus on big issues like HIV and malaria.
Jiggers are a classic example of a neglected tropical disease: they devastate the poorest of the poor but get almost no attention from donors or the international press.
We gathered some data on jiggers back in 2011 along the coast of Kenya. These results are preliminary, but I was drawn to the attached map.
Animals of various species have been implicated as reservoirs for the disease, most notably pigs and dogs. Less understood is the role of wildlife in maintaining transmission. On the map below, the large yellow dots represent cases. Note that they are nearly all located along the Shimba Hills Wildlife Reserve. I calculated the distance of each household to the park's border (see the funny graph at the bottom) and found a graded relationship between distance and jiggers infection. Beyond 5 km from the park, the risk of jiggers is nearly zero.
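The distance calculation itself is simple: treat the digitized park boundary as a polyline and take each household's minimum distance to any of its segments. Here's a minimal sketch in Python; the boundary and household coordinates below are made up for illustration (real data would use projected coordinates such as UTM metres):

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to the line segment a-b (planar coordinates)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to the endpoints
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def distance_to_boundary(household, boundary):
    """Minimum distance from a household to a park boundary polyline."""
    return min(point_segment_distance(household, a, b)
               for a, b in zip(boundary, boundary[1:]))

# Hypothetical park boundary and household locations, in metres
boundary = [(0, 0), (4000, 500), (8000, 0)]
households = {"hh1": (1000, 800), "hh2": (3000, 6000)}
for name, loc in households.items():
    print(name, round(distance_to_boundary(loc, boundary)), "m")
```

With each household's distance in hand, the "graded relationship" is just infection risk tabulated by distance band (e.g. 0-1 km, 1-2 km, and so on out past 5 km).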
What does this mean? I have ruled out domesticated animals, at least as a primary reservoir. People in this area tend to own the same types and numbers of animals. The area is Islamic, so there are no pigs here, but dogs are found everywhere. Despite this, there are distinct spatial patterns associated with the park. Note that all of the cases are found between the park's border and a set of lakes, perhaps implying that certain wild animals travel there for water and food.
The ecology of jiggers is very poorly understood and, as with many pathogens (Ebola, for example), wildlife probably play an important role.
It’s worth paying me a lot of money to study it.
In my seminal paper, “Distance to health services influences insecticide-treated net possession and use among six to 59 month-old children in Malawi,” I indicated that Euclidean (straight-line) measures of distance were just as good as more complicated, network-based measures.
I didn’t include the graph showing how correlated the two were, but I wish I had, and I can’t find it on my computer.
Every time I’ve presented research on associations between distance to various services and health outcomes, someone inevitably asks why I didn’t use a more complex measure of actual travel paths. The idea is that no one walks anywhere in a straight line, but rather follows a road network, or even uses a number of transportation options that might be lost in a simple measure.
I always respond that a straight line distance is as good as any other when investigating relationships on a coarse scale. Inevitably, audiences are never convinced.
A new paper came out today, “Methods to measure potential spatial access to delivery care in low- and middle-income countries: a case study in rural Ghana,” which compared the Euclidean measure with a number of more complex measurements.
The conclusion confirmed what I already knew: the Euclidean measure is just as good in most cases, and the pain and cost of producing sexy and complicated ways of calculating distance just isn’t worth it.
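To see why the two measures tend to agree, it helps to compare them on a toy example. The sketch below (entirely hypothetical village coordinates and road network) computes straight-line distances and shortest-path network distances to a single clinic, then checks their Spearman rank correlation, the same statistic the paper uses:

```python
import heapq
import math

def dijkstra(graph, source):
    """Shortest network distances from source over a weighted adjacency dict."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, math.inf):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def spearman(x, y):
    """Spearman rank correlation (assumes no ties, for brevity)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical villages (planar coords, clinic at origin) and a toy road network
coords = {"v1": (2, 1), "v2": (5, 0), "v3": (3, 5), "v4": (6, 5)}
roads = {
    "clinic": {"v1": 2.5, "v2": 5.2},
    "v1": {"clinic": 2.5, "v3": 3.8},
    "v2": {"clinic": 5.2, "v4": 5.6},
    "v3": {"v1": 3.8, "v4": 3.3},
    "v4": {"v2": 5.6, "v3": 3.3},
}
net = dijkstra(roads, "clinic")
villages = ["v1", "v2", "v3", "v4"]
euclid = [math.hypot(*coords[v]) for v in villages]   # straight-line to clinic
network = [net[v] for v in villages]                  # shortest path on roads
print(round(spearman(euclid, network), 2))  # → 1.0: identical ranking here
```

On a coarse scale, what usually matters for the analysis is the *ranking* of households or villages by distance, and road networks rarely reorder that ranking enough to change the result.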
It’s a pretty decent paper, but I wish they had put some graphs in to illustrate their points. It would be good to see exactly where the measures disagree.
Access to skilled attendance at childbirth is crucial to reduce maternal and newborn mortality. Several different measures of geographic access are used concurrently in public health research, with the assumption that sophisticated methods are generally better. Most of the evidence for this assumption comes from methodological comparisons in high-income countries. We compare different measures of travel impedance in a case study in Ghana’s Brong Ahafo region to determine if straight-line distance can be an adequate proxy for access to delivery care in certain low- and middle-income country (LMIC) settings.
We created a geospatial database, mapping population location in both compounds and village centroids, service locations for all health facilities offering delivery care, land-cover and a detailed road network. Six different measures were used to calculate travel impedance to health facilities (straight-line distance, network distance, network travel time and raster travel time, the latter two both mechanized and non-mechanized). The measures were compared using Spearman rank correlation coefficients, absolute differences, and the percentage of the same facilities identified as closest. We used logistic regression with robust standard errors to model the association of the different measures with health facility use for delivery in 9,306 births.
Non-mechanized measures were highly correlated with each other, and identified the same facilities as closest for approximately 80% of villages. Measures calculated from compounds identified the same closest facility as measures from village centroids for over 85% of births. For 90% of births, the aggregation error from using village centroids instead of compound locations was less than 35 minutes and less than 1.12 km. All non-mechanized measures showed an inverse association with facility use of similar magnitude, an approximately 67% reduction in odds of facility delivery per standard deviation increase in each measure (OR = 0.33).
Different data models and population locations produced comparable results in our case study, thus demonstrating that straight-line distance can be reasonably used as a proxy for potential spatial access in certain LMIC settings. The cost of obtaining individually geocoded population location and sophisticated measures of travel impedance should be weighed against the gain in accuracy.
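The headline estimate, an odds ratio of 0.33 per standard-deviation increase in travel impedance, is easier to grasp when converted to probabilities. A quick sketch of the arithmetic (the 0.70 baseline probability below is an assumed, illustrative figure, not from the paper):

```python
def odds_ratio_effect(or_per_sd, sds):
    """Multiplicative change in odds after `sds` standard deviations."""
    return or_per_sd ** sds

def prob_after(baseline_prob, or_per_sd, sds):
    """Facility-delivery probability after shifting distance by `sds` SDs."""
    odds = baseline_prob / (1 - baseline_prob) * odds_ratio_effect(or_per_sd, sds)
    return odds / (1 + odds)

# Assumed 70% baseline probability of facility delivery
print(round(prob_after(0.70, 0.33, 1), 2))  # one SD farther from care
print(round(prob_after(0.70, 0.33, 2), 2))  # two SDs farther
```

Because odds ratios compound multiplicatively, each additional standard deviation of distance cuts the odds by the same factor, which is what produces the steep distance gradient.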
In the past, surveys were done on paper, either through a designed questionnaire or by someone frantically writing down interview responses. When computers came around, people would be hired to type in responses for later analysis.
Nowadays, with the advent of cheap and portable computing, research projects are rapidly moving toward fully digital methods of data collection. Tablet computers are easy to operate, can be cheaply replaced, and now can access the internet for easy uploading of data from the field.
Surveyors like them because large teams can be spread out over a wide space, data can be completely standardized and the tedious process of data entry can be avoided.
Of interest to me, however, is whether the technology influences the nature of the responses given. That is, will someone provide the same set of responses in a survey using digital data collection methods as in a paper survey?
Recently, we tried a tablet-based survey package for a small project on livestock possession and management on Mbita Point in Western Kenya. I intended it as a test to see if a particular software package might be a good fit for another project I'm working on (the one that's paying the bills).
We had only limited success. The survey workers found the tablets clunky, and a number of problems with the Android operating system made them more trouble than the survey was worth. Of interest, though, was how the technology distracted the enumerators from their principal task, which was to collect data.
Enumerators would become so wrapped up in navigating the various buttons and options of the software that they couldn't effectively concentrate on performing the survey. Often they appeared to skip questions out of frustration, or would frantically select one of the many options in the hope of moving on to the next question.
In a survey of more than 100 questions, the process started taking far more time than households were willing to give. We eventually had to abandon the software and revert to a paper based method.
Surveys went from lasting more than an hour to taking under 30 minutes. Workers were more confident and had more time to interact with the respondents. Respondents had more opportunity to ask questions and consider the meaning of what they were being asked. They offered far more information than we expected and felt that they were participating in the survey as partners, not just as passive subjects.
One of our enumerators noted that people react differently to a surveyor collecting data on the tablets than with paper. She described collecting data with technology as being “self absorbed” and alienating to the respondents. Collecting data on paper, however, was seen as a plus. “They can see me writing down what they say and feel like their words are important.”
I'm thinking that the nature of the responses themselves might differ as well. Particularly with complex questions of health and disease, surveyors often have to explain the question and give the respondent a chance to ask for further clarification. Technology appears to inhibit this process, perhaps compromising the chance for a truly reasoned response.
While I am absolutely not opposed to the use of technology in surveys, I think that the survey strategy has to be properly thought through and the challenges considered. At the same time, however, data collection is a team effort and requires a proper rapport between community members and surveyors who often know each other.
Is technology restricting our ability to gather good data? Could the use of technology even shape the nature of the responses, pushing them in ways that tell us only what we want to believe rather than what actually exists?
It’s a reasonable question to which no one really has an answer. I work in a field site located on Lake Victoria, the office of which is based out of the International Centre for Insect Physiology and Ecology (ICIPE) station on Mbita Point.
We do malaria field surveys and have a large health and demographic surveillance system that has monitored births, deaths, migration and health events of nearly 50,000 people over the past six years.
The goals of the project are to monitor changes in demographics, outbreaks and changes in the dynamics of the transmission of infectious diseases and gauge the effectiveness of interventions.
While I view those as scientifically important, I don’t think that people on the ground experience any immediate benefit from scientific research activities. In fact, I’m pretty sure that, unless they’re getting a free bednet, it’s mostly an annoyance. Of course, we appreciate their cooperation and they are free to tell us to bugger off at anytime.
We are seeing rapid declines in malaria incidence, infant mortality and fertility in the communities we study. This is, of course, cause for celebration. Fewer kids are dying, and people are having fewer of them.
In fact, the shift in the age distribution from 2011 to 2012 was so dramatic that we thought it an aberration in the data: the mean age of 12,000 people rose nearly two years from the beginning of 2011 to the latter part of 2012. Old people died, and fewer babies were born to replace them, shifting the age distribution upward. That is cause for celebration in an area where women normally have anywhere from 5 to 10 children, who often end up malnourished, poorly housed and uneducated.
But we have to ask ourselves, how much of this is representative of trends in communities similar to the ones we study and how much is directly influenced by the presence of the research station itself?
A recent article in Malaria Journal documents the positive impacts that a research facility had on the local community:
To make the community a real partner in the centre’s activities, a tacit agreement was made that priority would be given to local people, in a competitive manner, for all non-professional jobs (construction workers, drivers, cleaners, field workers, data clerks, and others). Of the 254 people employed at the CRUN, about one-third come from Nanoro. This has strengthened the sense of ownership of the centre’s activities by the community. Through the modest creation of new jobs, CRUN makes a substantial contribution to reducing poverty in the community. In addition, staff members residing in Nanoro contribute to the micro-economy there.
Another crucial benefit for Nanoro and CRUN stemming from their productive engagement was electrification for the area. This was made possible by the mayor of Nanoro leading the negotiations for extending the national electrical grid to the CRUN, and with it, to the village of Nanoro. Electrification spurred a lot of economic activity and social amenities that enhance the wellbeing of the community, such as: (1) improved water supply through the use of electricity instead of a generator; (2) ability to use electrical devices, such as fans during the hot season (when temperatures can reach 45-47°C), lighting so students can study at night, the use of refrigeration to safely store food and the extension of business hours past sunset.
Health care services have been improved through CRUN’s new microbiology laboratory. Before this laboratory was established, local patients had to travel about 100 km to the capital city, Ouagadougou, for the service.
This agrees with my experience on Lake Victoria. The presence of the research facility (originally built in the 1960s) and the subsequent scale-up of research activities have been transformative for the area. As more and more people have moved in, a bridge to Rusinga Island has been built, two new ferry routes have been added, the existing ferries have been upgraded, power has been extended to the area and finally, after years of waiting, a paved road has been built from Kisumu to Mbita Point.
…which brings me back to my initial question. It is clear that building research facilities can be a major spur for development and economic activity in a previously desolate and marginalized area. In the case of Mbita Point, it is possible that these gains can be sustained even after an eventual cessation of research activities and a strangling of funding. In this sense, field research projects are doing the world at least some good.
However, the gains these communities are experiencing really have little to do with the research projects themselves and more to do with the influx of employment and infrastructure that comes with research stations and research projects. This is uncontroversial, and I'm sure the locals appreciate it.
But the quality and goals of the research need to be assessed. Are the results we are seeing truly representative of communities that resemble the Mbita Point of the past? Are we unnecessarily influencing the outcomes of the research and then perhaps inappropriately generalizing them to contexts that bear little resemblance to our target communities? From a scientific perspective, this is troubling.
Of greater concern, however, is the possibility that we are claiming gains against malaria are being made when, in fact, morbidity and mortality in communities we haven't looked at are increasing. This could prompt a dangerous shift away from scaled-up ITN distributions or even a total reduction in international funding. If that happens, kids will die.