In my seminal paper, “Distance to health services influences insecticide-treated net possession and use among six to 59 month-old children in Malawi,” I indicated that Euclidean (straight-line) measures of distance were just as good as more complicated, network-based measures.
I didn’t include the graph showing how correlated the two were, but I wish I had, and now I can’t find it on my computer.
Every time I’ve presented research on the association between distances to various things and health outcomes, someone inevitably asks why I didn’t use a more complex measure of actual travel paths. The idea is that no one walks anywhere in a straight line, but rather follows a road network, or even uses a number of transportation options that might be lost in a simple measure.
I always respond that a straight line distance is as good as any other when investigating relationships on a coarse scale. Inevitably, audiences are never convinced.
A new paper came out today, “Methods to measure potential spatial access to delivery care in low- and middle-income countries: a case study in rural Ghana,” which compared the Euclidean measure with a number of more complex measurements.
The conclusion confirmed what I already knew: the Euclidean measure is just as good in most cases, and the pain and cost of producing sexy, complicated ways of calculating distance just isn’t worth it.
It’s a pretty decent paper, but I wish they had put some graphs in to illustrate their points. It would be good to see exactly where the measures disagree.
Access to skilled attendance at childbirth is crucial to reduce maternal and newborn mortality. Several different measures of geographic access are used concurrently in public health research, with the assumption that sophisticated methods are generally better. Most of the evidence for this assumption comes from methodological comparisons in high-income countries. We compare different measures of travel impedance in a case study in Ghana’s Brong Ahafo region to determine if straight-line distance can be an adequate proxy for access to delivery care in certain low- and middle-income country (LMIC) settings.
We created a geospatial database, mapping population location in both compounds and village centroids, service locations for all health facilities offering delivery care, land-cover and a detailed road network. Six different measures were used to calculate travel impedance to health facilities (straight-line distance, network distance, network travel time and raster travel time, the latter two both mechanized and non-mechanized). The measures were compared using Spearman rank correlation coefficients, absolute differences, and the percentage of the same facilities identified as closest. We used logistic regression with robust standard errors to model the association of the different measures with health facility use for delivery in 9,306 births.
Non-mechanized measures were highly correlated with each other, and identified the same facilities as closest for approximately 80% of villages. Measures calculated from compounds identified the same closest facility as measures from village centroids for over 85% of births. For 90% of births, the aggregation error from using village centroids instead of compound locations was less than 35 minutes and less than 1.12 km. All non-mechanized measures showed an inverse association with facility use of similar magnitude, an approximately 67% reduction in odds of facility delivery per standard deviation increase in each measure (OR = 0.33).
Different data models and population locations produced comparable results in our case study, thus demonstrating that straight-line distance can be reasonably used as a proxy for potential spatial access in certain LMIC settings. The cost of obtaining individually geocoded population location and sophisticated measures of travel impedance should be weighed against the gain in accuracy.
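The paper’s core comparison — ranking locations by straight-line versus road-network distance and checking how well the rankings agree — can be sketched in a few lines. Everything below is illustrative, not the paper’s data or code: the facility and village coordinates are invented, and the “network” distances are simulated by inflating straight-line distances with a random detour factor instead of being computed from a real road network.

```python
import math
import random

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle (straight-line) distance in km between two lat/lon points."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def spearman(x, y):
    """Spearman rank correlation (no tie correction; fine for continuous data)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

random.seed(1)
facility = (-0.45, 34.20)  # hypothetical clinic location
villages = [(-0.45 + random.uniform(-0.3, 0.3),
             34.20 + random.uniform(-0.3, 0.3)) for _ in range(50)]

euclid = [haversine_km(v[0], v[1], facility[0], facility[1]) for v in villages]
# Simulated road-network distances: straight line inflated by a random detour factor.
network = [d * random.uniform(1.2, 1.6) for d in euclid]

print(f"Spearman rho = {spearman(euclid, network):.2f}")
```

With distances spread over tens of kilometres, a modest multiplicative detour barely disturbs the ranking, which is essentially why the cheap straight-line measure holds up as a proxy.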
In the past, surveys were done on paper, either through a designed questionnaire or by someone frantically writing down interview responses. When computers came around, people would be hired to type in responses for later analysis.
Nowadays, with the advent of cheap and portable computing, research projects are rapidly moving toward fully digital methods of data collection. Tablet computers are easy to operate, can be cheaply replaced, and now can access the internet for easy uploading of data from the field.
Surveyors like them because large teams can be spread out over a wide space, data can be completely standardized and the tedious process of data entry can be avoided.
Of interest to me, however, is whether the technology is influencing the nature of the responses given. That is, will someone provide that same set of responses in a survey using digital data collection methods as in a paper survey?
Recently, we tried a tablet-based survey package for a small project on livestock possession and management on Mbita Point in Western Kenya. I intended it as a test to see if a particular software package might be a good fit for another project I’m working on (the one that’s paying the bills).
We had only limited success. The survey workers found the tablets clunky, and a number of problems with the Android operating system made them more trouble than the survey was worth. Of interest, though, was how the technology distracted the enumerators from their principal task, which was to collect data.
Enumerators would become so wrapped up in trying to navigate the various buttons and options of the software that they couldn’t effectively concentrate on performing the survey. Often they appeared to skip questions out of frustration, or would just frantically select one of the many options in the hope of moving on to the next question.
In a survey of more than 100 questions, the process started taking far more time than households were willing to give. We eventually had to abandon the software and revert to a paper based method.
Surveys went from lasting more than one hour to taking under 30 minutes. Workers were more confident and had more time to interact with the respondents. Respondents had more of an opportunity to ask questions and consider the meaning of what they were being asked. They offered far more information than we expected and felt that they were participating in the survey as a partner and not just as a passive victim.
One of our enumerators noted that people react differently to a surveyor collecting data on the tablets than with paper. She described collecting data with technology as being “self absorbed” and alienating to the respondents. Collecting data on paper, however, was seen as a plus. “They can see me writing down what they say and feel like their words are important.”
I’m thinking that the nature of the responses themselves might be different as well. Particularly with complex questions of health and disease, surveyors often have to explain the question and give the respondent a chance to ask for further clarification. Technology appears to inhibit this process, perhaps compromising the chance for a truly reasoned response.
While I am absolutely not opposed to the use of technology in surveys, I think that the survey strategy has to be properly thought through and the challenges considered. At the same time, however, data collection is a team effort and requires a proper rapport between community members and surveyors who often know each other.
Is technology restricting our ability to gather good data? Could the use of technology even shape the nature of the responses, pushing them toward what we want to believe rather than what actually exists?
It’s a reasonable question to which no one really has an answer. I work in a field site located on Lake Victoria, the office of which is based out of the International Centre for Insect Physiology and Ecology (ICIPE) station on Mbita Point.
We do malaria field surveys and have a large health and demographic surveillance system that has monitored births, deaths, migration and health events of nearly 50,000 people over the past six years.
The goals of the project are to monitor changes in demographics, outbreaks and changes in the dynamics of the transmission of infectious diseases and gauge the effectiveness of interventions.
While I view those as scientifically important, I don’t think that people on the ground experience any immediate benefit from scientific research activities. In fact, I’m pretty sure that, unless they’re getting a free bednet, it’s mostly an annoyance. Of course, we appreciate their cooperation and they are free to tell us to bugger off at anytime.
We are seeing rapid declines in malaria incidence, infant mortality and fertility in the communities we study. This is, of course, cause for celebration. Fewer kids are dying, and people are having fewer of them.
In fact, the shift in the age distribution was so dramatic from 2011 to 2012, that we thought it an aberration of the data: the mean age of 12,000 people rose nearly two years from the beginning of 2011 to the latter part of 2012. Old people died off, and fewer babies were there to replace them, resulting in an upward shift in the age distribution. Cause for celebration in an area where women normally have anywhere from 5 to 10 children, who often end up malnourished, poorly housed and uneducated.
But we have to ask ourselves, how much of this is representative of trends in communities similar to the ones we study and how much is directly influenced by the presence of the research station itself?
A recent article in Malaria Journal documents the positive impacts that a research facility had on the local community:
To make the community a real partner in the centre’s activities, a tacit agreement was made that priority would be given to local people, in a competitive manner, for all non-professional jobs (construction workers, drivers, cleaners, field workers, data clerks, and others). Of the 254 people employed at the CRUN, about one-third come from Nanoro. This has strengthened the sense of ownership of the centre’s activities by the community. Through the modest creation of new jobs, CRUN makes a substantial contribution to reducing poverty in the community. In addition, staff members residing in Nanoro contribute to the micro-economy there.
Another crucial benefit for Nanoro and CRUN stemming from their productive engagement was electrification for the area. This was made possible by the mayor of Nanoro leading the negotiations for extending the national electrical grid to the CRUN, and with it, to the village of Nanoro. Electrification spurred a lot of economic activity and social amenities that enhance the wellbeing of the community, such as: (1) improved water supply through the use of electricity instead of a generator; (2) the ability to use electrical devices, such as fans during the hot season (when temperatures can reach 45-47°C), lighting so students can study at night, the use of refrigeration to safely store food and the extension of business hours past sunset.
Health care services have been improved through CRUN’s new microbiology laboratory. Before this laboratory was established, local patients had to travel about 100 km to the capital city, Ouagadougou, for the service.
This agrees with my experience on Lake Victoria. The presence of the research facility (built originally in the 1960s) and the subsequent scale-up of research activities has been transformative for the area. As more and more people have moved to the area, a bridge to Rusinga Island has been built, two new ferry routes have been installed, the existing ferries have been upgraded, power has been extended to the area and finally, after years of waiting, a paved road has been built from Kisumu to Mbita Point.
…which brings me back to my initial question. It is clear that building research facilities can be a major spur for economic development and activity in a previously desolate and marginalized area. In the case of Mbita Point, it is possible that these gains can be sustained even following an eventual cessation of research activities and strangled funding. In this sense, field research projects are doing the world at least some good.
However, the gains these communities are experiencing really have little to do with the research projects themselves and more to do with the influx of employment and infrastructure that comes with research stations and research projects. This is non-controversial, and I’m sure the locals appreciate it.
But the quality and goals of research need to be assessed. Are the results we are seeing truly representative of communities which may be similar to the Mbita Point of the past? Are we unnecessarily influencing the outcomes of the research and then perhaps inappropriately generalizing them to contexts which bear little resemblance to our target communities? From a scientific perspective, this is troubling.
Of greater concern, however: are we claiming that gains against malaria are being made when, in fact, morbidity and mortality in communities we haven’t looked at are increasing? This could result in a dangerous shift away from scaled-up ITN distributions or even a total reduction in international funding. If this happens, kids will die.
I know very little about lab sciences. A few months ago, Obokata Haruko, graduate of Waseda University and researcher at the Riken Center for Developmental Biology in Japan, discovered something “too good to be true.” She found a way of creating pluripotent cells, that is, stem cells which can become anything, without the awful side effect of inducing cancer in a vertebrate host. Though my knowledge of such matters is sadly lacking, from what I understand, viruses are usually used to induce the cells to convert, which can make them unstable and likely to turn cancerous. Obokata discovered that the cells could be manipulated merely by stressing them with pressure.
The results followed the peer review process, were rejected once, and, after revising and resubmitting, were eventually published in Nature. Soon thereafter, the results were challenged and it was discovered that the images accompanying the paper did not represent the content of the paper and had likely been lifted from her doctoral dissertation. Obokata was disgraced.
Yesterday, an article appeared which claimed that even more improprieties were found in Obokata’s doctoral thesis and that she has formally requested that her dissertation be withdrawn. She supposedly lifted portions of her introduction from the NIH website without attribution and had doctored images. Even some of the chapter bibliographies were suspicious:
Each chapter in the dissertation has a separate bibliography. For chapter 3, there is a bibliography of 38 references even though there are no footnotes in that chapter. The bibliography contains the authors of the material referred to, the title, the journal and the pages on which the original article appeared.
However, the bibliography in question is almost exactly the same as the first 38 items in a bibliography containing 53 reference materials that was published in 2010 in a medical journal by researchers working at a Taiwanese hospital.
It is entirely possible that Obokata is a shoddy researcher. Actually, it’s quite likely, given the mountain of evidence against her. What isn’t clear is how her mentors allowed her train wreck of a career to happen. Research doesn’t occur in a box. I’d certainly be entirely happy if no one ever looked at my dissertation again (outside of the published papers from it). But one would assume that the most egregious infractions would be caught by her committee members (and her co-authors) before the work went into print.
It’s worth noting that Obokata, like a lot of academics, is quite odd. She had her lab repainted pink and yellow, and would don a Japanese smock more characteristic of kindergarten teachers than a traditional lab coat. Though I encourage such behavior, I’m not sure her eccentric style is doing her career any favors at this point.
This incident brings more than a few conflicting ideas to mind.
First, Japan is a terrible, awful place to be a woman in a professional position. In fact, Japan is pretty much just a terrible place to be a woman at all. In terms of women’s empowerment, wages, education and political representation, Japan ranks 101st out of 135 countries, below less developed countries like Kenya, El Salvador, Bangladesh and Indonesia, and among the worst in all of Asia.
Once, when I was visiting Japan, a tenured faculty member who happened to be female was tasked with serving us men tea. I was enraged.
Though Obokata is likely less than professional, professionals are made, not born. I can imagine that, given her reproductive capabilities, her mentors refused to take her seriously, and slacked on their most important job, which was to create and nurture a responsible and talented scientist.
Of course, Obokata, though likely a victim of shoddy mentoring, has to shoulder some of the blame. Shoddy mentors create shoddy students, but shoddy students still have to take responsibility for their own actions. But I can’t help thinking it’s interesting that a young female scientist is taking the heat for what should be a collective fuck up.
Obokata is currently being eviscerated in the press. After making numerous appearances on television as an eccentric though brilliant scientist, her downfall has brought out the worst. Many are alleging that Obokata slept with her mentors to attain her position (despite being trained at Harvard), following a narrative that women can’t attain privileged positions without having sex with someone. The vile depths of the interweb are even speculating that Obokata will start making porn, a common standby career for fallen actresses and swimsuit idols, again following a narrative that women are never degraded enough.
Does Obokata deserve the brutal punishment she’s receiving? While scientists need to be held accountable for their work, given the amount of shoddy research out there, I would say that Obokata is probably being treated rather unfairly. It would seem, though, that the extreme nature of her punishment is due to her gender, her age and the fact that she was unlucky enough to appear on Japanese television.
I was just reading a transcript of Benjamin Bratton’s takedown of TED, the immensely popular series of talks on science and innovation. Perhaps the word “talk” is a bit too specific. TED is more of a “format” for presenting ideas.
To be clear, I think that having smart people who do very smart things explain what they’re doing in a way that everyone can understand is a good thing. But TED goes way beyond that.
Let me tell you a story. I was at a presentation that a friend, an Astrophysicist, gave to a potential donor. I thought the presentation was lucid and compelling (and I’m a Professor of Visual Arts here at UC San Diego so at the end of the day, I know really nothing about Astrophysics). After the talk the sponsor said to him, “you know what, I’m gonna pass because I just don’t feel inspired… you should be more like Malcolm Gladwell.”
At this point I kind of lost it. Can you imagine?
Think about it: an actual scientist who produces actual knowledge should be more like a journalist who recycles fake insights! This is beyond popularization. This is taking something with value and substance and coring it out so that it can be swallowed without chewing. This is not the solution to our most frightening problems — rather this is one of our most frightening problems.
I couldn’t agree more. As scientists, we are required to be able to explain our research to the outside world. Aside from the important matter of justifying our existence and use of public funds, some of us would hope that our work improves the world. However, the process of explaining shouldn’t involve unnecessarily dumbing down or overstating the potential impact of our work.
TED demands that every presentation be centered around some success. We have to end the talk on some positive note, proudly declaring that our work went the way we wanted it to and had a profound impact on the world. We are there to create, innovate and inspire.
The trouble is that science is often hardly creative, sometimes not innovative and often wholly uninspiring. Mind you, I don’t consider these to be negatives.
Much of science involves the testing of previously held results, views and conclusions. We aren’t seeking to create something new, but rather to evaluate the validity of what has been created before or commonly assumed. We are pursuing knowledge with the hope of refining how the world sees itself, using methods to create hypotheses, gather evidence and rigorously test our assumptions.
The outcome, of course, is that the road of science is paved with failure. We embark on our adventures with money in hand, a plan, the proper tools and the best intentions, but, in most cases, we find out that the money didn’t go as far as we would have liked, the plan was ill-conceived given the realities on the ground, the tools were insufficient and our intentions may have been misplaced. At least, that’s my experience of science.
Again, I don’t see this as a negative. In order to improve our ability to understand the world and potentially ameliorate its problems, we are required to fail. A child can’t learn to walk without falling down. I can’t learn how to not offend people in Japanese without offending people more than a few times. I can’t learn how not to bake a cake without creating an inedible mess.
TED talks overlook this process of failure, focusing exclusively on the positives and the successes and, more troubling, the inspirational nature of the work. But then, this is a problem that’s not unique to TED talks. I find that TED talks are really just symptomatic of a broader trend which discourages negative results to the point where scientists troll the data hoping to find at least something that can be labelled “successful.”
Most journals won’t publish papers with negative results and most people don’t want to read them. To me, though, there is as much to learn from a paper which found that the previously held view was correct than one which refutes it. There is as much to know from a project which failed miserably as one which was “successful.” At least in my discipline, where field work under pressing circumstances is the norm, it would be nice to hear where people went miserably wrong. We could waste a lot less time, money and experience a little less frustration.
This success driven culture isn’t, of course, limited to science. It permeates our culture, particularly our children. This young generation (and their parents) appears wholly frightened of failure, potentially to the point of paralysis. If we aren’t careful, we might turn into the stagnant Japan of the 00’s.
TED talks probably have to go. While they worked well in the Gates era where small technological fixes in isolated boxes were thought to solve mankind’s most pressing problems, we need to move on to a format which effectively looks to the process of exploration. We need to know and accept that we will fail and those potential sources of failure need to inform our current strategies.
We need to integrate people of many disciplines for mutual benefit. For example, as a quantitative scientist, I learn a lot from people in the humanities, who often hold viewpoints and perspectives completely different from my own but no less important.
In short, we need more discussion and less posturing. Failure is good because we learn from it. Let’s not let the scientific forum, as Dr. Bratton noted, become like cheap, inspirational, yet myopic and wholly useless megachurches.