Archive | Statistics

Kenya 2017 Election Violence: Some Data Analysis

I’m getting used to the new version of ArcGIS (which is a vast improvement!) and gave it a test run on some data from the ACLED (Armed Conflict Location & Event Data Project) database, specifically on this year’s round of violence surrounding the Kenyan election. ACLED keeps real-time data on violence and conflict around the globe; the latest 2017 entry is from Nov 24, just five days ago.

[Map: election-related violence across Kenya, 2017]

The first election occurred on August 8th, 2017. The opposition contested the results, claiming problems in vote tallying by the IEBC, and the Supreme Court nullified the election. A new election was called, to be conducted within 60 days of the nullification. Raila Odinga, the opposition leader, claimed that the new election would again be unfair, dropped out of the race and called for a national boycott. The election went ahead as planned on October 26, 2017, and Uhuru Kenyatta was declared the winner.

[Map: election-related violence in Nairobi]

There was violence at every stage of the process, both by rioters in support of the opposition and by the police and military, who were known to fire live rounds into crowds of demonstrators. Opposition supporters were known to set fire to Kikuyu businesses. Local Kikuyu gangs were reported to be going house to house, rooting out people from western tribal groups and beating them in the street. Tribal groups in rural areas were reported to be fighting amongst one another. The police response has been heavy-handed and disproportionate, leading to a national crisis.

Though not nearly as severe as the post-election violence of 2007–08, the violence has not yet abated.

The database logged 420 events, including riots, protests and violence against civilians by the state, police and local tribal militias. It records 306 fatalities, but this number should be approached with some caution; there were likely more. The database is compiled from newspaper reports, which undercount fatalities and don’t cover all events.
[Time series: violent events and fatalities over 2017]

I made two maps (above), one for Nairobi and the other for all of Kenya. They include all non-Al-Shabaab events (Al-Shabaab is a Somali Islamist group the Kenya Defence Forces have been fighting for several years). I also included a time series of both events and fatalities.
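Building a time series like this mostly amounts to binning event rows by month. A minimal sketch of that aggregation — the dates, event types and fatality counts below are entirely hypothetical, not actual ACLED rows:

```python
from collections import Counter
from datetime import date

# Hypothetical ACLED-style rows: (event_date, event_type, fatalities).
events = [
    (date(2017, 8, 9), "Riots", 2),
    (date(2017, 8, 12), "Violence against civilians", 5),
    (date(2017, 10, 26), "Riots", 0),
    (date(2017, 10, 27), "Riots", 3),
    (date(2017, 11, 24), "Protests", 0),
]

def monthly_series(rows):
    """Aggregate event rows into per-month event counts and fatality totals."""
    counts, deaths = Counter(), Counter()
    for event_date, _event_type, fatalities in rows:
        key = (event_date.year, event_date.month)
        counts[key] += 1
        deaths[key] += fatalities
    return counts, deaths

counts, deaths = monthly_series(events)
```

The two counters then feed directly into a plot of events and fatalities per month.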

Some excerpts from the notes:

“Police raided houses of civilians in Kisumu, beating civilians and injuring dozens. Live bullets were used on some civilians, including a 14 year old boy. Of the 29 people injured, 26 had suffered gun shots.”

“One man was found dead in a sugar cane plantation one day after ethnic tensions between the Luo and Kalenjin communities got into an ethnic clash. The body had been hacked with a panga.”

“Rioters started throwing stones at the police in the morning, protesting against the elections to be held the next day. The police responded with teargas and water canons. The rioters were mostly from the Luo ethnic group and they took the opportunity to loot several stores, attack residents and to burn a store owned by an ethnic Kikuyu. One woman was raped.” *This was in Kawangware, not far from my apartment. I was eating at a local bbq place when this happened. 

“Police forces attacked supporters of the opposition that went to the Lucky Summer neighbourhood to check on a ritual of beheading of a sheep that was taking place (suspectedly by the Mungiki sect). The police shot at the civilians. The police confirmed that it shot a man and that the group performing the ritual had sought protection.”

“As a revenge to the previous event, the Kikuyu joined forces and attacked the Luo. The ethnic tensions and violence led to one severely injured person. Residents claims three were killed and dozens, including three school children, were injured.”


Does sampling design impact socio-economic classification?

Doing research in developing countries is not easy. However, with a bit of care and planning, one can do quality work that expands what we know about public health in poor countries and provides quality data where data is sadly scarce.

The root of a survey, however, is sampling. A good sample does its best to represent a population of interest and can at least qualify all of the ways in which it falls short. A bad sample either 1) does not represent the population (bias) and offers no way to account for it, or 2) represents no identifiable population at all.

Without being a hater, my least favorite study design is the “school based survey.” Researchers like this design for a number of reasons.

First, it is logistically simple to conduct. If one is interested in kids, it helps to have a large number of them in one place. Visiting households individually is time consuming, expensive and one only has a small window of opportunity to catch kids at home since they are probably at school!

Second, since the time required to conduct a school based survey is short, researchers aren’t required to make extensive time commitments in developing countries. They can simply helicopter in for a couple of days and run away to the safety of wherever. Also, there is no need to manage large teams of survey workers over the long term. Data can be collected within a few days under the supervision of foreign researchers.

Third, school based surveys don’t require teams to lug around large diagnostic or sampling supplies (e.g. coolers for serum samples).

However, from a sampling perspective, assuming that one wishes to say something about the greater community, the “school based survey” is a TERRIBLE design.

The biases should be obvious. Schools tend to concentrate students who are similar to one another: students of similar socio-economic background, ethnicity or religion. Given the fee-based structure of most schools in most African countries, sampling from schools will necessarily exclude the absolute poorest of the poor. Moreover, unless one goes out of the way to select more privileged private schools, one will also exclude the wealthy, an important control if one wants to draw conclusions about socio-economic status and health.

Further, school based surveys are terrible for studies of health since the sickest kids won’t attend school. School based surveys are biased in favor of healthy children.

So, after this long intro (assuming anyone has read this far) how does this work in practice?

I have a full dataset of socio-economic indicators for approximately 17,000 households in an area of western Kenya. We collect information on basic household assets such as possession of TVs, cars and radios, and the type of house construction (a la DHS). I boiled these down into a single continuous measure, where each household gets a wealth “score” so that we can compare one or more households to others in the community (a la Filmer & Pritchett).
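Filmer & Pritchett derive the score from a principal components analysis of the asset indicators. As a rough stand-in, the sketch below builds a simpler standardized additive index from hypothetical asset data — not the MCA used in the post, just an illustration of how binary assets collapse into one continuous score:

```python
import statistics

# Hypothetical asset indicators per household (1 = owns, 0 = does not).
households = [
    {"tv": 1, "car": 0, "radio": 1, "brick_walls": 1},
    {"tv": 0, "car": 0, "radio": 1, "brick_walls": 0},
    {"tv": 1, "car": 1, "radio": 1, "brick_walls": 1},
    {"tv": 0, "car": 0, "radio": 0, "brick_walls": 0},
]

def wealth_scores(rows):
    """Standardize each asset indicator and sum: asset-rich households score higher."""
    keys = rows[0].keys()
    column_stats = {}
    for k in keys:
        vals = [r[k] for r in rows]
        mean = statistics.mean(vals)
        sd = statistics.pstdev(vals) or 1.0  # guard against zero-variance columns
        column_stats[k] = (mean, sd)
    return [
        sum((r[k] - column_stats[k][0]) / column_stats[k][1] for k in keys)
        for r in rows
    ]

scores = wealth_scores(households)
```

By construction the scores center on zero, so a household’s score is read relative to the community, exactly as in the post.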

[Plot: distributions of SES scores for the school based sample and the full community]

We also have a data set of school based samples from a malaria survey comprising ~800 primary school kids. I compared the SES scores for the school based survey to the entire data set to see whether the distribution of wealth for the school based sample matched the distribution of wealth for the entire community. If they are the same, we have no problem of socio-economic bias.

We can see, however, from the above plot that the distributions differ. The distribution of SES scores for the school based survey is far more bottom heavy than that of the greater community; the school based survey excludes wealthier households. The mean wealth score for the school based survey is well under that of the community as a whole (-.025 vs. -.004, t=-19.32, p<.0001).
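The comparison of means can be sketched with a hand-rolled Welch’s t statistic (the unequal-variance form is the safe default here, since the two groups differ in spread). The SES scores below are invented for illustration and are not the survey’s data:

```python
import statistics

def welch_t(a, b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variances
    return (mean_a - mean_b) / ((var_a / len(a) + var_b / len(b)) ** 0.5)

# Hypothetical SES scores: the school sample skews poorer than the community.
school = [-0.8, -0.5, -0.4, -0.3, -0.1, 0.0]
community = [-0.6, -0.2, 0.0, 0.1, 0.3, 0.5, 0.8, 1.0]

t = welch_t(school, community)  # negative: school mean sits below community mean
```

With the real sample sizes (~800 vs. ~17,000) even a small difference in means produces a large |t|, which is why the post’s t of -19.32 is so extreme.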

Just from this, we can see that the school based survey is likely NOT representative of the community and that the school based sample is far more homogeneous than the community from which the kids are drawn.

Researchers find working with a continuous measure of SES unwieldy and difficult to present. To solve this problem, they will often place households into socio-economic “classes” by dividing the data set into quantiles. These represent households ranging from “ultra poor” to “wealthy.” A problem with samples is that these classifications may not be the same across samples, and only some of them will accurately reflect the true population level classification.
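The classification problem can be illustrated with toy numbers: classify a bottom-heavy sub-sample once with its own quintile cut points and once with the full sample’s, then count how many households change class. Everything below is hypothetical — uniform scores chosen so the arithmetic is transparent:

```python
def quintile_cuts(scores):
    """20th/40th/60th/80th-percentile cut points computed from the scores."""
    ranked = sorted(scores)
    n = len(ranked)
    return [ranked[int(n * q / 5)] for q in range(1, 5)]

def classify(x, cuts):
    """Quintile label 0 (poorest) .. 4 (wealthiest) given cut points."""
    for i, c in enumerate(cuts):
        if x < c:
            return i
    return 4

full = [float(i) for i in range(100)]  # full-survey scores
sub = [float(i) for i in range(40)]    # a bottom-heavy sub-sample

cuts_full = quintile_cuts(full)  # cut points from the whole community
cuts_sub = quintile_cuts(sub)    # cut points from the sub-sample alone

# Households whose class changes when the sub-sample's own cut points
# replace the population cut points:
moved = sum(classify(x, cuts_sub) != classify(x, cuts_full) for x in sub)
```

Here 32 of 40 households change class, because the sub-sample’s cut points are squeezed into the bottom of the population range — the same mechanism behind the misclassification table below.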

In this case, when looking at a table of how these classes correspond to one another, we find the following:

[Table: misclassification of households in the school based sample]

Assuming that these SES “classes” are at all meaningful (another discussion), we can see that for all but the wealthiest class, more than 80% of households have been misclassified! Further, due to the sensitivity of the method (multiple correspondence analysis) used to create the composite, 17 households classified as “ultra poor” in the full survey have suddenly become “wealthy.”

Now, whether these misclassifications impact the results of the study remains to be seen. It may be that they do not. It also may be the case that investigators may not be interested in drawing conclusions about the community and may only want to say something about children who attend particular types of schools (though this distinction is often vague in practice). Regardless, sampling matters. A properly designed survey can improve data quality vastly.

The mismeasurement of humans: classification as “othering”

I was part of a short, but interesting discussion last night regarding this very good article on the political implications of data analysis. The argument made (assuming I understood it correctly) was simply that statistical measures are inherently ideological since they impose a particular view of the world from one social group (us, the elite) on another (the non-elite). She takes this further, stating that though the voice of the elite can be heard through anecdotes (and opinionated blog posts), the experience of the non-elite relies on statistics and numbers. Statistics, then, is the language of power.

The conversation went further to discuss the implications of statistical methods themselves, particularly the measures of central tendency: the mean, median and mode. With perfectly symmetrical data, these measures are all the same, but, of course, no set of data is perfectly symmetrical, so each will produce different results. Though any responsible statistician would state assumptions, limitations and appropriateness, in politics these statements are overlooked and the method chosen is often the one which best supports one’s political position, inviting trouble.

Moreover, the measure of central tendency itself is inherently flawed since it concentrates on the center and silences the extremes, supporting the status quo, or so it was argued. The choice of measure, I would argue, depends on the goals of the particular study. For example, a study which sought to determine whether average graduation rates are lower for blacks than for whites would necessarily use a measure of central tendency, while a study on which students in a particular school are the least likely to graduate might look at outliers and extremes.
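How far the three measures can diverge on skewed data is easy to demonstrate. The toy income figures below are invented; one large value is enough to pull the mean far from the median:

```python
import statistics

# A small right-skewed sample (e.g. incomes): one large value pulls
# the mean well above the median, while the mode sits at the bottom.
incomes = [100, 100, 120, 150, 200, 5000]

mean = statistics.mean(incomes)      # dominated by the outlier
median = statistics.median(incomes)  # resistant to the outlier
mode = statistics.mode(incomes)      # the most common (lowest) value
```

A report quoting the mean, the median or the mode of this sample would describe three very different “typical” incomes, which is exactly the political opening discussed above.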

Either way, I agreed with the writer that, no matter what, we are influenced by our ideology. However, there is a difference between performing a study which seeks to maintain impartiality for the greater good and one which seeks to deceive in order to merely win a political battle, particularly among those who benefit from marginalizing, for example, the poor and disenfranchised.

However, I found this passage quite interesting and it can be applied to a post on this blog regarding what we do and don’t know about the poor:

Perhaps statistics should be considered a technology of mistrust—statistics are used when personal experience is in doubt because the analyst has no intimate knowledge of it. Statistics are consistently used as a technology of the educated elite to discuss the lower classes and subaltern populations, those individuals that are considered unknowable and untrustworthy of delivering their own accounts of their daily life. A demand for statistical proof is blatant distrust of someone’s lived experience. The very demand for statistical proof is otherizing because it defines the subject as an outsider, not worthy of the benefit of the doubt.

Part of my academic work focuses on the refinement of measurements of poverty. I am keenly aware of the “othering” of this process and how these measurements use a language of the educated elite (me) to speak for the daily experiences of people not like me.

This “othering” is not limited to statistics at all. Even merely referring to “the poor” is a condescending labeling of a group of people who are mostly powerless to speak for themselves within global power structures. Moreover, “the poor” ignores the diverse and varied experiences of most of humanity.

When I first entered the School of Public Health at UM, I was extremely uncomfortable with the language used in studies of ethnicity and public health in the United States. Studies would simply throw people into simplistic categories of black, white, Hispanic, Asian and “other” (whatever that is), ignoring the great diversity of people within, for example, urban slums. The method of categorization seemed a horrible anachronism and brought back awful memories of Mississippi. Simply putting people into neat categories risked perpetuating an already divisive view of the world.

However, the more I thought about it, the method is justified since we are looking at the effects of a racist view of the world on the very people who are the most burdened by it. Certainly, there are better ways of viewing the world, but when criticizing social power structures, it can be advantageous to speak its language. I still don’t like it, but I’m at least more understanding of it.

It’s a fine line to walk. On the one hand, as advocates for “the poor,” we have to work within the very structures which oppress, exploit and ignore them. To succeed, however uncomfortable it may be, we may be required to adopt the language of those structures. On the other, we must remain aware of the potentially dire implications of the ways in which we describe those we advocate for, and of how those descriptions can be misused.
