In my seminal paper, “Distance to health services influences insecticide-treated net possession and use among six to 59 month-old children in Malawi,” I indicated that Euclidean (straight-line) measures of distance were just as good as more complicated, network-based measures.
I didn’t include the graph showing how correlated the two were, but I wish I had, and I can’t find it here on my computer.
Every time I’ve presented research on the association between distance to various services and health outcomes, someone inevitably asks why I didn’t use a more complex measure of actual travel paths. The idea is that no one walks anywhere in a straight line, but rather follows a road network, or even uses a number of transportation options that might be lost in a simple measure.
I always respond that a straight line distance is as good as any other when investigating relationships on a coarse scale. Inevitably, audiences are never convinced.
A new paper came out today, “Methods to measure potential spatial access to delivery care in low- and middle-income countries: a case study in rural Ghana,” which compared the Euclidean measure with a number of more complex measurements.
The conclusion confirmed what I already knew: that the Euclidean measure is just as good in most cases, and that the pain and cost of producing sexy and complicated ways of calculating distance just isn’t worth it.
It’s a pretty decent paper, but I wish they had put some graphs in to illustrate their points. It would be good to see exactly where the measures disagree.
Access to skilled attendance at childbirth is crucial to reduce maternal and newborn mortality. Several different measures of geographic access are used concurrently in public health research, with the assumption that sophisticated methods are generally better. Most of the evidence for this assumption comes from methodological comparisons in high-income countries. We compare different measures of travel impedance in a case study in Ghana’s Brong Ahafo region to determine if straight-line distance can be an adequate proxy for access to delivery care in certain low- and middle-income country (LMIC) settings.
We created a geospatial database, mapping population location in both compounds and village centroids, service locations for all health facilities offering delivery care, land-cover and a detailed road network. Six different measures were used to calculate travel impedance to health facilities (straight-line distance, network distance, network travel time and raster travel time, the latter two both mechanized and non-mechanized). The measures were compared using Spearman rank correlation coefficients, absolute differences, and the percentage of the same facilities identified as closest. We used logistic regression with robust standard errors to model the association of the different measures with health facility use for delivery in 9,306 births.
Non-mechanized measures were highly correlated with each other, and identified the same facilities as closest for approximately 80% of villages. Measures calculated from compounds identified the same closest facility as measures from village centroids for over 85% of births. For 90% of births, the aggregation error from using village centroids instead of compound locations was less than 35 minutes and less than 1.12 km. All non-mechanized measures showed an inverse association with facility use of similar magnitude, an approximately 67% reduction in odds of facility delivery per standard deviation increase in each measure (OR = 0.33).
Different data models and population locations produced comparable results in our case study, thus demonstrating that straight-line distance can be reasonably used as a proxy for potential spatial access in certain LMIC settings. The cost of obtaining individually geocoded population location and sophisticated measures of travel impedance should be weighed against the gain in accuracy.
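The core of the comparison is easy to sketch. Here’s a toy version in Python: made-up village coordinates, one facility, and a stand-in “network” distance built by inflating the straight line with a random detour factor. The point is that the two measures rank locations almost identically:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical village and facility coordinates (km). The detour factor
# mimics a road network that is never shorter than the straight line.
rng = np.random.default_rng(42)
villages = rng.uniform(0, 50, size=(100, 2))
facility = np.array([25.0, 25.0])

euclidean = np.linalg.norm(villages - facility, axis=1)
# Crude stand-in for network distance: the straight line inflated by a
# locally varying detour factor between 1.2 and 1.5.
detour = rng.uniform(1.2, 1.5, size=100)
network = euclidean * detour

rho, p = spearmanr(euclidean, network)
print(f"Spearman rho = {rho:.3f}")
```

On toy data like this the Spearman correlation comes out very high, which is the same broad pattern the paper reports for its non-mechanized measures.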
The UN keeps data on migration patterns around the world, tracking origin and destination countries and the number of migrants (Trends in International Migrant Stock: Migrants by Destination and Origin). I took some time out and created this network visualization of origin and destination countries from 2010. Other years were available, but this is all I had time for.
The size of each node represents the number of countries from which migrants arrive. By far, the most connected country is the United States, accepting more people from more countries than any other place on the planet. Most areas of the network represent geographic regions. Note that Africa is clustered at the top, and Pacific island countries are clustered at the bottom.
An interesting result is that countries tend to send migrants to other countries which are only slightly better off than they are. For example, Malawi sends most of its migrants to Zambia and Mozambique, and Zambians go to South Africa, whereas those destination countries do not reciprocate by sending migrants to countries poorer than themselves. Wealthy countries tend to be more cosmopolitan in their acceptance of migrants.
Click on the picture to explore a larger version of the graphic.
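For anyone wanting to reproduce the node sizing: build a directed graph from the origin-destination pairs and size each node by the number of distinct senders, i.e., its in-degree. A minimal sketch in Python with a made-up handful of flows:

```python
from collections import Counter

# A toy slice of the UN origin-destination table; the real 2010 matrix
# has thousands of (origin, destination) pairs.
flows = [
    ("Malawi", "Zambia"), ("Malawi", "Mozambique"),
    ("Zambia", "South Africa"), ("Mexico", "United States"),
    ("India", "United States"), ("Philippines", "United States"),
]

# Node size in the graphic ~ number of sending countries per
# destination, i.e., the in-degree of each node in the directed graph.
in_degree = Counter(dest for _, dest in flows)
biggest = max(in_degree, key=in_degree.get)
print(biggest, in_degree[biggest])
```

With the full UN table, the United States tops this ranking, as in the graphic.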
Policy makers in the US and Europe seized on Reinhart and Rogoff’s paper (RR hereafter) as proof that cutting stimulus and social programs was a good idea, and proceeded to do so with abandon. Of course, right wingers wanted to cut money to social programs anyway, and would have done so regardless, but the paper was held out as scientific proof that it was a solid plan of action.
I won’t comment on how strange it was that Republicans were interested in science at all, given recent efforts to politicize the NSF and micromanage the grant decision process.
The trouble was that the results presented in RR were shown to be based on the selective use of data. Thomas Herndon, a 28-year-old graduate student, obtained the dataset from RR themselves and couldn’t reproduce the results.
In fact, he found that the only way to accurately reproduce the results in RR’s paper that showed that high debt restrained economic growth was to exclude important cases. When including the missing data, high debt was associated with consistently positive growth, though modestly slowed.
Originally, I took the view that this was a case of sloppy science. RR had a dataset, got some results which fit the narrative they were pushing and didn’t pursue the matter any further. Reading Herndon’s paper, however, I changed my mind.

Herndon took the data and did what any analyst would do when starting exploratory analysis: he plotted it (see figure on the right). Debt-to-GDP ratios and growth are both continuous measures. We can do a simple scatterplot and see if there’s any evidence that would suggest that the two things are related.
To me, this is a pretty fuzzy result. Though the loess curve (an interpolation method to illustrate trend) suggests that there is *some* decline in growth overall, I’d still ding any intro stats student for trying to suggest that there’s any relationship at all. There is no way that RR, both trained PhDs and likely having the help of a paid research assistant, didn’t produce such a plot.
Noting that the loess curve drops past approximately 120%, I calculated the median growth for each country represented. Only 7 countries have had debt-to-GDP ratios greater than 120% in the past 60+ years: Australia, Belgium, Canada, Japan, New Zealand, the UK and the United States. Out of these, only two had (median) negative growth: Belgium (-.69%, effectively zero) and the United States (-10.94%), which has only had a debt-to-GDP ratio greater than 120% once. All other countries had positive growth under high debt, even beleaguered Japan. New Zealand can even claim a strong 9.8% growth under high debt. The US, then, is a major outlier, possibly bringing the entire curve down.
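That median-by-country step is trivial to reproduce. A sketch in Python; the New Zealand, Belgium and US rows echo the medians quoted above, while Japan’s two observations are invented placeholders (the real exercise uses the dataset RR released to Herndon):

```python
import pandas as pd

# Hypothetical country-year observations where debt/GDP exceeded 120%.
high_debt = pd.DataFrame({
    "country": ["Japan", "Japan", "Belgium", "United States", "New Zealand"],
    "growth":  [1.1, 0.8, -0.69, -10.94, 9.8],
})

# Median growth per country under high debt.
median_growth = high_debt.groupby("country")["growth"].median()
print(median_growth.sort_values())
```

The US, with its single high-debt observation, drags down any curve fit across the pooled points.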
As this doesn’t fit their story, RR’s solution was to categorize debt to GDP ratios into five rough classifications, and calculate the mean growth within each group. This is a common trick to extract results from bad data. It’s highly tempting for researchers (and epidemiologists do it far too often), but a bad idea to present it without all the caveats and warnings that should go with it.
I’m not surprised that ideologues such as RR would be so keen to produce the result they did. After all, they published the popular economics work “This Time Is Different: Eight Centuries of Financial Folly” where they try to suggest that budget policy of the US in 2013 should somehow be informed by the economy of 14th century Spain.
I am, however, surprised that reviewers let this pass. If I had been a reviewer, I would have:
1) pointed out the problems of categorization, where data doesn’t require it
2) noted that categorizing the data (or even plotting it) tears out temporal correlation. For example, one data point from 2008 (stimulus) may be put in the high debt category, but another from 2007 (crash) in the low debt category. While budgets of one year may have little to do with the budget of another, the economy of one year is likely related to the economy of the previous year.
3) questioned the causal mechanisms behind debt and growth. This is obviously a deep question for economists (and not epidemiologists), but of particular import. When does the economy start to react to debt? I’m pretty sure that there is a lag effect as spending bills tend to space disbursements over the course of the fiscal year.
The RR debacle should be a lesson, not only to economists, but to all scientists. While we may always be under pressure to produce results and hope that those results fit and support whatever position we take, shoddy methods don’t get us off the hook. In RR’s case, I would call this fabrication. A good many studies are merely guilty of wishful thinking, but the chance always exists that someone will come out of the woodwork and expose our flaws. After all, that’s what science is all about.
A couple of weeks ago, I attended a lecture on network analysis where the investigators analyzed popular political books on Amazon.com.
Amazon lists not only information on the book but also the titles, in order of purchasing frequency, of other books that customers may have purchased. The researchers here were able to identify left leaning and right leaning books by examining the purchasing habits of Amazon customers.
Decibel “is America’s only monthly extreme music magazine” and has been in publication since 2004. Every year, they publish the titles of the 40 best metal records of the year, according to their review staff.
Here is 2012’s list:
40 Gojira – L’Enfant Sauvage
39 Meshuggah – Koloss
38 Agalloch – Faustian Echoes EP
37 The Shrine – Primitive Blast
36 Incantation – Vanquish In Vengeance
35 Samothrace – Reverence To Stone
34 Devin Townsend Project – Epicloud
33 Panopticon – Kentucky
32 Saint Vitus – LILLIE: F-65
31 Mutilation Rites – Empyrean
30 Author & Punisher – Ursus Americanus
29 A Life Once Lost – Ecstatic Trance
28 Asphyx – Deathhammer
27 Farsot – Insects
26 Gaza – No Absolute For Human Suffering
25 Inverloch – Dark/Subside
24 Swans – The Seer
23 Horrendous – The Chills
22 Killing Joke – MMXII
21 Early Graves – Red Horse
20 Liberteer – Better To Die On Your Feet Than Live On Your Knees
19 High On Fire – De Vermis Mysteriis
18 Napalm Death – Utilitarian
17 Torche – Harmonicraft
16 Grave – Endless Procession Of Souls
15 Satan’s Wrath – Galloping Blasphemy
14 Testament – Dark Roots Of Earth
13 Cattle Decapitation – Monolith Of Inhumanity
12 Blut Aus Nord – 777: Cosmosophy
11 Municipal Waste – The Fatal Feast
10 Pig Destroyer – Book Burner
09 Paradise Lost – Tragic Idol
08 Royal Thunder – CVI
07 Enslaved – Riitiir
06 Neurosis – Honor Found In Decay
05 Pallbearer – Sorrow and Extinction
04 Witchcraft – Legend
03 Evoken – Atra Mors
02 Baroness – Yellow & Green
01 Converge – All We Love We Leave Behind
I looked up all of these records on Amazon. For each of them, I noted which of the others were in the first 12 titles that were purchased with it, creating a 40 by 40 adjacency matrix where rows (i) and columns (j) represented records. For each entry, a one was noted where customers who purchased the i-th record also purchased the j-th record, and a zero where they did not.
I found that many of the records on the list were purchased with one another. The most common record purchased in combination with another on the list was Neurosis’ “Honor Found in Decay.” Fifteen of the other records on this Top 40 were purchased with “Honor Found in Decay.”
In network terms, the degree of this record would be 15. Pallbearer’s “Sorrow and Extinction” had a degree of 11, and Royal Thunder’s “CVI” and Blut Aus Nord’s “777: Cosmosophy” both had a degree of 9.
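For reference, the bookkeeping behind those degree figures is simple. A sketch in Python with made-up co-purchase pairs (the real matrix was filled in by hand from each record’s “also bought” list):

```python
import numpy as np

n = 40  # the Top 40 records
adj = np.zeros((n, n), dtype=int)

# Hypothetical (i bought with j) pairs; indices are record positions.
pairs = [(5, 4), (5, 7), (5, 11), (4, 5), (7, 5), (11, 12)]
for i, j in pairs:
    adj[i, j] = 1

# Treat the network as undirected: a link exists if either record
# appeared alongside the other.
sym = ((adj + adj.T) > 0).astype(int)
degree = sym.sum(axis=1)  # degree of each record in the network
print(degree[5])
```

In the toy data above, record 5 links to records 4, 7 and 11, so its degree is 3; Neurosis played this hub role in the real matrix with a degree of 15.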
The network of Decibel’s Top 40 looks like this:
You can see that some records get purchased with other records more than others. The size of the dots represents the degree of the record.
Now, I did some cluster analysis on the data, looking for related groups of records within the network. Using R, I produced the following dendrogram:
There are two major clusters, each with its own subcluster (dendrograms are hierarchical). One includes Converge, Neurosis, Pallbearer, Royal Thunder, Evoken and Inverloch, with a subcluster including only the first four. These are all bands that might be expected to be purchased with one another. The other big one includes all the rest. Main clusters are designated by color.
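I built the dendrogram in R, but the same idea sketches easily in Python with scipy: hierarchically cluster the rows of the adjacency matrix and cut the tree. The six-record matrix below is an invented miniature of the real 40-by-40 one:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

names = ["Converge", "Neurosis", "Pallbearer", "Royal Thunder",
         "Gojira", "Meshuggah"]
# Made-up co-purchase adjacency (1 = purchased together), symmetric.
adj = np.array([
    [0, 1, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 1, 0],
])

# Cluster records by the similarity of their co-purchase rows.
Z = linkage(pdist(adj.astype(float)), method="average")
labels = fcluster(Z, 2, criterion="maxclust")  # cut into two clusters
for name, lab in zip(names, labels):
    print(name, lab)
```

Cutting at two clusters separates the densely interlinked quartet from the pair that is only bought with each other, which is the same structure the dendrogram shows.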
I found one containing the three entries for Baroness, Municipal Waste and Napalm Death, very different bands. I’m truly not sure why those three would be in a cluster together (is the cluster based on loneliness in the network?).
Anyway, I’m done, but glad I got any results at all. I’ll let readers (especially metal fans!) interpret the results.
Today I encountered a discussion where the participants emphatically maintained that the current US economic woes are to be blamed in part on increased US defense spending during the Iraq and Afghanistan wars. I countered and claimed that they have no relation at all. Of course, these people hate me now (thinking I was merely being difficult for the sake of being difficult), but that’s OK. I’m used to it.
To test this hypothesis, I took data on US GDP (adjusted to constant 2005 dollars) and combined them with data on US defense spending (adjusted to constant 2010 dollars). The results can be seen to the left. The red line is defense spending. The blue line is GDP.
As I maintained, there is no obvious relationship between defense spending and economic growth. There are a couple of major blips in GDP growth, namely the collapse of tech equities in the early 2000’s and the economic meltdown of 2007/8. There is no event in US GDP corresponding to the drops in defense spending during the Clinton years, nor to the sudden increases following 9/11.
In fact, as defense spending dropped pre-9/11, you can see the US economy was plugging along just fine. As defense spending went up post 9/11, the US economy maintained the same trajectory, minus the economic bumps.
Now, at first glance, this is a little more convincing. But when you take the events into consideration, it is less so. The two major economic events of the 2000’s, namely the equity bust and the financial meltdown, both resulted in sudden jumps in the unemployment rate. 9/11 and the troop surge did not. In fact, as spending was going up, unemployment was going down. If we look back into the nineties, we can notice that even though defense spending was declining, unemployment went up, then down again. In short, given the context, there is no real reason to assume that the two are related.
I am NOT an advocate for war. I am, though, an advocate for evidence-backed claims. There is little evidence to suggest that increased defense expenditures during the Bush years affected our economy.
We can claim, if we like, that federal revenues might have been greater had the wars not happened. These revenues, it is argued, could have been allocated to education or infrastructure improvements, for example. However, it has to be noted that the wars weren’t funded out of federal revenues. They were funded out of low interest bonds. Thus, as those bonds had not been serviced at the time that this data was collected, there is, again, even less reason to assume that the wars negatively impacted the economy.
Now, we can certainly make arguments over how much defense spending is too much and what the potential long term effects of servicing the war debt will be. I argue, though, that our elected representatives are much more interested in financing the military than, say, welfare programs for the needy. It would take a great leap of faith to assume that, if the military were closed tomorrow, monies targeted for defense would automatically be transferred to providing health care to poor people. I also argue that, long term, the expenditures that came out of the financial crisis will be, in comparison, more difficult to service.
The war cost us politically, but was a bargain economically. To me, that’s a much more frightening state of affairs.
I have written two posts attempting to use textual analysis to determine whether Ron Paul did or did not write the inflammatory newsletters that have gotten so much press recently. The first post failed miserably. I used four articles from the “Ron Paul Report” of which authorship was in question. I compared these with more than 30 articles and books known to be written by Paul. The particular methodologies I employed there were able to determine that Paul was likely not the author of two (of four) newsletter articles. The authorship of the other two was left to speculation.
In part 2, I included text from other authors including myself (as a control) and authors known to collaborate with Paul, namely Lew Rockwell (from whose site I was able to obtain many of Paul’s articles), Jack Kerwick and Michael S. Rozeff. I concluded that Paul may or may not have been the author of the articles, but much of the evidence in that analysis pointed to one Lew Rockwell. In the end, though, I personally concluded that the establishment of authorship through quantitative means is a difficult venture.
Recently, a FOX News affiliate “uncovered” the “true” author of the more incendiary portions of the Ron Paul Report. Ben Swann of FOX believes that one James B Powell wrote the newsletters. He concludes this based not on the signed confession of Mr. Powell, but on his own subjective comparison of James Powell’s “How to Survive Urban Violence” with the disputed texts of Ron Paul’s newsletters.
Of course, Ron Paul supporters and the conservative blogosphere have chosen to merely believe Mr. Swann, seemingly without taking the extra effort of either asking Mr. Powell or digging into the text for some more rigorous analysis. Naturally, we are just supposed to believe it, too.
I found the text for Powell’s “How to Survive Urban Violence” along with a single copy of the “Powell Report,” a newsletter that Powell produces to provide investment advice to paying subscribers. Other than those two, I was unable to find any other text by Powell.
I included these two texts in my collections of texts and set about attempting to determine the authorship of the four disputed articles. Again, I will use a principal component analysis (PCA) methodology, though this time I will use the excellent R package BiplotGUI. I will find the first two PC’s of word length, sentence length, and punctuation. I will then graph the first two PC’s against one another and determine if there is evidence for clusters of texts, which should correspond to distinct authors. If we can determine that the four texts are placed in some reasonable vicinity of one (or no) authors, then we might be able to infer who actually wrote (or did not write) these texts.
I extracted the data for word length, sentence length and punctuation using the Signature software package.

Word Length
As we hoped, texts cluster in areas corresponding to different writers. I have noted Paul’s cluster in blue using a 90% alpha bag. Mr. Rockwell’s works cluster (in green) to the left of Paul’s, indicating that word length is distinct between the two. The newsletters appear to lie closer to Mr. Rockwell’s cluster, though there is some crossover between the two. Note that the article on carjacking (the worst of the bunch) seems to cluster with a chapter from “End the Fed” and an article from Rockwell on Bethlehem. I will point out that the particular chapter of “End the Fed” that sits in this cluster is quite distinct in tone from the other chapter. Upon reading them both, I felt that two different people wrote the two chapters.

Sentence Length
The point predictive plot was more interesting than the plot of the first two PC’s. Again, even when looking at sentence length, the article on carjacking clusters with two of Mr. Rockwell’s articles and the odd chapter from “End the Fed,” suggesting that they *might* all come from the same author. Most of Paul’s articles are clustered by themselves, though this should not be surprising, as we already know that they were written by the same person!
Punctuation

This one is perhaps the most compelling of all of the analyses that I have run. The newsletters, Lew Rockwell’s articles and one of the Powell articles cross over one another. Paul’s articles nearly all occupy their own cluster. The only newsletter article that lies anywhere near Paul’s works is the article on re-election. Again, Rockwell’s articles cluster near the chapter from “End the Fed.” Powell’s “Urban Violence” article sits in Paul’s cluster (though near the re-election article), while his other article lies far away.
At this point, I’m willing to accept that Paul probably didn’t write at least three of the four newsletter articles, though I would have preferred to see otherwise. Paul’s works appear to have some commonalities that indicate that if, in fact, he did write these articles, we would expect to see them appear within his cluster. Outside of the fairly standard and non-offensive re-election article, the three do not. Interestingly, the previous analysis pointed to Lew Rockwell as the author of the re-election article.
As for determining authorship, we don’t have enough texts from the other authors to draw any reasonable conclusions as to who was responsible. I say that Lew Rockwell may have written the article on carjacking. Authorship of the articles on AIDS and the coming race war is more difficult to establish. We only have two articles from James Powell. Personally, I do not believe that Mr. Powell wrote any of these articles, though, again, having more texts would greatly help the analysis.
While I may be willing to accept that Paul is being truthful when he says that he did not author the articles, I cannot believe that he didn’t know about them. Paul is still accountable for pandering to racists for profit and political support, though getting politicians to admit to their past indiscretions is as difficult as determining authorship of mystery texts.
I was checking out a couple of chapters from Rick Shenkman’s 2008 book, “Just How Stupid Are We?: Facing the Truth About the American Voter,” a compendium of factoids on American ignorance. It turns out we are dumber than I could have ever imagined.
The Pew Research Center has an online test of news and political knowledge, though. You can test your own news savvy there. I scored 100%. Only 8% of people who take this test score 100%. I feel alone, though I recognize that I border on the obsessive. I’m not surprised that not everybody gets 100%, but I’m pretty shocked that anybody gets ALL of the questions wrong.
Try the test out and see where you stand. It updates every few weeks, I think. I promise it won’t make you feel bad.
Update: Part 3 is here.
Last week, readers (all two of you) may remember that I attempted to explore the question of authorship of Ron Paul’s controversial newsletters. You may recall that I attempted to compare the frequency of word length of a number of Paul’s known writings with four newsletter excerpts of which Paul denies authorship.
The trouble with the approach I took is that the tests are designed to show differences in authorship, but do not address the question of similarity. We may be able to statistically show that two pieces of writing come from different authors through a chi-square test of independence via the appearance of a small p-value. A large p-value, however, does not necessarily show that the same author wrote two pieces of writing, though many take this result to be implicit.
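For concreteness, here’s roughly what that chi-square test of independence looks like in Python. The word-length counts below are invented, but the logic, comparing the length distribution of a disputed text against a known corpus, is the one used in these posts:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of words of length 1-6+ letters in a known-Paul
# corpus and in one disputed newsletter article.
paul_corpus = [120, 450, 610, 520, 300, 400]
disputed    = [ 30, 110, 140, 130,  80, 110]

# Rows = texts, columns = word-length bins.
table = np.array([paul_corpus, disputed])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```

With counts this proportional the p-value comes out large, which, as noted above, is evidence of nothing either way; a small p-value would be evidence of different authors.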
What the results of the previous post do require, however, is further tests.
I focused on four articles: one on the coming race war, one on carjacking, one on AIDS, and another calling for Paul’s re-election to Congress. By analyzing word length, punctuation and letter appearance, we were able to determine that Paul probably did not write two of the four articles, namely the re-election article and the particularly offensive article on carjacking. The articles on AIDS and the coming race war, however, are still in dispute.
Taking a cue from a paper sent by Mr. JD Klein, who kindly took the time to comment on the last post, I ran a principal component analysis (PCA) on word length. I have since added several articles by Lew Rockwell, head of the Mises Institute (a libertarian think tank), a few articles written by other members of the institute, more of Paul’s articles and three more of his books.
PCA is a mathematical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of uncorrelated variables called principal components. It is normally used as a data reduction technique when one has multiple correlated variables and wishes to reduce them into one, two or possibly three compact, but uncorrelated variables. In this case, there are 30 variables representing the percentage of word lengths (from 1-30) in all of the texts.
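A toy version of the procedure in Python, done via the SVD (which is what PCA boils down to) so the sketch stays self-contained: two invented “author profiles” over word lengths, five noisy texts from each, projected onto the first two PCs:

```python
import numpy as np

# Rows = texts, columns = percentage of words at each length (1-10
# letters here for brevity; the real matrix ran 1-30). The two author
# profiles and the noise are made up for illustration.
rng = np.random.default_rng(0)
profile_a = np.array([5, 15, 20, 20, 15, 10, 7, 4, 3, 1], dtype=float)
profile_b = np.array([3, 10, 15, 18, 17, 14, 10, 7, 4, 2], dtype=float)
texts = np.vstack([profile_a + rng.normal(0, 0.5, 10) for _ in range(5)] +
                  [profile_b + rng.normal(0, 0.5, 10) for _ in range(5)])

# PCA via SVD of the centered matrix: the principal components are
# orthogonal directions of maximum variance.
X = texts - texts.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U[:, :2] * s[:2]  # coordinates of each text on PC1 and PC2

# Texts by the same "author" should land near each other on a plot of
# PC1 against PC2.
print(scores.shape)
```

Plotting the two columns of `scores` against each other gives exactly the kind of cluster plot described below.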
What one can also do is find important clusters of observations when plotting the first and second PC’s against one another. Thus, if Paul wrote some works, but someone else wrote others, we might see that all of Paul’s writings occupy a particular region on the plot, whereas the other author occupies another.

I have included the plot on the right. Interestingly, Paul’s writings are all over the place. What is of note is that some of Lew Rockwell’s writings appear to be clustered in one region, along with the re-election and carjacking articles, the very two articles that were found in the previous post to likely NOT have been written by Paul. I have circled the appropriate region.
Searching further on authorship attribution and text analysis (this field is rather new to me), I also found a software package called JGAAP (Java Graphical Authorship Attribution Program). It is a Java-based textual analysis program. It allows one to feed in a number of text files, assign authorship to each one of them, and compare them with a number of texts of unknown authorship. While the program allows for a number of comparison methods, I opted for the path of lowest resistance (and time) and compared word length between the texts using a nearest neighbor driver and a histogram distance.
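The nearest neighbor plus histogram distance combination is easy to sketch. The histograms below are invented, and JGAAP’s actual drivers are more elaborate, but the attribution rule is just “assign the disputed text to the author with the closest word-length histogram”:

```python
import numpy as np

# Normalized word-length histograms for texts of known authorship and
# one disputed text; values are made up for illustration.
known = {
    "Paul":     np.array([0.05, 0.18, 0.22, 0.20, 0.15, 0.20]),
    "Rockwell": np.array([0.08, 0.22, 0.25, 0.18, 0.12, 0.15]),
    "Powell":   np.array([0.04, 0.15, 0.20, 0.22, 0.18, 0.21]),
}
disputed = np.array([0.07, 0.21, 0.24, 0.19, 0.13, 0.16])

def hist_dist(a, b):
    # Histogram distance here = Euclidean distance between distributions.
    return np.linalg.norm(a - b)

# Nearest neighbor attribution: the candidate with the smallest distance.
nearest = min(known, key=lambda author: hist_dist(known[author], disputed))
print(nearest)
```

With these made-up histograms the disputed text lands nearest Rockwell; the real run, of course, used the actual texts and several distance measures.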
I have included a table of the three most likely authors of the four articles based on word length. Interestingly, Paul is not the definitive author on any of the texts. In fact, he is not even in the top three for the re-election article. Lew Rockwell, however, is implicated in all four of these articles. Michael Rozeff (I included a number of “control” articles) made the top three for the race war article, a result that I’m not sure how to interpret.
Clearly, further analysis is in order. Given extra time, I will pursue this to the best of my ability. I find these results fascinating, however. Paul maintains that he did not write the articles and, given these results, that may be true. Lew Rockwell, long involved in Paul’s activities, could have, in fact, written these.
That Paul himself disavows these articles is not surprising in an election campaign. What is missing, though, is the question of who wrote these articles and the extent of Paul’s knowledge of what was written in his name. I think that I have, in some way, cracked this egg for further investigation.
Update: Part 2 is here.
Update: Part 3 is here.
Ron Paul sold newsletters in the 80’s and 90’s. The content of these newsletters was appalling though unsurprising. Here’s a sample:
“We don’t think a child of 13 should be held responsible as a man of 23. That’s true for most people, but black males age 13 who have been raised on the streets and who have joined criminal gangs are as big, strong, tough, scary and culpable as any adult and should be treated as such.”
“And Stanford, Michigan, and many other universities have banned speech that offends privileged groups. Anti-white, anti-male, anti-heterosexual or anti-Christian remarks are perfectly OK, of course.” You can imagine, then, what a relief it must be to minorities, homosexuals, women and non-Christians to find themselves the privileged people of America. The rest of this page and part of the second details a cabal of homosexuals in the Bush administration who like to lead “the young” astray.
“Boy, it sure burns me to have a national holiday for that pro-communist philanderer, Martin Luther King. I voted against this outrage time and time again as a Congressman. What an infamy that Ronald Reagan approved it! We can thank him for our annual Hate Whitey Day. Listen to a black radio talk show in any major city. The racial hatred makes a KKK rally look tame.”
“Dr. Douglass believes that AIDS is a deliberately engineered hybrid of these two animal viruses cultured in human tissue, and he blames World Health Organization experimentation at Ft. Detrick, Maryland…. Could the government have experimented with it in the civilian population, as it did in the 1950s with LSD, and had things get out of control? I don’t know, but these sure are interesting questions.”
“A well-known libertarian editor just back from New York told me: ‘The ACT-UP slogan, on stickers plastered all over Manhattan, is “Silence = Death.” But shouldn’t it be “Sodomy = Death”?'”
Paul claims not only NOT to have written the trash in his newsletters, but also not to have known of their content. I find it highly unlikely, given Paul’s prolific written output, that Paul would not have had the time to write the content of newsletters and signatures which bear his name. I also find it unlikely that he himself wouldn’t have read them, given that he drew a portion of his income from their continued sale.
Regardless, the claim that Paul did NOT write the content of his own newsletters needs to be put to rigorous test. Clearly, Paul himself is of no use in this venture, given his precarious position as a Presidential candidate.
PhiloComp.net offers the “The Signature Stylometric System,” a text analysis software package offered for free. One can use the package, for example, to determine if the same author wrote all of Shakespeare’s plays or to determine the authorship of the Federalist Papers. It compares word and sentence length between texts, and determines frequency of letter usage and punctuation. Authors have particular styles. For example, one author may often use three letter words (or four letter words!). We may take a disputed work, compare the word length of it against all other works by said author, and then statistically test whether there is evidence to suggest that the work came from that author.
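Under the hood, a word-length comparison of the kind Signature performs starts from nothing fancier than a frequency table. A minimal Python version of that extraction step (the sample sentence is just for illustration):

```python
from collections import Counter
import re

# Compute the distribution of word lengths in a text, roughly what
# Signature extracts before comparing two corpora.
text = "We may take a disputed work and compare the word length of it"

words = re.findall(r"[A-Za-z']+", text)
lengths = Counter(len(w) for w in words)
total = len(words)
# Relative frequency of each word length, sorted by length.
dist = {k: lengths[k] / total for k in sorted(lengths)}
print(dist)
```

Run over a whole corpus, these frequencies become the profile that a disputed text is tested against.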
I collected a number of works known for a fact to be written by Paul. I included a couple of chapters from “End the Fed,” a number of his speeches, and more than 20 articles and compiled them into a single corpus. On the internet, I then found four articles from his newsletters: one asking readers to assist in his re-election to office (his present seat in Congress, actually), one on the supposed government conspiracy to create and spread AIDS (partially quoted above), one on the coming race war, and one particularly deplorable article on carjacking and the need for an armed populace.
A graph of the distribution of word length in Paul’s output can be seen below.
Using the software, I compared the word length and sentence length of each of the four newsletter articles to works known to be written by Paul. The results are below. For those unfamiliar with stats and/or p-values, the gist is this: if the p-value is less than, say, .05, there is reason to believe that the author of the newsletter articles is someone other than Paul. If the p-value is greater than .05, we might conclude that there is not enough evidence to suggest that Paul did not write the articles, and move on to other methods of testing (as is seen in the next post).
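That decision rule amounts to a tiny function. The threshold of .05 and the wording are assumptions for illustration, not output from the Signature package:

```python
def authorship_verdict(p_value, alpha=0.05):
    """Hedged decision rule: a small p-value casts doubt on shared authorship."""
    if p_value < alpha:
        return "reject: evidence the author is someone other than Paul"
    return "fail to reject: no evidence against Paul's authorship; try other methods"
```

Note that failing to reject is not proof of authorship, only an absence of evidence against it, which is why the next post turns to other methods.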
The results are interesting. There is not enough evidence to suggest that someone other than Paul wrote the piece on AIDS and the piece speculating on a coming race war, though to confirm (or refute) Paul’s authorship, we may have to resort to other methodologies. On the other hand, there is reason to believe that someone else may have written the other two articles, the one on carjacking and the re-election piece.
I have also included a comparison with a piece on health care that is known to be written by Paul. The tests confirm that it compares nicely with the rest of Paul’s known writings (or at least provide no evidence that it is significantly different). For reference, I have also included the results of a test between Paul’s writings and the entire text of this blog starting in 2007. Again, the test confirms that the authors are likely different people (which I already knew).
A visual comparison of word length between the feature on the coming race war and the rest of Paul’s works shows that the two are very similar. For reference, I have included a comparison of Paul’s works with my last blog post, which, incidentally, was also statistically different from Paul’s writing on all measures.
Obviously, we will never know without a doubt who did or did not write the trash that appeared in Paul’s incendiary newsletters, though results like these and more casual spot-check analyses indicate that the case is hardly closed. I am convinced that Paul happily exploits the worst elements of the American political landscape. He willfully mixes with racists, conspiracy nuts and paranoid gun freaks for nothing more than political gain, political contributions and, worse yet, book sales. I am also convinced that he was aware of the newsletters that he has “disavowed,” though the results above indicate that he may, in fact, have farmed out some of the writing to other people.
Subjecting myself to his writing was one of the most painful and useless experiences of my life. I really wanted to give the man a chance, particularly after his impressive display at the Republican foreign policy debate. “End the Fed” read more like a paper from freshman comp than a serious book, though it somehow attempts to pass itself off as a work of deep economic analysis. Not to disparage people I know who may support Paul (and I do apologize), but I think that Krugman’s recent quip that Newt Gingrich is “a stupid man’s idea of what a smart man sounds like” is actually more true of Ron Paul.
It doesn’t take a piece of software to know that it is possible that Paul at least signed off on some of the nonsense in his newsletters. The jury on whether he did or did not write these articles may be out, but a reading of his works shows that philosophically, it doesn’t take a great leap of faith to move from Paul’s public persona to some of the ugliest portions of right wing politics.
Update: Please see further analysis in the next post that expands upon these results. If Paul didn’t write these letters, who did?
Further discussion of methods and criticism of this post on another blog can be found here.