Socioeconomic Status, Social Capital, and Partisan Polarity as Predictors of Political Incivility on Twitter

October 11th, 2015

This paper came about when my clever colleague Toby Hopp asked me about a dataset I had collected. It covered the 2012 election and included tweets that mentioned candidates Obama and Romney. I had used it to investigate agenda-setting theory, but Toby was interested in different theoretical areas: incivility and social capital. That lens led to interesting questions.

The premise, that different areas of the country would differ in how uncivil they were toward political candidates, was interesting. I knew incivility was in there; after all, the #1 non-stopword associated with Obama was F@#$. Of course, I expected there to be variance across the country, but to what extent would that variance be explained by the demographic status, economic status, and political partisanship of a particular area?

[iframe src="http://www.chrisjvargo.com/incivility/" width="100%" height="400"]

The genius piece of programming that allowed us to embark on this investigation was the Sunlight API. My deepest thanks to the Sunlight Foundation for making the API freely available to the public. It made resolving a GPS coordinate to a congressional district as easy as using requests in Python to fetch some JSON. Once we had that, getting the census data and voting results for each congressional district was straightforward.
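The lookup amounted to something like the minimal sketch below. Note this is a reconstruction: the Sunlight API has since been retired, and the endpoint URL and response field names here are assumptions for illustration, not the exact calls we used.

```python
import json
import urllib.parse
import urllib.request

# Endpoint modeled on the (now-retired) Sunlight Congress API; the exact
# URL and response field names are assumptions for illustration.
LOCATE_URL = "https://congress.api.sunlightfoundation.com/districts/locate"

def parse_district(payload):
    """Pull a (state, district) pair out of a Sunlight-style JSON response."""
    results = payload.get("results", [])
    if not results:
        return None
    return results[0]["state"], results[0]["district"]

def district_for_point(latitude, longitude):
    """Resolve a GPS coordinate to a congressional district."""
    query = urllib.parse.urlencode({"latitude": latitude,
                                    "longitude": longitude})
    with urllib.request.urlopen(f"{LOCATE_URL}?{query}") as resp:
        return parse_district(json.load(resp))
```

With a district in hand, each tweet could then be joined to district-level census and voting data.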

Developing a computational score for incivility was, well, more colorful. We relied on the Cluebot and Google "bad word" lists. That approach worked well in a recent Kaggle competition aimed at detecting insults in online commentary, and we show in the paper that it was valid for political candidate incivility as well. We manually added candidate-specific insults as we saw them, of course. I encourage you to try to read through these lists with a straight face.
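As a rough sketch of how a lexicon-based score like this works: tokenize each tweet and count how many tokens hit the word list. The lexicon below is a stand-in, not the actual Cluebot/Google lists, and the exact scoring in the paper differs.

```python
import re

# Tiny illustrative lexicon; the paper used the Cluebot and Google
# "bad word" lists plus hand-added candidate-specific insults.
BAD_WORDS = {"idiot", "liar", "moron"}  # stand-in entries, not the real lists

def incivility_score(tweet, lexicon=BAD_WORDS):
    """Fraction of a tweet's tokens that match the incivility lexicon."""
    tokens = re.findall(r"[a-z']+", tweet.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for token in tokens if token in lexicon)
    return hits / len(tokens)
```

Scores like this can then be averaged per congressional district to get a district-level incivility measure.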

The result? We found that incivility on Twitter was highest in districts that had:

  • Low Socio-Economic Status
  • Low Levels of Social Capital
  • Low Levels of Partisan Polarity

As with all big data studies, there are some nuances and exceptions to these results. For a detailed breakdown, see the full paper here >>
