Lingua Franca » Lucy Ferriss | Blogs | The Chronicle of Higher Education

We the Partisan People

In response to my recent post on pronunciation in political speech, one reader pointed me to his video on the subject, which led me in turn to an amazing bit of research underway by scholars at Stanford and Brown Universities, the University of Chicago, and the National Bureau of Economic Research. In their paper “Measuring Polarization in High-Dimensional Data: Method and Application to Congressional Speech,” Matthew Gentzkow, Jesse Shapiro, and Matt Taddy have combed through 126 years of congressional speech to detect patterns of partisanship based on two-word phrases, defining partisanship as “the ease with which an observer could infer a congressperson’s party from a fixed amount of speech.”

Now, it’s no secret that Republicans and Democrats each have their favorite turns of phrase. One politician’s “anthropogenic climate change” is another’s “so-called global warming”; one’s “undocumented immigrant” is another’s “illegal alien.” The question is whether partisanship shows up more in language patterns now than it did in days of yore. We already know that past presidential campaigns have been even uglier than what we saw last winter and spring: John Quincy Adams’s supporters accused Andrew Jackson’s mother of being a prostitute, while Jackson called Adams a “hermaphrodite.” But to test whether legislators’ speech in general has undergone what George Lakoff called “consequential change,” these researchers tried to “apply tools from structural estimation and machine learning to study the partisanship of language in the U.S. Congress from 1873 to 2009.” As their Figure 3, below, illustrates, they found a remarkable rise in partisan speech since the famous “Contract With America” of the mid-1990s. Their conclusions are at odds with earlier research on partisan speech, which had concluded that, while partisanship has been rising recently, it was even higher in the past, at least if one judges by language.

[Figure 3]

Why does this new research point toward a rise in partisan speech, not just recently, but compared with speech from all earlier eras in American politics? According to one of the researchers, Jesse Shapiro of Brown University, the team used a statistical technique called regularization to answer the first empirical question anyone should ask of a survey like this: Could the differences be due to chance? If the thing you’re studying is fairly limited, you can compare the actual distribution to a random distribution and draw a conclusion. But since language constitutes such a vast pool of choices, you need some other method to correct for what statisticians call “finite sample bias,” or, as Shapiro puts it, “taking too seriously the information in small selections of data, for example the patterns of usage of rarely occurring phrases.” For those who understand statistics better than I do, more information on the team’s methods can be found in a series of lectures by Matt Taddy here. For those as ignorant of statistical methods as I am, Shapiro’s explanation may be helpful:
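The finite-sample problem Shapiro describes can be sketched in a few lines. The counts and the penalty below are invented for illustration, and the shrinkage shown is simple additive smoothing, a crude stand-in for the paper’s actual regularization: a phrase uttered only once looks perfectly partisan under the naive estimate, while a penalized estimate pulls it back toward 50/50 unless the phrase occurs often enough to overcome the penalty.

```python
def partisanship(rep_count, dem_count, penalty=0.0):
    """Estimated share of a phrase's uses that come from Republicans.
    A positive penalty shrinks the estimate toward 0.5, so rarely
    occurring phrases no longer look spuriously partisan."""
    return (rep_count + penalty) / (rep_count + dem_count + 2 * penalty)

# A phrase uttered exactly once, by a Republican:
print(partisanship(1, 0))                    # naive estimate: 1.0, "perfectly partisan"
print(partisanship(1, 0, penalty=5))         # penalized: about 0.55, i.e. mostly noise
# A genuinely partisan phrase used thousands of times survives the penalty:
print(partisanship(6000, 2000, penalty=5))   # about 0.75 with or without the penalty
```

The penalty matters enormously for rare phrases and barely at all for common ones, which is exactly the asymmetry needed to stop chance patterns in small samples from inflating the partisanship measure.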

One way to describe the model at a high level is that it is an “urn model” of speech. That is, we model speech as if speakers are drawing phrases at random from an urn, and the contents of the urn differ by party. Our methods then attempt to estimate how different the contents of the Republican urn are from those of the Democratic urn. The urn model is a dramatic oversimplification of the way that human speech works, of course, but it is also a very useful metaphor that makes it possible to perform the exercise we lay out in the paper.
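The urn metaphor can be sketched as a naive-Bayes calculation. In the toy version below every phrase frequency is invented for illustration (the paper’s actual estimation is far more sophisticated): each party’s urn has known draw probabilities, a listener starts from 50/50, and each phrase drawn from the speaker’s urn updates the listener’s posterior about the speaker’s party.

```python
import math

# Hypothetical per-phrase draw probabilities for each party's "urn"
# (invented numbers, not estimates from the paper).
rep_urn = {"illegal alien": 0.6, "undocumented immigrant": 0.1, "tax relief": 0.3}
dem_urn = {"illegal alien": 0.1, "undocumented immigrant": 0.6, "tax relief": 0.3}

def posterior_republican(phrases, prior=0.5):
    """Posterior probability that the speaker is Republican, treating each
    phrase as an independent draw from the speaker's party urn."""
    log_rep = math.log(prior)        # accumulate log-probabilities to
    log_dem = math.log(1 - prior)    # avoid underflow on long speeches
    for phrase in phrases:
        log_rep += math.log(rep_urn[phrase])
        log_dem += math.log(dem_urn[phrase])
    return 1 / (1 + math.exp(log_dem - log_rep))  # back from log-odds

print(posterior_republican(["tax relief"]))                   # 0.5: the phrase is neutral
print(posterior_republican(["illegal alien", "tax relief"]))  # about 0.86
```

On this model, even urns whose contents differ only modestly per phrase can leave a listener quite certain after a 33-phrase speech, because the evidence accumulates draw by draw.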

But here’s the startling thing about the team’s conclusion, particularly as we enter the final months of this torturous presidential campaign. In their words, with my emphases:

An average one-minute speech in our data contains around 33 phrases (after pre-processing). In 1874, an observer hearing such a speech would be expected to have a posterior of around .54 on the speaker’s true party. In 1990, this value remained almost equivalent at around .55. In 2008, the value was .83.

More than four out of five times that you hear a one-minute clip of a legislator’s speech, in other words, you will know what party he or she belongs to. As recently as 1990, and consistently through the preceding decades, your chances of a correct guess would have hovered close to 50/50.

So it’s not your imagination. We are speaking, as it were, two different languages when it comes to the values and social policies our representatives preach and we echo. These languages frame issues in ways that influence public opinion, and politicians now pay consultants tremendous fees to coin neologisms and match pairs of words in such a way as to seize the headlines. If you need to imagine the waves of influence that spread from a carefully chosen partisan phrase, just think about mass shooting versus radical Islamic terrorism, the phrases chosen by Democrats and Republicans, respectively, to describe the recent killings in an Orlando nightclub. And as the authors of the study write,

Language is also one of the most fundamental cues of group identity, with differences in language or accent producing own-group preferences even in infants and young children (Kinzler et al. 2007). Imposing a common language was a key factor in the creation of a common French identity (Weber 1976), and Catalan-language education has been effective in strengthening a distinct Catalan identity within Spain (Clots-Figueras and Masella 2013). That the two political camps in the US increasingly speak different languages may contribute to the striking increase in inter-party hostility evident in recent years (Iyengar et al. 2012).

Do I think they’re pointing to a dangerous trend? To quote one of our most partisan contemporary politicians, You betcha.
