Category Archives: Political Theory

The Progress of the Humanities

Over on Facebook, my friend Adam Elkus linked me to an article by David Lake, titled “Theory is Dead, Long Live Theory: The End of the Great Debates and the Rise of Eclecticism in International Relations. [pdf]” The piece is extremely strong, and describes International Relations as split into two camps: one engaged in normal science capable of progress, and the other a tiresome collection of “Great Debates” that are never answered.

There’s a lot to love in Lake’s paper — it really is very high quality — but the most evocative image from it is the threat that International Relations will split into two fields that do not even study the same phenomenon. What if the scientists focus on experiments (and quasi-experiments) that can be conducted in the here and now, while the Great Debaters retreat into history and just-so stories?

To me, the best hope to save International Relations from such a fate lies in the “digital humanities.” The digital humanities are not just a method for those interested in the past to escape the “humanities ghetto” of low employment and low wages. Rather, the digital humanities use Big Data techniques to understand our common past, in the same way that companies like Facebook use many of the same techniques to understand many private pasts. (Some more information on the digital humanities is available on the personal site of Jason Heppler of Stanford University.)

As an example, take Lake’s discussion of Zara Steiner‘s Triumph of the Dark, a narrative history of the outbreak of the Second World War. Lake notes the rigor of the book, but sadly states that such a work can generate no hypotheses or tests. But a digital humanities approach, say one working from the massive newspaper, magazine, book, and census corpora at our disposal, is not so limited. It is easy to imagine hypotheses about the motives of leaders with that amount of data to work with. Perhaps the degree to which “Hitler wanted war” can be tracked by measuring the day-to-day bellicosity of the written works of those he met with? Or might the places where we know Neville Chamberlain spent parts of his life be linked to pro-peace inflections in the lives of others?
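To make this concrete, here is a minimal sketch, in Python, of what “measuring the day-to-day bellicosity” of a set of texts could look like. The records, the wordlist, and the scoring rule are all invented for illustration; a real digital humanities project would work from an actual dated corpus and a validated lexicon.

```python
# Toy sketch: track day-to-day "bellicosity" in a dated text corpus.
# The records and the wordlist below are invented for illustration.
from collections import defaultdict
from datetime import date

BELLICOSE_TERMS = {"war", "attack", "mobilize", "annex", "ultimatum", "rearm"}

def bellicosity(text: str) -> float:
    """Share of tokens that are bellicose terms."""
    tokens = [t.strip(".,;:!?\"'()").lower() for t in text.split()]
    if not tokens:
        return 0.0
    return sum(t in BELLICOSE_TERMS for t in tokens) / len(tokens)

def daily_scores(records):
    """records: iterable of (date, author, text); returns {date: mean score}."""
    by_day = defaultdict(list)
    for day, _author, text in records:
        by_day[day].append(bellicosity(text))
    return {day: sum(scores) / len(scores) for day, scores in sorted(by_day.items())}

# Example with made-up records:
records = [
    (date(1938, 9, 12), "editorial", "The ultimatum points toward war and attack."),
    (date(1938, 9, 13), "editorial", "Diplomats still hope for a peaceful settlement."),
]
print(daily_scores(records))
```

A wordlist score this crude would never settle a historical question on its own, but it is enough to generate testable hypotheses of the kind Lake says narrative history cannot.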

International Relations is the science dedicated to predicting, controlling, and improving the behavior of States. This should be done through hypothesis testing, model building, and the methods of the digital humanities. There are many ways to advance science.

In the chaos of old boy networks – in stagnant fields with no progress – it sucks to be young. But in all those areas where science and technology march hand-in-hand toward progress, it is a joy to be young!

Controversies in Normal Science

tl;dr: “Normal Science” refers to science when scientists focus on making progress, not just arguing in circles. The difference between frequentist and Bayesian — or “go-up” and “go-next” — statistics matters much more to the future of normal science than the nonsense that Walt and Mearsheimer came up with in a recent article.

Will F. Moore, at his blog A Second Mouse, has a really funny post up titled “Commentary on Mearsheimer and Walt.” Not surprisingly, it is a commentary on Mearsheimer’s and Walt’s (d’uh! :-) ) recent post, which I also criticized. Basically, Mearsheimer and Walt wrote a piece in which they demonstrated deep confusion about scientific methods, and lamented the decline of the “old boys network” and its replacement by objective methods of evaluation.

One of the ridiculous parts of Mearsheimer and Walt’s column is their inability to distinguish substantive from non-substantive divisions in normal science. For instance, a large part of Mearsheimer and Walt’s piece is dedicated to a discussion of “scientific realism,” which appears to be a confused discussion of instrumental validity. Mearsheimer and Walt completely miss the division of scientific research into frequentist and Bayesian camps, which Will Moore humorously emphasizes:

Quantitative approaches—particularly the misapplication of hypothesis testing methods which make complete sense in the context of survey research but no sense whatsoever in the context of the analysis of one-off populations—may be wrong, but at least we can systematically say why they are wrong.[4] Grand theory?—welcome to the narrative fallacy and that wonderful little hit of dopamine that your brain gives you in response to any coherent story. And that’s all they’ve got to work with.

…

[4] And again, we know of plenty of alternatives, including the rapid emergence of Bayesian model averaging which is likely to wipe out the cult of incremental frequentist garbage can models. The cult otherwise known by the initials APSR and AJPS.

Previously on this blog, I’ve referred to “frequentist” and “Bayesian” statistics as “go-up” and “go-next,” because frequentist work tends to emphasize building a model of reality, while Bayesian models tend to focus on predicting what will happen next. As I wrote previously:

The Go-Up view of statistics is that statistics measures the population from which an observation comes. The appropriate way to go-up is to wait until you have a sufficient number of observations, and then generalize from those observations to the population. This is the method that Derbyshire was describing in 2010. A large number of observations of academic performance show consistent gaps between black and white learners. Because we’re “going-up” from observations to populations, we can conclude some things about the population, and about how outcomes in the population should work out overall, but it makes no sense to try to predict any given student’s success based on this. We’re going-up, not going-next.

The Go-Next view of statistics is that statistics gives us the likelihood of something being true, based on what has come before. In Go-Next statistics, population averages are beside the point. What matters is guessing what’s going to happen next, based on what you’ve seen before. The whole point is to guess what’s going to work for individuals you know only a few things about, based on your experience with other individuals who shared some things with the new strangers.

…

The superstructure of science changes as the infrastructure of the economy changes. The Go-Next philosophy of statistics, once the peasant stepchild of the serene Go-Up interpretation, now reigns supreme.

The unfolding victory of Go-Next Statistics matters much, much more than, say, the Copernican Revolution. The number of people whose daily conversations were actually impacted by Copernicus may have been a few dozen, all involved in the Papal-Academic complex.

How many times a day does Facebook’s decision of which news to share impact you?
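To make the go-up/go-next distinction above concrete, here is a minimal sketch in Python on an invented binary dataset. The counts, the uniform prior, and the setup are illustrative assumptions, not anyone’s actual analysis.

```python
# Minimal sketch of "go-up" (frequentist) vs "go-next" (Bayesian) reasoning
# on the same binary data. All numbers are invented for illustration.
import math

successes, trials = 62, 100          # e.g., 62 "yes" outcomes in 100 observations

# Go-up: estimate the population proportion and a 95% confidence interval.
p_hat = successes / trials
se = math.sqrt(p_hat * (1 - p_hat) / trials)
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)
print(f"go-up: population proportion ~ {p_hat:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")

# Go-next: with a uniform Beta(1, 1) prior, the posterior is
# Beta(1 + successes, 1 + failures), and the probability that the *next*
# observation is a success is the posterior mean.
alpha, beta = 1 + successes, 1 + (trials - successes)
p_next = alpha / (alpha + beta)
print(f"go-next: P(next observation is a success) ~ {p_next:.2f}")
```

The go-up half reports what the population looks like; the go-next half reports a bet on the very next observation.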

There are real controversies and real research programs in normal science.

Too bad Walt and Mearsheimer know so little about normal science that they were unable to identify either.

Money, Power, and Normal Science

Fabio Rojas has a post up titled “Theory Death in Political Science.” It links to a post by Stephen Saideman, “Leaving Grand Theorists Behind,” which was published at Saideman’s Semi-Spew. (A companion piece was also published at Duck of Minerva and discussed by me earlier.)

Here’s the beginning of the post:

A definition: theory death is when some intellectual group tires of theory based on armchair speculation. Of course, that doesn’t mean that people stop producing theory. Rather, it means that “theory” no longer means endless books based on the author’s insights. Instead, people produce theory that responds to, or integrates, or otherwise incorporates a wealth of normal science research. In sociology, theory death seems to have happened sometime in the 1980s or 1990s. For example, recent theory books like Levi-Martin’s Social Structures or McAdam and Fligstein’s A Theory of Fields are extended discussions of empirical research that culminate in broader statements. The days of endlessly quoting and reinterpreting Weber are over. :(

Now, it seems, theory death is hitting some areas of political science.

What Fabio Rojas calls “theory death” is the “normalization of science.” That is, the establishment of methods that allow for progress in the prediction, control, and improvement of behavior of some object of study (molecule, person, State, etc.) over time.

The next line is particularly important:

Science becomes normalized when the power that the Old Boys network achieves through limiting competition is overtaken by the money available for creating progress.

There have been two great flowerings of science in American history. Both emerged from the establishment of the great American University System in the late 19th century, but they accelerated at different times. As I wrote previously:

Following the Second World War science boom, the federal government accelerated the rise of the American research universities. From the Second World War to the Vietnam War, physics was a favorite area for funding. From this we received many new physical inventions, such as the transistor. After the Vietnam War, medicine became a favorite area for funding. Now we have great medical breakthroughs.

While social science research funding is only a fraction of medical research funding, the federal academic complex ensures that there is bleed-through from the health sciences to the social sciences as well. The bureaucratic momentum favors peer-reviewed scientific research funding. Such funding requires that researchers seek to achieve progress in some area, which of course privileges normal science (which is capable of achieving progress) relative to non-paradigmatic science (which is not).

[Figure: ways of knowing]

The reason that Political Science is late to normalization — why it is experiencing “theory death” later than other fields — comes from the obvious exception to this general rule for how academia works:

Professors, like most people, respond to the incentives of power, influence, and money.

The institution of tenure reduces uncertainty regarding money, and focuses the incentives on power and influence.

Power in academia comes from the number of bodies a professor has under him. These bodies might be apprentices (graduate students he advises), journeymen (post-docs who have a PhD and work in the lab, or staff researchers), or simple workers (lab technicians, etc.).

Influence in academia comes from the extent to which one is successful in influencing one’s peers. This is typically measured in terms of influence scores, which are a product of how often the academic is cited, weighted by how important a publication he is cited in.
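As a toy illustration of the arithmetic behind such scores, here is a sketch in Python. The venue weights and citation counts are invented; real influence metrics (Eigenfactor-style scores, the h-index, and so on) are considerably more elaborate.

```python
# Toy illustration of an influence score: citations weighted by how
# "important" the citing venue is. Venue weights and counts are invented.
VENUE_WEIGHT = {"APSR": 3.0, "AJPS": 2.5, "regional journal": 1.0, "blog": 0.2}

def influence_score(citations):
    """citations: list of (venue, count) pairs for one scholar."""
    return sum(VENUE_WEIGHT.get(venue, 1.0) * count for venue, count in citations)

print(influence_score([("APSR", 4), ("regional journal", 10), ("blog", 30)]))
# 3.0*4 + 1.0*10 + 0.2*30 = 28.0
```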

Unlike professors elsewhere in academia, political science professors hope to influence national policy makers, and so are relatively immune to academic discipline. This actually hurts scholarship. For instance, Victor Cha’s otherwise great book on North Korea, The Impossible State, is pretty much ruined by his analysis of Kim Jong Il, which was basically a job application. Likewise, Stephen Walt and John Mearsheimer (who began this discussion by defending the Old Boys network) basically produce political propaganda for the Old Right (pessimistic, Army-focused, and anti-Zionist). The lack of academic discipline has allowed political science to get away with graduating students into the “humanities ghetto” — because skills don’t matter in political science as much as connections, those without connections are left with high unemployment and dim job prospects:

[Chart: wages and employment by college major, showing the humanities ghetto]

The way forward is probably for grant-funding organizations to support normal science in political science research, and for political agitators to coalesce within agenda-driven “think tanks.” Educational sciences have already experienced this split. It’s time for Political Science to normalize, too.

Definitions and Progress

A couple days ago a post on Duck of Minerva linked to a working paper called “I can has IR theory?” [pdf]. The title was funny, but something about the contents bothered me.

“I Can Has IR Theory?” appears to have two components:
1. It is an extended hit piece against “neopositivism,” which appears to be a methodology (or something) disliked by the authors. It is difficult to know if this is true, however, because the authors do not bother to define their terms.
2. It includes a discussion of “scientific ontology,” which likewise is never defined.

Unlike “neopositivism,” though (about which the only things I can tell are that the authors — Patrick Jackson and Daniel Nexon — dislike it, and that it appears to be related to quantitative methods), the article includes numerous descriptions of “scientific ontology.” It is these descriptions that bothered me.

“Scientific ontology” appears to be a synonym for “nomological network,” an antiquated and simplistic form of modeling that is prone to error.

First, some passages from Jackson and Nexon’s working paper:

To be more precise, we think that international-relations theory is centrally involved with scientific ontology, which is to say, a catalog—or map—of the basic substances and processes that constitute world politics. International-relations theory as “scientific ontology” concerns:
• The actors that populate world politics, such as states, international organizations, individuals, and multinational corporations;
• Their relative significance to understanding and explaining international outcome
• How they fit together, such as parts of systems, autonomous entities, occupying locations in one or more social fields, nodes in a network, and so forth;
• What processes constitute the primary locus of scholarly analysis, e.g., decisions, actions, behaviors, relations, and practices; and
• The inter-relationship among elements of those processes, such as preferences, interests, identities, social ties, and so on.

(Note that how they are measured is left out.)

And consider this passage (as mentioned above, “neopositivism” is never defined and only loosely described, so focus on the parts related to “scientific ontology”):

The Dominance of Neopositivism
This line of argument suggests that neopositivist hegemony, particularly in prestige US journals, undermines international-relations theorization via a number of distinct mechanisms:
• It reduces the likelihood that international-relations theory pieces will be published in “leading” journals because neopositivism devalues debate over scientific ontology in favor of moving immediately to middle-range theoretic implications;
• It reduces the quality of international-relations theorization by requiring it to be conjoined to middle-range theorizing and empirical adjudication; and
• It forces derivative middle-range theories to be evaluated through neopositivist standards.

(Note that scientific ontology thus excludes “middle-range theoretic implications.”)

In an earlier work, I wrote that:

As a measure of construct validity, nomothetic span is more inclusive than Cronbach and Meehl’s (1955) concept of the nomological network, as nomothetic span includes not only how a construct relates to other constructs, but also how measures of the same construct relate to each other (Messick, 1989).

The undefined concept of “scientific ontology” thus appears to be more or less identical to the idea of the nomological network, which was described half a century ago. Without incorporating measurement into a model, it is impossible to have a functional definition, a method of falsifying the model, or even a way to make useful predictions. And without that ability, it is impossible to make progress.
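As a toy illustration of what incorporating measurement buys you, here is a sketch in Python. The construct, its indicators, the weights, and the threshold are all invented; the point is only that once a construct is tied to measures, a claim about it becomes something that can actually fail.

```python
# Toy example of attaching measurement to a construct so a claim can fail.
# The construct, indicators, weights, and threshold are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Construct:
    name: str
    indicators: dict = field(default_factory=dict)   # measure name -> observed value
    weights: dict = field(default_factory=dict)      # measure name -> weight

    def score(self) -> float:
        return sum(self.weights.get(k, 1.0) * v for k, v in self.indicators.items())

# Operationalized claim: "bellicosity above 0.5 predicts escalation within a year."
bellicosity = Construct(
    name="bellicosity",
    indicators={"war_words_per_1000": 0.4, "troop_mobilizations": 0.3},
    weights={"war_words_per_1000": 1.0, "troop_mobilizations": 1.0},
)
prediction = bellicosity.score() > 0.5   # a concrete, checkable statement
print(round(bellicosity.score(), 2), prediction)   # 0.7 True
```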

Operational definitions are absent from Jackson’s and Nexon’s piece, both for their primary terms and for their view of “scientific ontology.”