The Only Text at UNL So Far Worth Anything


Rules for the World provides an innovative perspective on the behavior of international organizations and their effects on global politics. Arguing against the conventional wisdom that these bodies are little more than instruments of states, Michael Barnett and Martha Finnemore begin with the fundamental insight that international organizations are bureaucracies that have authority to make rules and so exercise power. At the same time, Barnett and Finnemore maintain, such bureaucracies can become obsessed with their own rules, producing unresponsive, inefficient, and self-defeating outcomes. Authority thus gives international organizations autonomy and allows them to evolve and expand in ways unintended by their creators.

Barnett and Finnemore reinterpret three areas of activity that have prompted extensive policy debate: the use of expertise by the IMF to expand its intrusion into national economies; the redefinition of the category “refugees” and the decision to repatriate by the United Nations High Commissioner for Refugees; and the UN Secretariat’s failure to recommend an intervention during the first weeks of the Rwandan genocide. By providing theoretical foundations for treating these organizations as autonomous actors in their own right, Rules for the World contributes greatly to our understanding of global politics and global governance.

Buy it

On the Limitations of Science

“Most scientific papers are probably wrong,” by Kurt Kleiner, New Scientist, 30 August 2005 (from Slashdot).

Quoted in full:

Most published scientific research papers are wrong, according to a new analysis. Assuming that the new paper is itself correct, problems with experimental and statistical methods mean that there is less than a 50% chance that the results of any randomly chosen scientific paper are true.

John Ioannidis, an epidemiologist at the University of Ioannina School of Medicine in Greece, says that small sample sizes, poor study design, researcher bias, and selective reporting and other problems combine to make most research findings false. But even large, well-designed studies are not always right, meaning that scientists and the public have to be wary of reported findings.

“We should accept that most research findings will be refuted. Some will be replicated and validated. The replication process is more important than the first discovery,” Ioannidis says.

In the paper, Ioannidis does not show that any particular findings are false. Instead, he shows statistically how the many obstacles to getting research findings right combine to make most published research wrong.
Massaged conclusions

Traditionally a study is said to be “statistically significant” if the odds are only 1 in 20 that the result could be pure chance. But in a complicated field where there are many potential hypotheses to sift through – such as whether a particular gene influences a particular disease – it is easy to reach false conclusions using this standard. If you test 20 false hypotheses, one of them is likely to show up as true, on average.

Odds get even worse for studies that are too small, studies that find small effects (for example, a drug that works for only 10% of patients), or studies where the protocol and endpoints are poorly defined, allowing researchers to massage their conclusions after the fact.

Surprisingly, Ioannidis says another predictor of false findings is if a field is “hot”, with many teams feeling pressure to beat the others to statistically significant findings.

But Solomon Snyder, senior editor at the Proceedings of the National Academy of Sciences, and a neuroscientist at Johns Hopkins Medical School in Baltimore, US, says most working scientists understand the limitations of published research.

“When I read the literature, I’m not reading it to find proof like a textbook. I’m reading to get ideas. So even if something is wrong with the paper, if they have the kernel of a novel idea, that’s something to think about,” he says.

Now, if only someone had already said this….
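As an aside, the “1 in 20” arithmetic in the quoted article is easy to check for yourself. A minimal sketch (the numbers here are mine, not the article’s):

```python
# With the conventional 5% significance threshold, each test of a
# false hypothesis has a 1-in-20 chance of a spurious "significant"
# result. Testing 20 false hypotheses therefore yields one false
# positive on average, and better-than-even odds of at least one.
alpha = 0.05    # conventional significance threshold (1 in 20)
n_tests = 20    # number of false hypotheses tested

expected_false_positives = n_tests * alpha
p_at_least_one = 1 - (1 - alpha) ** n_tests

print(f"Expected false positives: {expected_false_positives:.1f}")
print(f"Chance of at least one: {p_at_least_one:.2f}")
```

Run it and you get an expected 1.0 false positives and roughly a 64% chance of at least one spurious “significant” finding, which is the standard the article is criticizing.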