Tim Harford endorses testing (some) public policy ideas using randomised trials. One part leapt out at me:
Realising that inconvenient – or just plain boring – trial results are less likely to appear in print, medical journals now refuse to publish trials that were not logged before they started in a register of trials. Such registers ensure embarrassing results cannot be made to disappear. This is vital in medicine, and just as important in social policy.

Imagine if we had some way of tracking every regression someone ran. Unless the issue is particularly contentious, journals tend to favour articles that show some sort of relationship (although authors are getting better at reporting all their specifications). Ideally, we'd like to know about every statistical study that wasn't published, but unfortunately running a loop in Stata is much more private than running a large RCT.
Luckily, RCTs in the social sciences are convincing enough that studies that show no effect are still worthy of publication. As they become more and more popular and competition stiffens, will this still be the case?
Much as academics squabble over minute assumptions in econometric specifications, will we soon be arguing over differences in intervention design? When it comes to translating research into policy prescriptions, I wonder how much better off we'll be.
Even if the social sciences move closer to the "full disclosure" practices we see in the medical community, other filters will still lead to bias. The most potent is the media: positive results tend to get reported first, and tend to stick in the public mind longer, even after they've been contested or discredited.
1 Comment
I've heard academics talk about a "Journal of Negative Results" – it may just have been coffee-break chatter. Still, a nice idea.