📚 This is an archive of Aid Thoughts, a development economics blog that was active from 2009 to 2017. Posts and comments are preserved in their original form.

Innocent until proven likely

Some time ago I wrote about how randomness can obscure culpability. For example, let's say you program a computer to roll a die for you, but - importantly - it keeps the result hidden. You also program the computer to automatically donate from your bank account to a charity if the resulting roll is a four or above.

At some point Toby Ord walks in the door and convinces you that giving is a moral duty, so you decide to lower the "charity-giving" threshold from a roll of four to a roll of three. The impact of Ord's words is clear: your expected giving increases from $2.50 to $3.33 (assuming, as those figures imply, a $5 donation on a successful roll) - about 83 cents higher than before.

Let's say you then tell the computer to make a roll, and you learn that you have donated to charity. Is Ord's intervention responsible? We really can't say: the 'charity giving' result was possible before Ord came into the room, we don't observe the actual roll, and we cannot tell whether the result would have cleared the pre-Ord threshold anyway. While Ord's impact is observable in a grander, statistical sense, individual results cannot be attributed to his intervention.
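The arithmetic above can be sketched in a few lines of Python (a minimal illustration, assuming - as the dollar figures imply - a $5 donation on a successful roll; `expected_giving` is just an illustrative name):

```python
import random

DONATION = 5.0  # implied by the post: 1/2 chance * $5 = $2.50 expected giving

def expected_giving(threshold, sides=6):
    """Expected donation when the hidden roll must be >= threshold."""
    p_success = (sides - threshold + 1) / sides
    return p_success * DONATION

before = expected_giving(4)  # $2.50
after = expected_giving(3)   # $3.33
print(f"Ord's expected impact: ${after - before:.2f}")  # ~$0.83

# For any single hidden roll we only observe whether the donation happened:
roll = random.randint(1, 6)  # hidden from us
donated = roll >= 3
# Only a roll of exactly 3 is attributable to Ord's intervention;
# a roll of 4+ would have triggered the donation anyway.
```

The point of the last lines is the post's point: even knowing `donated`, we cannot tell whether `roll` was a three (Ord mattered) or a four-plus (he didn't).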

Suddenly, the world becomes a lot more uncertain. Take climate change, for instance. While there is growing evidence that global warming increases the probability of weather-related disasters, it is more difficult to tie individual disasters to global warming, as we cannot say for certain that a flood wouldn't have happened anyway. This doesn't stop us from embracing policy to reduce the probability that floods will happen in the future, but we do have to be careful about how we ascribe the blame for individual events.

Interestingly, according to an article in The Economist, death-row inmates in North Carolina are now allowed to apply statistical averages to specific cases:

Mr Robinson was the first person to have his death sentence vacated under North Carolina’s Racial Justice Act. Enacted in 2009, the law lets death-row inmates challenge their sentence (though not the underlying conviction) on grounds of racial bias. If a court finds that in the state, county, prosecutorial district or judicial division at the time of sentencing, death sentences were sought or imposed more frequently on members of one race than another, or were sought or imposed more frequently as punishment for killing members of one race, or if race was “a significant factor” in jury selection, the death sentence will be commuted to life without possibility of parole.

North Carolina’s law—unlike Kentucky’s, the only similar law in force—allows the use of statistical evidence to support an inmate’s claim, rather than requiring clear evidence of discriminatory intent.

This means that death-row inmates don't have to prove that racial bias made a difference in their own case; they just have to show that there is, in a statistical sense, racial bias. While I am happy that policies like this might be useful for pushing back against racism in court decisions in a grander sense (I'm also opposed to the death penalty), I'm uncomfortable with the assumption that a general result can be applied to a single observation. This is akin to saying Toby Ord is responsible for every single successful die roll, effectively giving him credit for the full $3.33 worth of expected charity instead of the $0.83 difference he really made.

Extra points if you can rewrite this blog post using econometric equations.

Categories: Research

6 Comments

Ian · May 02, 2012 at 03:24 PM

Yes, but isn't the point that Toby Ord, global warming or racism was possibly one of several contributing factors in each case, but probably not the only one - and the problem at an individual scale is that either we can work out the respective contributions of each factor or (more likely) we can't.

This sounds a bit like the attribution/contribution problem with impact evaluations. And a further extension of this reasoning is the difficulty of inferring from the general to the specific, or the specific to the general. E.g. we might know in general that cash transfers reduce absolute poverty, but did your programme? Or conversely, we know your programme reduced the poverty level of its beneficiaries, but does that mean the approach you used is a good idea in general?

Matt · May 02, 2012 at 04:50 PM

Hey Ian,

Thanks for the thoughts. Yes - these things can all possibly be contributing factors, but usually we can't tell if they were (we can't ask the computer to tell us what the die roll was, we only see the result).

We also can't always assume that everything was a contributing factor in each individual case (even if it doesn't make a difference). For example: imagine my computer-generated die roll experiment, but replicated over a large population of people. Now imagine that a random subset of this population is deaf, and when Toby Ord walks in and talks to them, it has no effect. If we looked at his impact statistically, the estimated effect would be much lower (because it will be zero for deaf people and $0.83 for the hearing, so the average effect will be lower - a heterogeneous effect). In this case, we really can't look at an average effect and assume that Ord made *some* difference in each case - for a subset, he *had* to have made no difference.

Your thoughts on impact evaluations seem like a related problem, but tied closer to the issue of external validity.
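The dilution Matt describes can be checked in a couple of lines of Python (a sketch using the thread's own numbers - $5/6 ≈ $0.83 per hearing listener, zero for deaf listeners; `average_effect` is an illustrative name):

```python
def average_effect(share_deaf, effect_hearing=5/6):
    """Population-average effect when some share of people cannot hear Ord."""
    return (1 - share_deaf) * effect_hearing

print(f"{average_effect(0.0):.2f}")   # 0.83: everyone hears Ord
print(f"{average_effect(0.25):.2f}")  # 0.62: average diluted by the zero-effect subgroup
```

A positive average is consistent with the effect being exactly zero for a whole subgroup - which is why the average can't be read back onto individuals.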

Humanicontrarian · May 02, 2012 at 07:55 PM

Hey Matt,

Faulty attribution. Reminds me of a story of how racism works. Job interviews for a position. 500 applicants. Black guy gets the job "due to reverse discrimination". That sort of gossip is destructive enough. What's worse is that you probably have 400 white guys, each of whom believes he lost out on the job because of it. And each one griping to family and friends, who then say things like "I know a guy who didn't get a job because...".

Nice post. In the aid world, maybe faulty attribution also looks like this: 10 agencies all support in some way an IDP camp, where there's a clinic that does 100 consultations per week. If you add up their annual reports, you'll find 52,000 consultations took place in the camp.

Humanicontrarian · May 02, 2012 at 07:58 PM

Oops. The last sentence of the previous comment should have said: If you add up their annual reports, you'll find that NGOs contributed/supported to 52,000 consultations in the camp.

Lee · May 03, 2012 at 09:20 AM

Same problem for performance-related pay via value-added calculations for teachers right?

Matt · May 03, 2012 at 12:16 PM

Hmm, that creates the additional complication of identification, right? You need to establish that the teacher actually led to the added value, which is difficult enough. The problem I'm talking about still exists post-identification, if that identification is done at a high enough level. You randomly allocate a teacher to a set of classrooms and find that students in the 'treated' classrooms, on average, increase their test scores over the previous year more than those in the untreated classrooms. You've identified the teacher effect, and can pay them accordingly, but you still can't point to a single student whose score increase matched the mean increase and claim that the teacher was definitely responsible for that student's improvement.
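Matt's classroom experiment can be simulated (a hedged sketch under made-up numbers - a true teacher effect of 2 test-score points and noisy student-level variation; none of this comes from a real value-added model):

```python
import random

random.seed(0)

def estimate_teacher_effect(n_classes=200, n_students=30,
                            true_effect=2.0, noise_sd=5.0):
    """Randomly assign the teacher to half the classrooms and
    compare mean score gains between treated and untreated."""
    classes = list(range(n_classes))
    random.shuffle(classes)
    treated = set(classes[: n_classes // 2])
    gains = {True: [], False: []}
    for c in classes:
        for _ in range(n_students):
            gain = random.gauss(0.0, noise_sd)  # idiosyncratic student variation
            if c in treated:
                gain += true_effect
            gains[c in treated].append(gain)
    mean = lambda xs: sum(xs) / len(xs)
    return mean(gains[True]) - mean(gains[False])

print(round(estimate_teacher_effect(), 1))  # close to 2.0
# The *average* effect is identified, but any single student's gain
# mixes the teacher effect with noise we cannot separate out.
```

The estimate converges on the true average effect, yet no individual student's gain decomposes into "teacher" and "noise" - which is Matt's point about post-identification attribution.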