(by Michael Lavine)
1. A New York Times editorial cites statistical evidence in calling for tighter regulation of for-profit colleges: for-profit colleges absorb a disproportionate share of federal aid dollars, and their graduates carry more debt and default on their student loans at disproportionate rates. The Times’ argument seems weak because it doesn’t consider potential confounders. Perhaps the higher rates of aid and default arise because for-profit colleges serve a poorer population than non-profit colleges do.
The Times is usually a good user of statistics, and I often admire their statistical graphics. But in this case I think they made a mistake. I don’t know whether for-profit colleges should be more tightly regulated, but I’m not convinced by the editorial’s statistics. I call on the Times, and all of us, to be careful in wielding statistics.
2. A story on the BBC website reports ‘Climate warming since 1995 is now statistically significant, according to Phil Jones, the UK scientist targeted in the “ClimateGate” affair.’ The story explains statistical significance this way, “By widespread convention, scientists use a minimum threshold of 95% to assess whether a trend is likely to be down to an underlying cause, rather than emerging by chance. If a trend meets the 95% threshold, it basically means that the odds of it being down to chance are less than one in 20.” They are, of course, repeating a common misunderstanding. Significance at the 95% level means the data would be unlikely if chance alone were at work; it does not mean there is less than a one-in-20 chance that chance alone is at work. The latter probability depends on how plausible the chance explanation was to begin with.
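The distinction can be made concrete with a small simulation. All the numbers below are hypothetical assumptions chosen for illustration, not estimates about climate trends: suppose 90% of tested trends are pure noise, a test flags noise 5% of the time (the significance threshold), and flags real trends 80% of the time (an assumed power).

```python
import random

random.seed(0)

# Hypothetical illustration: "significant at 95%" controls
# P(flagged | pure chance) = 0.05. It does NOT make
# P(pure chance | flagged) equal to 0.05 -- that depends on
# how many of the tested trends are real.
n = 100_000            # number of simulated trends (assumed)
p_real = 0.10          # assumed share of trends with a real cause
power = 0.80           # assumed chance the test flags a real trend
alpha = 0.05           # false-positive rate: flags noise this often

flagged_noise = 0
flagged_real = 0
for _ in range(n):
    if random.random() < p_real:
        if random.random() < power:
            flagged_real += 1
    else:
        if random.random() < alpha:
            flagged_noise += 1

# Among the significant results, the share that are "down to chance":
share_chance = flagged_noise / (flagged_noise + flagged_real)
print(f"{share_chance:.2f}")
```

Under these assumed numbers, roughly a third of the significant results are down to chance, far more than one in 20. The BBC’s gloss would be right only under special assumptions about how common real trends are.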
I wrote to the BBC, saying that their explanation was wrong. Here’s their reply:
We discussed and debated different ways of phrasing this before publishing the story. In the end we decided to go with the version we did because although technically it is challengeable, we felt it was easily comprehensible and correct from a common sense point of view.
(Editor’s note: regarding item 2, see also here.)