(by Howard Wainer)
This article from 2005 (by Zhenglun Pan, Thomas Trikalinos, Fotini Kavvoura, Joseph Lau, and John Ioannidis) is brilliant.
It is well established that the sizes of many experimental effects diminish over time. So we often see an initial investigation of some new treatment report a large effect, while subsequent replications show a much smaller one. The ‘blame’ for this is often laid at the door of publication bias: the sampling distribution of the effect might be Gaussian with a mean just slightly above zero, so many studies of the treatment can’t get published because they have small or null results. Then one study draws a result from the high tail of the distribution and is published in an A-list journal with fanfare. Once it is in the literature, subsequent attempts at replication can also get published, and so out of the file drawers come the other studies, often done before the alpha study, but now in B-list journals.
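The mechanism is easy to see in a toy simulation (all numbers here are illustrative assumptions, not figures from the paper): if each study estimates a small true effect with Gaussian noise, but only estimates clearing a significance threshold get published first, the published mean is badly inflated.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.1   # assumed small true effect, in standard-error units
SE = 1.0            # standard error of each study's estimate
N_STUDIES = 10000
THRESHOLD = 1.96    # z-score a study must clear to be "publishable"

# Each study observes the true effect plus Gaussian sampling noise.
estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_STUDIES)]

# Publication bias: only studies from the high tail get published.
published = [e for e in estimates if e / SE > THRESHOLD]

print(f"true effect:            {TRUE_EFFECT}")
print(f"mean of all studies:    {statistics.mean(estimates):.2f}")
print(f"mean of published only: {statistics.mean(published):.2f}")
```

With these assumed numbers, the mean over all studies sits near the true effect of 0.1, while the mean of the “published” subset lands above 2 standard errors — exactly the inflated alpha-study effect that later replications then fail to match.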
Enter the attached paper. The Chinese scientific literature is rarely read or cited outside of China. But the authors of this work are usually knowledgeable about the non-Chinese literature, at least the A-list journals. And so they, too, try to replicate the alpha finding. But do they replicate it? One would think they would find the same diminished effect size, but they don’t! Instead they reproduce the original result, sometimes even larger. Here’s one of the graphs:
How did this happen? Is it because the Chinese researchers ‘know’ what answer they should get, and so arrange their results to get it? Or is there some sort of publication bias in which studies with the ‘wrong’ (too-small) result are not accepted? I don’t know, but my conclusion is that we need to view medical research from China (and perhaps elsewhere) carefully until we better understand the mechanisms driving it.