This is a fascinating discussion of a controversy that's apparently rocking the medical community.
I dug up a NEJM article by Kirsch which rebuts many of the rebuttals he got from colleagues and pharma:
Challenging Received Wisdom: Antidepressants and the Placebo Effect
As far as I can tell, he did a meta-analysis of clinical data, which is a different type of statistical analysis than the one the FDA uses to approve new drugs. The FDA only requires that two studies demonstrate a statistically significant improvement in the test drug vs. the placebo or some other comparator. The presence of other, failed studies doesn't matter if two meet this criterion. So a drug that represents a 14% improvement over placebo will be approved, as long as that's a statistically "significant difference" (i.e., unlikely to be due to random chance).
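To see why "statistically significant" says so little about how big an effect is, here's a minimal sketch with made-up numbers (a hypothetical 2-point improvement on a symptom scale with SD 8; the trial sizes are invented too). With enough patients, even a small drug-vs-placebo difference clears the p < 0.05 bar:

```python
import math

def z_test_p_value(mean_a, mean_b, sd, n_per_arm):
    """Two-sided p-value for a difference in means (equal SDs, equal arm sizes)."""
    se = sd * math.sqrt(2.0 / n_per_arm)   # standard error of the difference
    z = (mean_a - mean_b) / se
    # Normal-approximation p-value via the error function
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# Same hypothetical 2-point effect, two different trial sizes:
p_small_trial = z_test_p_value(10.0, 8.0, 8.0, n_per_arm=30)    # underpowered
p_large_trial = z_test_p_value(10.0, 8.0, 8.0, n_per_arm=300)   # same effect, more patients

print(f"n=30 per arm:  p = {p_small_trial:.3f}")   # not significant
print(f"n=300 per arm: p = {p_large_trial:.4f}")   # significant
```

The effect size never changes; only the sample size does. That's the gap between "significant" and "clinically meaningful" that Kirsch is pointing at.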
What I think Kirsch did was analyze a broader range of data, including unsuccessful, unpublished trials submitted to the FDA. He quantified the placebo effect and concluded that the drugs were only effective in a small subset of patients. Using his criteria, these drugs would not have been approved, or would only have been approved for certain patients with moderate or severe depression.
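A rough sketch of why folding in the unpublished, failed trials shrinks the overall result: a fixed-effect meta-analysis pools each trial's effect size weighted by the inverse of its variance. Every number below is hypothetical (invented effect sizes and variances, not Kirsch's data):

```python
def pooled_effect(effects, variances):
    """Inverse-variance-weighted (fixed-effect) pooled effect size."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Hypothetical standardized effect sizes (Cohen's d) and their variances.
published_d    = [0.45, 0.50, 0.40]
published_var  = [0.02, 0.02, 0.02]
unpublished_d   = [0.05, 0.10, -0.02, 0.08]   # the failed trials in the FDA file drawer
unpublished_var = [0.02, 0.02, 0.02, 0.02]

d_published_only = pooled_effect(published_d, published_var)
d_all_trials = pooled_effect(published_d + unpublished_d,
                             published_var + unpublished_var)

print(f"published only: d = {d_published_only:.2f}")
print(f"all trials:     d = {d_all_trials:.2f}")
```

Looking only at the published trials, the pooled effect looks healthy; add the failed ones back in and it drops by half or more. Same arithmetic, different data set -- which is basically the methodological disagreement in this thread.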
The problem is the drug may not provide results any better than the placebo. If there were no placebo effect, we wouldn't be having this conversation.
A drug showing no improvement over placebo wouldn't get approved. The problem here is that the studies used by the FDA for approval DID show an improvement, but Kirsch's analysis suggests that the FDA used a flawed methodology in analyzing the clinical data.
That's the kind of disclosure I was thinking we're entitled to: how many trials, how many failures, how many successes. And if they won't tell patients, you'd think insurers would be all over this, denying or monitoring prescriptions with low probabilities of success. Naive, maybe...
I think virtually everyone agrees with you except the drug co's. The FDA is pushing co's to do just that, but I'm not sure how effective this effort is yet.
It's not a 14% improvement; the drug is only effective in 14% of cases, and presumably ineffective/negligible for the other 86% of patients. Can we afford to spend health care money this way?
I can't locate an original source for the "14%". It could be either "14% improvement" or "14% of patients, who improved by x%", depending on how the trial was designed. Either way, I agree it's a very low number.