TEDMED 2012

Ben Goldacre: What doctors don't know about the drugs they prescribe

June 5, 2012

When a new drug gets tested, the results of the trials should be published for the rest of the medical world -- except much of the time, negative or inconclusive findings go unreported, leaving doctors and researchers in the dark. In this impassioned talk, Ben Goldacre explains why these unreported instances of negative data are especially misleading and dangerous.

Ben Goldacre - Debunker
Ben Goldacre unpicks dodgy scientific claims made by scaremongering journalists, dubious government reports, pharmaceutical corporations, PR companies and quacks.

Hi. So, this chap here, he thinks he can tell you the future. His name is Nostradamus, although here the Sun have made him look a little bit like Sean Connery. (Laughter) And like most of you, I suspect, I don't really believe that people can see into the future. I don't believe in precognition, and every now and then you hear that somebody has been able to predict something that happened in the future, and that's probably because it was a fluke, and we only hear about the flukes and about the freaks. We don't hear about all the times that people got stuff wrong.

Now we expect that to happen with silly stories about precognition, but the problem is, we have exactly the same problem in academia and in medicine, and in this environment, it costs lives.
So firstly, thinking just about precognition, as it turns out, just last year a researcher called Daryl Bem conducted a piece of research where he found evidence of precognitive powers in undergraduate students, and this was published in a peer-reviewed academic journal, and most of the people who read this just said, "Okay, well, fair enough, but I think that's a fluke, that's a freak, because I know that if I did a study where I found no evidence that undergraduate students had precognitive powers, it probably wouldn't get published in a journal."

And in fact, we know that that's true, because several different groups of research scientists tried to replicate the findings of this precognition study, and when they submitted it to the exact same journal, the journal said, "No, we're not interested in publishing replication. We're not interested in your negative data."

So this is already evidence of how, in the academic literature, we will see a biased sample of the true picture of all of the scientific studies that have been conducted.
But it doesn't just happen in the dry academic field of psychology. It also happens in, for example, cancer research. So in March 2012, just one month ago, some researchers reported in the journal Nature how they had tried to replicate 53 different basic science studies looking at potential treatment targets in cancer, and out of those 53 studies, they were only able to successfully replicate six. Forty-seven out of those 53 were unreplicable.

And they say in their discussion that this is very likely because freaks get published. People will do lots and lots and lots of different studies, and the occasions when it works they will publish, and the ones where it doesn't work they won't. And their first recommendation of how to fix this problem, because it is a problem, because it sends us all down blind alleys, is to make it easier to publish negative results in science, and to change the incentives so that scientists are encouraged to post more of their negative results in public.
But it doesn't just happen in the very dry world of preclinical basic science cancer research. It also happens in the very real, flesh and blood of academic medicine. So in 1980, some researchers did a study on a drug called lorcainide, and this was an anti-arrhythmic drug, a drug that suppresses abnormal heart rhythms, and the idea was, after people have had a heart attack, they're quite likely to have abnormal heart rhythms, so if we give them a drug that suppresses abnormal heart rhythms, this will increase the chances of them surviving.

Early in its development, they did a very small trial, just under a hundred patients. Fifty patients got lorcainide, and of those patients, 10 died. Another 50 patients got a dummy placebo sugar pill with no active ingredient, and only one of them died. So they rightly regarded this drug as a failure, and its commercial development was stopped, and because its commercial development was stopped, this trial was never published.

Unfortunately, over the course of the next five, 10 years, other companies had the same idea about drugs that would prevent arrhythmias in people who have had heart attacks. These drugs were brought to market. They were prescribed very widely because heart attacks are a very common thing, and it took so long for us to find out that these drugs also caused an increased rate of death that before we detected that safety signal, over 100,000 people died unnecessarily in America from the prescription of anti-arrhythmic drugs.
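A quick statistical aside, not from the talk itself: the mortality difference Goldacre quotes (10 deaths out of 50 on lorcainide versus 1 out of 50 on placebo) can be checked with a one-sided Fisher's exact test. The sketch below implements the test from first principles with only the standard library; the trial counts come from the talk, while the test and function name are my additions.

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    probability of seeing a or more first-row successes by chance,
    under the hypergeometric null of no true difference."""
    n = a + b + c + d
    row1 = a + b          # size of the first group (drug arm)
    col1 = a + c          # total number of events (deaths)
    p = 0.0
    for k in range(a, min(row1, col1) + 1):
        p += comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    return p

# Lorcainide trial, as quoted in the talk:
# drug arm: 10 died, 40 survived; placebo arm: 1 died, 49 survived.
p = fisher_one_sided(10, 40, 1, 49)
print(f"one-sided p = {p:.4f}")
```

The p-value comes out well under 0.01, which is part of why the 1993 mea culpa (quoted below in the talk) is so striking: at the time, the authors attributed the excess deaths to chance.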
Now actually, in 1993, the researchers who did that 1980 study, that early study, published a mea culpa, an apology to the scientific community, in which they said, "When we carried out our study in 1980, we thought that the increased death rate that occurred in the lorcainide group was an effect of chance." The development of lorcainide was abandoned for commercial reasons, and this study was never published; it's now a good example of publication bias. That's the technical term for the phenomenon where unflattering data gets lost, gets unpublished, is left missing in action, and they say the results described here "might have provided an early warning of trouble ahead."
Now these are stories from basic science. These are stories from 20, 30 years ago. The academic publishing environment is very different now. There are academic journals like "Trials," the open access journal, which will publish any trial conducted in humans regardless of whether it has a positive or a negative result. But this problem of negative results that go missing in action is still very prevalent. In fact, it's so prevalent that it cuts to the core of evidence-based medicine.
So this is a drug called reboxetine, and this is a drug that I myself have prescribed. It's an antidepressant. And I'm a very nerdy doctor, so I read all of the studies that I could on this drug. I read the one study that was published that showed that reboxetine was better than placebo, and I read the other three studies that were published that showed that reboxetine was just as good as any other antidepressant, and because this patient hadn't done well on those other antidepressants, I thought, well, reboxetine is just as good. It's one to try.

But it turned out that I was misled. In fact, seven trials were conducted comparing reboxetine against a dummy placebo sugar pill. One of them was positive and that was published, but six of them were negative and they were left unpublished. Three trials were published comparing reboxetine against other antidepressants in which reboxetine was just as good, and they were published, but three times as many patients' worth of data was collected which showed that reboxetine was worse than those other treatments, and those trials were not published. I felt misled.
Now you might say, well, that's an extremely unusual example, and I wouldn't want to be guilty of the same kind of cherry-picking and selective referencing that I'm accusing other people of. But it turns out that this phenomenon of publication bias has actually been very, very well studied. So here is one example of how you approach it. The classic model is, you get a bunch of studies where you know that they've been conducted and completed, and then you go and see if they've been published anywhere in the academic literature.

So this took all of the trials that had ever been conducted on antidepressants that were approved over a 15-year period by the FDA. They took all of the trials which were submitted to the FDA as part of the approval package. So that's not all of the trials that were ever conducted on these drugs, because we can never know if we have those, but it is the ones that were conducted in order to get the marketing authorization. And then they went to see if these trials had been published in the peer-reviewed academic literature. And this is what they found. It was pretty much a 50-50 split. Half of these trials were positive, half of them were negative, in reality. But when they went to look for these trials in the peer-reviewed academic literature, what they found was a very different picture. Only three of the negative trials were published, but all but one of the positive trials were published.

Now if we just flick back and forth between those two, you can see what a staggering difference there was between reality and what doctors, patients, commissioners of health services, and academics were able to see in the peer-reviewed academic literature. We were misled, and this is a systematic flaw in the core of medicine.
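The arithmetic behind that "staggering difference" is worth making explicit. The sketch below uses round numbers that are my own assumption, chosen only to be consistent with the talk's description (a roughly 50-50 split in reality, all but one positive trial published, only three negative trials published):

```python
# Hypothetical counts consistent with the talk's description of the
# FDA antidepressant-trials study; the exact figures are assumptions.
positive, negative = 40, 40          # reality: roughly a 50-50 split
published_pos = positive - 1         # "all but one of the positive trials"
published_neg = 3                    # "only three of the negative trials"

true_rate = positive / (positive + negative)
apparent_rate = published_pos / (published_pos + published_neg)

print(f"positive rate in reality:        {true_rate:.0%}")
print(f"positive rate in the literature: {apparent_rate:.0%}")
```

Under these assumed counts, a reader of the published literature would see roughly nine out of ten trials as positive, when in reality the drug succeeded only half the time.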
In fact, there have been so many studies conducted on publication bias now, over a hundred, that they've been collected in a systematic review, published in 2010, that took every single study on publication bias that they could find. Publication bias affects every field of medicine. About half of all trials, on average, go missing in action, and we know that positive findings are around twice as likely to be published as negative findings. This is a cancer at the core of evidence-based medicine.
If I flipped a coin 100 times but then withheld the results from you from half of those tosses, I could make it look as if I had a coin that always came up heads. But that wouldn't mean that I had a two-headed coin. That would mean that I was a chancer and you were an idiot for letting me get away with it. (Laughter) But this is exactly what we blindly tolerate in the whole of evidence-based medicine.
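The coin analogy is easy to make concrete. Here is a minimal simulation (my sketch, not part of the talk): flip a fair coin 100 times, withhold the tails (roughly half the tosses), and the "published" record looks like a coin that never lands tails.

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

tosses = [random.choice("HT") for _ in range(100)]  # a fair coin
heads = tosses.count("H")

# Selective reporting: withhold every tails result.
reported = [t for t in tosses if t == "H"]

print(f"reality:  {heads} heads in 100 tosses")
print(f"reported: {len(reported)} heads in {len(reported)} reported tosses")
```

The full record shows roughly 50 heads; the selectively reported record is 100% heads, exactly the chancer's trick Goldacre describes.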
And to me, this is research misconduct. If I conducted one study and I withheld half of the data points from that one study, you would rightly accuse me, essentially, of research fraud. And yet, for some reason, if somebody conducts 10 studies but only publishes the five that give the result that they want, we don't consider that to be research misconduct. And when that responsibility is diffused between a whole network of researchers, academics, industry sponsors, journal editors, for some reason we find it more acceptable, but the effect on patients is damning. And this is happening right now, today.
This is a drug called Tamiflu. Tamiflu is a drug which governments around the world have spent billions and billions of dollars on stockpiling, and we've stockpiled Tamiflu in panic, in the belief that it will reduce the rate of complications of influenza. "Complications" is a medical euphemism for pneumonia and death. (Laughter)

Now when the Cochrane systematic reviewers were trying to collect together all of the data from all of the trials that had ever been conducted on whether Tamiflu actually did this or not, they found that several of those trials were unpublished. The results were unavailable to them. And when they started obtaining the writeups of those trials through various different means, through Freedom of Information Act requests, through harassing various different organizations, what they found was inconsistent. And when they tried to get hold of the clinical study reports, the 10,000-page-long documents that have the best possible rendition of the information, they were told they weren't allowed to have them. And if you want to read the full correspondence and the excuses and the explanations given by the drug company, you can see that written up in this week's edition of PLOS Medicine.
And the most staggering thing of all of this, to me, is that not only is this a problem, not only do we recognize that this is a problem, but we've had to suffer fake fixes. We've had people pretend that this is a problem that's been fixed. First of all, we had trials registers, and everybody said, oh, it's okay. We'll get everyone to register their trials, they'll post the protocol, they'll say what they're going to do before they do it, and then afterwards we'll be able to check and see if all the trials which have been conducted and completed have been published. But people didn't bother to use those registers.

And so then the International Committee of Medical Journal Editors came along, and they said, oh, well, we will hold the line. We won't publish any journals, we won't publish any trials, unless they've been registered before they began. But they didn't hold the line. In 2008, a study was conducted which showed that half of all trials published by journals edited by members of the ICMJE weren't properly registered, and a quarter of them weren't registered at all.

And then finally, the FDA Amendment Act was passed a couple of years ago, saying that everybody who conducts a trial must post the results of that trial within one year. And in the BMJ, in the first edition of January 2012, you can see a study which looks to see if people kept to that ruling, and it turns out that only one in five have done so.
This is a disaster. We cannot know the true effects of the medicines that we prescribe if we do not have access to all of the information. And this is not a difficult problem to fix. We need to force people to publish all trials conducted in humans, including the older trials, because the FDA Amendment Act only asks that you publish the trials conducted after 2008, and I don't know what world it is in which we're only practicing medicine on the basis of trials that completed in the past two years. We need to publish all trials in humans, including the older trials, for all drugs in current use, and you need to tell everyone you know that this is a problem and that it has not been fixed.

Thank you very much. (Applause)
Translator: Joseph Geni
Reviewer: Morton Bast


Ben Goldacre - Debunker

Why you should listen

"It was the MMR story that finally made me crack," begins the Bad Science manifesto, referring to the sensationalized -- and now-refuted -- link between vaccines and autism. With that sentence Ben Goldacre fired the starting shot of a crusade waged from the pages of The Guardian from 2003 to 2011, on an addicitve Twitter feed, and in bestselling books, including Bad Science and his latest, Bad Pharma, which puts the $600 billion global pharmaceutical industry under the microscope. What he reveals is a fascinating, terrifying mess.

Goldacre was trained in medicine at Oxford and London, and works as an academic in epidemiology. Helped along by this inexhaustible supply of material, he also travels the speaking circuit, promoting skepticism and nerdish curiosity with fire, wit, fast delivery and a lovable kind of exasperation. (He might even convince you that real science, sober reporting and reason are going to win in the end.)

As he writes, "If you're a journalist who misrepresents science for the sake of a headline, a politician more interested in spin than evidence, or an advertiser who loves pictures of molecules in little white coats, then beware: your days are numbered."



Data provided by TED.

This website is owned and operated by Tokyo English Network.
The developer's blog is here.