Saturday, June 05, 2004
What’s the Harm? Aspirin Use and Breast Cancer
Carol Tavris and Avrum Z. Bluming
And now, another medical bulletin makes the front page: According to a lead article in the May 26 Journal of the American Medical Association, aspirin reduces the risk of breast cancer by "28 percent." Wonderful news, almost too good to be true, and the media from coast to coast were on this story like a duck on a June bug: National Public Radio, USA Today, the New York Times, the Los Angeles Times, and every regional paper in between made this a top headline. The BBC reported that “Aspirin cuts breast cancer risk” and The Times of India, more cautiously, announced that “Aspirin may cut breast cancer risk.”
Unfortunately, with science news as with political news, the tension between "getting it first" and "getting it right" is growing. At least with political news, readers can ask skeptical questions and have a sense of what a news story might be omitting, and good reporters know that a politician will try to spin a story. But most people, being neither scientists nor statisticians, must rely on the way scientific articles are reported to the press and then to the public. Why be skeptical? Can't we trust the scientists to give us the gist of their findings, and science writers to report that gist accurately? Increasingly, no. We can't.
This is what the authors of the JAMA article concluded in their paper: “These results support an inverse association between ever use of aspirin and breast cancer risk in both premenopausal and post menopausal women . . . These data add to the growing evidence that supports the regular use of aspirin and other drugs (such as ibuprofen) as effective chemopreventive agents for breast cancer” (our emphasis). Here is what the published article actually found:
No decreased risk among premenopausal women.
No decreased risk among women who took ibuprofen (Motrin or Advil).
No decreased risk among women with early or non-invasive breast cancer.
No association between the length of time a woman takes aspirin and the decrease in breast cancer risk—although, since breast cancer takes years to develop, the length of time should be significant if aspirin were really having an effect.
No indication of what dose of aspirin is effective. Although the study was set up to identify the effect of aspirin and other drugs on the risk of breast cancer, the researchers did not collect basic information on the dosages the women were in fact taking.
The actual decreased incidence of breast cancer (if in fact this preliminary conclusion is valid) was a little less than 3 women for every 100 taking aspirin (at unknown doses).
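The gap between a relative figure like "28 percent" and that absolute figure can be made concrete with a little arithmetic. The baseline rate below is purely illustrative (it is not taken from the JAMA study); the point is only how a large-sounding relative reduction shrinks in absolute terms.

```python
# Illustrative arithmetic only; the baseline rate is assumed, not from the study.
baseline_risk = 0.10          # assumed incidence among non-users: 10 per 100 women
relative_reduction = 0.28     # the reported "28 percent" relative figure

risk_with_aspirin = baseline_risk * (1 - relative_reduction)
absolute_reduction = baseline_risk - risk_with_aspirin

# 0.10 * 0.28 = 0.028, i.e. a little less than 3 fewer cases per 100 women
print(f"Absolute reduction: {absolute_reduction * 100:.1f} fewer cases per 100 women")
```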
The place for scientists to debate and criticize these findings is in the pages of JAMA and other scientific journals, and there will undoubtedly be considerable discussion about the possible benefits of aspirin. In due course, the work of other investigators will support, refute, or enhance these findings. For example, in January of this year, the Journal of the National Cancer Institute published a prospective study suggesting that prolonged use of aspirin in women, over many years, is associated with an increased risk of pancreatic cancer. That is why scientists and clinical practitioners must read these findings carefully, debate them, and then make the wisest decisions they can with their patients.
But when scientific findings are reported too quickly, too uncritically, in a culture hungry for medical news and medical advice, caution gets short shrift. Perhaps science writers are not used to treating scientists the way political writers (should) treat government officials: with an understanding that the expert's claims are not enough—that for the real story, you have to dig further. Perhaps, working against a deadline, the reporter reads only the article's abstract or the journal's press release. Nonetheless, the end result of the scientist's faith that his or her version will be the accepted one, and the reporter's faith that the scientist is trustworthy, is a collusion of irresponsibility.
The ultimate disservice is to the public, which alternately leaps to each new "breakthrough" enthusiastically, followed by disillusion and cynicism if the breakthrough turns out to be modest or not a breakthrough at all. Scientists should not be getting a free ride. It's time to subject their work to the same scrutiny we now give the Bush administration's claims that there were weapons of mass destruction in Iraq.
###
Wednesday, October 22, 2003
The Letrozole Study
By Carol Tavris, Ph.D. and Avrum Bluming, M.D.
Most people get their news from the headlines: reading through the
paper, logging on to an Internet home page, or getting a quick TV flash.
News-by-headline is fine if you want to find out the latest sports scores,
traffic conditions, and jury verdicts. When it comes to medical news,
however, consumers and physicians had better read on.
More than 20 years ago, Allen L. Hammond of the American Association
for the Advancement of Science cautioned the public that "In today's
news-conscious world, there is an enormous emphasis on breakthroughs. But
with rare exceptions, science is a process, not an isolated event.
Conveying the way science really works, the interplay of persistence and
luck, the painstaking accumulation of evidence, the clash of proponent and
critic, the gradual dawning of conviction demands a look behind the
headlines."
His observation is even more crucial in medicine, where scientific
discoveries can have grave consequences for life and death. But how often,
how many times, have the headlines blared news of some new miracle
drug--followed, as the night the day, by later news of the drug's side
effects, ineffectiveness, or risks? Many consumers do not realize that
because of the enormous pressure on pharmaceutical companies to get new
drugs to market fast--because drug testing takes time and vast sums of
money--the temptation to cut a drug trial short, if the results merely
seem promising, is often overwhelming.
The latest version of this now-familiar story appeared on October 9,
when the New England Journal of Medicine posted on the Web an article due
to be published four weeks later in its weekly print journal. Major
news organizations trumpeted the story of the apparently beneficial result
of a new medication for breast cancer, Letrozole. What was so important
about this research that the NEJM couldn't wait a month, and that made
the researchers halt their study after only two and a half years of the
five planned?
The study evaluated more than 5,000 post-menopausal breast cancer
patients to determine whether adding five years of treatment with Letrozole
improved the disease-free survival of those who had already received
five years of treatment with Tamoxifen. The Letrozole group did not
differ in survival rates compared to a control group that was given a
placebo. However, the Letrozole group was said to have a statistically
significant 46 percent decrease in the risk of a recurrent or new breast
cancer.
Sounds impressive? It's not. For one thing, the two groups were not
matched either by the extent of cancer at the time of their first
surgery or by the type of chemotherapy they had had. These differences
might have affected the recurrence of breast cancer quite independently
of whether the women received Letrozole.
Second, the researchers were reporting *projected* results, not actual
ones! Because the study was stopped prematurely, none of the women had
actually received the full five years of Letrozole.
Third, although 46 percent sounds like an impressive decrease in risk,
it's a statistical manipulation. The absolute decrease in risk was only
6 percent. The New England Journal's own editorial acknowledged that
even if the beneficial effect reported in this study were valid, the use
of Letrozole would reduce one breast cancer occurrence for every 100
women treated.
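The relative-versus-absolute distinction can be sketched in a few lines. The recurrence rates below are hypothetical, chosen for illustration so that the relative and absolute figures come out near those quoted above; they are not the trial's raw data.

```python
# Hypothetical recurrence rates, chosen for illustration; not the trial's data.
placebo_rate = 0.13       # assumed recurrence rate in the placebo group
letrozole_rate = 0.07     # assumed recurrence rate in the Letrozole group

absolute_reduction = placebo_rate - letrozole_rate        # 0.06, i.e. 6 percent
relative_reduction = absolute_reduction / placebo_rate    # 0.06 / 0.13, about 46 percent

print(f"Relative risk reduction: {relative_reduction:.0%}")
print(f"Absolute risk reduction: {absolute_reduction:.0%}")
```

Reporting the 46 percent relative figure without the absolute one is what makes a modest effect sound dramatic.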
Already we are reading letters to newspapers from people saying, "Thank
God the researchers halted this study early so that we may benefit! If
only my beloved sister (mother) (wife) had had this amazing drug!"
That is the reaction the hoopla is designed to generate, and that is what
troubles us. We are distressed by the decision of the investigators to
terminate the Letrozole study prematurely, before they could get more
definitive answers about the recurrence of the disease and about the
women's overall survival. And we are even more distressed by the New
England Journal of Medicine's decision to create an atmosphere of drama and
urgency by its early release of the article.
All of us, consumers and physicians, would do well to look behind the
headlines of medical "breakthroughs," and to remember that headlines
sell news--and news sells drugs.