Saturday, June 05, 2004
What’s the Harm? Aspirin Use and Breast Cancer
Carol Tavris and Avrum Z. Bluming
And now, another medical bulletin makes the front page: According to a lead article in the May 26 Journal of the American Medical Association, aspirin reduces the risk of breast cancer by "28 percent." Wonderful news, almost too good to be true, and the media from coast to coast were on this story like a duck on a June bug: National Public Radio, USA Today, the New York Times, the Los Angeles Times, and every regional paper in between made this a top headline. The BBC reported that “Aspirin cuts breast cancer risk” and The Times of India, more cautiously, announced that “Aspirin may cut breast cancer risk.”
Unfortunately, with science news as with political news, the tension between "getting it first" and "getting it right" is growing. At least with political news, readers can ask skeptical questions and have a sense of what a news story might be omitting, and good reporters know that a politician will try to spin a story. But most people, being neither scientists nor statisticians, must rely on the way scientific articles are reported to the press and then to the public. Why be skeptical? Can't we trust the scientists to give us the gist of their findings, and science writers to report that gist accurately? Increasingly, no. We can't.
This is what the authors of the JAMA article concluded in their paper: “These results support an inverse association between ever use of aspirin and breast cancer risk in both premenopausal and post menopausal women . . . These data add to the growing evidence that supports the regular use of aspirin and other drugs (such as ibuprofen) as effective chemopreventive agents for breast cancer” (our emphasis). Here is what the published article actually found:
No decreased risk among premenopausal women.
No decreased risk among women who took ibuprofen (Motrin or Advil).
No decreased risk among women with early or non-invasive breast cancer.
No association between the length of time a woman takes aspirin and the decrease in breast cancer risk—although, since breast cancer takes years to develop, the length of time should be significant if aspirin were really having an effect.
No indication of what dose of aspirin is effective. Although the study was set up to identify the effect of aspirin and other drugs on the risk of breast cancer, the researchers did not collect basic information on the dosages the women were in fact taking.
The actual decreased incidence of breast cancer (if in fact this preliminary conclusion is valid) was a little less than 3 women for every 100 taking aspirin (at unknown doses).
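To see how a headline figure of "28 percent" can sit alongside so small an absolute difference, here is a rough back-of-the-envelope sketch. The baseline rate of roughly 10 cases per 100 women is our own round-number assumption for illustration, not a figure reported in the JAMA paper:

```latex
% Illustrative arithmetic only: the baseline of 10 cases per 100 women is an
% assumed round number, not a figure taken from the JAMA study.
\[
\text{absolute reduction} \approx 0.28 \times \frac{10}{100} = \frac{2.8}{100}
\quad \text{(just under 3 fewer cases per 100 women)}
\]
```

The arithmetic is the whole point: a relative reduction of "28 percent" sounds dramatic, but the corresponding absolute change, the number that matters to an individual woman, can be quite small.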
The place for scientists to debate and criticize these findings is in the pages of JAMA and other scientific journals, and there will undoubtedly be considerable discussion about the possible benefits of aspirin. In due course, the work of other investigators will support, refute, or enhance these findings. For example, in January of this year, the Journal of the National Cancer Institute published a prospective study suggesting that prolonged use of aspirin in women, over many years, is associated with an increased risk of pancreatic cancer. That is why scientists and clinical practitioners must read these findings carefully, debate them, and then make the wisest decisions they can with their patients.
But when scientific findings are reported too quickly, too uncritically, in a culture hungry for medical news and medical advice, caution gets short shrift. Perhaps science writers are not used to treating scientists the way political writers (should) treat government officials: with an understanding that the expert's claims are not enough—that for the real story, you have to dig further. Perhaps, working against a deadline, the reporter reads only the article's abstract or the journal's press release. Nonetheless, the end result of the scientist's faith that his or her version will be the accepted one, and the reporter's faith that the scientist is trustworthy, is a collusion of irresponsibility.
The ultimate disservice is to the public, which leaps enthusiastically at each new "breakthrough," only to sink into disillusion and cynicism when the breakthrough turns out to be modest or not a breakthrough at all. Scientists should not be getting a free ride. It's time to subject their work to the same scrutiny we now give the Bush administration's claims that there were weapons of mass destruction in Iraq.
###