See http://www.reportingonhealth.org/blogs/2011/12/20/fukushima-alarmist-claim-obscure-medical-journal-proceed-caution
Their consensus: just because a study's peer-reviewed doesn't mean it's credible. And evaluating a journal's impact factor can be helpful, but it's not sufficient.
Translation: The study is crap.
Barbara Feder Ostrov's Health Journalism Blog
Fukushima: Alarmist Claim? Obscure Medical Journal? Proceed With Caution
December 20, 2011
UPDATE: A response from International Journal of Health Services Editor-in-Chief Vicente Navarro appears below.
The press release trumpeted a startling claim: researchers had linked radioactive fallout from the Fukushima nuclear disaster to 14,000 deaths in the United States, with infants hardest hit.
"This is the first peer-reviewed study published in a medical journal documenting the health hazards of Fukushima," the press release bragged in announcing the study's publication today. The press release, which compared the disaster's impact to Chernobyl, appeared via PR Newswire on mainstream news sites, including the Sacramento Bee and Yahoo! News.
Casual readers who didn't realize this was only a press release could be forgiven for thinking this was a spit-out-your-coffee story. But with a little online research and guidance from veteran health journalists Ivan Oransky and Gary Schwitzer, I quickly learned that there's a lot less to this study, and to the medical journal that published it, than the press release suggests. Read on for their advice on what journalists can learn from this episode.
Normally, reporters are supposed to feel better about research that's been peer-reviewed before publication in a scientific journal. But the claims of the press release were so outlandish that warning bells went off.
As it turns out, the authors, Joseph Mangano and Janette Sherman, published a version of this study in the political newsletter Counterpunch, where it was quickly criticized. The critics charged that the authors had cherry-picked federal data on infant deaths so they would spike around the time of the Fukushima disaster. Passions over nuclear safety further muddied the debate: both researchers and some critics had activist baggage, with the researchers characterized as anti-nuke and the critics as pro-nuke.
As Scientific American's Michael Moyer writes: "The authors appeared to start from a conclusion—babies are dying because of Fukushima radiation—and work backwards, torturing the data to fit their claims."
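To see how easy it is to "work backwards" with noisy public-health data, here's a minimal sketch in Python. The numbers are invented, not the study's actual figures; the weekly counts, window sizes, and event week are all hypothetical. The point is that with counts drawn from a single unchanging distribution, the choice of baseline window alone can produce an apparent jump:

import random
import statistics

# Invented data: 52 weeks of noisy counts drawn from the SAME distribution
# throughout, so by construction there is no real effect of the "event".
random.seed(0)
weekly_deaths = [random.gauss(200, 15) for _ in range(52)]
EVENT_WEEK = 26  # hypothetical week of the disaster

def pct_change(baseline_weeks, after_weeks=10):
    """Percent change in mean weekly deaths, before vs. after the event."""
    before = statistics.mean(weekly_deaths[EVENT_WEEK - baseline_weeks:EVENT_WEEK])
    after = statistics.mean(weekly_deaths[EVENT_WEEK:EVENT_WEEK + after_weeks])
    return 100 * (after - before) / before

# The same pure-noise data, summarized two ways: a short baseline and a
# half-year baseline can tell noticeably different stories.
print(f"4-week baseline:  {pct_change(4):+.1f}% apparent change")
print(f"26-week baseline: {pct_change(26):+.1f}% apparent change")

The window size is a free parameter: an analyst who keeps adjusting it until the numbers "spike" will eventually find a comparison that fits the desired conclusion, which is exactly the cherry-picking the critics alleged.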
So how did such a seemingly flawed study wind up in a peer-reviewed journal?
I researched the journal, the International Journal of Health Services, and its editor, Vicente Navarro. Navarro, a professor at Johns Hopkins University's prestigious school of public health, looked legit, but the journal's "impact factor" (a measure of how often a journal's recent articles are cited, widely used as a proxy for its influence and credibility) was less impressive. (I emailed and called Navarro for comment; I'll update this post if I hear back from him.)
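For reference, the standard two-year impact factor that Oransky discusses below is just a citation ratio; for a 2011 edition it would be computed as:

\[
\mathrm{IF}_{2011} \;=\; \frac{\text{citations received in 2011 by articles the journal published in 2009 and 2010}}{\text{number of citable items the journal published in 2009 and 2010}}
\]

A low value means the journal's recent articles are rarely cited, which says something about its visibility and influence, though, as both journalists note below, nothing decisive about the quality of any single paper.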
I asked Ivan Oransky, executive editor of Reuters Health and co-founder of the Retraction Watch blog, and Health News Review founder Gary Schwitzer: how can journalists better evaluate when to cover (and more importantly, when not to cover) the medical research stories that cross their desks?
Their consensus: just because a study's peer-reviewed doesn't mean it's credible. And evaluating a journal's impact factor can be helpful, but it's not sufficient.
Here's what Oransky had to say:
I do use impact factor to judge journals, while accepting that it's an imperfect measure that is used in all sorts of inappropriate ways (and, for the sake of full disclosure, is a Thomson Scientific product, as in Thomson Reuters). I find it helpful to rank journals within a particular specialty. It's not the only metric I use to figure out what to cover, but if I'm looking at a field with dozens or even more than 100 journals, it's a good first-pass filter. There's competition to publish in journals, which means high-impact journals have much lower acceptance rates. And if citations are any measure at all of whether journals are read, then they're obviously read more, too.
I looked up the journal in question, and it's actually ranked 45th out of 58 in the Health Policy and Services category (in the social sciences rankings) and 59th out of 72 in the Health Care Sciences & Services category (in the science rankings).
As to how this could get published in a peer-reviewed journal, well, not all peer review is created equal. Higher-ranked journals tend to have more thorough peer review. (They also, perhaps not surprisingly, have higher rates of retractions. Whether that's because people push the envelope to publish in them, or there are more eyeballs on them, or there's some other reason, is unclear. But there's no evidence that it's because their peer review is less thorough.)
Finally, I'd refer readers to this great primer on peer review by Maggie Koerth-Baker.
Gary Schwitzer also provided these helpful tips for journalists:
1. Brush up on the writings of John Ioannidis, who has written a great deal in recent years about the flaws in published research.
2. Journalists who live on a steady diet of journal articles almost by definition promote a rose-colored view of progress in research if they don't grasp and convey the publication bias in many journals for positive findings. Negative or null findings may not be viewed as sexy enough. Or they may be squelched prior to submission. While perhaps not a factor in this one case, it nonetheless drives home the point to journalists about the need to critically evaluate studies.
3. In this case, a journalist would be well-served by a friendly local biostatistician's review.
4. It is always more helpful to focus on the quality of the study rather than the impact factor of the journal or the reputation of the researcher (for reasons Ivan articulated). However, these are legitimate questions to ask any published researcher: "Why did you choose to submit your work to that journal? Did you submit it elsewhere and was it rejected? If so, what feedback did you get from the peer reviewers?"
Fukushima Fallout and Infant Deaths: International Journal of Health Services' Vicente Navarro Responds
December 21, 2011
Yesterday, I wrote about controversial research linking fallout from Japan’s earthquake-damaged Fukushima nuclear plant to infant deaths in the United States.
The research, which was harshly criticized by Scientific American’s Michael Moyer and others, was published in the peer-reviewed International Journal of Health Services, and I had asked the journal’s editor-in-chief, Vicente Navarro, for his response to the criticisms.
Navarro, professor of health policy at Johns Hopkins University’s Bloomberg School of Public Health, emailed me this comment today:
Thank you for making me aware of the critical response that Mr. Moyer has published in the blog of The Scientific American to the article we published in the last issue of the International Journal of Health Services by Joseph J. Mangano and Janette D. Sherman entitled “An Unexpected Mortality Increase in the United States Follows Arrival of the Radioactive Plume from Fukushima: Is There a Correlation?”.
In reply to your questions, this quarterly is a peer-reviewed journal and the paper was reviewed by 2 outstanding scholars in the subject being discussed. We trust our referees’ judgment. We do not publish letters to the editors, but when we receive criticisms we believe merit attention, we publish them asking the authors of the original article to reply if they so wish, publishing the exchange in the same issue and let the readers judge. This is how academic debates should be handled.
We have invited Mr. Moyer to submit his criticisms published in the Scientific American blog to the IJHS in its entirety as a reprint or in a modified form and we very much hope he will agree. If he does, the IJHS will publish it in one of the next issues with a reply from the authors if they so wish, which I suspect they will.
Moyer said in an email that he had declined Navarro’s invitation. Here’s why:
In short: I'm a journalist, not a scientist. My post is the property of Scientific American, so there's rights issues. My post also argued against both the paper and the claims made by the authors in their press release. And since the authors' strategy seems to be to gain legitimacy for their public claims by the simple fact of appearing in a peer reviewed journal, I didn't want to give them another opportunity to trumpet their success.
Well said. It will be interesting to see if others submit criticisms to the International Journal of Health Services and how the authors respond. Still, anyone Googling the study or the authors’ names will see that the “academic debate” Navarro refers to has spread well beyond the confines of one journal.
Photo credit: Thierry Ehrmann via Flickr