In “Trial and Error: The scientific system does little to prevent scientific fraud. Is there a better way?” (New York Times Magazine, January 15, 2006), written in the wake of the Hwang Woo-suk cloning fraud, David Dobbs says:
“Journal editors say they can’t prevent fraud. In an absolute sense, they’re right. But they could make fraud harder to commit. Some critics, including some journal editors, argue that it would help to open up the typically closed peer-review system, in which anonymous scientists review a submitted paper and suggest revisions. Developed after World War II, closed peer review was meant to ensure candid evaluations and elevate merit over personal connections. But its anonymity allows reviewers to do sloppy work, steal ideas or delay competitors’ publication by asking for elaborate revisions (it happens) without fearing exposure. And it catches error and fraud no better than good editors do. ‘The evidence against peer review keeps getting stronger,’ says Richard Smith, former editor of the British Medical Journal, ‘while the evidence on the upside is weak.’ Yet peer review has become a sacred cow, largely because passing peer review confers great prestige–and often tenure.”
Suggested solutions include open peer review, in which “reviewers are known and thus accountable to both author and public,” and open-source reviewing, in which “the journal posts a submitted paper online and allows not just assigned reviewers but anyone to critique it. After a few weeks, the author revises, the editors accept or reject and the journal posts all, including the editors’ rationale.” The British Medical Journal has not used closed peer review since 1999.