The peer review process, long seen as the cornerstone of scientific credibility, is under strain. Critics argue it’s vulnerable to bias, delay, and manipulation — and increasingly fails to guarantee the quality and reliability that science demands.
Peer Review Validation
Researchers rely on peer review to validate their work. Before most academic papers are published in journals, they are reviewed by other scientists in the same field. In theory, this ensures that flawed methods, unsupported conclusions, or missing data are caught before publication. In practice, it doesn’t always work that way.
Many journals send submissions to only two or three reviewers. Sometimes just one. Reviewers are unpaid and often overburdened. Journals may struggle to find experts willing to review at all, and some choose reviewers who may not be impartial or timely.
Journals can delay publication by choosing slow reviewers or letting papers sit. Some reject submissions without proper review; others may greenlight questionable work.
Professor Bo-Christer Björk of Information Systems Science at the Hanken School of Economics and Professor David Solomon of the Department of Medicine and OMERAD at Michigan State University documented the problem in a 2013 paper, “The publishing delay in scholarly peer-reviewed journals”:
“Publishing in scholarly peer reviewed journals usually entails long delays from submission to publication. In part this is due to the length of the peer review process and in part because of the dominating tradition of publication in issues, earlier a necessity of paper-based publishing, which creates backlogs of manuscripts waiting in line. The delays slow the dissemination of scholarship and can provide a significant burden on the academic careers of authors.”
Scientists also note that peer review rarely confirms whether results are reproducible. It doesn’t involve repeating the experiments. Instead, it checks logic, citations, and presentation – leaving open the possibility that findings are flawed or even fabricated.
System Improvements
Efforts to improve the system are emerging.
Some journals now publish reviewer comments alongside accepted papers. Others invite open peer review, where reviewer identities are disclosed. Preprint servers like arXiv and bioRxiv let researchers share early versions of papers publicly — inviting informal critique before formal review.
Replication
A more rigorous alternative is replication – repeating the study to see if the results hold up. Replication is rare because it’s costly, often unrewarded, and may not attract funding. But some groups are pushing for change.
The Reproducibility Project, led by the Center for Open Science, tested psychology studies and found more than half failed to replicate. In cancer biology, a similar effort found even fewer results held up under scrutiny. These findings raised alarms across disciplines.
Some new journals and organisations now promote registered reports, in which peer review occurs before the results are known. This helps reduce bias toward positive results and rewards sound study design over outcome.
Meanwhile, artificial intelligence tools are being tested to scan papers for methodological flaws, statistical inconsistencies, or unreported variables – offering support, but not yet a solution.
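To give a sense of what the simplest of these automated checks involves, here is a minimal sketch in Python (assuming the scipy library is available; the function name and tolerance are hypothetical choices for illustration, not the method of any particular tool). It recomputes a reported two-tailed p-value from a reported t-statistic and its degrees of freedom, and flags a mismatch:

# Illustrative sketch only: recompute a reported two-tailed p-value
# from a reported t-statistic and degrees of freedom, and flag a mismatch.
from scipy import stats

def p_value_is_consistent(t_stat, df, reported_p, tol=0.005):
    """Return True if the reported two-tailed p-value matches t(df)."""
    recomputed_p = 2 * stats.t.sf(abs(t_stat), df)  # two-tailed p from |t|
    return abs(recomputed_p - reported_p) <= tol

# A paper reporting t(28) = 2.10, p = .045 passes the check...
print(p_value_is_consistent(t_stat=2.10, df=28, reported_p=0.045))  # True
# ...while the same statistic reported as p = .03 would be flagged.
print(p_value_is_consistent(t_stat=2.10, df=28, reported_p=0.03))   # False

Real tools apply many such rules at scale across a full manuscript; the point of the sketch is only that reported statistics can be cross-checked mechanically.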
Reform Needed
Ultimately, many researchers agree: the system needs reform.
“Peer review is supposed to be a gatekeeper. But if it fails to catch errors or reward rigor, it becomes a bottleneck – or worse, a rubber stamp,” said Bill Cullifer of the World Asthma Foundation.
As public trust in science becomes increasingly important, reforming how science checks itself may be one of the field’s most urgent challenges.