I’m sure we’ve all watched it by now: the science fiction satire “Don’t Look Up”. Towards the end of the movie, Professor Randall Mindy (Leonardo DiCaprio) confronts tech billionaire Peter Isherwell (Mark Rylance) about his technological plan to destroy the comet that threatens Earth: “I just want to make sure that you’re open to the scientific peer review process - and you’re not approaching this mission like a businessman”. As we all know, Isherwell ignores the request and the consequences are apocalyptic.
Don’t Look Up may go down in history as the first major Hollywood film that highlights the catastrophic consequences of neglecting the scientific peer review process. “This is the first time I've heard the peer review process so extensively discussed in a movie, which is great. It’s a movie at its core that really speaks to the importance of science-based decision making in our society, in our daily lives,” says Amy Mainzer, an astronomer and professor of planetary science at the University of Arizona (link).
However, the scientific peer review process itself faces major challenges today. There are increasing concerns about the difficulty of recruiting reviewers and the low quality of review work. To meet these challenges, some journals have completely transformed the way they do peer review. In 2019, the Editorial Board of Synlett (led by Nobel Prize winner Ben List) decided to make crowd peer review the default method for reviewing manuscripts. Another journal, SynOpen, now uses crowd peer review exclusively.
What is crowd peer review? It is a review process organized around a crowd of reviewers engaging in a scientific discussion instead of reviewing the manuscript independently. A large number of reviewers get access to a manuscript and complete the review within a short timeframe. While traditional peer review originates from a time when manuscripts were sent back and forth by paper mail, the online setting now offers opportunities to cooperate in doing the peer review.
There are at least six positive effects of leveraging crowd diversity in a peer review process:
Positive effects | Characteristics
1. Much shorter peer review period | Faster publication process
2. Better decisions | Reaching consensus through discussions between reviewers; less bias and fewer contradictory statements; the editor can make a more informed decision
3. More inclusive scientific participation | Including a larger and more diverse group of reviewers
4. Strengthening collective learning | More transparent learning environment
5. Less work for a single reviewer | Self-selection of articles and preferred comments on the manuscript
6. Improved author feedback | High-quality, more detailed feedback on both the academic content and the academic style
1. Much shorter peer review period
The peer review period is very short compared with traditional peer review. Typically, a manuscript is online for the crowd for only two to four days, which is enough to obtain a sufficient amount of quality feedback. The complete publication process is often finished within a few weeks. More than a dozen individuals will usually participate in the review process (van Gemmeren & List, 2021).
2. Better decisions
Crowd peer review reduces the risk of biased referee reports: it prevents referees from pursuing a personal agenda, reduces the number of extreme opinions, and can also soften unnecessarily harsh comments.
Since reviewers are anonymous, they do not feel the same peer pressure to follow the opinions of others when they disagree. Anonymity makes it easier to avoid bias and make an independent assessment, and reviewers do not hesitate to engage in discussion when conflicting opinions exist.
Software such as ScholarOne and Filestage is used to organize the online peer review process (van Gemmeren & List, 2021).
3. More inclusive scientific participation
A much larger and more diverse reviewer base can be recruited. This includes highly trained assistant professors and postdoctoral researchers, but also chemists who have left academia to lead industrial research laboratories. In this way, peer review becomes the responsibility of a much broader group of scientists. In many journals, reviewer requests are directed towards the most visible members of the scientific community, leaving many potential reviewers out. Crowd review becomes more inclusive by involving underrepresented groups of scientists (for example by career stage or country of origin). Young researchers who receive few peer-review requests find that they can give something back to the community and help shape the journal. Using a broader reviewer base also reduces the overload on certain reviewers in traditional peer review (van Gemmeren & List, 2021).
In recent years, the problem of ‘reviewer fatigue’ has grown due to an ever-increasing volume of submitted manuscripts. A small group of individuals receive many requests, which also slows down the peer review process.
4. Collective learning
The transparency of the online environment also stimulates collective learning. All reviewers learn something about their own comments by receiving feedback on them through scientific discussions. Simply by reading other referees’ comments, one gains insight into issues one might previously have overlooked. This can be important for their further development both as scientists and as reviewers (van Gemmeren & List, 2021).
5. Less work for a single reviewer
It is the aggregation of multiple diverse contributions that results in the full referee report. ‘Reviewer fatigue’ is avoided because the level of feedback from each reviewer is much more flexible. Reviewers can select the manuscripts they find most relevant to their academic background, and they can choose whether they have time to comment within the short peer review period (self-selection) (van Gemmeren & List, 2021).
6. Improved author feedback
A good peer review should also help the authors improve their manuscript as much as possible. Crowd review can produce very detailed comments and substantive referee reports. The crowd reviewers are able to scale up the amount of feedback both on the scientific content itself and on how it is presented. This includes stylistic and grammatical errors, inconsistencies, and ways to improve the fluency of the academic language. Instead of a general comment that the language must be improved, authors receive many specific comments on how the writing can be improved. Reviewers often comment on specific issues of insufficient quality and suggest further experiments and improvements (van Gemmeren & List, 2021).
After the crowd review, the annotated manuscript with all the comments is compiled into a review report. This report, together with a brief summary by the Crowd Review Editor, is sent to the Editor, who decides whether to accept, reject, or request revisions of the manuscript. The process reduces conflicts of interest and brings a more precise conclusion about the fate of the manuscript, which is sometimes difficult in traditional peer review. Only occasionally is it necessary to supplement the comments of the crowd with those of additional experts (van Gemmeren & List, 2021).
Crowd peer review is an interesting example of what can be labeled human swarm problem solving, a specific subtype of collective intelligence (Baltzersen, 2022). In solving problems, collective intelligence centers on human-to-human interaction in an attempt to leverage the diversity of individual contributions. Read more about crowd peer review in chapter 10 of the book “Cultural-Historical Perspectives on Collective Intelligence”.
Sources
Baltzersen, R. K. (2022). Cultural-Historical Perspectives on Collective Intelligence. Cambridge University Press. Link to Open Access version
van Gemmeren, M., & List, B. (2021). How and Why Crowd Reviewing Works. Synlett, 32(09), 885–891. Link