Peer review is the main tool for scrutinizing scholarly publications, grant proposals, and career advancement in science. However, the current peer review system is under severe strain, with consequences for the quality of science and the speed at which scientific results are disseminated. Several studies have found that the current way of performing peer review can be inefficient, slow, and even biased. Almost every scientist has ideas on how to improve the system, but it is very difficult, if not impossible, to run experiments to determine which measures are most effective. The project implements a simulation framework in which ideas for improving the review process can be quantitatively tested.
Intellectual Merit: The framework is built using agent-based modeling. Scientists, manuscripts, and journals are digital agents, and a peer review system emerges from their interaction. Multiple experiments can be run: for example, one proof-of-concept application shows how changing the way peer review is performed can dramatically alter the pace at which science is disseminated.
The research develops full-fledged, open-source simulation software that makes it possible to study alternatives to the current system.
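As a rough, purely illustrative sketch of the agent-based approach (the framework's actual code and API are not reproduced here, and all class names and parameters below are hypothetical), a minimal simulation might represent scientists, manuscripts, and journals as simple Python objects and iterate submit/review/decide cycles:

import random
from dataclasses import dataclass

# Hypothetical, minimal agent-based sketch of a peer review system:
# scientists write manuscripts of varying quality, and a journal asks
# other scientists to score them, accepting those above a threshold.

@dataclass
class Manuscript:
    quality: float
    published: bool = False

@dataclass
class Scientist:
    skill: float                  # expected quality of the manuscripts they write
    reviewing_noise: float = 0.1  # how imprecise they are as reviewers

    def write_manuscript(self):
        return Manuscript(quality=random.gauss(self.skill, 0.1))

    def review(self, manuscript):
        # A noisy estimate of the manuscript's true quality.
        return random.gauss(manuscript.quality, self.reviewing_noise)

@dataclass
class Journal:
    threshold: float              # minimum mean review score for acceptance
    n_reviewers: int = 2

    def decide(self, manuscript, reviewers):
        scores = [r.review(manuscript) for r in random.sample(reviewers, self.n_reviewers)]
        manuscript.published = sum(scores) / len(scores) >= self.threshold
        return manuscript.published

def run_round(scientists, journal):
    # One submission round: every scientist submits one manuscript.
    accepted = 0
    for author in scientists:
        ms = author.write_manuscript()
        reviewers = [s for s in scientists if s is not author]
        if journal.decide(ms, reviewers):
            accepted += 1
    return accepted

if __name__ == "__main__":
    random.seed(0)
    community = [Scientist(skill=random.uniform(0.3, 0.9)) for _ in range(100)]
    journal = Journal(threshold=0.6)
    print("accepted per round:", [run_round(community, journal) for _ in range(5)])

In a sketch of this kind, parameters such as the acceptance threshold, the number of reviewers, or the reviewers' noise can be varied to ask how quickly good work gets published, which is the type of question the full framework is designed to answer at scale.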
Broader Impacts: The proposed work is potentially transformative of the way science is carried out. The framework can be used to identify better and more efficient models for peer review, leading to profound changes in scientific publishing and funding. If this exploratory research is successful, a new computational branch of the sociology of science could also emerge. Changing the way peer review is performed to favor faster and more efficient solutions could have broad effects on the daily work of scientists, including more time for academic training and research and less time spent revising and reformatting manuscripts and grant proposals. Favoring unbiased practices could broaden the representation of minorities in science.
Peer review is routinely used to assess the quality of manuscripts, research proposals, and even researchers. Typically, peer review is performed by a handful of "peers", who evaluate the quality of a manuscript or the curriculum vitae of a candidate. Recently, the idea of "post-publication" peer review has emerged: the quality of an article can be measured in a more "democratic" way, by counting how many people use a given finding in their own research (citations), or how many patents, articles in the popular press, etc. a work has inspired. This project aimed to develop quantitative methods to evaluate alternative peer review schemes, predict the scientific trajectories of researchers, and detect potential biases influencing peer review. The main outcomes of the project include:

- A computational framework to assess alternative models for peer review. The software simulates manuscripts, authors, and journals, providing a way to test the effect of changing the "ingredients" of the peer review model.

- A study of the scientific trajectories of neuroscientists. Using machine learning, this work probed whether it is possible to predict a researcher's future trajectory based on their current CV. Results show that the number of articles written, current standing in the discipline, and the diversity of journals where a researcher publishes are the best predictors of future performance.

- A study highlighting the potential for nepotism in hiring practices in Italian academia. Statistical analysis shows that some disciplines in Italy display a suspicious paucity of last names, potentially due to the illegal practice of professors hiring their relatives in academic posts (a minimal illustration of this kind of test is sketched after this list).

- A study of the effect of the country (or countries) of affiliation on the fate of manuscripts, both in terms of placement in high-visibility scientific journals and in terms of the citations subsequently received. The study shows that multi-national collaborations fare better both in where the work is published and in the citations it receives. The study also highlights a new way to assess the scientific standing of countries.

- A study of the quality of the writing advice given to scientists. Although hundreds of articles and books have been written on "how to write for science", the advice had never been put to the test. Analyzing more than a million articles, the study showed that much of the advice given to scientists is in fact misleading: articles following the advice receive significantly fewer citations than those not following it.

Articles funded under this grant were widely covered in the popular press and have led to a lively discussion on the merits of peer review and how it can be adapted to an ever-changing scientific landscape.
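For illustration only (the published analysis, its data, and its exact methodology are not reproduced here; all data below are synthetic and the function names are hypothetical), a surname-scarcity check of the kind mentioned in the nepotism study can be sketched as a simple Monte Carlo test: draw many random samples of the same size from a reference pool of surnames and ask how often chance alone produces as few distinct surnames as observed in a given discipline.

import random

def distinct_surnames(sample):
    return len(set(sample))

def surname_scarcity_pvalue(observed_surnames, reference_pool, n_sim=10_000, seed=0):
    # Monte Carlo p-value for observing so few distinct surnames.
    #   observed_surnames: surnames found in the discipline under study
    #   reference_pool:    surnames to sample from under the null hypothesis
    #                      (e.g. all academics nationwide), with repetitions
    #                      reflecting how common each surname is
    rng = random.Random(seed)
    n = len(observed_surnames)
    observed_distinct = distinct_surnames(observed_surnames)
    hits = 0
    for _ in range(n_sim):
        null_sample = rng.choices(reference_pool, k=n)
        if distinct_surnames(null_sample) <= observed_distinct:
            hits += 1
    return (hits + 1) / (n_sim + 1)  # add-one smoothing avoids a p-value of exactly 0

if __name__ == "__main__":
    # Synthetic example: a varied reference pool versus a discipline
    # whose surnames are suspiciously repetitive.
    random.seed(1)
    pool = [f"surname_{i}" for i in range(500) for _ in range(random.randint(1, 5))]
    discipline = [f"surname_{i}" for i in range(30)] + ["surname_0"] * 20
    print("p-value:", surname_scarcity_pvalue(discipline, pool))

A small p-value in a test of this kind would indicate that the observed scarcity of surnames is unlikely under random hiring from the reference pool, which is the general intuition behind such an analysis.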