There are real problems facing science today. While many surface-level changes have been made, here we explore a deeper transformation of the discipline.
The changes suggested so far are relatively surface level (getting researchers to use a social science website). Though they are psychologically informed and thus likely to work, they do not reach the real core of the problem (human greed), as they all come after a piece of work is published.
Here we reexamine the peer review system at its core, to see whether we can design a more efficient, well-functioning system. These core decisions are the most prone to small design flaws which, over the years, grow into bigger and bigger problems (as the current issues have).
It is imperative that we have a spirited debate about the specifics outlined below and not treat our decisions as set in stone once we make them. Only continual maintenance of the system can ensure fidelity over time (Blanchard & Fabrycky, 1990).
Nosek and Bar-Anan (2012) do an excellent job of outlining a general proposal, and our system is designed with almost all of their suggestions at its core. That said, we think more discussion is needed about the specific mechanism of peer review and the ways the information within a ‘social’ science website can aid review.
Specifically, the system knows how individuals’ papers perform, who clicks on them, and where else those readers click; all of this information can be used in the selection of reviewers and in review generally. Nosek and Bar-Anan (2012) propose the following peer review mechanism:
“Instead of submitting a manuscript for review by a particular journal with a particular level of prestige, authors submit to a review service for peer review… The review service does not decide whether the article meets the “accept” standard for any particular journal. Instead, it gives the article a grade. Journals could retain their own review process, as they do now, or they could drop their internal review system and use the results of one or many review services.” (p. 232)
Again, we suggest the information within the system be used to facilitate review: the system knows which authors belong to which professional organizations, along with their impact factors. It knows whom the authors of a paper interact with and who the leaders in the field are.
For instance, professional organizations could stipulate that, for a paper to be considered for dissemination within the group, it must have certain keywords and have had x members comment on or like it, including a few people with higher impact factors (professors or fellows in the organization). Note that the work is already public.
If the paper meets the predetermined standard, it is automatically sent to a selection of individuals from the group who are likely to be interested (because they have interacted with similar papers). Algorithms can be set up within the group to assess the reaction to the paper (based on likes/ratings, comments, and shares).
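As a rough sketch of what this gatekeeping step could look like, the snippet below checks a paper against a group's predetermined standard and picks likely-interested reviewers by keyword overlap. All names, fields, and thresholds here are hypothetical illustrations, not part of any existing system.

```python
from dataclasses import dataclass

@dataclass
class Paper:
    keywords: set
    member_interactions: int     # comments and likes from organization members
    high_impact_endorsers: int   # endorsements from professors/fellows in the group

def meets_standard(paper, required_keywords, min_interactions=10, min_high_impact=2):
    """Return True if a (already public) paper clears the group's predetermined bar."""
    return (required_keywords <= paper.keywords
            and paper.member_interactions >= min_interactions
            and paper.high_impact_endorsers >= min_high_impact)

def select_reviewers(paper, members, n=5):
    """Pick the members most likely to be interested, ranked by overlap between
    the paper's keywords and the keywords of papers each member interacted with."""
    overlap = lambda m: len(paper.keywords & m["interacted_keywords"])
    return sorted(members, key=overlap, reverse=True)[:n]
```

A real system would of course tune the thresholds per organization and use richer interaction data than keyword overlap.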
If the paper is received well within the community, it is then sent to a larger portion of the community (similar to virality on current social media). Each new level achieved upgrades the ‘stamp of approval’ (e.g., Bronze, Silver, Gold; Nosek et al., 2012) on the paper, which can then be used as another metric beyond the impact factor.
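The escalation logic above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical `score_at_level` function that aggregates likes, ratings, comments, and shares from a cohort into a 0–1 score; the cohort sizes and pass threshold are invented for the example.

```python
def escalate(score_at_level, cohort_sizes=(20, 100, 500),
             pass_score=0.6, badges=("Bronze", "Silver", "Gold")):
    """Send the paper to progressively larger cohorts of the community.
    Each level the paper passes upgrades its stamp of approval; a failed
    level stops the spread and the paper keeps its last earned badge."""
    badge = None
    for size, next_badge in zip(cohort_sizes, badges):
        if score_at_level(size) < pass_score:
            break
        badge = next_badge
    return badge
```

The badge (or its absence) then becomes a metric alongside the impact factor.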
One further addition we would like to make to Nosek’s proposal is the ability, at the end of the year (or decade), for extra badges to be given to the top ten (or top 100) papers published in a particular (sub)domain. These collections could be put together for any aspect of a paper, could be printed, and would provide something to aim for in the creation of content (beyond high impact).
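Assembling such a collection is essentially a ranking query over the system's existing data. A minimal sketch, assuming each paper record carries a hypothetical aggregate `rating` field:

```python
def top_papers(papers, n=10):
    """Rank a (sub)domain's papers for the period by community rating and
    return the top n, which would receive the extra end-of-year badge."""
    return sorted(papers, key=lambda p: p["rating"], reverse=True)[:n]
```

The same function could rank by any aspect of a paper (shares, reviewer grades, downloads) simply by changing the sort key.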
This is certainly not exactly what the system will look like in its final iteration, but no matter what, if an online platform is involved, we should be using the information within it to maximize its usefulness and inform the review process.
Do you agree? Is there anything you would add or change? Leave it below! 😀
Don’t forget to find us on Facebook.