
Including one-shot revisions in our peer review process could lead to lower overall reviewing load, better mental health for graduate students, and better overall science.

Most of the top conferences in systems and architecture follow the binary decision model: the outcome of the review process is either an accept or a reject. An accepted paper may be shepherded, but the extent to which the paper might change or improve during shepherding is limited: authors might optionally include new experiments or data, but the shepherd cannot force them to do so. Similarly, acceptance cannot be conditioned on the result of a new experiment turning out in a particular manner. Shepherding is also limited by the time frame: sometimes there is less than a month between notification and the camera-ready deadline, which precludes any major changes to the paper.

Disadvantages

There are a number of disadvantages to our current binary review system. One big problem is: what should we do with interesting papers that have some flaws? Reviewers naturally want to maintain a high standard for the conference; this would mean rejecting such papers, even if the flaws could be fixed with a few more experiments or some rewriting (since shepherding does not allow much time for this). This results in a number of papers getting rejected that could have been accepted with just a bit more work.

Yet another result of this binary model is randomness. Some reviewers are okay with accepting an imperfect paper, as long as the flaws are clearly mentioned and discussed. But not all reviewers take this position; hence, whether the same paper gets accepted or rejected might depend on luck: on which reviewers get assigned to the paper.

What is the big deal with a paper rejection? After all, isn't rejection a part of academic life? Shouldn't students get used to it? Doesn't rejection make the paper stronger? Rejection is a problem for two main reasons.

First, as a community, we review the same papers again and again. With decisions depending so strongly on who the reviewers are, authors are tempted to resubmit borderline papers to the next conference in the hope of getting accepted. A new set of three or five reviewers is selected for the paper, and they spend a significant amount of time pointing out the paper's flaws. If the paper is rejected, we go through the cycle again, wasting more reviewer time until the paper is finally accepted. Even when authors operate in good faith and fix the flaws pointed out in the first rejection, they might resubmit only to be unpleasantly surprised when the second set of reviewers points out another set of subjective flaws and rejects the paper. These resubmitted papers also increase the load on program committees, which are already overloaded, with each PC member reviewing a dozen or more papers in a short period of time. In the field of systems security, Davide Balzarotti notes that between 30% and 40% of submissions to the top four security conferences are resubmissions!

Second, rejection takes a big mental toll on the junior members of academia, especially grad students. It is painful to get rejected, especially if you have been working on a project for a year or more. It is even more painful when the outcome seems random: papers of a similar caliber with similar flaws get accepted, while yours gets rejected. We should strive to minimize rejections and randomness in the peer review process, to ensure that our graduate students have good mental health. Otherwise, a lot of talented students are going to quit.

One-shot revisions

So what is the solution to this problem? The best way I've seen this handled is by the database community. Authors submit to a hybrid journal, the Proceedings of the VLDB Endowment (PVLDB).
