Poor reasons to reject a computer security paper, Part 2

I was surprised by the popularity of Part 1 of my series “Poor reasons to reject a computer security paper”. I am interpreting this as a sign that I may be on to something. If you haven’t read Part 1, please do so, not just for the sake of completeness but so you can also understand my motivation for writing these posts (I am, categorically, not trying to claim any kind of moral superiority; I have made similar comments in the past).

The second poor reason to reject a paper:

“This paper is below the bar of X”

TLDR: Do not reject a paper based on your subjective judgement of a conference’s prestige.

This is a close second to the “a blog post has shown this” comment. Variations of this comment include “this paper does not reach the bar of X”, “this paper is not deep enough for X”, “this is a fun paper but clearly unsuitable for X”. Occasionally, the reviewer will suggest a more “appropriate” alternative which is, always, a less prestigious venue than the one that the paper is currently submitted to:

  • “The authors should send this to workshop so-and-so”
  • “I am sure that a lot of people would find this work interesting if presented as a poster”
  • “The back of a Dunkin Donuts napkin is an appropriate dissemination method for this work”

My claim is that this comment is often wrong, and when it is not wrong, it is unnecessary.

As far as I can tell, conferences do not have set “bars.” Reviewers give a score to each paper, and then the chairs (as well as the reviewers in the PC meeting) use these scores to accept the top N papers, where N is the number of papers the conference can (or is willing to) accommodate. Sure, you can have champions who pull a paper through despite opposition, and strong opposers who want to make sure a paper is not accepted despite favorable reviews, but other than that, it is the set of all reviews that eventually determines who is in and who is out.


Another problem with bar-setting is that it is a subjective, often unreliable, metric. People are much more likely to be judgmental toward a paper that they understand well (e.g. one in their own field of research, or simply an easily approachable paper) than toward one that they do not. Moreover, it appears to me that this subjective bar is really a quale: one cannot explain what constitutes the bar or how it is set, only report what falls above it and what falls below it. I have gut feelings too, but it is exactly such feelings that we are supposed to be actively working against in science.

Finally, even if a paper is indeed clearly unsuitable for a top conference (e.g. because of underdeveloped ideas, unwarranted assumptions, missing key experiments, or work that duplicates prior publications), one can rest assured that, given enough constructive criticism, the authors will eventually figure this out by themselves without having to be explicitly told.

This entry was posted in Poor Reasons.
