At the time of writing this (July 2016), I have served on the program committees of seven different security conferences, including two of the top ones, CCS and USENIX Security, and I have reviewed upwards of 80 papers. This year has also been the year in which a paper of mine that I really believe in has been rejected over and over again for all the wrong reasons. As a reviewer, I have seen fellow reviewers make the same comments on other people’s papers that my paper has been receiving.
I have seen enough of those comments to be inspired to write a series of blog posts. Do note that I am junior faculty and do not have tens of years of experience from which to distill deeper truths. I do, however, think that these observations are true and that we must, both individually and as a community, correct the errors of our ways. I say “our ways” because I am guilty of the same offenses.
Lastly, it is not without some hesitation that I write this. I understand that someone may feel offended and that I may reap the fruits of my offense. At the same time, having been raised in a rather corrupt country where people do not report bad behavior because they are afraid of the consequences, I have learned that fear is a bad motivator. All I want to do is help shine light on, advance, and uphold the security community. I am not casting any stones because, alas, I am not without sin.
Without further ado, here’s the first poor reason to reject a paper:
“A blog post has shown this”
TLDR: Don’t reject scientific papers on the basis of three-paragraph blog posts.
This is, hands down, one of the most common comments I have seen in reviews. What it typically amounts to is that there exists some three-paragraph blog post on the same topic, which even matches one or more of the statistics or lessons learned in the paper under review. Naturally, a 12-page paper will have tens of experiments that systematically assess the problem, measure it across novel dimensions, compare it with related work, offer countermeasures, and, in general, present a much more complete treatment and analysis of a problem and its solutions.
Many security reviewers, however, in their endless pursuit of novelty and vulnerabilities, now treat this topic as tainted. A blog post written prior to the paper’s submission somehow subtracts from the paper’s value. The topic discussed in the paper suddenly becomes “well known,” and all the extra work is now mere “details.”
This comment, if made and acted upon by enough people, has the ability to destroy our community from the inside out. Why spend six months or a year collecting data, thoroughly working on a problem, and analyzing it from all sorts of different angles if someone can, in one afternoon (the time it typically takes to write a blog post), make your paper unpublishable? Even if the authors of a paper were inspired by a specific blog post, they more often than not expand upon it in either breadth or depth, both of which are crucial and highly desirable for the scientific community. It simply defies reason to claim that a three-paragraph blog post with one or two data points is the same as a 12-page paper.
In my opinion, if this continues, security researchers will be attracted (even more than they already are) to “flash-in-the-pan” work, where novelty is king and everything else, including reproducibility, meaningfulness of findings, and societal impact, takes a back seat.