
Reviews Considered Harmful?

… my position just is that such discussions [meaning models of disability] are intellectually intriguing but seem to be of limited value for solving front-line real-world practical problems

Anonymous Reviewer

Somehow, over the past few years, I seem to have grown a thinner skin around reviews. Or perhaps I am just doing work that evokes more problematic responses. Or maybe I am learning to recognize harms that previously passed me by. In any case, I think it is time for us as a community to start a conversation about the darker side of reviewing.

What do I mean by this? Of course it is difficult to get reviews that critique one's work, sometimes legitimately and sometimes because they miss something in a paper, or are written on a bad day. Even so, peer reviews, in general, are valuable and important, and authors know that. I've always told myself (and my students) to think of a paper as a user interface: if the user misunderstands things, the question is not "why did the user make so many mistakes?" but "why did my interface not guide the user properly toward the right approach and away from the wrong one?" Analogously, a review is an (imperfect) reflection of the flaws in either one's research or one's writing; a perfect project and a perfect writeup together should presumably result in perfect reviews. This is of course very idealistic, but it is at least close to the general goal that I think we all share.

Being a reviewer has always been a space in which we must take care to exercise power compassionately, helping the writer (often a new researcher, often a student) to learn and grow from a process that, with a fair amount of randomness, decides "what counts" and sets careers in motion (or slows them down). However, I've recently observed that the power of reviewers goes beyond mentorship and gatekeeping: ideology, bias, and politics have become visible to me. Here are some examples of truly harmful errors that have the potential to compound other barriers to participation in our community.

Increased scrutiny for certain types of work. Papers that raise questions about the academic process (and its biases) seem to face a degree of scrutiny and nitpicking that makes them much harder to publish. I've spoken with multiple others who have encountered this same phenomenon when doing this sort of work. This matters because these forms of inquiry are already devalued in comparison to other forms of research, and the additional difficulty in publishing them only makes this worse. It should come as no surprise that the researchers who take the time to do this sort of work are also often members of groups that are underrepresented in the academy.

Critique because of a political difference of opinion. I have always been taught never to escalate a disagreement over a review outside of the rebuttal process, and throughout almost my entire career I have adhered to that. However, a reviewer objected to the term "marginalized" and accused us of engaging in grievance studies, stating:

One nonsensical concept the authors introduced was the use of ‘higher marginalized status’ (whatever that may mean - one presumes the authors subscribe to the strange psuedolegal theory debunked e.g. by Douglas Murray in the ‘Madness of Crowds’)

I found myself asking the program chairs of the conference I had submitted to for help in receiving a fair review. Similarly, a reviewer of a grant proposal that included improved tools for Blind and Low Vision programmers stated:

I agree we need to include vision impaired population in the design loop, but it is not necessary for them to do the programming to implement their ideas.... It is not really necessary for those vision impaired to perform programming.

I accepted the rejection of my proposal, but contacted the relevant program officer to alert them to my concerns with this reviewer’s beliefs about who can program.

Accusations of conflict of interest as a result of deep community engagement. I used deep, community-engaged work as one of several data collection strategies in a paper (other communities also provided data). In addition to volunteering in the community, I ultimately invited a leader in the community to co-author the submitted paper. All of this was disclosed, but a reviewer felt that, as a result, the contribution of the paper was limited, stating that the

...prior relationship with [the community] compromises the interview data drawn from participants in [the community].

Sometimes the harm is not having a reviewer at all. Further compounding all of this are the difficulties that editors have in finding reviewers. The last time I was an associate chair at a conference (before COVID-19 upended all of our lives), even as a senior member of our community with a large network to draw upon, I had to ask six people for every one who agreed to review a paper. More recently, I submitted a journal paper only to discover months later that it was still not even in review: the editor had asked over 20 people and only one had agreed.

How can we do better? I don’t claim to have the answers here, but I think it is time to start experimenting, or at least talking more about what to do. Here are some ideas I’ve been thinking about. Please comment on this post and add your own!

Process Improvements: I think we have multiple problems that require process improvements. One is reviewer training. Another is a well-defined process for redressing (or at least addressing) problematic reviews.

Open Reviewing: One way to improve review quality is oversight; however, oversight takes even more time. Open reviewing might be another way to influence what people say without as much extra work. In addition, it could reduce the burden on authors faced with problematic reviews by allowing others to call those reviews out and respond to them.

Limited Submission: The volume of submissions compounds all the other issues: because we have more papers to review than reviewer capacity, we draw reviewers from further afield, or from earlier in their careers, than ever before. One way to address this is to limit the number of papers reviewed per author, or to require some reviewing service in return. This may be hard to enforce for papers with many authors, some of whom may not even be from the same field; one possible accounting scheme is sketched below.
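As a thought experiment, here is a minimal sketch (in Python) of how review obligations might be split fractionally among co-authors so that large author lists do not dilute them away. Every name, threshold, and indeed the policy itself is a hypothetical illustration, not anything any venue actually uses.

```python
from dataclasses import dataclass

@dataclass
class ReviewLedger:
    """Hypothetical per-author accounting for one reviewing cycle."""
    reviews_completed: int = 0   # reviews this author has written
    review_debt: float = 0.0     # obligations accrued by submitting

def record_submission(ledger: ReviewLedger, n_coauthors: int,
                      reviews_per_paper: int = 3) -> None:
    # Each submission consumes roughly `reviews_per_paper` reviews of
    # community effort; split that obligation evenly among co-authors
    # so multi-author papers still generate some obligation per person.
    ledger.review_debt += reviews_per_paper / n_coauthors

def may_submit(ledger: ReviewLedger, max_deficit: float = 2.0) -> bool:
    # Allow a new submission only while outstanding debt (obligations
    # minus reviews written) stays under a small cap.
    return ledger.review_debt - ledger.reviews_completed < max_deficit
```

Under a scheme like this, a sole author who never reviews could submit only a couple of papers before being asked to review, while each of ten co-authors on one paper would accrue only a small fraction of the obligation, which is exactly the enforcement gap noted above.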

Fewer Peer Review Opportunities, More Other Opportunities: I'm sure this won't be popular, but we could also reduce reviewing volume by increasing the number of papers conferences accept. Could we make conference participation entirely poster-based, for example? This would work best if we removed peer-reviewed papers from conferences entirely, so that there is no competition for those slots.

Traveling Reviewers: Lastly, we could encourage authors to resubmit a paper only after substantial revision (with change tracking) by having reviewers "travel" with papers, at least within the same tier or group of conferences (maybe UIST, DIS, CSCW, CHI, TOCHI, ASSETS, and other peer conferences). This should help to reduce paper volume and increase review consistency. It would require a process for addressing the sorts of harmful statements I mentioned above; however, even that would be an improvement over the current situation, where there is no process at all. For example, perhaps authors could request replacing specific reviewers, as sketched below.
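To make the mechanics concrete, here is a minimal sketch (again in Python, with hypothetical names throughout) of a record that travels with a paper across peer venues and supports author-requested reviewer replacement; it illustrates the idea, not any existing system.

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    reviewer_id: str
    venue: str    # e.g. "CHI" or "ASSETS"
    text: str

@dataclass
class PaperRecord:
    """Hypothetical dossier that 'travels' with a paper across venues."""
    paper_id: str
    reviews: list[Review] = field(default_factory=list)
    blocked: set[str] = field(default_factory=set)  # reviewers authors asked to replace

    def add_review(self, review: Review) -> None:
        self.reviews.append(review)

    def request_replacement(self, reviewer_id: str) -> None:
        # A lightweight redress mechanism: once flagged by the authors,
        # a reviewer is not reassigned to this paper at any venue in
        # the group.
        self.blocked.add(reviewer_id)

    def is_eligible(self, reviewer_id: str) -> bool:
        return reviewer_id not in self.blocked
```

One design choice worth noting: carrying the full review history forward is what enables consistency across resubmissions, while the blocked set gives authors a concrete channel for the kind of redress that today has no process at all.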

Please comment! I’d love to hear more ideas for what we can do to improve things!