Changes to the CHI PLAY 2021 Review Process


  • Our motivation is to improve the review process, while also easing the burden of review. We are also concerned with welcoming different contribution types while managing the complexities of interdisciplinary evaluation.
  • Authors identify the contribution type, to make clear what methodological approach they employ. 
  • Reviewers self-identify their expertise with both the topic and the method. 
  • Review form focuses feedback into three evaluation criteria: the context of the contribution, the method, and the clarity of the contribution. 
  • Reviewers no longer ‘rate’ the paper, but instead provide recommendations of Accept, Minor revisions, Major revisions, or Reject. In the second round of reviews, only recommendations of Accept and Reject are provided. 
  • We welcome feedback on the review form, process changes, contribution types, or any other aspect raised by these changes. Please email to ask for clarifications or suggest changes. 

Along with CHI PLAY’s move to a journal publication this year, and the accompanying addition of the revise and resubmit phase, we are also making changes to the CHI PLAY full-paper review process that will affect authors, associate chairs (ACs), and external reviewers. 

The motivation behind our changes is to improve the review process for everyone involved. We have listened to your feedback over the years about what is broken in the review process and where frustrations arise, and our aim is to address those frustrations and improve the process. What this means to us is: 

  • Ease the burden of reviewing.  We are constantly being asked to provide peer review for a variety of venues and collegial processes. Our goal is to ease the burden of this request as much as possible for CHI PLAY full papers while ensuring high-quality feedback is provided to authors. 
  • Maintain different standards for different contribution types.  Although we know that some methodological approaches have greater representation at CHI PLAY (i.e., empirical contributions), we welcome a variety of contribution types, including theoretical papers, systems contributions, research through design, and meta-research. We want to ensure that reviewers understand that these different contribution types have different standards of evaluation so that a prevalence of method does not equal a dominance of method at our conference.  
  • Address the matrix of domain and method. We work in an interdisciplinary field. As a result, reviewers may be experts in a topic (e.g., educational games) but not a method (e.g., qualitative empirical research). Or they may be experts in a method (e.g., research through design), but not the topic (e.g., augmented reality games). Having reviewers understand that the expectation is for them to address the paper from their position of expertise will help us get the highest-quality reviews while not asking reviewers to comment on aspects of the paper that they are not experts in assessing. 
  • Focus feedback for authors. Authors have noted that the free association they sometimes receive as peer review is interesting to read, but not very actionable. As we move to a revise and resubmit process, it is especially important that authors can interpret the review feedback clearly to revise their paper for the second round of review. 
  • Train and mentor reviewers implicitly. As our field grows and matures, we welcome researchers from associated fields and junior researchers who are reviewing for the first time. The form helps guide reviewers to what is important to assess at CHI PLAY, providing implicit training through participation in a more structured reviewing process that clearly outlines indicators of quality. 

The mechanism of our changes is the review form on PCS.  We are adjusting the form in multiple ways. 

1. Contribution Type. First, we are asking authors to explicate their primary (required) and secondary (optional) contribution type. We based the selection of contribution types on an article that enumerated contribution types in the field of Human-Computer Interaction [1].  We described each contribution type, and explicated the criteria by which it is evaluated (see Table 1), before expanding two of the categories to provide greater specificity. This resulted in the following contribution types: 

  • Artefact-Design, e.g., research through design, envisionments, guidelines, techniques.
  • Artefact-Technical, e.g., building novel systems, algorithms, visualizations, architectures, implementing novel features in existing systems.
  • Empirical-Mixed Methods, e.g., combined qualitative and quantitative empirical research.
  • Empirical-Qualitative, e.g., ethnography, qualitative user studies.
  • Empirical-Quantitative, e.g., quantitative user studies, statistical methods, computational methods, data modeling.
  • Meta-Research, e.g., meta-analyses, systematic reviews.
  • Methodological, e.g., validated scales, new methods. 
  • Theoretical, e.g., conceptual frameworks, theory underpinning CHI PLAY studies/domains, theoretical analysis, critical reflection, and essays.

For more description on each contribution type, please see Table 1.  If authors are unsure of which contribution type to choose, or feel that their contribution type is not represented in this list, they can email the paper chairs at

2. Reviewer Self-rating of Expertise. In the past, reviewers have simply rated their expertise on a 4-point scale.  This approach did not allow a reviewer to clearly identify their expertise with the method used versus the domain of the contribution. As we are an interdisciplinary community, it is rare that a reviewer is an expert in both the domain and method, and making clear the context in which review feedback is provided will help ACs and authors interpret the reviews. Reviewers are now asked to declare their expertise on two scales: 

What is your familiarity with the method employed? 

  • Expert: I have published using this method multiple times, or have repeatedly taught others to use this method
  • Knowledgeable: I have published at least once using this method, and have used this method multiple times
  • Passing: I have read papers using this method or have used it a few times, but not actually published with it
  • Limited knowledge: I know of this method, but have never actually used it myself
  • No knowledge: I do not know this method

What is your expertise in the topic or application area? 

  • Expert: I have published on this topic or area multiple times, or have repeatedly taught others about it
  • Knowledgeable: I have published at least once on this topic or application area
  • Passing: I have read in this area, but have not actually published in it
  • Limited knowledge: I know of this area, but have never published about it
  • No knowledge: I do not know about this topic

3. Categorical Review Form. To help focus reviewer feedback, we have created a categorical review form in which reviewers provide feedback along three main themes that are essential to assess regardless of contribution type: the context, the methods, and the clarity. We initially aimed for a full rubric for evaluation, but decided to provide the focused evaluation form as a compromise between a full rubric and a single open-ended field. The full review form can be seen in Figure 1. In addition to the summary of the contribution, recommendation, and confidential comments to the committee, reviewers will be asked to assess the following: 

  • Please comment on the context of the contribution. Does the paper adequately refer to prior work both when motivating the work and discussing findings, is the contribution well motivated, is its significance articulated, and are implications for the wider research community clearly identified?
  • Please assess the methodology of the paper. Are the research goals clearly formulated and in line with best practices relevant to the contribution type? Are chosen methods appropriate, and clearly described? Is the presentation of results clear, and—where relevant—supported by the data? Is the interpretation of results aligned with the research questions and method? Are the conclusions drawn from the findings justified, and do they acknowledge the limitations of the methodology?
  • Please comment on the clarity of the paper. Is the writing clear, comprehensible, and inclusive? Are figures clear, illustrative, and well captioned? When relevant, are video figures provided that present artefacts in a comprehensive way? Is the description of the research transparent so that others could reproduce the work? Where applicable, do authors critically appraise the ethical implications of their research?

We feel that these aspects of the paper can and should be addressed for each contribution type; however, the details of how they are assessed will vary. For example, the findings of an empirical quantitative paper will be different from those of a theoretical contribution or technical artefact, but in all cases, these findings should be presented clearly and contextualized in prior work, and authors should justify the method used, discuss the implications for the community, and articulate the significance of the work. 

Note that there are some evaluation criteria that we feel the ACs should evaluate, and so we are not asking for these explicitly in the external review. For example, ACs are expected to synthesize and summarize reviews and consider the transparency of the research in terms of supplemental material (e.g., data, source code, video figures). 

4. No Ordinal Ratings. Rather than have ordinal ratings of the papers—which are problematic because they are subjective, unsubstantiated, and should not really be averaged anyway—we are moving to categorical recommendations. In the first round of review, external reviewers will be asked to provide a recommendation for the paper in line with how journals approach assessment: 

  • Accept: This paper can be accepted in its current form with only light editing to fix typos. No additional content, description, or justification needs to be added.
  • Minor revisions: The paper is likely to be accepted with minor revisions such as the integration and contextualization of new references, additional information on aspects such as system implementation, analyses, perspectives in the discussion, or acknowledgement of limitations of the work.
  • Major revisions: The paper may be accepted pending major revisions such as reframing the motivation, including new literature, recontextualizing discussion of the work, including new analyses, extension of designs, development of new system components, or adjustment of algorithms.
  • Reject: The submission has profound weaknesses in one or more areas, and should not be included in the conference this year. 

The second round of review will allow only recommendations of Accept or Reject. 


These changes were a collaborative and iterative process, driven by this year’s paper chairs (Kathrin Gerling, Elisa Mekler, Regan Mandryk) in consultation with the CHI PLAY Steering Committee and this year’s technical program chairs (Max Birk, Jo Iacovides). Rounds of iteration with community members and the steering committee produced this final set of changes. 


We expect these changes will improve the review process for authors, reviewers, and program committee members while also having long-term impact in building and training our community of researchers. We welcome feedback on any aspect of what has been presented here; please email to ask for clarifications or to suggest changes. We also expect that these changes will be iterated on after one review cycle and so welcome feedback throughout, and following, the review process. 


  1. Jacob O. Wobbrock and Julie A. Kientz. 2016. Research contributions in human-computer interaction. interactions 23, 3 (May + June 2016), 38–44. DOI:

Regan Mandryk, Kathrin Gerling, and Elisa Mekler
CHI PLAY 2021 Program Chairs
