Call For Resource and Reproducibility Papers

The annual SIGIR conference is the major international forum for the presentation of new research results, and the demonstration of new systems and techniques, in the broad field of information retrieval (IR).

The 47th ACM SIGIR conference will be held in person in Washington, D.C., USA, from July 14-18, 2024.

This year we continue a special track for resource and reproducibility papers, separate from regular full and short papers. Resource and reproducibility papers share a single track in recognition that reproducibility papers also include resources and artefacts, and that resource papers sometimes include results or analyses that reproduce previously published work.

Submissions will be peer reviewed, and accepted papers will be published in the main conference proceedings.

Papers are at most 9 pages plus references (shorter papers are welcome).

Important Dates for Resource and Reproducibility Papers

Time zone: Anywhere on Earth (AoE)

  • Abstracts due: February 1, 2024
  • Papers due: February 8, 2024 (submission only possible if an abstract was submitted)
  • Notifications: March 25, 2024
Take note:
  • Immediately after the abstract deadline, PC Chairs will reject submissions that lack informative titles and abstracts (“placeholder abstracts”).
  • The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work.

Do you want to review for the Resource & Reproducibility Track?

We are looking for more reviewers for this year's track. We think this is a great track to review for, with interesting papers and a light reviewing load (2-3 papers per person).
If you are interested, please fill out this short form: https://forms.gle/eLnZvA6mfE6rDgzz7.


What do resource papers look like?

The Resources Track seeks submissions from both academia and industry that describe resources available to the community, the process and methodology of building those resources, and/or the lessons learned. Resources include, but are not restricted to:

  • Test collections for information retrieval and access tasks;
  • Documentation of designs and protocols of evaluation tasks (e.g., novel task designs implemented at evaluation forums);
  • Labelled datasets for machine learning;
  • Software tools and services for information retrieval and access; and
  • Software tools and services for evaluating and analyzing information retrieval and access systems.

Resource Review Criteria

Novelty
  • What is new about this resource?
  • Does the resource represent an incremental advance or something more dramatic?
Availability
  • Is the resource available to the reviewer at the time of review?
  • Are there discrepancies between what is described and what is available?
  • Are the licensing/terms of use sufficiently open as to allow most academic and industry researchers access to the resource?
  • If the resource is data collected from people, do appropriate human subjects control board procedures appear to have been followed?
  • Are links publicly accessible without requiring a login?
Utility
  • Is the resource well documented? What level of expertise do you expect is required to make use of the resource?
  • Are there tutorials or examples? Do they resemble actual uses or are they toy examples?
  • If the resource is data, are appropriate tools provided for loading that data?
  • If the resource is data, are the provenance (source, preprocessing, cleaning, aggregation) stages clearly documented?
Predicted impact
  • What IR research activity is enabled by the availability of this resource?
  • Does the resource advance a well-established research area or a brand new one?
  • Do you expect that this resource will be useful for a long time, or will it need to be curated or updated? If the latter, is that planned?
  • How large is the (anticipated) research user community? Will that grow or shrink in the next few years?

What do reproducibility papers look like?

The Reproducibility Papers Track solicits papers that repeat, reproduce, generalize, and analyze prior work with a strong impact on information retrieval. The focus is on generating new findings about established approaches, akin to a test of time. Submitted papers should analyze to what extent the assumptions of the original work held up, and elaborate on error modes and unexpected conclusions.

We are particularly interested in reproducibility papers (different team, different experimental setup) rather than replicability papers (different team, same experimental setup). The emphasis is not on reproducibility badging, but on generating new research insights with existing approaches.

As with resource papers, we encourage authors to make as many of the resources associated with a reproducibility paper as possible publicly available.

Reproducibility Review Criteria

Reproducibility track papers are expected to help establish whether prior research in IR is generalizable beyond the theoretical or experimental settings that the paper(s) being reproduced assume(s). Submissions are welcome on reproducibility in any area in IR.

Papers submitted to the Reproducibility paper track must explain:

  • Their motivation for selecting the methods that are replicated or reproduced and the impact of these methods on the IR community;
  • The directions in which they try to generalize or the different angles they choose relative to the original work they reproduce, and the experimental setup(s) they select to support their research in these new directions;
  • The assumptions of the original work that they found to hold up, and the ones that could not be confirmed. For papers in the reproducibility track the key is to share knowledge about what lessons from prior work held up.

Key criteria are:

Contribution:
  • Does this work provide a novel angle on existing approaches and thereby lead to novel insights for the IR community?
Motivation:
  • How relevant is the replicated or reproduced work for the IR community, and how impactful are the achieved conclusions?
Soundness:
  • Is the replicated or reproduced paper sufficiently solid in terms of methodology and evaluation?
Quality of reproduction artefacts:
  • Do the supplementary materials for this submission support ease of reproducibility?

Submission Policy

Anonymity Policy:
  • Authors should remove their names from the first page of the paper; however, fully anonymous, double-blind submission is not required.
  • We recognise that anonymizing external resources (code, notebooks, datasets, etc.) can be onerous; therefore, authors do not have to anonymize these.
arXiv and ACM Submission Policy:
  • Authors should carefully go through ACM’s authorship policy before submitting a paper. Submissions that violate the preprint policy, length, or formatting requirements, or that are plagiarized, are subject to desk rejection by the chairs.
  • It is also NOT permitted to double-submit the same content to both this track and other track(s) of SIGIR 2024 (e.g., a resource paper describing the construction of Dataset A and a full paper containing the construction process of Dataset A in its experiments section).
Author List Policy:
  • To support identification of reviewers with conflicts of interest, the full author list must be specified at submission time.
  • Authors should note that changes to the author list after the submission deadline are not allowed without permission from the PC Chairs.
Desk Rejection Policy:
  • Submissions that violate the preprint policy, length, or formatting requirements, or are determined to violate ACM’s policies on academic dishonesty, including plagiarism, author misrepresentation, falsification, etc., are subject to desk rejection by the chairs. Grounds for desk rejection include:
  • Figures, tables, proofs, appendixes, acknowledgements, or any other content after page 9 of the submission.
  • Formatting not in line with the guidelines provided above.
  • Addition of authors after abstract submission.
  • Content that has been determined to have been copied from other sources.
  • Any form of academic fraud or dishonesty.
  • Lack of topical fit for SIGIR.

Submission Guidelines

  • Submissions must be written in English, in PDF format.
  • Length is at most 9 pages (including figures, tables, proofs, appendixes, acknowledgments, and any content except references). Authors are not expected to fill the full 9 pages; we recognise that for some papers (particularly resource papers) the contribution can be explained in far fewer than 9 pages.
  • Unrestricted space for references, in the current ACM two-column conference format.
  • Suitable LaTeX, Word, and Overleaf templates are available from the ACM Website (use “sigconf” proceedings template for LaTeX and the Interim Template for Word).
  • ACM’s CCS concepts and keywords are not required for review but may be required if accepted and published by the ACM.

For LaTeX, the following should be used:

\documentclass[sigconf,natbib=true,anonymous=true]{acmart}
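
For convenience, a minimal skeleton compatible with these class options might look as follows. This is a sketch only: the title, author fields, and the references.bib bibliography file are placeholders, and CCS concepts/keywords are omitted per the note above.

\documentclass[sigconf,natbib=true,anonymous=true]{acmart}

\begin{document}

\title{Placeholder Title of the Resource or Reproducibility Paper}

% Author details stay in the source; the anonymous=true option suppresses
% them in the compiled PDF, in line with the anonymity policy above.
\author{Jane Doe}
\affiliation{%
  \institution{Example University}
  \city{City}
  \country{Country}}
\email{jane.doe@example.org}

% In acmart, the abstract must appear before \maketitle.
\begin{abstract}
One-paragraph summary of the resource or reproducibility study.
\end{abstract}

\maketitle

\section{Introduction}
Body text goes here, within the 9-page limit (references excluded).

% ACM-Reference-Format is the bibliography style shipped with acmart;
% references.bib is a placeholder for your own bibliography file.
\bibliographystyle{ACM-Reference-Format}
\bibliography{references}

\end{document}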

Submissions should be made electronically via EasyChair to the Resource & Reproducibility track:
https://easychair.org/conferences/?conf=sigir24.

At least one author of each accepted paper is required to register for, and present the work at, the conference.


Resource & Reproducibility Chairs

  • Bhaskar Mitra, Microsoft Research
  • Bevan Koopman, University of Queensland
  • Krisztian Balog, University of Stavanger and Google Research

Contact

For any questions, you may contact the Chairs by email to sigir24-ResourceRepro@easychair.org.