Artifact Evaluation

This year, we have formed an artifact review committee to collect, evaluate, and display artifacts related to accepted papers, and we encourage you to submit your artifacts for review. The goal of this process is to give authors a way to share work beyond the contents of the paper itself that aids the reproducibility of results and allows other researchers and community members to build on the work reflected in the paper.

Possible artifacts include (but are not limited to):

  • source code (e.g., system implementations, proofs of concept)
  • datasets (e.g., network traces, raw study data)
  • scripts for data processing or simulations
  • machine-generated proofs
  • formal specifications
  • build environments (e.g., VMs, Docker containers, configuration scripts)

Submitting an artifact is encouraged but optional. The artifact review committee will evaluate each submission and provide feedback on possible bugs in the build environment, the readability of the documentation, and appropriate licensing. Once the committee has approved your artifact, we will add a link to the artifact, along with an artifact badge, next to the paper link on petsymposium.org so that interested readers can find and use your hard work.