Context25: Evidence and Grounding Context Identification for Scientific Claims

Interpreting scientific claims in the context of empirical findings is a valuable practice, yet extremely time-consuming for researchers. Interpreting a claim requires identifying the key results (from figures or tables) in a research paper that provide supporting evidence, and contextualizing those results with associated methodological details (e.g., measures, sample). In this shared task, we aim to automate the identification of both key results (evidence) and additional grounding context to make claim interpretation more efficient.

Context25 will have two tracks:

  • Track 1: Evidence Identification from PDF pages
  • Track 2: Grounding Context Identification

Track 1: Evidence Identification from PDF pages

Given a scientific claim and a list of PDF page images from a relevant research paper, identify the key figures or tables on those pages that provide supporting evidence for the claim. Performance on this task will be assessed with standard retrieval metrics such as nDCG.
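For concreteness, here is a minimal Python sketch of how nDCG scores a ranked list of figure/table candidates. The relevance labels and ranking below are hypothetical placeholders; the official scoring is run through the Eval.ai submission system mentioned in the timeline.

    # Minimal sketch of nDCG@k for a ranked list of candidates.
    import math

    def dcg(relevances):
        """Discounted cumulative gain for relevance labels in ranked order."""
        return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

    def ndcg_at_k(ranked_relevances, k):
        """nDCG@k: DCG of the system ranking divided by DCG of the ideal ranking."""
        ideal_dcg = dcg(sorted(ranked_relevances, reverse=True)[:k])
        return dcg(ranked_relevances[:k]) / ideal_dcg if ideal_dcg > 0 else 0.0

    # Hypothetical example: 1 = supporting figure/table, 0 = not relevant,
    # listed in the order the system ranked the candidates.
    system_ranking = [0, 1, 1, 0, 0]
    print(f"nDCG@5 = {ndcg_at_k(system_ranking, 5):.3f}")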

Track 2: Grounding Context Identification

Given a scientific claim and the full text of a relevant research paper, identify all grounding context from the paper discussing methodological details of the experiment that produced the claim. This grounding context is typically dispersed throughout the full text, often far from where the supporting evidence is presented. Performance on this task will be assessed with automated summarization evaluation metrics such as ROUGE and BERTScore; a subset of the best-performing systems will also be assessed manually by trained expert annotators during the testing phase.
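As a rough illustration of these automated metrics, the sketch below uses the rouge-score and bert-score Python packages (pip install rouge-score bert-score). The prediction and reference strings are hypothetical placeholders, not shared task data; the official evaluation is run through Eval.ai.

    # Illustrative use of the ROUGE and BERTScore metrics named above.
    from rouge_score import rouge_scorer
    from bert_score import score as bert_score

    # Hypothetical system output and gold grounding context.
    prediction = "Participants were 40 undergraduates who completed a recall task."
    reference = "The sample consisted of 40 undergraduate students performing a recall task."

    # ROUGE: n-gram (rouge1) and longest-common-subsequence (rougeL) overlap.
    scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
    rouge = scorer.score(reference, prediction)
    print(rouge["rouge1"].fmeasure, rouge["rougeL"].fmeasure)

    # BERTScore: token-level similarity computed from contextual embeddings.
    P, R, F1 = bert_score([prediction], [reference], lang="en")
    print(F1.mean().item())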

Important Links

  • Dataset: https://github.com/aakanksha19/context25
  • Google Group

Shared Task Timeline

  • Training set release: April 21st (Monday), 2025
  • Test set release: May 14th (Wednesday), 2025
  • Eval.ai submission closes: May 17th (Saturday), 2025
  • Result announcement: May 27th (Tuesday), 2025
  • Report submission deadline: June 3rd (Tuesday), 2025
  • Notification of acceptance: June 10th (Tuesday), 2025
  • Camera-ready paper due: June 21st (Saturday), 2025
  • Workshop dates: July 31st-August 1st, 2025

Organizers

Joel Chan (University of Maryland)

Matthew Akamatsu (University of Washington)

Aakanksha Naik (Allen Institute for AI)

Contact: sdproc2025@googlegroups.com

Sign up for updates: https://groups.google.com/g/sdproc-updates

Follow us: https://twitter.com/SDPWorkshop
