Interpreting scientific claims in the context of empirical findings is a valuable practice, yet extremely time-consuming for researchers. It requires identifying the key results (from figures or tables) in a research paper that provide supporting evidence for a claim, and contextualizing those results with associated methodological details (e.g., measures and sample). In this shared task, we are interested in automating the identification of key results (or evidence) as well as the additional grounding context, in order to make claim interpretation more efficient.
Context25 will have two tracks:
Track 1: Given a scientific claim and the PDF of a relevant research paper, identify the key figures or tables in the paper that provide supporting evidence for the claim. Performance on this track will be assessed with standard retrieval metrics such as nDCG.
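As a concrete illustration of the retrieval evaluation, here is a minimal nDCG@k sketch in Python. The figure/table identifiers, relevance labels, and cutoff k are hypothetical, and this is not the official evaluation script.

```python
import math

def dcg(relevances, k):
    """Discounted cumulative gain over the top-k ranked items."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances[:k]))

def ndcg(ranked_ids, relevance_by_id, k=5):
    """nDCG@k for one claim: ranked_ids is the system's ranking of
    figure/table identifiers; relevance_by_id maps each identifier to a
    graded (or binary) relevance judgment from the gold annotations."""
    gains = [relevance_by_id.get(item, 0) for item in ranked_ids]
    ideal = sorted(relevance_by_id.values(), reverse=True)
    ideal_dcg = dcg(ideal, k)
    return dcg(gains, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Hypothetical example: the system ranked Figure 2 above Table 1 and Table 3,
# and the gold annotations mark Figure 2 and Table 3 as supporting evidence.
ranking = ["FIG 2", "TAB 1", "TAB 3"]
gold = {"FIG 2": 1, "TAB 3": 1}
print(ndcg(ranking, gold, k=3))
```

In practice, per-claim scores like this would typically be averaged over all claims in the test set.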
Track 2: Given a scientific claim and a relevant research paper, identify all grounding context in the paper that discusses methodological details of the experiment behind the claim. This grounding context is typically dispersed throughout the full text, often far from where the supporting evidence is presented. Performance on this track will be assessed with automated summarization evaluation metrics such as ROUGE and BERTScore; a subset of the best-performing models will also be evaluated manually by trained expert annotators.
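For reference, the sketch below shows one way ROUGE and BERTScore could be computed for a single claim using the open-source rouge-score and bert-score packages; the example texts are invented, and the organizers' actual evaluation scripts, metric variants, and aggregation may differ.

```python
# Requires: pip install rouge-score bert-score
from rouge_score import rouge_scorer
from bert_score import score as bert_score

# Hypothetical predicted and gold grounding context for one claim.
prediction = "Participants completed a 20-item survey measuring task load."
reference = "A 20-item NASA-TLX survey was administered to all participants."

# ROUGE-1 and ROUGE-L F1 between the predicted and gold context.
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
rouge_scores = scorer.score(reference, prediction)
print({name: round(s.fmeasure, 3) for name, s in rouge_scores.items()})

# BERTScore F1 (semantic similarity based on contextual embeddings).
precision, recall, f1 = bert_score([prediction], [reference], lang="en")
print("BERTScore F1:", round(f1.item(), 3))
```

ROUGE rewards lexical overlap while BERTScore rewards semantic similarity, so the two metrics can complement each other when the grounding context is paraphrased rather than quoted.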
Organizers:
Joel Chan (University of Maryland)
Matthew Akamatsu (University of Washington)
Aakanksha Naik (Allen Institute for AI)