Shared Tasks

Quick links

  • DAGPap24: Detecting automatically generated scientific papers
  • Context24: Contextualizing Scientific Figures and Tables

DAGPap24: Detecting automatically generated scientific papers

You are invited to participate in the shared task "DAGPap24: Detecting automatically generated scientific papers", co-located with the 4th Workshop on Scholarly Document Processing (SDP 2024) to be held at ACL 2024. The competition will be held on CodaBench, launching on April 2nd, 2024. Participants are also invited to submit papers describing their findings.

Papers must follow the ACL format and conform to the ACL 2024 Submission Guidelines. Papers must be submitted through OpenReview.net:

  • Leaderboard (CodaBench)
  • Website
  • Submission site (OpenReview.net)
  • Competition dates: April 2nd — April 30th, 2024
  • Paper submission deadline: May 31st, 2024

Monetary prizes

We offer the following monetary prizes for the winners of the shared task:

  • 1st place — $3000
  • 2nd place — $1200
  • 3rd place — $800

The winners will be determined based on the corresponding CodaBench competition (launching on April 2nd, 2024). To qualify for a prize, a solution must be reproducible and accompanied by a technical report submitted to the workshop.

Call for Research Papers

A major problem with the ubiquity of generative AI is that it has become very easy to generate fake scientific papers. This can erode public trust in science and undermine its foundations: are we standing on the shoulders of robots? The Detecting Automatically Generated Papers (DAGPap) competition aims to encourage the development of robust, reliable AI-generated scientific text detection systems, using a diverse dataset and a variety of machine learning models across a number of scientific domains.

Building on a similar competition held in 2022, we are now looking at full texts and a more fine-grained detection of LLM-generated scientific content, where artificial content may be interspersed with human-written text.
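To make the fine-grained setting concrete, here is a minimal sketch of chunk-level detection, assuming a plain-text document, a naive sentence splitter, and a binary human/machine label per chunk. The toy training data below is invented for illustration; the real data format and label scheme are defined by the CodaBench competition.

```python
# Minimal chunk-level detection sketch (not the official baseline):
# split a full text into chunks and classify each chunk independently.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical chunk-level training data: 0 = human-written, 1 = machine-generated.
train_texts = [
    "We collected 412 samples over two field seasons.",
    "The results results demonstrate the demonstrated outcomes.",
    "Table 2 reports the ablation over encoder depth.",
    "In conclusion, the conclusion concludes the concluding remarks.",
]
train_labels = [0, 1, 0, 1]

# Character n-grams are a common stylometric signal in detector baselines.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(train_texts, train_labels)

def label_chunks(document: str) -> list[tuple[str, int]]:
    """Split a full text into naive sentence chunks and label each one."""
    chunks = [s.strip() for s in document.split(".") if s.strip()]
    return list(zip(chunks, clf.predict(chunks)))

doc = ("We collected 412 samples over two field seasons. "
       "The results results demonstrate the demonstrated outcomes.")
for chunk, label in label_chunks(doc):
    print(label, chunk)
```

A competition system would replace the toy data and splitter with the official training set and a stronger sequence model, but the input/output shape stays the same: per-chunk labels over a full text.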

Participants are invited to submit papers describing their findings during the competition.

Topics of Interest

We are interested not only in technical reports describing competitors' approaches to the shared task but also in submissions on related topics:

  • Detection of LLM-generated texts
  • Robustness of LLM-detectors to data drift
  • Specifics of LLM-detectors in the scientific domain
  • Explainability of LLM detection

For an even broader range of topics, please check out the CFP for the Scholarly Document Processing workshop SDP 2024.

Submission Information

Authors are invited to submit long and short papers presenting unpublished, original work. Submissions will undergo a double-blind peer-review process. Accepted papers will be presented by their authors at the workshop as either a talk or a poster. All accepted papers will be published in the workshop proceedings (proceedings from previous years can be found here).

Submissions must be in PDF format and anonymized for review. All submissions must be written in English and follow the ACL 2024 formatting requirements:

  • Long paper submissions: up to 8 pages of content, plus unlimited references.
  • Short paper submissions: up to 4 pages of content, plus unlimited references.

Paper submission website: papers must be submitted through OpenReview.net.

Final versions of accepted papers will be allowed one additional page of content so that reviewer comments can be taken into account.

Important Dates (Shared Task)

  • Competition launch: April 2nd (Tuesday), 2024
  • Development phase: April 2nd - April 28th, 2024
  • Final phase: April 29th - April 30th, 2024
  • Competition end: April 30th (Tuesday), 2024
  • Solutions sharing deadline: May 7th (Tuesday), 2024
  • Winners announcement: May 14th (Tuesday), 2024
  • Paper submission deadline: May 31st (Friday), 2024
  • Notification of acceptance: June 28th, 2024
  • Camera-ready paper due: July 5th (Friday), 2024
  • Workshop dates: August 15th – August 16th, 2024

Contact

For any other questions about the competition, paper submission, the workshop, etc., please contact dagpap2024@googlegroups.com.

Organizers

Savvas Chamezopoulos, Elsevier

Yury Kashnitsky, Elsevier

Anita de Waard, Elsevier

Domenic Rosati, scite.ai

Drahomira Herrmannova, Elsevier


Context24: Contextualizing Scientific Figures and Tables

You are invited to participate in the shared task “Context24: Evidence and Grounding Context Identification for Scientific Claims”, co-located with the 4th Workshop on Scholarly Document Processing (SDP 2024) to be held at ACL 2024. Participants are also invited to submit papers describing their findings.

All papers must follow the ACL format and conform to the ACL 2024 Submission Guidelines. Papers must be submitted through OpenReview.net.

Interpreting scientific claims in the context of empirical findings is a valuable practice, yet extremely time-consuming for researchers. It requires identifying key results in research papers that provide supporting evidence for a claim, and contextualizing those results with associated methodological details (e.g., measures, sample). In this shared task, we are interested in automating the identification of key results (or evidence) as well as additional grounding context to make claim interpretation more efficient.

Context24 will have two tracks:

Track 1: Evidence Identification

Given a scientific claim and a relevant research paper, identify key figures or tables from the paper that provide supporting evidence for the claim.
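As a rough illustration (not an official baseline), Track 1 can be framed as ranking a paper's figure and table captions by semantic similarity to the claim. The sketch below assumes captions have already been extracted; the identifiers, caption strings, and the sentence-transformers model choice are illustrative assumptions.

```python
# Minimal Track 1 retrieval sketch: rank figure/table captions by
# embedding similarity to the claim and propose the top hits as evidence.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

claim = "Higher encoder depth improves accuracy on the benchmark."
# Hypothetical captions; the real task provides figure/table
# identifiers in its own format.
captions = {
    "FIG 1": "Figure 1: Training loss curves for all model variants.",
    "TAB 2": "Table 2: Accuracy as a function of encoder depth.",
    "FIG 3": "Figure 3: Qualitative examples of model errors.",
}

claim_emb = model.encode(claim, convert_to_tensor=True)
cap_embs = model.encode(list(captions.values()), convert_to_tensor=True)
scores = util.cos_sim(claim_emb, cap_embs)[0]

# Highest-scoring captions are proposed as supporting evidence.
for key, score in sorted(zip(captions, scores.tolist()), key=lambda x: -x[1]):
    print(f"{key}\t{score:.3f}")
```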

Track 2: Grounding Context Identification

Given a scientific claim and a relevant research paper, identify all grounding context from the paper discussing methodological details of the experiment that produced the claim. This grounding context is typically dispersed throughout the full text, often far from where the supporting evidence is presented, and can include figures, tables, or text snippets.
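Because grounding context is dispersed across the full text, one simple (unofficial) starting point for Track 2 is to chunk the paper into overlapping sentence windows and rank them against the claim. This is a minimal sketch assuming a plain-text paper and TF-IDF scoring; the example claim, text, and window parameters are hypothetical.

```python
# Minimal Track 2 retrieval sketch: score overlapping sentence windows
# of the full text against the claim and return the top snippets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

claim = "Participants completed a 30-item survey after each session."

# Hypothetical full text; the real task supplies parsed papers.
full_text = (
    "We recruited 48 participants through a university mailing list. "
    "Each session lasted one hour. After each session, participants "
    "completed a 30-item survey adapted from prior work. Statistical "
    "analysis used a mixed-effects model with participant as a random effect."
)

def windows(text: str, size: int = 2, stride: int = 1) -> list[str]:
    """Overlapping windows of `size` sentences, advancing by `stride`."""
    sents = [s.strip() for s in text.split(".") if s.strip()]
    return [". ".join(sents[i:i + size]) for i in range(0, len(sents), stride)]

candidates = windows(full_text)
vec = TfidfVectorizer().fit([claim] + candidates)
sims = cosine_similarity(vec.transform([claim]), vec.transform(candidates))[0]

# Highest-scoring windows are proposed as grounding-context snippets.
for snippet, score in sorted(zip(candidates, sims), key=lambda x: -x[1])[:3]:
    print(f"{score:.3f}\t{snippet}")
```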

The testing phase has now started!

Test data (without labels) is available in our shared task repository: https://github.com/oasisresearchlab/context24. Submission instructions:

  • Submissions will be accepted on our EvalAI portal: https://eval.ai/web/challenges/challenge-page/2306/overview.
  • For each task, you can make up to 5 submissions per day.
  • You will be limited to 50 submissions per task over the duration of the competition.
  • Submissions will be accepted until 5 pm PST on June 6.
  • If you run into issues with submission, please reach out to aakankshan@allenai.org or raise an issue in the shared task repository.

Important Links

  • Dataset
  • Google Group
  • Submission site (OpenReview.net)
  • Paper submission deadline: June 17th, 2024

Important Dates (Shared Task)

  • Training set release: April 11th (Thursday), 2024
  • Test set release: May 24th (Friday), 2024
  • Result announcement: May 31st (Friday), 2024
  • Paper submission deadline: June 17th (Monday), 2024
  • Notification of acceptance: June 28th (Friday), 2024
  • Camera-ready paper due: July 8th (Monday), 2024
  • Workshop dates: August 15th–16th, 2024

Organizers

Joel Chan (University of Maryland)

Matthew Akamatsu (University of Washington)

Aakanksha Naik (Allen Institute for AI)



Contact: sdproc2024@googlegroups.com

Sign up for updates: https://groups.google.com/g/sdproc-updates

Follow us: https://twitter.com/SDPWorkshop
