You are invited to participate in the shared task "DAGPap24: Detecting automatically generated scientific papers", co-located with the 4th Workshop on Scholarly Document Processing (SDP 2024) to be held at ACL 2024. The competition will be hosted on CodaBench, launching on April 2nd, 2024. Participants are also invited to submit papers describing their findings.
Papers must follow the ACL format and conform to the ACL 2024 Submission Guidelines. Papers must be submitted through OpenReview.net.
We offer the following monetary prizes for the winners of the shared task:
The winners will be determined by the results of the corresponding CodaBench competition (launching on April 2nd, 2024). Prizes are additionally conditional on the reproducibility of the solution and on a technical report being submitted to the workshop.
A major problem with the ubiquity of generative AI is that it has become very easy to produce fake scientific papers. This can erode public trust in science and undermine its very foundations: are we standing on the shoulders of robots? The Detecting Automatically Generated Papers (DAGPap) competition aims to encourage the development of robust, reliable AI-generated scientific text detection systems, using a diverse dataset that spans a number of scientific domains and a variety of generative models.
Building on a similar competition held in 2022, we now turn to full texts and a more fine-grained detection of LLM-generated scientific content, where artificial passages may be interspersed with human-written text.
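To make the fine-grained setting concrete, here is a minimal illustrative sketch of a chunk-level classifier in scikit-learn. Everything in it, the chunking granularity, the toy data, and the character n-gram model, is an assumption made purely for illustration; the official data format, labels, and evaluation are defined by the CodaBench competition.

```python
# Minimal illustrative sketch of chunk-level ("fine-grained") detection.
# ASSUMPTIONS: paragraph-sized chunks, each labeled 0 (human) or 1 (machine);
# this is NOT the official task format, baseline, or evaluation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy training data: (chunk_text, label) pairs.
train_chunks = [
    ("We collected 120 soil samples from three field sites.", 0),
    ("The results conclusively demonstrate the efficacy of the paradigm.", 1),
    ("Statistical significance was assessed with a two-sided t-test.", 0),
    ("This innovative framework seamlessly revolutionizes the domain.", 1),
]
texts, labels = zip(*train_chunks)

# Character n-grams are a simple stylometric signal; real systems would
# likely use fine-tuned transformers doing token- or span-level tagging.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Label each chunk of an unseen full text, so that machine-generated
# passages interspersed with human-written ones can be flagged individually.
doc_chunks = [
    "Participants were recruited via university mailing lists.",
    "Our groundbreaking approach leverages synergistic insights.",
]
print(list(zip(doc_chunks, model.predict(doc_chunks))))
```

The sketch is meant only to convey the per-chunk framing of the task, not to suggest a competitive approach.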
Participants are invited to submit papers describing their findings from the competition.
We are not only interested in technical reports describing competitors' approaches to the shared task, but also invite submissions on related topics:
For an even broader range of topics, please check out the CFP for the Scholarly Document Processing workshop (SDP 2024).
Authors are invited to submit long and short papers presenting unpublished, original work. Submissions will undergo a double-blind peer-review process. Accepted papers will be presented by the authors at the workshop, either as a talk or a poster, and will be published in the workshop proceedings (proceedings from previous years can be found here).
Submissions must be in PDF format and anonymized for review. All submissions must be written in English and follow the ACL 2024 formatting requirements:
Long paper submissions: up to 8 pages of content, plus unlimited references. Short paper submissions: up to 4 pages of content, plus unlimited references.
Paper submission website: papers must be submitted through OpenReview.net.
Final versions of accepted papers will be allowed one additional page of content so that reviewers' comments can be taken into account.
For any other questions about the competition, paper submission, the workshop, etc., please contact dagpap2024@googlegroups.com.
Savvas Chamezopoulos, Elsevier
Yury Kashnitsky, Elsevier
Anita de Waard, Elsevier
Domenic Rosati, scite.ai
Drahomira Herrmannova, Elsevier
You are invited to participate in the shared task “Context24: Evidence and Grounding Context Identification for Scientific Claims”, co-located with the 4th Workshop on Scholarly Document Processing (SDP 2024) to be held at ACL 2024. Participants in the competition are also invited to submit papers describing their findings.
All papers must follow the ACL format and conform to the ACL 2024 Submission Guidelines. Papers must be submitted through OpenReview.net.
Interpreting scientific claims in the context of empirical findings is a valuable practice, yet one that is extremely time-consuming for researchers. Such interpretation requires identifying the key results in research papers that provide supporting evidence for a claim, and contextualizing those results with the associated methodological details (e.g., measures, sample). In this shared task, we are interested in automating the identification of key results (or evidence) as well as of the additional grounding context, to make claim interpretation more efficient.
Context24 will have two tracks:
Test data (without labels) is available in our shared task repository: https://github.com/oasisresearchlab/context24. Submission instructions:
Joel Chan (University of Maryland)