Scholarly articles convey valuable information not only through unstructured text but also via (semi-)structured figures such as charts and diagrams. Automatically interpreting the knowledge encoded in these figures can benefit downstream tasks such as question answering (QA).
In the SciVQA challenge, participants will develop multimodal QA systems using a dataset of scientific figures from ACL Anthology and arXiv papers. Each figure is annotated with seven QA pairs and includes metadata such as the caption, figure ID, figure type (e.g., compound, line graph, bar chart, scatter plot), and QA pair type. The shared task focuses specifically on closed-ended questions, both visual (addressing visual attributes of a figure, such as colour, shape, size, or height) and non-visual (not addressing such attributes).
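For illustration, a single annotated figure might look as follows. The field names and values in this sketch are hypothetical and serve only to convey the annotation scheme; they are not the dataset's actual schema.

```python
# Hypothetical sketch of one annotated figure record; all field names and
# values are illustrative assumptions, not the dataset's actual schema.
example_record = {
    "figure_id": "fig_0001",
    "figure_type": "bar chart",  # e.g., compound, line graph, bar chart, scatter plot
    "caption": "Accuracy of baseline systems across domains.",
    "qa_pairs": [
        {
            "question": "Which bar is the tallest?",
            "answer": "The bar for the news domain.",
            "qa_pair_type": "closed-ended visual",  # addresses visual attributes
        },
        # ... six further QA pairs per figure
    ],
}
```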
The shared task is available on Codabench: https://www.codabench.org/competitions/5904/#/pages-tab.
All updates and details on the competition will be published there.
The dataset is available for download on Hugging Face: https://huggingface.co/datasets/katebor/SciVQA.
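For example, the dataset can be loaded with the Hugging Face `datasets` library; the split name used in this sketch is an assumption and may differ from the actual release:

```python
# Minimal sketch of loading the SciVQA dataset from the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset("katebor/SciVQA")  # downloads the dataset from the Hub
print(ds)            # inspect available splits and columns
print(ds["train"][0])  # view one annotated example (assumes a "train" split exists)
```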
Systems will be evaluated using the BERTScore, ROUGE-1, and ROUGE-L metrics. Automated evaluation of submitted systems will be performed on the Codabench platform.
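For reference, the following is a minimal sketch of how these metrics can be computed locally with the Hugging Face `evaluate` library; the official Codabench scorer may use a different configuration, and the prediction/reference strings below are purely illustrative:

```python
# Minimal sketch of computing ROUGE-1, ROUGE-L, and BERTScore with `evaluate`.
import evaluate

predictions = ["The tallest bar corresponds to the news domain."]  # illustrative
references = ["The news domain has the tallest bar."]              # illustrative

rouge = evaluate.load("rouge")
rouge_scores = rouge.compute(predictions=predictions, references=references)
print("ROUGE-1:", rouge_scores["rouge1"], "ROUGE-L:", rouge_scores["rougeL"])

bertscore = evaluate.load("bertscore")
bs = bertscore.compute(predictions=predictions, references=references, lang="en")
print("BERTScore F1:", sum(bs["f1"]) / len(bs["f1"]))  # average over examples
```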
This work has received funding through the DFG project NFDI4DS (no. 460234259).