Scholarly articles convey valuable information not only through unstructured text but also via (semi-)structured figures such as charts and diagrams. Automatically interpreting the knowledge encoded in these figures can benefit downstream tasks such as question answering (QA).
In the SciVQA challenge, participants will develop multimodal QA systems using a dataset of scientific figures drawn from ACL Anthology and arXiv papers. Each figure is annotated with seven QA pairs and with metadata: its caption, ID, figure type (e.g., compound, line graph, bar chart, scatter plot), QA pair type, and the publication title, DOI, and URL. The shared task focuses on closed-ended questions of two kinds: visual questions, which address visual attributes of a figure such as colour, shape, size, or height, and non-visual questions, which do not address visual attributes. Systems will be evaluated with metrics such as BLEU, METEOR, and ROUGE. Automated evaluation of submitted systems will be run on the Codabench platform (link will be provided soon).
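To make the annotation structure concrete, the sketch below shows what a single annotated figure might look like. The field names and values are purely illustrative assumptions based on the metadata listed above, not the official dataset schema.

```python
# A hypothetical SciVQA instance; all field names are illustrative
# and may differ from the released dataset.
instance = {
    "figure_id": "fig_00123",
    "image_file": "fig_00123.png",
    "caption": "Figure 3: F1 scores per model size.",
    "figure_type": "bar chart",        # e.g., compound, line graph, scatter plot
    "paper_title": "An Example ACL Paper",
    "doi": "10.18653/v1/example",
    "url": "https://aclanthology.org/example",
    "qa_pairs": [                      # seven QA pairs per figure
        {
            "question": "Which model has the tallest bar?",
            "answer": "The 7B model.",
            "qa_pair_type": "closed-ended visual",      # addresses colour, shape, size, height
        },
        {
            "question": "Which metric is reported in the figure?",
            "answer": "F1 score.",
            "qa_pair_type": "closed-ended non-visual",  # does not address visual attributes
        },
        # ... five more QA pairs
    ],
}
```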
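As a rough guide to the evaluation metrics, the following minimal sketch computes BLEU, METEOR, and ROUGE with the Hugging Face `evaluate` library (`pip install evaluate nltk rouge_score`). It is an assumption for illustration only and is not the official challenge scorer run on Codabench.

```python
# Minimal sketch: score predicted answers against gold answers with
# BLEU, METEOR, and ROUGE-L. Toy strings, not real dataset content.
import evaluate

predictions = ["The 7B model has the tallest bar."]
references = ["The 7B model."]

bleu = evaluate.load("bleu")
meteor = evaluate.load("meteor")
rouge = evaluate.load("rouge")

print("BLEU:", bleu.compute(predictions=predictions, references=references)["bleu"])
print("METEOR:", meteor.compute(predictions=predictions, references=references)["meteor"])
print("ROUGE-L:", rouge.compute(predictions=predictions, references=references)["rougeL"])
```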