National University of Singapore, Singapore
Min-Yen Kan (BS; MS; PhD, Columbia Univ.; SACM, SIEEE) is an associate professor at the National University of Singapore. He serves the School of Computing as an Assistant Dean of Undergraduate Studies. Min is an active member of the Association for Computational Linguistics (ACL): he currently serves as a co-chair of the ACL Ethics Committee, and was previously the ACL Anthology Director for a decade, from 2008 to 2018. He is an associate editor for Information Retrieval, a Springer journal. His research interests include digital libraries and applied natural language processing and information retrieval. He was recognized as an ACM Distinguished Speaker for his research in natural language processing and digital libraries. Specific projects include work in the areas of scientific discourse analysis, full-text literature mining, machine translation, lexical semantics and applied text summarization.
Artificial Intelligence is poised to impact many fields, but how will the rise of AI impact the way that we do science and scholarly work? Thomas Kuhn, in his philosophical analyses of science, coined the term "paradigm shift" to describe the progress in scientific theory that results when the normal science of an existing paradigm collides with replicable observations that its theory cannot account for. With scientists in AI still expecting key discoveries to be made, should we expect a new paradigm to overturn current normal science in AI and other fields? Will the age of accelerations, as defined by Thomas Friedman, hold sway over how real-world contexts are either accounted for or discarded by research practitioners and scholars alike?
I relate my perspective on how normal science and paradigm-shifting science connect to the notion of research, fast and slow, and how scholarly document processing can shape both the mean and the variance of scientific discovery. I give an opinionated view of the importance of scholarly document processing as a meta-research agenda that can either aid thoughtful slow research, or be leveraged to further exacerbate the acceleration of normal science.
University of Manchester, UK
Sophia Ananiadou is a Professor of Computer Science at The University of Manchester. Her main research areas are Natural Language Processing and Biomedical Text Mining. She is the Director of the UK National Centre for Text Mining (2005–), an Alan Turing Institute Fellow (2018–), and a Distinguished Research Fellow at the Artificial Intelligence Research Centre, National Institute of Advanced Industrial Science and Technology, Japan (2017–). Her research interests include information extraction, uncertainty and modality recognition, emotion detection, text summarisation and text simplification.
Biomedical text summarization techniques support users in accessing information efficiently, by retaining only the most important semantic information contained within documents. Text summarization is important in a variety of scenarios, including systematic reviews (synthesis), evidence-based medicine, and clinical decision support. I will discuss current trends in biomedical text summarization, the use of pre-trained language models (PLMs), benchmarks, evaluation measures, and the challenges faced by both extractive and abstractive methods. In particular, I will examine how to extract salient sentences by exploiting both local and global contexts, and explore how the integration of fine-grained medical knowledge into PLMs can improve extractive summarization.
University of Pennsylvania, USA
Andrew Head is an assistant professor at the University of Pennsylvania in the Department of Computer and Information Science. He co-leads Penn HCI, Penn's research group in human-computer interaction. Andrew's research focuses on how interactive systems can support the tasks of reading and writing. His most recent work has focused on how scientific documents can be enhanced with interactivity to help people understand them. Often, his systems incorporate novel backends for document authoring, processing, and understanding. Andrew partners with the Allen Institute for AI in conducting this line of research. His research frequently appears in the top ACM and IEEE-sponsored venues for HCI research, and has been repeatedly recognized with best paper awards and nominations.
In this talk, I share a vision of interactive research papers, where user interfaces surface information for readers when and where they need it. Grounded in tools that I and my collaborators have developed, I discuss what it takes to design reading interfaces that (1) surface definitions of terms where readers need them, (2) explain the meaning of math notation, and (3) convey the meaning of jargon-dense passages in simpler terms. In our research, we have found that effective reading support requires not only sufficient document processing techniques, but also the careful presentation of derived information atop visually complex documents. I discuss tensions and solutions in designing interactive papers, and identify future research directions that can bring about powerful augmented reading experiences.