Schedule: July 15, 2022 (Pacific Daylight Time)

Time | Title | Speakers/Authors
9:00-9:15am Opening Remarks
9:15-10:00am Keynote talk 1: Scaling out GNN Applications for NLP

Abstract: This talk begins with the motivation that drove me and my colleagues to invest in Graph Neural Networks: AWS Shanghai AI Lab owns the development of the open-source project Deep Graph Library (DGL), and we have a rich but focused research profile around GNNs, spanning fundamental questions as well as core AI application areas such as natural language processing and computer vision. I will then dive deeper into topics relevant to this workshop. I will share our early investigation revealing that the Transformer is a special form of GNN, our modest attempt to bridge the gap between texts and graphs at the sentence level, and our most recent effort in project GRED (Graph of Relation, Events and Discourse), which aims to extract and exploit, in a unified way, the various structures embedded in long document collections. I will conclude with some key challenges and a call for the community's attention and effort.

10:00-10:45am Keynote talk 2: Towards Automatic Construction of Knowledge Graphs from Unstructured Text [slides]

Abstract: Graphs and texts are both ubiquitous in today’s information world. However, it remains an open problem how to automatically construct knowledge graphs from massive, dynamic, and unstructured text, without human annotation or supervision. In the past years, our group has been studying how to develop effective methods for automatically mining hidden structures and knowledge from text; such hidden structures include entities, relations, events, and knowledge graph structures. Equipped with pretrained language models and machine learning methods, as well as human-provided ontological structures, it is promising to transform unstructured text data into structured knowledge. In this talk, we will provide an overview of a set of weakly supervised machine learning methods developed recently for this exploration, including joint spherical text embedding, discriminative topic mining, named entity recognition, relation extraction, event discovery, text classification, and taxonomy-guided text analysis. We show that weakly supervised approaches are promising for transforming massive text data into structured knowledge graphs.

10:45-11:00am Coffee Break/Social Networking
11:00-11:45am Panel Topic: Unifying GNNs and Pretraining

Panelists: Jian-Yun Nie (University of Montreal), Lili Mou (University of Alberta), Chenguang Zhu (Microsoft), Yu Su (Ohio State University), Xiaojie Guo (JD.COM Silicon Valley Research Center)
11:45-12:45pm

Paper Presentation Session A

(11:45-12:00pm)
Diversifying Content Generation for Commonsense Reasoning with Mixture of Knowledge Graph Experts. [paper]
Wenhao Yu, Chenguang Zhu, Lianhui Qin, Zhihan Zhang, Tong Zhao and Meng Jiang

(12:00-12:15pm)
Continuous Temporal Graph Networks for Event-Based Graph Data. [paper]
Jin Guo, Zhen Han, Su Zhou, Jiliang Li, Volker Tresp and Yuyi Wang

(12:15-12:30pm)
Scene Graph Parsing via Abstract Meaning Representation in Pre-trained Language Models. [paper]
Woo Suk Choi, Yu-Jung Heo, Dharani Punithan and Byoung-Tak Zhang

(12:30-12:45pm)
Explicit Graph Reasoning Fusing Knowledge and Contextual Information for Multi-hop Question Answering. [paper]
Zhenyun Deng, Yonghua Zhu, Qianqian Qi, Michael Witbrock and Patricia J. Riddle

14:00-14:45 Keynote talk 3: Enhancing Language Generation with Knowledge Graphs [slides]

Abstract: Natural language generation learns p(Y|X) to create a desired output Y from a given input X, and it has many real-world applications. In many cases, however, Y is too difficult to generate from X alone. Knowledge graphs can help, with tons of relational information uncovered by billions of humans over thousands of years. In this talk, we will provide an overview of approaches that use knowledge graphs to improve the precision of neural machine translation, the factual correctness of abstractive summarization, the content diversity in commonsense reasoning, the accuracy in question answering, etc. We show that knowledge graph-enhanced language generation methods could be promising for many other types of important applications.

14:45-15:30 Keynote talk 4: Will Graphs Lead to the Next Breakthrough of Conversational AI? [slides]

Abstract: Teaching machines to understand natural language and converse with humans requires expressive and flexible meaning representations, which makes graphs an appealing tool for conversational AI. However, only recently have we started to see explorations of the interplay between graphs and conversational AI. In this talk, I will discuss two recent lines of work in this space. I will first discuss a new formalism for task-oriented dialogues based on dataflow graphs, and how it makes it possible to represent fine-grained semantics in task-oriented dialogues and support rich multi-turn interactions. Towards more universal conversational interfaces that support a broad range of domains, I will then discuss recent efforts in developing question answering systems on large-scale knowledge graphs with millions of entities and billions of facts, how the broad coverage of knowledge graphs reveals new challenges such as non-i.i.d. generalization and large search spaces, and our attempts to tackle those challenges. The talk will conclude with a discussion of promising future directions.

15:30-15:45 Coffee Break/Social Networking
15:45-16:15

Two Position Talks

(15:45-16:00)
P1: Bootstrapping a User-Centered Task-Oriented Dialogue System [slides]
Abstract: In this talk, we will discuss OSU TacoBot, a task-oriented dialogue system that won third place in the inaugural Alexa Prize TaskBot Challenge. TacoBot assists users in completing multi-step cooking and home improvement tasks. Designed with a user-centered principle and aspiring to deliver a collaborative and accessible dialogue experience, TacoBot is equipped with accurate language understanding, flexible dialogue management, and engaging response generation, and is backed by a strong search engine and an automated end-to-end test suite. We will discuss various roles graphs can play in the system and the promise of knowledge graph reasoning as future work.

Bio: Huan Sun is an associate professor (with tenure) in the Department of Computer Science and Engineering at the Ohio State University. Before joining OSU, she was a visiting scientist at the University of Washington (01-06/2016), received a Ph.D. in Computer Science from the University of California, Santa Barbara (2015), and received a B.S. in EEIS from the University of Science and Technology of China (2010). Her research interests lie in natural language processing, data mining and management, and artificial intelligence, with an emphasis on building various kinds of natural language interfaces, task-oriented dialogue, and conversational AI systems. Huan received the 2022 SIGMOD Research Highlight Award, 2021 BIBM Best Paper Award, Google Research Scholar Award (2022), NSF CAREER Award (2020), OSU Lumley Research Award (2020), and SIGKDD Ph.D. Dissertation Runner-Up Award (2016), among others. Her team TacoBot won third place in the first Alexa Prize TaskBot Challenge.

(16:00-16:15)
P2: Graph4NLP: A Library for Deep Learning on Graphs for NLP [slides]
Abstract: This talk will introduce a powerful and flexible deep graph learning library for natural language processing, namely, Graph4NLP. Graph4NLP is an easy-to-use library for R&D at the intersection of Deep Learning on Graphs and Natural Language Processing. It provides both full implementations of state-of-the-art models for data scientists and flexible interfaces for researchers and developers to build customized models, with whole-pipeline support. Built upon highly optimized runtime libraries including DGL, Graph4NLP offers both high running efficiency and great extensibility. The library covers a wide range of graph construction functions, including several dynamic and static graph construction methods. It also covers a wide range of GNN-based NLP applications, including text classification, semantic parsing, neural machine translation, summarization, KG completion, math word problem solving, named entity recognition, and question generation.

Bio: Xiaojie Guo is a Research Scientist at JD.COM Silicon Valley Research Center. She received her Ph.D. from the Department of Information Science and Technology at George Mason University. Her research topics include machine learning and data mining, with particular interests in deep learning on graphs, graph transformation and generation, and interpretable representation learning, as well as their applications in natural language generation, cyber security, and molecule optimization. She has published over 26 papers in top-tier conferences and journals such as KDD, ICLR, NeurIPS, AAAI, ICDM, SDM, WWW, IEEE Transactions on Neural Networks and Learning Systems (TNNLS), and Knowledge and Information Systems (KAIS). She won the Best Paper Award at ICDM 2019 and has one paper recognized as an ESI Hot and Highly Cited Paper. She also won the AAAI/IAAI 2022 Innovative Applications of Artificial Intelligence Award.

16:15-17:15

Paper Presentation Session B

(16:15-16:30)
Improving Neural Machine Translation with the Abstract Meaning Representation by Combining Graph and Sequence Transformers. [paper]
Changmao Li and Jeffrey Flanigan

(16:30-16:45)
Graph Neural Networks for Adapting Off-the-shelf General Domain Language Models to Low-Resource Specialised Domains. [paper]
Merieme Bouhandi, Emmanuel Morin and Thierry Hamon

(16:45-17:00)
GraDA: Graph Generative Data Augmentation for Commonsense Reasoning. [paper]
Adyasha Maharana and Mohit Bansal

(17:00-17:15)
LiGCN: Label-interpretable Graph Convolutional Networks for Multi-label Text Classification. [paper]
Irene Li, Aosong Feng, Hao Wu, Tianxiao Li, Toyotaro Suzumura and Ruihai Dong

17:15-17:30 Closing Remarks