| Time | Title | Speakers/Authors |
|---|---|---|
| 9:00-9:15am | Opening Remarks | |
| 9:15-10:00am | Keynote talk 1: Scaling out GNN Applications for NLP<br>Abstract: This talk begins with the motivation that drove me and my colleagues to invest in Graph Neural Networks: AWS Shanghai AI Lab owns the development of the open-source project Deep Graph Library (DGL), and we have a rich but focused research profile around GNNs, spanning fundamental research as well as core AI application areas such as natural language processing and computer vision. I will then dive deeper into topics relevant to this workshop. I will share our early investigation revealing that the Transformer is a special form of GNN, our modest attempt to bridge the gap between texts and graphs at the sentence level, and our most recent effort, project GRED (Graph of Relation, Events and Discourse), which aims to extract and exploit, in a unified way, the various structures embedded in long document collections. I will conclude with some key challenges and a call for the community's attention and effort. | |
| 10:00-10:45am | Keynote talk 2: Towards Automatic Construction of Knowledge Graphs from Unstructured Text [slides]<br>Abstract: Graphs and texts are both ubiquitous in today's information world. However, how to automatically construct knowledge graphs from massive, dynamic, and unstructured text, without human annotation or supervision, remains an open problem. In the past years, our group has been studying how to develop effective methods for automatically mining hidden structures and knowledge from text, where such hidden structures include entities, relations, events, and knowledge graph structures. Equipped with pretrained language models and machine learning methods, as well as human-provided ontological structures, it is promising to transform unstructured text data into structured knowledge. In this talk, we will provide an overview of a set of weakly supervised machine learning methods developed recently for this exploration, including joint spherical text embedding, discriminative topic mining, named entity recognition, relation extraction, event discovery, text classification, and taxonomy-guided text analysis. We show that weakly supervised approaches could be promising at transforming massive text data into structured knowledge graphs. | |
| 10:45-11:00am | Coffee Break/Social Networking | |
| 11:00-11:45am | Panel Topic: Unifying GNNs and Pretraining | Panelists: Jian-Yun Nie (University of Montreal), Lili Mou (University of Alberta), Chenguang Zhu (Microsoft), Yu Su (Ohio State University), Xiaojie Guo (JD.COM Silicon Valley Research Center) |
| 11:45am-12:45pm | Paper Presentation Session A<br>(11:45-12:00pm) Diversifying Content Generation for Commonsense Reasoning with Mixture of Knowledge Graph Experts. [paper]<br>(12:00-12:15pm) Continuous Temporal Graph Networks for Event-Based Graph Data. [paper]<br>(12:15-12:30pm) Scene Graph Parsing via Abstract Meaning Representation in Pre-trained Language Models. [paper]<br>(12:30-12:45pm) Explicit Graph Reasoning Fusing Knowledge and Contextual Information for Multi-hop Question Answering. [paper] | Wenhao Yu, Chenguang Zhu, Lianhui Qin, Zhihan Zhang, Tong Zhao and Meng Jiang<br>Jin Guo, Zhen Han, Su Zhou, Jiliang Li, Volker Tresp and Yuyi Wang<br>Woo Suk Choi, Yu-Jung Heo, Dharani Punithan and Byoung-Tak Zhang<br>Zhenyun Deng, Yonghua Zhu, Qianqian Qi, Michael Witbrock and Patricia J. Riddle |
| 14:00-14:45 | Keynote talk 3: Enhancing Language Generation with Knowledge Graphs [slides]<br>Abstract: Natural language generation learns p(Y\|X) to create a desired output Y from a given input X. It has many real-world applications. However, in many cases Y is too difficult to generate from X alone. Knowledge graphs can help, offering a wealth of relational information uncovered by billions of humans over thousands of years. In this talk, we will provide an overview of approaches that use knowledge graphs to improve the precision of neural machine translation, the factual correctness of abstractive summarization, the content diversity in commonsense reasoning, the accuracy in question answering, etc. We show that knowledge graph-enhanced language generation methods could be promising in many other types of important applications. | |
| 14:45-15:30 | Keynote talk 4: Will Graphs Lead to the Next Breakthrough of Conversational AI? [slides]<br>Abstract: Teaching machines to understand natural language and converse with humans requires expressive and flexible meaning representations, which makes graphs an appealing tool for conversational AI. However, only recently have we started to see explorations of the interplay between graphs and conversational AI. In this talk, I will discuss two recent lines of work in this space. I will first discuss a new formalism for task-oriented dialogues based on dataflow graphs, and how it makes it possible to represent fine-grained semantics in task-oriented dialogues and support rich multi-turn interactions. Towards more universal conversational interfaces that support a broad range of domains, I will then discuss recent efforts in developing question answering systems over large-scale knowledge graphs with millions of entities and billions of facts, how the broad coverage of knowledge graphs reveals new challenges such as non-i.i.d. generalization and large search spaces, and our attempts to tackle those challenges. The talk will conclude with a discussion of promising future directions. | |
| 15:30-15:45 | Coffee Break/Social Networking | |
| 15:45-16:15 | Two Position Talks | |
| 16:15-17:15 | Paper Presentation Session B<br>(16:15-16:30) Improving Neural Machine Translation with the Abstract Meaning Representation by Combining Graph and Sequence Transformers. [paper]<br>(16:30-16:45) Graph Neural Networks for Adapting Off-the-shelf General Domain Language Models to Low-Resource Specialised Domains. [paper]<br>(16:45-17:00) GraDA: Graph Generative Data Augmentation for Commonsense Reasoning. [paper]<br>(17:00-17:15) LiGCN: Label-interpretable Graph Convolutional Networks for Multi-label Text Classification. [paper] | Changmao Li and Jeffrey Flanigan<br>Merieme Bouhandi, Emmanuel Morin and Thierry Hamon<br>Adyasha Maharana and Mohit Bansal<br>Irene Li, Aosong Feng, Hao Wu, Tianxiao Li, Toyotaro Suzumura and Ruihai Dong |
| 17:15-17:30 | Closing remarks | |