
Jinhyeon Kim, Donghoon Ham, Jeong-Gwan Lee, and Kee-Eung Kim (2021)

End-to-End Document-Grounded Conversation with Encoder-Decoder Pre-Trained Language Model

In: AAAI Conference on Artificial Intelligence (AAAI) DSTC9 Workshop.

The first track of the Ninth Dialog System Technology Challenge (DSTC9), “Beyond Domain APIs: Task-Oriented Conversational Modeling with Unstructured Knowledge Access,” encourages participants to build goal-oriented dialog systems with access to unstructured knowledge, thereby making it possible to handle diverse user inquiries outside the scope of APIs/DBs. It consists of three sub-tasks: knowledge-seeking turn detection, knowledge selection, and knowledge-grounded response generation. We claim that tackling these sub-tasks separately is neither parameter-efficient nor conducive to better performance. In this paper, we present an end-to-end document-grounded conversation system that utilizes a pre-trained language model with an encoder-decoder structure. In the human evaluation, our dialog system achieved an accuracy score of 4.3082 and an appropriateness score of 4.2665, ranking 9th out of 24 participant teams. Furthermore, we conduct an ablation study and show that the end-to-end encoder-decoder scheme enables more efficient use of parameters in the document-grounded conversation setting.
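One way to picture the end-to-end scheme described above is to fold the three sub-tasks into a single sequence-to-sequence call: the encoder consumes the dialog context plus candidate knowledge, and the decoder emits one output string that encodes the detection flag, the selected snippet, and the response. The following sketch is purely illustrative — the separator tokens, output format, and helper names are assumptions, not the authors' actual serialization.

```python
# Hypothetical sketch of an end-to-end serialization for DSTC9 Track 1.
# Separator tokens (<dialog>, <doc>, <select=...>, <no_knowledge>) are
# illustrative assumptions, not the format used in the paper.

def build_input(dialog_history, knowledge_snippets):
    """Serialize dialog turns and candidate knowledge snippets into one
    encoder input string, so a single model sees everything at once."""
    history = " <turn> ".join(dialog_history)
    knowledge = " <doc> ".join(
        f"[{i}] {snippet}" for i, snippet in enumerate(knowledge_snippets)
    )
    return f"<dialog> {history} <knowledge> {knowledge}"

def parse_output(decoded):
    """Split the decoder's single output string back into the three
    sub-task decisions: turn detection, snippet selection, response."""
    if decoded.startswith("<no_knowledge>"):
        # Detection sub-task says this turn needs no external knowledge.
        return {"knowledge_seeking": False, "selected": None,
                "response": decoded.split(">", 1)[1].strip()}
    # Expected shape: "<select=ID> response text"
    head, response = decoded.split(">", 1)
    selected = int(head.split("=")[1])
    return {"knowledge_seeking": True, "selected": selected,
            "response": response.strip()}

# Usage with a mocked decoder output (no real model is called here):
encoder_input = build_input(
    ["Is there parking at the hotel?"],
    ["Free parking is available on site.", "Breakfast is served until 10am."],
)
result = parse_output("<select=0> Yes, the hotel offers free on-site parking.")
```

Because all three decisions come out of one forward pass of one model, the parameters of the encoder and decoder are shared across sub-tasks, which is the parameter-efficiency argument the ablation study supports.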