Conditional response augmentation for dialogue using knowledge distillation

Myeongho Jeong, Seungtaek Choi, Hojae Han, Kyungho Kim, Seung Won Hwang

Research output: Contribution to journal › Conference article › peer-review

1 Citation (Scopus)

Abstract

This paper studies the dialogue response selection task. As state-of-the-art approaches are neural models requiring large training sets, data augmentation is essential to overcome the sparsity of observational annotation, where only one observed response is annotated as gold. In this paper, we propose counterfactual augmentation: considering whether unobserved utterances would "counterfactually" replace the labelled response for the given context, and augmenting only when that is the case. We empirically show that our pipeline improves BERT-based models on two different response selection tasks without incurring annotation overheads.
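The abstract describes the augmentation pipeline only at a high level, so below is a minimal, self-contained Python sketch of the counterfactual test it outlines. All names here (teacher_score, augment, margin) and the toy lexical scorer are illustrative assumptions, not the authors' implementation; in the paper, the teacher would be a BERT-based relevance model trained on the observed (context, gold response) annotations, and its soft scores would serve as distillation targets.

# Hedged sketch of counterfactual response augmentation via a teacher model.
# Names and thresholds are assumptions for illustration, not the paper's API.

from typing import List, Tuple

def teacher_score(context: str, response: str) -> float:
    """Stand-in for a BERT-based relevance teacher trained on observed
    (context, gold response) pairs. A toy lexical-overlap score is used
    here only so the sketch runs end to end."""
    ctx_tokens = set(context.lower().split())
    rsp_tokens = set(response.lower().split())
    if not rsp_tokens:
        return 0.0
    return len(ctx_tokens & rsp_tokens) / len(rsp_tokens)

def augment(context: str,
            gold: str,
            unobserved: List[str],
            margin: float = 0.0) -> List[Tuple[str, str, float]]:
    """Keep an unobserved utterance as an extra positive only if the
    teacher judges it roughly as plausible as the labelled response
    (the 'counterfactual replacement' test), attaching the teacher's
    soft score as a distillation target for the student."""
    gold_score = teacher_score(context, gold)
    kept = [(context, gold, gold_score)]
    for cand in unobserved:
        s = teacher_score(context, cand)
        if s >= gold_score - margin:  # would plausibly replace the gold response
            kept.append((context, cand, s))
    return kept

if __name__ == "__main__":
    ctx = "where can i get a good coffee near the station"
    gold = "there is a cafe right next to the station entrance"
    pool = ["try the cafe near the station exit",
            "the weather is nice today"]
    for row in augment(ctx, gold, pool):
        print(row)

The margin parameter is a hypothetical knob for how strictly "would replace the gold response" is enforced; with margin = 0.0, only candidates the teacher scores at least as highly as the gold response are added, which keeps the augmented set conservative and avoids extra annotation.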

Original language: English
Pages (from-to): 3890-3894
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2020-October
DOIs
Publication status: Published - 2020
Event: 21st Annual Conference of the International Speech Communication Association, INTERSPEECH 2020 - Shanghai, China
Duration: 2020 Oct 25 – 2020 Oct 29

Bibliographical note

Funding Information:
This work was supported by the AI Graduate School Program (2020-0-01361) and the IITP grant (No. 2017-0-01779, XAI) supervised by IITP. Hwang is the corresponding author.

Publisher Copyright:
Copyright © 2020 ISCA

All Science Journal Classification (ASJC) codes

  • Language and Linguistics
  • Human-Computer Interaction
  • Signal Processing
  • Software
  • Modelling and Simulation
