Question Generation (QG) is a challenging Natural Language Processing (NLP) task which aims to generate questions given an answer and its context. Many prior works incorporate linguistic features to improve the performance of QG. However, similar to traditional word embeddings, these works typically embed such features with a set of trainable parameters, so the linguistic information is not fully exploited. In this work, inspired by recent advances in text representation, we propose to utilize linguistic information via large pre-trained neural models. First, these models are trained on several specific NLP tasks so that they better represent linguistic features. Then, the resulting feature representations are fused into a seq2seq-based QG model to guide question generation. Extensive experiments were conducted on two benchmark Question Generation datasets to evaluate the effectiveness of our approach. The experimental results demonstrate that our approach outperforms state-of-the-art QG systems, significantly improving the baseline by 17.2% and 6.2% under the BLEU-4 metric on the two datasets, respectively.
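To make the fusion step concrete, the sketch below shows one plausible way a frozen pre-trained model's token-level linguistic representations could be concatenated with word embeddings inside a seq2seq encoder. This is a minimal illustration, not the authors' released code; all names (FeatureFusionEncoder, ling_dim, etc.) and the concatenation-based fusion scheme are assumptions for exposition.

```python
# Minimal sketch (assumed, not the paper's implementation): fusing
# representations from a frozen pre-trained linguistic encoder into a
# seq2seq QG encoder by concatenating them with word embeddings.
import torch
import torch.nn as nn

class FeatureFusionEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, ling_dim=768, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Project the pre-trained linguistic representation so it can be
        # concatenated with the word embedding before the recurrent encoder.
        self.ling_proj = nn.Linear(ling_dim, emb_dim)
        self.rnn = nn.GRU(emb_dim * 2, hid_dim, batch_first=True,
                          bidirectional=True)

    def forward(self, token_ids, ling_feats):
        # token_ids: (batch, seq_len)
        # ling_feats: (batch, seq_len, ling_dim), produced by a frozen model
        # pre-trained on linguistic tasks, as described in the abstract.
        word_emb = self.embed(token_ids)
        fused = torch.cat([word_emb, self.ling_proj(ling_feats)], dim=-1)
        outputs, hidden = self.rnn(fused)
        return outputs, hidden

# Toy usage with random tensors standing in for real inputs.
enc = FeatureFusionEncoder(vocab_size=10000)
tokens = torch.randint(0, 10000, (2, 12))
ling = torch.randn(2, 12, 768)
outputs, hidden = enc(tokens, ling)
print(outputs.shape)  # (2, 12, 1024): bidirectional hidden states
```

The decoder (not shown) would then attend over these fused encoder states when generating the question.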
