We study the problem of incorporating prior knowledge into a deep Transformer-based model, i.e., BERT, to enhance its performance on semantic textual matching tasks. By probing and analyzing what BERT already knows about this task, we obtain a better understanding of which task-specific knowledge BERT needs the most and where it is most needed. The analysis further motivates us to take a different approach from existing work: instead of using prior knowledge to create a new training task for fine-tuning BERT, we directly inject knowledge into BERT's multi-head attention mechanism. This leads to a simple yet effective approach that trains quickly, since it does not require training on additional data or auxiliary tasks beyond the main task. Extensive experiments demonstrate that our knowledge-enhanced BERT consistently improves semantic textual matching performance over the original BERT model, and the performance benefit is most salient when training data is scarce.
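To make the idea of injecting knowledge into multi-head attention concrete, the sketch below shows one plausible realization, not the paper's exact formulation: prior knowledge expressed as a token-pair similarity matrix is added to the attention logits before the softmax. The names `prior_sim` and `alpha` are illustrative assumptions, not identifiers from the paper.

```python
# Minimal sketch, assuming prior knowledge is a [batch, seq, seq] token-pair
# similarity matrix (e.g., from a lexical resource) added to attention scores.
import math
import torch
import torch.nn.functional as F

def knowledge_biased_attention(q, k, v, prior_sim, alpha=1.0, mask=None):
    """Scaled dot-product attention with an additive knowledge prior.

    q, k, v   : [batch, heads, seq, head_dim]
    prior_sim : [batch, seq, seq] similarity from prior knowledge (assumed form)
    alpha     : hypothetical scaling weight for the prior
    """
    d = q.size(-1)
    # Standard scaled dot-product attention logits.
    logits = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(d)
    # Inject the prior: broadcast over heads and add to the raw scores.
    logits = logits + alpha * prior_sim.unsqueeze(1)
    if mask is not None:
        logits = logits.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(logits, dim=-1)
    return torch.matmul(weights, v), weights
```

Because the prior enters only as an additive bias inside attention, no auxiliary training task or extra data pass is needed, which is consistent with the fast training time noted above.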
