Relevance plays a central role in information retrieval (IR) and has been studied extensively since the 20th century. The definition and modeling of relevance have always been critical challenges in both information science and computer science. Alongside the ongoing debate and exploration around relevance, IR has become a core task in many real-world applications, such as Web search engines, question answering systems, conversational bots, and so on. While relevance acts as a unified concept across all these tasks, its specific definitions are generally considered different due to the heterogeneity of these retrieval problems. This raises a question: Do these different forms of relevance really lead to different modeling focuses? To answer this question, in this work, we conduct a quantitative analysis of relevance modeling in three representative IR tasks, i.e., document retrieval, answer retrieval, and response retrieval. Specifically, we attempt to study the following two questions: 1) Does relevance modeling in these tasks really show differences in terms of natural language understanding (NLU)? To answer this question, we employ 16 linguistic tasks to probe a unified retrieval model over these three retrieval tasks. 2) If differences do exist, how can we leverage these findings to enhance relevance modeling? We propose a parameter intervention method to improve relevance models based on our findings. We believe that both the way we study the problem and our findings will be beneficial to the IR community.
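To illustrate what probing a retrieval model with a linguistic task involves, the sketch below trains a simple linear probe on frozen encoder representations and reads its accuracy as a proxy for how much linguistic information the encoder captures. This is a minimal sketch under stated assumptions, not the paper's implementation: the encoder name (`bert-base-uncased`) stands in for the unified retrieval model, and the two-example probing data is a toy placeholder for a real probing dataset.

```python
# Minimal linguistic-probing sketch (illustrative only, not the authors' code):
# freeze a BERT-style encoder, embed sentences, and fit a linear classifier
# on a probing label. Higher probe accuracy suggests the representations
# encode more of the probed linguistic property.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "bert-base-uncased"  # placeholder for the unified retrieval encoder
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
encoder.eval()

def embed(sentences):
    """Return frozen [CLS] representations for a list of sentences."""
    with torch.no_grad():
        batch = tokenizer(sentences, padding=True, truncation=True,
                          return_tensors="pt")
        hidden = encoder(**batch).last_hidden_state
        return hidden[:, 0, :].numpy()  # one [CLS] vector per sentence

# Toy probing task (e.g., short vs. long sentence); real probing suites
# such as SentEval use thousands of labeled examples per task.
train_x = embed([
    "a short query",
    "a considerably longer natural language question about document retrieval",
])
train_y = [0, 1]

probe = LogisticRegression(max_iter=1000).fit(train_x, train_y)
print(probe.score(train_x, train_y))  # probe accuracy on the toy data
```

In a study like the one described above, such probes would be run per linguistic task and per retrieval task, and the resulting accuracy profiles compared to see where relevance modeling differs across document, answer, and response retrieval.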
