User cold-start recommendation is a long-standing challenge for recommender systems, since only a few interactions of cold-start users can be exploited. Recent studies investigate how to address this challenge from the perspective of meta learning. Most current meta-learning models follow a parameter-initialization paradigm, in which the model parameters can be adapted with a few steps of gradient updates. While these gradient-based meta-learning models achieve promising performance to some extent, a fundamental problem remains: how to adapt the global knowledge learned from previous tasks to the recommendations for cold-start users more effectively. In this paper, we develop a novel meta-learning model called task-adaptive neural process (TaNP). TaNP is a new member of the neural process family, in which making recommendations for each user is associated with a corresponding stochastic process. TaNP directly maps the observed interactions of each user to a predictive distribution, sidestepping some training issues in gradient-based meta-learning models. More importantly, to balance the trade-off between model capacity and adaptation reliability, we introduce a novel task-adaptive mechanism that enables TaNP to learn the relevance of different tasks and to customize the global knowledge into the task-related parameters of our decoder for estimating user preferences. We validate TaNP on multiple benchmark datasets under different experimental settings. Empirical results demonstrate that TaNP yields consistent improvements over several state-of-the-art meta-learning recommenders.
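To make the neural-process view of cold-start recommendation concrete, the sketch below shows a minimal encoder-decoder in the spirit of the abstract: an encoder aggregates a user's observed (item, rating) interactions into a task embedding, and the decoder's hidden layer is modulated by that embedding before scoring candidate items. This is only an illustrative sketch, not the paper's actual TaNP architecture; the class name, layer sizes, and the FiLM-style scale-and-shift modulation are assumptions standing in for the task-adaptive mechanism described above.

```python
import torch
import torch.nn as nn


class NeuralProcessRecommender(nn.Module):
    """Illustrative neural-process-style recommender (not the paper's exact model)."""

    def __init__(self, item_dim: int, hidden_dim: int = 64, task_dim: int = 32):
        super().__init__()
        # Encoder: embeds each observed (item, rating) pair; pairs are later
        # aggregated into a single permutation-invariant task embedding.
        self.encoder = nn.Sequential(
            nn.Linear(item_dim + 1, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, task_dim),
        )
        # Assumed task-adaptive modulation: a per-task scale and shift applied
        # to the decoder's hidden units (a stand-in for customizing global
        # knowledge into task-related decoder parameters).
        self.modulator = nn.Linear(task_dim, 2 * hidden_dim)
        # Decoder: predicts a preference score for each candidate item.
        self.decoder_in = nn.Linear(item_dim, hidden_dim)
        self.decoder_out = nn.Linear(hidden_dim, 1)

    def forward(self, context_items, context_ratings, target_items):
        # context_items: (n_ctx, item_dim); context_ratings: (n_ctx, 1)
        # target_items:  (n_tgt, item_dim)
        pair_repr = self.encoder(torch.cat([context_items, context_ratings], dim=-1))
        task_embed = pair_repr.mean(dim=0)  # aggregate observed interactions
        scale, shift = self.modulator(task_embed).chunk(2, dim=-1)
        hidden = torch.relu(self.decoder_in(target_items) * scale + shift)
        return self.decoder_out(hidden).squeeze(-1)  # preference scores per item


# Minimal usage: a cold-start user with 3 observed interactions, 5 candidate items.
model = NeuralProcessRecommender(item_dim=16)
ctx_items, ctx_ratings = torch.randn(3, 16), torch.rand(3, 1)
scores = model(ctx_items, ctx_ratings, torch.randn(5, 16))
```

Because the task embedding is computed directly from the observed interactions in a single forward pass, adapting to a new user requires no gradient-based fine-tuning, which is the property the abstract contrasts with gradient-based meta-learning.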
