Graph Neural Networks (GNNs) such as GCN can effectively learn document representations via the semantic relation graph among documents and words. However, most previous work in this line of research (with only a few exceptions) does not consider the topical semantics inherent in document contents and the relation graph, making the representations less effective and hard to interpret. In the few recent studies that try to incorporate latent topics into GNNs, the topics have been learned independently from the relation graph modeling. Intuitively, topic extraction can benefit greatly from the information propagation of the relation graph structure: connected and indirectly connected documents and words tend to share similar topics. In this paper, we propose a novel Graph Topic Neural Network (GTNN) model to mine latent topic semantics for interpretable document representation learning, taking into account the document-document, document-word and word-word relationships in the graph. We also show that our model can be viewed as semi-amortized inference for a relational topic model based on the Poisson distribution, with high-order correlations. We test our model in several settings: unsupervised, semi-supervised and supervised representation learning, for both connected and unconnected documents. In all cases, our model outperforms the state-of-the-art models for these tasks.
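To make the core idea concrete, below is a minimal, hypothetical sketch (not the authors' released code) of one GCN-style propagation step over a joint document-word graph, so that connected documents and words end up with similar topic distributions. All names here (GraphTopicLayer, num_topics, adj) are illustrative assumptions, and the adjacency construction is simplified.

```python
# Hypothetical sketch of topic propagation over a joint document-word graph.
# Assumes PyTorch; names and shapes are illustrative, not the GTNN implementation.
import torch
import torch.nn as nn

class GraphTopicLayer(nn.Module):
    """One GCN-style propagation step over the joint (docs + words) graph."""
    def __init__(self, in_dim: int, num_topics: int):
        super().__init__()
        self.weight = nn.Linear(in_dim, num_topics, bias=False)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: normalized adjacency covering doc-doc, doc-word and word-word
        # edges, with shape (n_docs + n_words, n_docs + n_words).
        h = self.weight(x)               # project node features to topic space
        h = adj @ h                      # propagate along graph edges
        return torch.softmax(h, dim=-1)  # rows as per-node topic proportions

# Toy usage: 3 documents and 5 words over 20 assumed latent topics.
n_docs, n_words, feat_dim = 3, 5, 16
x = torch.randn(n_docs + n_words, feat_dim)   # initial node features
adj = torch.eye(n_docs + n_words)             # stand-in for the real graph
layer = GraphTopicLayer(feat_dim, num_topics=20)
theta = layer(x, adj)                         # per-node topic distributions
```

In this view, documents and words live in one graph, so a single propagation step already mixes topic evidence across document-document, document-word and word-word edges.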
