In recent years, Graph Convolutional Networks~(GCNs) have been applied to benefit spatiotemporal predictions. Current approaches to spatiotemporal prediction often rely heavily on the quality of handcrafted, fixed graphical structures; however, we argue that such a paradigm can be expensive and sub-optimal in many applications. To raise the bar, this paper proposes to jointly mine spatial dependencies and model temporal patterns in a coupled framework, i.e., to make spatiotemporal-coupled predictions. We propose a novel Reciprocal SpatioTemporal~(REST) framework, which introduces Edge Inference Networks~(EINs) to couple with GCNs. From the temporal side to the spatial side, EINs infer spatial dependencies among time-series vertices and generate multi-modal directed weighted graphs to serve GCNs. From the spatial side to the temporal side, GCNs utilize these spatial dependencies to make predictions and provide feedback to optimize EINs. The REST framework is trained incrementally, and the reciprocity between its two components in this iterative joint learning process yields higher spatiotemporal prediction performance. Additionally, to maximize the power of the REST framework, we design a phased heuristic approach, which effectively stabilizes the training procedure and prevents it from stopping prematurely. Extensive experiments on two real-world datasets demonstrate that the proposed REST framework significantly outperforms baselines and can learn meaningful spatial dependencies beyond predefined graphical structures.\footnote{The code will be released upon acceptance of the paper.}
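To make the coupling concrete, the sketch below illustrates the general idea in PyTorch: an edge-inference module scores ordered vertex pairs from their time-series encodings to produce a directed weighted adjacency (temporal to spatial), a graph-convolution step consumes that adjacency to forecast future values (spatial to temporal), and the prediction loss backpropagates into the edge-inference module as feedback. All module names, layer sizes, and the single-graph (uni-modal) simplification are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of the EIN-GCN coupling; not the authors' implementation.
import torch
import torch.nn as nn

class EdgeInferenceNet(nn.Module):
    """Encodes each vertex's time series, then scores every ordered vertex
    pair to produce a directed, weighted adjacency matrix."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.enc = nn.GRU(in_dim, hid_dim, batch_first=True)
        self.scorer = nn.Sequential(nn.Linear(2 * hid_dim, hid_dim),
                                    nn.ReLU(), nn.Linear(hid_dim, 1))

    def forward(self, x):                       # x: (N, T, in_dim)
        _, h = self.enc(x)                      # h: (1, N, hid_dim)
        h = h.squeeze(0)                        # (N, hid_dim)
        n = h.size(0)
        src = h.unsqueeze(1).expand(n, n, -1)   # sender embedding per pair
        dst = h.unsqueeze(0).expand(n, n, -1)   # receiver embedding per pair
        logits = self.scorer(torch.cat([src, dst], dim=-1)).squeeze(-1)
        return torch.softmax(logits, dim=-1)    # row-normalized directed weights

class GCNPredictor(nn.Module):
    """One graph-convolution step over the inferred graph, then a readout
    that forecasts the next `horizon` values per vertex."""
    def __init__(self, in_dim, hid_dim, horizon):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)
        self.out = nn.Linear(hid_dim, horizon)

    def forward(self, x, adj):                  # x: (N, T), adj: (N, N)
        h = torch.relu(adj @ self.lin(x))       # aggregate neighbor features
        return self.out(h)                      # (N, horizon)

# Toy joint-training loop on random data, just to show the gradient flow.
N, T, horizon = 20, 12, 3
x = torch.randn(N, T)                           # one history window per vertex
y = torch.randn(N, horizon)                     # future targets (toy values)

ein = EdgeInferenceNet(1, 32)
gcn = GCNPredictor(T, 32, horizon)
opt = torch.optim.Adam(list(ein.parameters()) + list(gcn.parameters()), lr=1e-3)

for step in range(100):
    adj = ein(x.unsqueeze(-1))                  # temporal -> spatial: infer graph
    pred = gcn(x, adj)                          # spatial -> temporal: predict
    loss = nn.functional.mse_loss(pred, y)
    opt.zero_grad()
    loss.backward()                             # GCN feedback reaches the EIN
    opt.step()
```

Because the adjacency is produced by a differentiable network rather than fixed in advance, the prediction loss shapes the inferred graph itself, which is the reciprocity the abstract describes.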
