System-provided explanations for recommendations are an important component of transparent and trustworthy AI. In state-of-the-art works, however, explanations are a one-way signal aimed at improving user acceptance. In this paper, we turn the role of explanations around and investigate how they can help enhance the quality of the generated recommendations themselves. We devise an active learning framework, called ELIXIR, in which user feedback on explanations is leveraged for pair-wise learning of user preferences. ELIXIR collects feedback on pairs of recommendations and explanations to learn user-specific latent preference vectors, overcoming sparseness by label propagation over item-similarity-based neighborhoods. Our framework is instantiated using generalized graph recommendation via Random Walk with Restart. Experiments in a real-user study show significant improvements in the quality of movie recommendations over item-level feedback.
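
The recommendation backbone named above, Random Walk with Restart (RWR), can be sketched in a few lines. The following is a minimal, illustrative power-iteration implementation on a toy user-item graph; the function name, parameters, and graph are assumptions for illustration and not taken from the ELIXIR system itself.

```python
import numpy as np

def random_walk_with_restart(A, restart, alpha=0.15, tol=1e-8, max_iter=1000):
    """Generic RWR scoring via power iteration (illustrative sketch).

    A: (n, n) nonnegative adjacency matrix of the graph.
    restart: (n,) restart distribution, e.g. one-hot on the user node.
    alpha: restart probability.
    Returns the stationary visiting-probability vector over nodes.
    """
    # Column-normalize A into a transition matrix P.
    col_sums = A.sum(axis=0)
    col_sums[col_sums == 0] = 1.0  # guard against dangling nodes
    P = A / col_sums
    p = restart.astype(float).copy()
    for _ in range(max_iter):
        # With prob. (1 - alpha) follow an edge, with prob. alpha jump back.
        p_next = (1 - alpha) * (P @ p) + alpha * restart
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next
    return p

# Toy graph: node 0 = user, nodes 1-3 = items (hypothetical example).
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
r = np.zeros(4)
r[0] = 1.0  # restart at the user node
scores = random_walk_with_restart(A, r)
# Items are then ranked by their scores relative to the user node.
```

Nodes closer to the restart node accumulate more probability mass, which is what makes RWR a natural generalized graph recommender: items well connected to the user (directly or through similar items) rank highest.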