Hands on Explainable Recommender Systems
with Knowledge Graphs

to be held as part of the 16th ACM Conference on Recommender Systems (RecSys 2022)

September 18, 2022 - Seattle, WA, USA


The goal of this tutorial is to present the RecSys community with recent advances on the development and evaluation of explainable recommender systems with knowledge graphs. We will first introduce conceptual foundations, by surveying the state of the art and describing real-world examples of how knowledge graphs are being integrated into the recommendation pipeline, also for the purpose of providing explanations. This tutorial will continue with a systematic presentation of algorithmic solutions to model, integrate, train, and assess a recommender system with knowledge graphs, with particular attention to the explainability perspective. A practical part will then provide attendees with concrete implementations of recommender systems with knowledge graphs, leveraging open-source tools and public datasets; in this part, tutorial participants will be engaged in the design of explanations accompanying the recommendations and in articulating their impact. We conclude the tutorial by analyzing emerging open issues and future directions.

Target Audience

This beginner/intermediate-level tutorial is accessible to researchers, technologists, and practitioners. For people not familiar with recommender systems, this tutorial covers the necessary background material; no prior knowledge of explanations, knowledge graphs, or recommender systems with knowledge graphs is assumed. Basic knowledge of Python programming and of common libraries, such as Pandas and NumPy, is preferred. As the outline makes clear, the explainability perspective of our tutorial is an interdisciplinary topic, touching on several dimensions beyond algorithms and being of interest to people with different backgrounds.



Timing Content
5 mins Welcome and Presenters' Introduction
60 mins Session I: Foundations
Introduction to explainable recommendation (20 mins)
  • We will first provide a historical overview of explainable recommendation research. Although the term explainable recommendation was formally introduced only in recent years, the basic concept dates back to some of the earliest works in personalized recommendation research.
  • We will present real-world examples where explanations can impact recommendation, considering domains such as music, education, and social platforms.
  • An explanation is a piece of information displayed to users, explaining why a particular item is recommended. In this part, we will focus on the different information sources (or display styles) of recommendation explanations.
  • We then provide a taxonomy of existing explainable recommendation methods, which can help attendees understand the state of the art of explainable recommendation research.
  • We will present objectives influenced by explanations (utility, coverage, diversity, novelty, visibility, exposure) and discuss related work. Explanations also have an impact on several perspectives, such as the economy, law, society, trust, technology, and psychology.
Explainable recommendation models with knowledge graphs (20 mins)
  • We will provide an initial overview of the recommendation pipeline, to characterize how explanations should be enabled at several stages, namely, data acquisition and storage, data preparation, model training, model prediction, model evaluation, and recommendation delivery.
  • In this part, we dig deeper into the existing approaches that augment traditional models with KGs and embed a regularization term in the optimization function to implicitly encode high-order relationships between users and products from the KG.
  • We then detail approaches that rely on pre-computed paths (tuples) for modeling high-order relationships between users and products, according to the KG structure.
  • Finally, we show how different types of explanations can emerge from a recommender system with knowledge graphs, according to the considered approach.
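To make concrete how an explanation can emerge from a path-based recommender, the following sketch renders a user-to-item KG path as a textual explanation. The entities and relation names are hypothetical examples; real systems such as PGPR and CAFE select such paths automatically while navigating the KG, and their templates differ.

```python
# A hypothetical 3-hop KG path from a user to a recommended item,
# as a sequence of (head, relation, tail) triples.
path = [
    ("user_42", "watched", "The Matrix"),
    ("The Matrix", "directed_by", "Lana Wachowski"),
    ("Lana Wachowski", "directed", "Cloud Atlas"),
]

def path_to_explanation(path):
    """Render a 3-hop user-to-item KG path as a template-based sentence."""
    _, r1, interacted_item = path[0]   # the user's past interaction
    _, r2, shared_entity = path[1]     # entity linking past and recommended items
    _, r3, recommended_item = path[2]  # the recommended item
    return (f"{recommended_item} is recommended because you {r1} "
            f"{interacted_item}, which was {r2.replace('_', ' ')} "
            f"{shared_entity}, who also {r3} {recommended_item}.")

print(path_to_explanation(path))
```

Different path patterns (e.g., through a genre, an artist, or another user) yield different explanation types, which is what the approaches covered above exploit.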
Explainable recommendation evaluation (20 mins)
  • First, we will present state-of-the-art evaluation strategies that rely on user studies with volunteers or paid experiment subjects. Although this is the most effective strategy, it is also the most time-consuming and expensive.
  • We will then move to another type of evaluation, i.e., online evaluation, which assesses explainable recommendations through online experiments, investigating perspectives such as the persuasiveness, effectiveness, efficiency, and satisfaction of the explanations.
  • Subsequently, we present the main approaches to evaluating recommendation explanations offline, such as measuring the percentage of recommendations that can be explained by the explanation model and evaluating the explanation quality directly. In this part, we will also point to our recent work on defining offline evaluation metrics for explanation quality.
  • We will present examples of real-world platforms, such as LinkedIn and Spotify, and of their approaches to dealing with explanations, according to the covered modeling strategies.
10 mins Questions and Discussion
30 mins Coffee Break
65 mins Session II: Hands-on Case Studies
Recommendation models in practice (35 mins)
  • To practically show the approaches covered in the first part of the tutorial, we first present three datasets representing the movie (MovieLens-1M (ML1M)), music (LastFM-1B (LASTFM)), and e-commerce (Amazon-Cellphones) domains. They are public and vary in domain, extensiveness, and sparsity.
  • Then, we load and present the KGs provided in the literature for the three considered datasets and show how this information should be pre-processed to enable recommendation models to leverage its content during both optimization and recommendation generation.
  • In a subsequent step, we create and train at least two recent path-based approaches that enable textual explanations, such as PGPR and CAFE, which rely on reinforcement learning (RL) agents to optimize recommendations by navigating paths between users and recommended products in the KG.
  • We finally show how top-n recommendations can be generated from a pre-trained explainable recommender system, and measure traditional effectiveness metrics, such as NDCG.
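As a taste of the last step, the sketch below computes binary-relevance NDCG@k for a single user, the standard formulation also used in the hands-on notebooks; the item identifiers are made up for illustration.

```python
import numpy as np

def ndcg_at_k(ranked_items, relevant, k=10):
    """Binary-relevance NDCG@k for one user: `ranked_items` is the
    recommended list, `relevant` the user's held-out test items."""
    gains = np.array([item in relevant for item in ranked_items[:k]], dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, k + 2))        # positions 1..k
    dcg = float((gains * discounts[:gains.size]).sum())
    idcg = float(discounts[:min(len(relevant), k)].sum())  # ideal ranking
    return dcg / idcg if idcg > 0 else 0.0

# Two of the three recommended items are relevant; the miss at rank 2
# costs a little discounted gain, so NDCG@3 is just below 1.
score = ndcg_at_k(["m1", "m2", "m3"], relevant={"m1", "m3"}, k=3)
```

Averaging this quantity over all test users gives the NDCG values reported when comparing the trained models.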
Creation and impact of explanations (30 mins)
  • In this second part, we first show how textual explanations can be created from a pre-trained explainable recommender system to accompany the top-n recommendations provided to a user.
  • We then compute state-of-the-art offline evaluation metrics that assess explanation quality, such as linking interaction recency, shared entity popularity, and explanation type diversity.
  • Finally, we inspect and compare the explanations provided by the (at least two) models trained in the first part, analyzing the impact of the explanations and the trade-offs that arise.
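To give an intuition for one of these metrics, the sketch below computes a simplified explanation type diversity: the number of distinct explanation types (e.g., the linking relation of each explanation path) in a user's top-k list, normalized by the maximum number of types that could appear. The relation names are hypothetical, and the exact formulation used in the literature and in the hands-on session may differ.

```python
def explanation_type_diversity(explanation_types, n_total_types, k=10):
    """Fraction of distinct explanation types among the top-k explanations,
    out of the maximum number of distinct types that could appear."""
    unique_types = set(explanation_types[:k])
    return len(unique_types) / min(k, n_total_types)

# Three explanations in a top-3 list, drawn from 5 possible types:
# only 2 distinct types appear out of a possible 3, so ETD = 2/3.
etd = explanation_type_diversity(
    ["directed_by", "starring", "directed_by"], n_total_types=5, k=3)
```

Recency and popularity metrics follow the same pattern, scoring each explanation path by properties of its linking interaction or shared entity and aggregating over the top-k list.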
10 mins Challenges, Final Remarks, and Discussion


The material accompanying this tutorial will be published here right after the live sessions.


Giacomo Balloccu
University of Cagliari (Italy)

Ludovico Boratto
University of Cagliari (Italy)

Gianni Fenu
University of Cagliari (Italy)

Mirko Marras
University of Cagliari (Italy)


Registration for the tutorial will be managed by the RecSys 2022 main conference organization. Registration is yet to open.


Please reach out to us at giacomo.balloccu@unica.it, ludovico.boratto@acm.org, fenu@unica.it, or mirko.marras@acm.org for any requests you might have.