Effects of Locality and Rule Language on Explanations for Knowledge Graph Embeddings


Abstract:

Knowledge graphs (KGs) are key tools in many AI-related tasks such as reasoning or question answering. This has, in turn, propelled research in link prediction in KGs, the task of predicting missing relationships from the available knowledge. Solutions based on KG embeddings have shown promising results in this matter. On the downside, these approaches are usually unable to explain their predictions. While some works have proposed to compute post-hoc rule explanations for embedding-based link predictors, these efforts have mostly resorted to rules with unbounded atoms, e.g., bornIn(x, y) ⇒ residence(x, y), learned on a global scope, i.e., the entire KG. None of these works has considered the impact of rules with bounded atoms such as nationality(x, England) ⇒ speaks(x, English), or the impact of learning from regions of the KG, i.e., local scopes. We therefore study the effects of these factors on the quality of rule-based explanations for embedding-based link predictors. Our results suggest that more specific rules and local scopes can improve the accuracy of the explanations. Moreover, these rules can provide further insights about the inner workings of KG embeddings for link prediction.
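The distinction between the two rule shapes in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; the toy KG, entity names, and helper functions below are hypothetical, chosen only to show how an unbounded rule (both atom arguments are variables) fires more broadly than a bounded rule (some arguments are fixed constants):

```python
# Toy knowledge graph as a set of (subject, relation, object) triples.
# All entities and facts here are illustrative, not from the paper.
kg = {
    ("alice", "bornIn", "london"),
    ("bob", "nationality", "England"),
    ("carol", "nationality", "France"),
}

def apply_unbounded(kg, body_rel, head_rel):
    """Unbounded rule, e.g. bornIn(x, y) => residence(x, y):
    both x and y are variables, so the rule fires for any matching triple."""
    return {(s, head_rel, o) for (s, r, o) in kg if r == body_rel}

def apply_bounded(kg, body_rel, body_const, head_rel, head_const):
    """Bounded rule, e.g. nationality(x, England) => speaks(x, English):
    the objects are fixed constants, so the rule is more specific."""
    return {(s, head_rel, head_const)
            for (s, r, o) in kg if r == body_rel and o == body_const}

inferred = (apply_unbounded(kg, "bornIn", "residence")
            | apply_bounded(kg, "nationality", "England", "speaks", "English"))
print(sorted(inferred))
```

The bounded rule fires only for bob (whose nationality is England), not for carol, which is why such rules can yield more accurate, targeted explanations; restricting rule learning to a region of the KG (a local scope) has a similar specializing effect.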

Publication year:

2023

Keywords:

  • knowledge graph embeddings
  • explainable AI

Source:

Scopus

Document type:

Conference Object

Status:

Restricted access

Knowledge areas:

  • Artificial intelligence
  • Computer science

Subject areas:

  • Operations of libraries and archives

Contributors: