Effects of Locality and Rule Language on Explanations for Knowledge Graph Embeddings


Abstract:

Knowledge graphs (KGs) are key tools in many AI-related tasks such as reasoning or question answering. This has, in turn, propelled research on link prediction in KGs, the task of predicting missing relationships from the available knowledge. Solutions based on KG embeddings have shown promising results on this task. On the downside, these approaches are usually unable to explain their predictions. While some works have proposed to compute post-hoc rule explanations for embedding-based link predictors, these efforts have mostly resorted to rules with unbounded atoms, e.g., bornIn(x, y) ⇒ residence(x, y), learned on a global scope, i.e., the entire KG. None of these works has considered the impact of rules with bounded atoms such as nationality(x, England) ⇒ speaks(x, English), or the impact of learning from regions of the KG, i.e., local scopes. We therefore study the effects of these factors on the quality of rule-based explanations for embedding-based link predictors. Our results suggest that more specific rules and local scopes can improve the accuracy of the explanations. Moreover, these rules can provide further insights into the inner workings of KG embeddings for link prediction.
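The following is a minimal, illustrative Python sketch, not taken from the paper: the toy triples and helper functions are invented here to clarify the rule-language distinction described in the abstract. A rule with unbounded atoms, such as bornIn(x, y) ⇒ residence(x, y), keeps both arguments as variables, whereas a rule with bounded atoms, such as nationality(x, England) ⇒ speaks(x, English), fixes some arguments to constants and therefore fires more selectively.

# Toy knowledge graph of (subject, relation, object) triples -- purely illustrative.
KG = {
    ("alice", "bornIn", "London"),
    ("alice", "nationality", "England"),
    ("bob", "bornIn", "Paris"),
}

def apply_unbounded(kg, body_rel, head_rel):
    # bodyRel(x, y) => headRel(x, y): both arguments remain variables,
    # so the rule fires for every triple whose relation matches the body.
    return {(s, head_rel, o) for (s, r, o) in kg if r == body_rel}

def apply_bounded(kg, body_rel, body_obj, head_rel, head_obj):
    # bodyRel(x, C1) => headRel(x, C2): the objects are fixed constants,
    # so the rule fires only for entities connected to C1.
    return {(s, head_rel, head_obj) for (s, r, o) in kg
            if r == body_rel and o == body_obj}

# Unbounded rule: predicts a residence fact for every bornIn fact.
print(apply_unbounded(KG, "bornIn", "residence"))
# Bounded rule: predicts speaks(x, English) only when nationality(x, England) holds.
print(apply_bounded(KG, "nationality", "England", "speaks", "English"))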

Year of publication:

2023

Keywords:

  • knowledge graph embeddings
  • explainable ai

Source:

Scopus

Document type:

Conference Object

Status:

Restricted access

Knowledge areas:

  • Artificial intelligence
  • Computer science

Dewey subject areas:

  • Operation of libraries and archives
Processed with AI

Sustainable Development Goals:

  • SDG 9: Industry, innovation and infrastructure
  • SDG 17: Partnerships for the goals
  • SDG 4: Quality education
Processed with AI

Contributors: