The effects of domain knowledge on trust in explainable AI and task performance: A case of peer-to-peer lending


Abstract:

Increasingly, artificial intelligence (AI) is being used to assist complex decision-making such as financial investing. However, there are concerns regarding the black-box nature of AI algorithms. The field of explainable AI (XAI) has emerged to address these concerns. XAI techniques can reveal how an AI decision is formed and can be used to understand and appropriately trust an AI system. However, XAI techniques may still not be human-centred and may not adequately support human decision-making. In this work, we explored how domain knowledge, identified by expert decision makers, can be used to achieve a more human-centred approach to AI. We measured the effect of domain knowledge on trust in AI, reliance on AI, and task performance in an AI-assisted complex decision-making environment. In a peer-to-peer lending simulator, non-expert participants made financial investments using an AI assistant. The presence or absence of domain knowledge was manipulated. The results showed that participants who had access to domain knowledge relied less on the AI assistant when it was incorrect and reported less trust in the AI assistant. However, overall investing performance was not affected. These results suggest that providing domain knowledge can influence how non-expert users use AI and could be a powerful tool to help these users develop appropriate levels of trust and reliance.

Publication year:

2022

Keywords:

  • Trust in AI
  • Explainable AI
  • Human-AI interaction
  • Domain knowledge

Source:

Scopus

Document type:

Article

Status:

Open access

Knowledge areas:

  • Artificial intelligence

Subject areas:

  • Computer programming, programs, data, security
  • Social interaction
  • Special computer methods