AI Trust Research Lacks Contextual Models and Rigorous Methods

Category: User-Centred Design · Effect: Strong · Year: 2023

Empirical research on trust in AI has predominantly relied on exploratory methods and has not sufficiently developed contextualized theoretical models, hindering a deep understanding of user reliance.

Design Takeaway

Prioritize the development and application of context-specific theoretical frameworks for AI trust, moving beyond purely exploratory research to inform more robust and reliable AI designs.

Why It Matters

For designers and engineers developing AI-powered products, understanding the nuances of user trust is paramount for successful adoption and safe interaction. A lack of robust, context-specific models means current design approaches may not adequately address the diverse factors influencing user confidence in AI systems.

Key Finding

The review found that most empirical research on AI trust relies on exploratory methods rather than hypothesis-driven, confirmatory designs, and that the field needs context-specific theories that account for different AI applications and user groups.

Research Evidence

Aim: To identify the prevailing trends, overlooked issues, and future directions in empirical research on trust in Artificial Intelligence.

Method: Bibliometric and Qualitative Content Analysis

Procedure: A comprehensive bibliometric analysis was conducted on over two decades of empirical research measuring trust in AI, followed by a qualitative content analysis of the core articles to identify trends, gaps, and future research needs (a sketch of this kind of trend analysis appears below the study details).

Sample Size: 1156 core articles

Context: Artificial Intelligence (AI) systems, user interaction with technology
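The review does not publish its coding pipeline, but a minimal sketch of the kind of trend analysis a bibliometric study performs might look like the following. The `articles` table, its years, and its method labels are invented purely for illustration; a real analysis would cover the full corpus of 1156 coded articles.

```python
import pandas as pd

# Hypothetical corpus: one row per core article, with the publication
# year and a hand-coded methodology label from the content analysis.
articles = pd.DataFrame({
    "year":   [2001, 2008, 2015, 2015, 2021, 2021, 2023],
    "method": ["exploratory", "exploratory", "exploratory",
               "experimental", "exploratory", "experimental",
               "exploratory"],
})

# Cross-tabulate methodology against publication year to expose the
# trend the review describes: exploratory work dominating the field.
trend = pd.crosstab(articles["year"], articles["method"])
share_exploratory = (articles["method"] == "exploratory").mean()

print(trend)
print(f"Share of exploratory studies: {share_exploratory:.0%}")
```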

Design Principle

Design for trust by grounding AI development in contextually relevant theoretical models and employing rigorous empirical validation.

How to Apply

When designing AI systems, consider the specific context of use and the target user group to develop tailored trust-building strategies, rather than applying generic principles.
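To make "context-specific" concrete, here is a minimal, hypothetical sketch that encodes deployment context as data and derives trust-building strategies from it. The `TrustContext` fields and the strategy mapping are assumptions for illustration, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class TrustContext:
    """One AI deployment context (illustrative fields)."""
    domain: str          # e.g. "medical diagnosis" or "music recommendation"
    stakes: str          # "high" or "low"
    user_expertise: str  # "novice" or "expert"

def trust_strategies(ctx: TrustContext) -> list[str]:
    """Derive trust-building tactics from the context rather than
    applying one generic checklist (hypothetical mapping)."""
    strategies = ["communicate system limitations"]
    if ctx.stakes == "high":
        strategies += ["expose model confidence", "support human override"]
    if ctx.user_expertise == "novice":
        strategies += ["plain-language explanations", "guided onboarding"]
    return strategies

print(trust_strategies(TrustContext("medical diagnosis", "high", "novice")))
```

The point is structural: trust interventions become testable outputs of an explicit context model rather than ad hoc defaults, which is what makes them amenable to the rigorous validation the review calls for.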

Limitations

The review is based on published research, potentially missing unpublished work or emerging trends not yet widely documented. The qualitative analysis is subject to the interpretations of the researchers.

Student Guide (IB Design Technology)

Simple Explanation: Research on how much people trust AI has been going on for a long time, but it often uses simple methods and doesn't have clear theories for different situations, which makes it hard to know exactly how to build trustworthy AI.

Why This Matters: Understanding how users build trust in AI is crucial for creating products that people will actually use and rely on. This research highlights that current methods might not be enough to fully grasp this complex relationship.

Critical Thinking: Given the identified lack of contextualized theoretical models, how can designers proactively develop and test their own context-specific trust frameworks for novel AI applications?

IA-Ready Paragraph: This bibliometric review of AI trust research reveals a significant reliance on exploratory methodologies and a deficit in contextualized theoretical models. Consequently, designers must move beyond generic trust-building strategies and develop AI systems informed by context-specific theories and rigorous empirical validation to ensure user confidence and adoption.

Independent Variables: research methodology (e.g., exploratory vs. experimental); theoretical model (presence/absence and type)

Dependent Variable: Level of user trust in AI

Controlled Variables: AI application domain; user demographics; specific AI features
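In contrast to purely exploratory work, a confirmatory design operationalizes variables like these and tests a stated hypothesis. A minimal sketch, with ratings fabricated purely for illustration:

```python
from scipy import stats

# Hypothetical 1-7 Likert trust ratings for the same AI feature
# deployed in two application domains (all numbers invented).
trust_medical = [3, 4, 2, 3, 4, 3, 2, 4]
trust_music = [5, 6, 5, 4, 6, 5, 6, 5]

# Stated hypothesis: application domain shifts user trust in the system.
t_stat, p_value = stats.ttest_ind(trust_medical, trust_music)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```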

Source

Twenty-Four Years of Empirical Research on Trust in AI: A Bibliometric Review of Trends, Overlooked Issues, and Future Directions · arXiv (Cornell University) · 2023 · 10.48550/arXiv.2309.09828