AI Trust Research Lacks Contextual Models and Rigorous Methods
Category: User-Centred Design · Effect: Strong effect · Year: 2023
Empirical research on trust in AI has predominantly relied on exploratory methods and has not sufficiently developed contextualized theoretical models, hindering a deep understanding of user reliance.
Design Takeaway
Prioritize the development and application of context-specific theoretical frameworks for AI trust, moving beyond purely exploratory research to inform more robust and reliable AI designs.
Why It Matters
For designers and engineers developing AI-powered products, understanding the nuances of user trust is paramount for successful adoption and safe interaction. A lack of robust, context-specific models means current design approaches may not adequately address the diverse factors influencing user confidence in AI systems.
Key Finding
The study found that most empirical research on AI trust relies on exploratory rather than hypothesis-driven methods, and that the field needs contextualized theories that account for different AI applications and user groups.
Key Findings
- A significant reliance on exploratory methodologies in AI trust research.
- A lack of contextualized theoretical models to explain trust in AI.
- Missing perspectives in global discussions on trust in AI.
Research Evidence
Aim: What are the prevailing trends, overlooked issues, and future directions in empirical research on trust in Artificial Intelligence?
Method: Bibliometric and Qualitative Content Analysis
Procedure: A comprehensive bibliometric analysis was conducted on over two decades of empirical research measuring trust in AI, followed by a qualitative content analysis of the core articles to identify trends, gaps, and future research needs.
Sample Size: 1156 core articles
Context: Artificial Intelligence (AI) systems, user interaction with technology
Design Principle
Design for trust by grounding AI development in contextually relevant theoretical models and employing rigorous empirical validation.
How to Apply
When designing AI systems, consider the specific context of use and the target user group to develop tailored trust-building strategies, rather than applying generic principles.
Limitations
The review is based on published research, potentially missing unpublished work or emerging trends not yet widely documented. The qualitative analysis is subject to the interpretations of the researchers.
Student Guide (IB Design Technology)
Simple Explanation: People's trust in AI has been studied for decades, but the research often relies on simple, exploratory methods and lacks clear theories for different situations, so it remains hard to know exactly how to build trustworthy AI.
Why This Matters: Understanding how users build trust in AI is crucial for creating products that people will actually use and rely on. This research highlights that current methods might not be enough to fully grasp this complex relationship.
Critical Thinking: Given the identified lack of contextualized theoretical models, how can designers proactively develop and test their own context-specific trust frameworks for novel AI applications?
IA-Ready Paragraph: This bibliometric review of AI trust research reveals a significant reliance on exploratory methodologies and a deficit in contextualized theoretical models. Consequently, designers must move beyond generic trust-building strategies and develop AI systems informed by context-specific theories and rigorous empirical validation to ensure user confidence and adoption.
Project Tips
- When researching user trust in your design project, think about the specific context and user group.
- Try to base your research on existing theories of trust, or develop a new one that fits your specific AI application.
How to Use in IA
- Use this research to justify the need for rigorous user testing and the development of context-specific design guidelines for AI trust in your design project.
Examiner Tips
- Demonstrate an awareness of the limitations in current research methodologies for understanding user trust in AI, and propose how your design project addresses these gaps.
Independent Variable: Research methodology (e.g., exploratory vs. experimental); theoretical model (e.g., presence/absence, type of model)
Dependent Variable: Level of user trust in AI
Controlled Variables: AI application domain; user demographics; specific AI features
Strengths
- Comprehensive bibliometric analysis covering a long period.
- Qualitative content analysis provides depth to the quantitative findings.
Critical Questions
- To what extent do the identified 'elephants in the room' reflect a global consensus or regional biases in AI trust research?
- How can the design community actively contribute to bridging the gap between exploratory research and the development of robust, contextualized trust models for AI?
Extended Essay Application
- An Extended Essay could investigate the development and validation of a context-specific trust model for a particular AI application (e.g., AI in healthcare, AI in education), drawing on the identified need for more rigorous and theoretical approaches.
Source
Twenty-Four Years of Empirical Research on Trust in AI: A Bibliometric Review of Trends, Overlooked Issues, and Future Directions · arXiv (Cornell University) · 2023 · 10.48550/arXiv.2309.09828