Trust in AI: Evolving from Automation to Healthcare Applications
Category: User-Centred Design · Effect: Strong effect · Year: 2025
Understanding how human trust has evolved from automated systems to AI in healthcare is crucial for designing effective, user-accepted AI technologies.
Design Takeaway
Designers should prioritize transparency, explainability, and clear communication of AI capabilities and limitations to foster user trust, especially in high-stakes domains like healthcare.
Why It Matters
As AI becomes more integrated into critical fields like healthcare, designers must consider the nuanced factors that build and maintain user trust. This requires a shift from simply ensuring functional automation to fostering confidence in intelligent, adaptive systems.
Key Finding
Over 30 years, research on trust in automated systems has evolved into research on trust in AI, especially in healthcare. This trust is shaped by the user, the AI system's attributes, and the surrounding context, and is now measured more dynamically. A key challenge is aligning users' perceived trust with the AI's actual trustworthiness.
Key Findings
- Human trust has shifted from automation to AI, with expanded research paradigms and disciplines.
- Key determinants of human-AI trust in healthcare include user characteristics, AI system attributes, and contextual factors.
- Measurement of trust has evolved from self-report to dynamic, multimodal, and psychophysiological approaches.
- Bridging actual trustworthiness and perceived trust is essential for human-centered AI.
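The shift toward dynamic, multimodal trust measurement can be illustrated with a minimal sketch. This is not a method from the paper: the blending weight, the snapshot fields, and the function names are illustrative assumptions showing how repeated self-reports might be combined with an observed behavioural signal (how often the user actually follows the AI's recommendations) into a trust estimate over time.

```python
from dataclasses import dataclass

@dataclass
class TrustSnapshot:
    """One measurement point (illustrative fields, not from the paper)."""
    self_report: float    # questionnaire rating rescaled to 0..1
    reliance_rate: float  # fraction of AI recommendations the user followed

def dynamic_trust(snapshots, weight_self_report=0.5):
    """Blend self-reported trust with observed reliance at each time step.

    The 0.5 weighting is an arbitrary placeholder; a real study would
    calibrate it against validated trust scales.
    """
    w = weight_self_report
    return [w * s.self_report + (1 - w) * s.reliance_rate for s in snapshots]

# Example: trust sampled at three points during a clinical trial session.
history = [TrustSnapshot(0.8, 0.6), TrustSnapshot(0.7, 0.7), TrustSnapshot(0.9, 0.8)]
scores = dynamic_trust(history)
```

The point of the sketch is the contrast the finding describes: a single post-task questionnaire yields one number, whereas logging behaviour alongside self-report yields a trajectory that can reveal trust eroding or recovering mid-task.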
Research Evidence
Aim: How has the concept of human trust in automated systems evolved into trust in AI, particularly within healthcare, and what framework can guide the design of trustworthy human-AI systems?
Method: Narrative Review
Procedure: The researchers conducted a longitudinal review of literature spanning 30 years to trace the evolution of human-machine trust, focusing on AI in healthcare. They developed an interdisciplinary framework (I-HATR) and identified key determinants of trust, measurement approaches, and practical challenges.
Context: Healthcare AI
Design Principle
Design for trust by aligning AI's actual capabilities with user perceptions through transparent design and effective communication.
How to Apply
When designing AI-powered healthcare tools, explicitly map out how user characteristics, AI attributes (e.g., explainability features), and contextual factors (e.g., clinical workflow integration) will influence user trust, and plan for dynamic trust evaluation.
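The mapping step above can be made concrete as a simple design-review checklist. The three determinant groups come from the study; every individual item, name, and the coverage metric are illustrative assumptions, not content from the paper.

```python
# Illustrative checklist (item names are assumptions, not from the paper):
# the three groups mirror the study's determinants of human-AI trust.
TRUST_DETERMINANTS = {
    "user": ["clinical experience", "AI familiarity", "risk tolerance"],
    "ai_system": ["accuracy", "explainability features", "error communication"],
    "context": ["workflow integration", "time pressure", "accountability rules"],
}

def coverage(addressed):
    """Fraction of checklist items a design proposal explicitly addresses."""
    all_items = [item for group in TRUST_DETERMINANTS.values() for item in group]
    return sum(1 for item in all_items if item in addressed) / len(all_items)
```

A team could run `coverage({"accuracy", "AI familiarity", "time pressure"})` on a design draft to see at a glance that most trust determinants are still unaddressed, then revisit the draft before evaluation.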
Limitations
The review is based on existing literature and may not capture all emerging trends or niche applications. The focus on healthcare might limit direct applicability to other domains without adaptation.
Student Guide (IB Design Technology)
Simple Explanation: This study shows how people's trust in technology has changed from simple machines to smart AI, especially in hospitals. It gives designers ideas on how to make AI that people will trust and use safely.
Why This Matters: Understanding trust is vital for user adoption and effective use of any design, particularly for complex systems like AI. This research provides a roadmap for building user confidence.
Critical Thinking: How might the 'black box' nature of some advanced AI algorithms inherently conflict with the need for explainability to build user trust, and what design strategies can mitigate this conflict?
IA-Ready Paragraph: This research highlights the critical shift in user trust from basic automation to sophisticated AI, particularly within healthcare. The study identifies user characteristics, AI system attributes, and contextual factors as key determinants of trust. Understanding this evolution and these determinants is essential for designing AI systems that users will not only accept but also rely on effectively and safely, by ensuring perceived trustworthiness aligns with actual system capabilities.
Project Tips
- When researching user trust in your design project, consider how the user's background and the environment affect their trust.
- Think about how you can make your AI system's decisions understandable to the user.
How to Use in IA
- Use the identified determinants of trust (user, AI, context) to structure your user research and analysis.
- Refer to the evolution of trust measurement to justify your chosen methods for evaluating user trust in your design.
Examiner Tips
- Demonstrate an understanding of how trust in technology has evolved and how this impacts user acceptance of new designs.
- Clearly articulate the factors influencing trust in your specific design context.
Independent Variable: Evolution of AI technology, types of AI applications (automation vs. AI), research paradigms.
Dependent Variable: Human trust in automated systems/AI, perceived trustworthiness, user acceptance.
Controlled Variables: Healthcare context, specific AI system features, user demographics, task complexity.
Strengths
- Provides a comprehensive 30-year longitudinal perspective.
- Offers an interdisciplinary framework (I-HATR) for research and design.
Critical Questions
- To what extent can the I-HATR framework be generalized beyond healthcare?
- What are the ethical implications of designing AI systems to intentionally influence user trust?
Extended Essay Application
- Investigate the evolution of trust in a specific technology domain (e.g., autonomous vehicles, financial advisory AI) and propose design guidelines for future systems.
- Develop and test a prototype that incorporates specific features aimed at enhancing user trust, using the I-HATR framework as a guide.
Source
From Trust in Automation to Trust in AI in Healthcare: A 30-Year Longitudinal Review and an Interdisciplinary Framework · Bioengineering · 2025 · 10.3390/bioengineering12101070