Trust in AI: Evolving from Automation to Healthcare Applications

Category: User-Centred Design · Effect: Strong · Year: 2025

Understanding how human trust has evolved from automated systems to AI in healthcare is crucial for designing effective, user-accepted AI technologies.

Design Takeaway

Designers should prioritize transparency, explainability, and clear communication of AI capabilities and limitations to foster user trust, especially in high-stakes domains like healthcare.

Why It Matters

As AI becomes more integrated into critical fields like healthcare, designers must consider the nuanced factors that build and maintain user trust. This requires a shift from simply ensuring functional automation to fostering confidence in intelligent, adaptive systems.

Key Finding

Over 30 years, trust in automated systems has evolved into trust in AI, especially in healthcare. This trust is shaped by the user, the AI's attributes, and the surrounding context, and is now measured more dynamically. A key challenge is ensuring that users' perception of an AI's trustworthiness matches how trustworthy it actually is.

Research Evidence

Aim: How has the concept of human trust in automated systems evolved into trust in AI, particularly within healthcare, and what framework can guide the design of trustworthy human-AI systems?

Method: Narrative Review

Procedure: The researchers conducted a longitudinal review of 30 years of literature to trace the evolution of human-machine trust, with a focus on AI in healthcare. From this review they developed an interdisciplinary framework (I-HATR) and identified key determinants of trust, approaches to measuring it, and practical challenges.

Context: Healthcare AI

Design Principle

Design for trust by aligning AI's actual capabilities with user perceptions through transparent design and effective communication.

How to Apply

When designing AI-powered healthcare tools, explicitly map out how user characteristics, AI attributes (e.g., explainability features), and contextual factors (e.g., clinical workflow integration) will influence user trust, and plan to evaluate trust dynamically rather than as a one-off measure.

Limitations

The review is based on existing literature and may not capture all emerging trends or niche applications. The focus on healthcare might limit direct applicability to other domains without adaptation.

Student Guide (IB Design Technology)

Simple Explanation: This study shows how people's trust in technology has changed from simple machines to smart AI, especially in hospitals. It gives designers ideas on how to make AI that people will trust and use safely.

Why This Matters: Understanding trust is vital for user adoption and effective use of any design, particularly for complex systems like AI. This research provides a roadmap for building user confidence.

Critical Thinking: How might the 'black box' nature of some advanced AI algorithms inherently conflict with the need for explainability to build user trust, and what design strategies can mitigate this conflict?

IA-Ready Paragraph: This research highlights the critical shift in user trust from basic automation to sophisticated AI, particularly within healthcare. The study identifies user characteristics, AI system attributes, and contextual factors as key determinants of trust. Understanding this evolution and these determinants is essential for designing AI systems that users will not only accept but also rely on effectively and safely, by ensuring perceived trustworthiness aligns with actual system capabilities.

Independent Variable: Evolution of AI technology, type of application (simple automation vs. intelligent AI), research paradigms.

Dependent Variable: Human trust in automated systems/AI, perceived trustworthiness, user acceptance.

Controlled Variables: Healthcare context, specific AI system features, user demographics, task complexity.

Source

From Trust in Automation to Trust in AI in Healthcare: A 30-Year Longitudinal Review and an Interdisciplinary Framework · Bioengineering · 2025 · 10.3390/bioengineering12101070