Over-reliance on AI tutors can stem from misplaced trust, not accuracy

Category: User-Centred Design · Effect: Moderate effect · Year: 2023

Students often trust AI-generated answers even when they are incorrect, and it is this trust, rather than actual accuracy, that shapes their perception of the AI's usefulness.

Design Takeaway

Design AI educational tools with explicit trust-calibration features that build warranted trust and temper unwarranted trust, acknowledging that user perception matters as much as functional accuracy.

Why It Matters

This highlights a critical gap in user understanding of AI capabilities. Designers must consider not just the functional accuracy of AI tools but also the psychological factors driving user trust and adoption, especially in educational contexts where misconceptions can hinder learning.

Key Finding

Students tend to trust AI tutors like ChatGPT, which positively influences how they view the tool, regardless of whether the AI's answers are actually correct. This suggests a potential for over-reliance due to misplaced trust.

Research Evidence

Aim: What are undergraduate physics students' perceptions of using ChatGPT as a virtual tutor, and how does their trust in the AI relate to its perceived accuracy and their overall experience?

Method: Quantitative and Qualitative Survey

Procedure: Undergraduate physics students were surveyed about their experiences using ChatGPT for physics questions. The survey assessed their perceptions of ChatGPT's accuracy, their trust levels in the AI's responses, and their overall satisfaction with it as a tutoring tool. Some qualitative data on misconceptions was also collected.

Sample Size: Not explicitly stated; implied to be a cohort of undergraduate physics students.

Context: Higher education physics classrooms.

Design Principle

Design for critical engagement: AI tools should encourage users to question and verify information, rather than blindly accepting it.

How to Apply

When designing AI tutors, build in features that prompt users to cross-reference information or indicate confidence levels of the AI's responses.
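
As a minimal sketch of this idea, the snippet below wraps an AI tutor's answer with a trust-calibrating banner keyed to a confidence score. The `TutorResponse` type, the confidence thresholds, and the banner wording are all hypothetical illustrations, not part of the study; a real tutor would need a principled confidence estimate rather than the placeholder float used here.

```python
from dataclasses import dataclass

@dataclass
class TutorResponse:
    answer: str
    confidence: float  # hypothetical self-estimate in [0.0, 1.0]

def present_response(resp: TutorResponse) -> str:
    """Prefix an AI tutor answer with a banner that nudges verification."""
    if resp.confidence >= 0.8:
        banner = "High confidence: still worth a quick check against your textbook."
    elif resp.confidence >= 0.5:
        banner = "Medium confidence: please verify this with another source."
    else:
        banner = "Low confidence: treat this as a starting point, not an answer."
    return f"[{banner}]\n{resp.answer}"
```

The design choice worth noting is that even the high-confidence branch still prompts verification, reflecting the principle above that users should be encouraged to question AI output rather than accept it outright.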

Limitations

The study focuses on a specific subject (physics) and student demographic (undergraduates), and perceptions may vary across different disciplines and age groups. The study also relies on self-reported perceptions.

Student Guide (IB Design Technology)

Simple Explanation: People often believe AI answers are correct, even when they aren't, and this makes them feel the AI is helpful.

Why This Matters: Understanding user trust is key to designing effective and responsible AI tools, especially in learning environments where misinformation can be detrimental.

Critical Thinking: To what extent should designers aim to build user trust in AI tools, and at what point does this trust become detrimental to critical thinking and learning?

IA-Ready Paragraph: This study by Ding et al. (2023) highlights that users often develop trust in AI tools like ChatGPT, influencing their perception of its utility, even when the AI provides inaccurate information. This suggests that design interventions should not only focus on functional accuracy but also on managing user trust and promoting critical evaluation of AI-generated content.

Independent Variables: accuracy of ChatGPT's answers; students' trust levels in ChatGPT

Dependent Variables: students' perceptions of ChatGPT's accuracy; students' overall perceptions of ChatGPT as a tutor

Controlled Variables: subject matter (physics); student level (undergraduate)

Source

Ding et al. (2023). Students’ perceptions of using ChatGPT in a physics class as a virtual tutor · International Journal of Educational Technology in Higher Education · doi:10.1186/s41239-023-00434-1