Perceived Intelligence and Ease of Use Drive ChatGPT Adoption More Than Perceived Usefulness
Category: User-Centred Design · Effect: Moderate · Year: 2025
For new technologies such as generative AI, adoption is driven more by how easy to use and how intelligent users perceive the tool to be than by its perceived usefulness alone, with trust and risk acting as significant mediating factors.
Design Takeaway
Focus on making AI tools exceptionally easy to use and showcasing their intelligent features to drive initial user interest, while proactively addressing trust and risk concerns to ensure sustained adoption.
Why It Matters
This insight is crucial for designers and product managers developing AI tools. It suggests that initial adoption hinges on intuitive interfaces and demonstrable AI capabilities, while trust and risk perceptions can either facilitate or hinder uptake, even if the tool is perceived as useful.
Key Finding
Users are more likely to adopt AI tools they find easy to use and intelligent; perceived usefulness has a weaker direct effect on adoption. Trust and risk perceptions mediate these relationships, shaping whether users ultimately intend to adopt the technology.
Key Findings
- Perceived ease of use and perceived intelligence significantly predict adoption intentions.
- Perceived usefulness has a limited direct impact on adoption intentions.
- Perceived risk fully mediates the relationship between perceived usefulness and adoption intention, and partially mediates the relationships from perceived ease of use and perceived intelligence.
- Perceived trust fully mediates the relationship between perceived usefulness and adoption intention, and partially mediates the relationship from perceived ease of use.
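The mediation logic above can be illustrated with a minimal product-of-coefficients check on synthetic data. This is a simplified sketch of the general technique, not the paper's full SEM; the variable names and path coefficients are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000  # simulated respondents

# Simulate a mediation structure: ease of use -> trust -> adoption intention,
# plus a direct path from ease of use to intention. Coefficients are made up.
ease = rng.normal(size=n)
trust = 0.6 * ease + rng.normal(scale=0.5, size=n)                  # a-path = 0.6
intent = 0.3 * ease + 0.5 * trust + rng.normal(scale=0.5, size=n)   # direct c' = 0.3, b-path = 0.5

def ols(y, cols):
    """Least-squares slopes (with intercept); returns coefficients for cols."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = ols(trust, [ease])[0]                # ease -> trust
c_prime, b = ols(intent, [ease, trust])  # direct effect and trust -> intent
indirect = a * b                         # mediated (indirect) effect

print(f"a={a:.2f}, b={b:.2f}, direct c'={c_prime:.2f}, indirect a*b={indirect:.2f}")
```

A large indirect effect (a·b) alongside a small direct effect (c′) is what "full mediation" looks like in this framework; partial mediation leaves both paths substantial. Real studies like this one use SEM with significance tests rather than this bare regression sketch.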
Research Evidence
Aim: To understand the key factors influencing the intention to adopt generative AI tools like ChatGPT in higher education, specifically examining the roles of perceived ease of use, perceived intelligence, perceived usefulness, trust, and risk.
Method: Quantitative cross-sectional study using structural equation modeling (SEM).
Procedure: Data was collected from participants regarding their perceptions of ease of use, intelligence, usefulness, trust, and risk associated with ChatGPT, and their intention to adopt it. SEM was used to analyze the relationships between these variables.
Sample Size: 435 participants
Context: Higher education sector, focusing on the adoption of generative AI tools.
Design Principle
For novel technology adoption, emphasize intuitive interaction and demonstrable intelligence, supported by robust trust-building and risk-mitigation strategies.
How to Apply
When designing or introducing a new AI-powered tool, conduct user testing focused on ease of use and perceived intelligence. Develop clear communication plans that highlight AI capabilities and address potential user concerns about data privacy, accuracy, and ethical implications.
Limitations
The study is cross-sectional, meaning it captures a snapshot in time and cannot establish causality. Demographic differences were explored, but further research could delve deeper into specific user segments.
Student Guide (IB Design Technology)
Simple Explanation: People are more likely to try new AI tools if they seem easy to use and smart, rather than just useful. Whether they trust the tool and feel it's not risky is also very important.
Why This Matters: Understanding what makes users adopt new technologies is key to designing successful products. This research shows that for AI, ease of use and perceived intelligence are critical drivers, alongside trust and risk.
Critical Thinking: How might the perceived 'intelligence' of an AI tool be subjective, and how can designers ensure this perception aligns with actual functionality?
IA-Ready Paragraph: This research indicates that for generative AI adoption, perceived intelligence and ease of use are more significant drivers than perceived usefulness alone. Trust and risk perceptions act as crucial mediators, influencing whether users intend to adopt the technology. Therefore, design efforts should prioritize intuitive interfaces and demonstrable AI capabilities, while actively addressing user concerns to foster trust and mitigate perceived risks.
Project Tips
- When researching user adoption of a new technology, consider factors beyond just 'usefulness'.
- Investigate how users' feelings of trust and their perception of risk influence their willingness to use a product.
How to Use in IA
- Use the findings to inform your user research by asking about perceived ease of use, intelligence, trust, and risk when exploring adoption of a new technology.
- If you are developing a prototype, focus on making it intuitive and clearly demonstrating its intelligent features.
Examiner Tips
- Ensure your research clearly distinguishes between perceived usefulness and perceived intelligence.
- Consider how you will measure and analyze the mediating roles of trust and risk in your own design project.
Independent Variable: ["Perceived Ease of Use (PE)","Perceived Intelligence (PI)","Perceived Usefulness (PUSE)"]
Dependent Variable: ["ChatGPT Adoption Intention (CGPTAI)"]
Mediating Variables: ["Perceived Trust (PT)","Perceived Risk (PR)"]
Strengths
- Uses a robust statistical method (SEM) to analyze complex relationships.
- Extends a well-established model (TAM) to a novel technological context.
Critical Questions
- To what extent do the findings on perceived intelligence generalize to AI tools that are not language-based?
- How can designers proactively build trust and mitigate risk in AI systems, especially when the technology is rapidly evolving?
Extended Essay Application
- Investigate the adoption intentions of a specific AI tool within a chosen user group, using a survey that measures perceived ease of use, intelligence, usefulness, trust, and risk.
- Analyze the data using statistical methods to identify the most influential factors and mediating roles.
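As a starting point for the survey analysis above, Likert items can be averaged into construct scores and correlated with adoption intention. The sketch below uses fabricated data and item counts purely for illustration; a real Extended Essay would use actual responses and stronger methods (reliability checks, regression, or SEM).

```python
import numpy as np

rng = np.random.default_rng(7)
n = 60  # hypothetical number of respondents

# Simulated 5-point Likert responses: three items per construct.
ease_items = rng.integers(1, 6, size=(n, 3))
intent_items = rng.integers(1, 6, size=(n, 3))

# Construct score = mean of that construct's items.
ease = ease_items.mean(axis=1)
intent = intent_items.mean(axis=1)

# Pearson correlation between the two construct scores.
r = np.corrcoef(ease, intent)[0, 1]
print(f"r(ease, intent) = {r:.2f}")
```

With real data, repeating this for each construct (ease of use, intelligence, usefulness, trust, risk) shows which factors are most strongly associated with adoption intention before any mediation analysis.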
Source
Determinants of ChatGPT Adoption Intention in Higher Education: Expanding on TAM with the Mediating Roles of Trust and Risk · Information · 2025 · 10.3390/info16020082