A System Usability Scale (SUS) score of 64.76 indicates a need for interface refinement in research submission platforms.
Category: User-Centred Design · Effect: Moderate · Year: 2025
A usability score of 64.76 suggests that while a research submission system is functional, it requires targeted improvements to enhance user experience and satisfaction.
Design Takeaway
Designers should prioritize a holistic approach to usability, combining objective performance metrics with subjective user feedback and expert heuristic reviews to identify and address all facets of user experience.
Why It Matters
Understanding user interaction with complex digital systems is crucial for effective design. Low usability scores can lead to user frustration, decreased efficiency, and potential errors, impacting the overall adoption and success of the system.
Key Finding
While users were largely successful in completing tasks, the system's overall usability score suggests room for improvement in areas beyond basic task completion, likely related to ease of use and user satisfaction. Expert review and user feedback highlighted specific areas for interface refinement.
Key Findings
- The System Usability Scale (SUS) score was 64.76, indicating the system is not yet fully acceptable and requires improvements.
- Effectiveness was high, with success rates of 100% for expert users and 97.9% for novice users, exceeding the 80% benchmark.
- Efficiency was moderate: expert users averaged approximately 2.35 minutes per task, while novice users averaged approximately 2 minutes per task.
- Concurrent Think Aloud and Heuristic Evaluation uncovered the root causes of the issues surfaced by the SUS and efficiency metrics.
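The "not yet fully acceptable" judgment follows from comparing the mean SUS score against commonly cited acceptability bands (Bangor et al.'s ranges, with 68 as the oft-quoted industry average). A minimal sketch of that interpretation step, using those published bands rather than anything specific to this study:

```python
def interpret_sus(score: float) -> str:
    """Map a mean SUS score (0-100) to the commonly cited acceptability
    bands: below 50 not acceptable, 50-70 marginal, above 70 acceptable."""
    if score < 50:
        return "not acceptable"
    if score <= 70:
        return "marginal"
    return "acceptable"

# The study's reported mean of 64.76 falls in the marginal band,
# below the oft-quoted average of 68.
print(interpret_sus(64.76))
```

Because 64.76 sits in the marginal band, the system works but cannot yet be called comfortably usable, which is exactly the gap the qualitative methods were used to explain.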
Research Evidence
Aim: To evaluate the usability of a research ethics submission system and provide actionable recommendations for improvement.
Method: Mixed-methods usability evaluation
Procedure: The study employed a combination of usability testing techniques, including Concurrent Think Aloud (CTA), Heuristic Evaluation (HE), and the System Usability Scale (SUS). Participants performed tasks within the system, verbalizing their thoughts (CTA), while experts assessed the interface against established heuristics (HE). User satisfaction was measured using the SUS questionnaire.
Sample Size: 32 participants
Context: Academic research submission systems
Design Principle
Usability is a multi-faceted quality that encompasses effectiveness, efficiency, and user satisfaction, requiring a blend of quantitative measurement and qualitative insight for thorough evaluation.
How to Apply
When designing or evaluating digital interfaces, use a combination of SUS questionnaires, task-based performance tracking (success rate and time), and expert heuristic reviews to gain a complete picture of usability.
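For the SUS part of that toolkit, the standard published scoring procedure can be sketched as follows. The response data below is hypothetical; the scoring rules are the standard SUS method: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the summed contributions are multiplied by 2.5 to yield a 0–100 score.

```python
def sus_score(responses):
    """Compute one participant's SUS score from ten 1-5 Likert responses.

    Odd-numbered items (1-indexed) are positively worded: contribution = r - 1.
    Even-numbered items are negatively worded: contribution = 5 - r.
    The summed contributions (0-40) are scaled by 2.5 to a 0-100 score.
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical questionnaire data for three participants (not study data).
participants = [
    [4, 2, 4, 2, 3, 2, 4, 2, 3, 3],
    [3, 3, 4, 2, 4, 3, 3, 2, 4, 2],
    [4, 2, 3, 3, 3, 2, 4, 1, 4, 2],
]
scores = [sus_score(p) for p in participants]
mean_score = sum(scores) / len(scores)
print(round(mean_score, 2))  # → 67.5
```

Averaging individual scores across all participants, as done here, is how a study-level figure such as 64.76 is produced.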
Limitations
The study focused on a specific user group (undergraduate (S1) students and lecturers in the Faculty of Medicine) and a single system, limiting generalizability to other contexts or user demographics.
Student Guide (IB Design Technology)
Simple Explanation: Even if people can use a system to get their work done, it might not be the easiest or most pleasant experience. This study shows how to check for those problems and make the system better.
Why This Matters: Understanding usability helps you design products that are not only functional but also enjoyable and efficient for users, leading to greater adoption and success.
Critical Thinking: How might the specific domain of academic research submission influence the interpretation of usability scores compared to a general consumer application?
IA-Ready Paragraph: This research evaluated the usability of a digital system using a mixed-methods approach, combining quantitative measures like the System Usability Scale (SUS) with qualitative feedback from Concurrent Think Aloud and expert Heuristic Evaluation. The findings revealed a moderate usability score, indicating that while tasks were largely achievable, opportunities for enhancing user efficiency and satisfaction exist, necessitating targeted design refinements.
Project Tips
- Clearly define the tasks users will perform during usability testing.
- Ensure your heuristic evaluation criteria are well-defined and consistently applied.
- Triangulate findings from different methods to strengthen your conclusions.
How to Use in IA
- Use the System Usability Scale (SUS) as a quantitative measure of user satisfaction in your design project.
- Incorporate heuristic evaluation principles to identify potential usability flaws in your design prototypes.
Examiner Tips
- Ensure that the chosen usability testing methods align with the research aims.
- Demonstrate a clear link between identified usability issues and proposed design solutions.
Independent Variable: Usability testing methods (CTA, HE, SUS), user expertise (expert vs. novice)
Dependent Variable: Effectiveness (success rate), efficiency (time per task), user satisfaction (SUS score)
Controlled Variables: System interface, task complexity, participant demographics (implicitly controlled by sampling)
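The dependent measures above can be computed directly from raw task logs. A minimal sketch with hypothetical records (the field names `success` and `minutes` are illustrative, not taken from the study):

```python
# Hypothetical per-task log records; field names are illustrative only.
task_logs = [
    {"user": "novice-01", "success": True,  "minutes": 2.1},
    {"user": "novice-01", "success": True,  "minutes": 1.8},
    {"user": "expert-01", "success": True,  "minutes": 2.4},
    {"user": "expert-01", "success": False, "minutes": 3.0},
]

# Effectiveness: proportion of tasks completed successfully.
success_rate = sum(t["success"] for t in task_logs) / len(task_logs)
# Efficiency: mean completion time per task.
mean_minutes = sum(t["minutes"] for t in task_logs) / len(task_logs)

print(f"success rate: {success_rate:.1%}")      # compare against the 80% benchmark
print(f"mean time per task: {mean_minutes:.2f} min")
```

Splitting the same computation by user group (expert vs. novice) yields the per-group figures reported in the findings, such as the 97.9% novice success rate against the 80% benchmark.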
Strengths
- Utilized a robust combination of established usability evaluation methods.
- Provided specific, actionable recommendations for system improvement.
Critical Questions
- Were the tasks chosen representative of real-world user interactions with the system?
- How might cultural or linguistic differences among users impact the interpretation of the 'think aloud' data?
Extended Essay Application
- Investigate the impact of different interface design paradigms (e.g., minimalist vs. feature-rich) on the usability of complex data visualization tools for scientific research.
Source
Evaluasi Usability Sistem Informasi Pengajuan Etik Penelitian (Sietik) Undiksha Berdasarkan Metode Usability Testing Menggunakan Teknik Concurrent Think Aloud, Heuristic Evaluation Dan System Usability Scale [Usability Evaluation of the Undiksha Research Ethics Submission Information System (Sietik) Based on Usability Testing Using Concurrent Think Aloud, Heuristic Evaluation, and System Usability Scale Techniques] · INSERT · 2025 · 10.23887/insert.v6i2.102446