Potential of explanations in enhancing trust – What can we learn from autonomous vehicles to foster the development of trustworthy autonomous vessels?

Rohit Ranjan, Ketki Kulkarni, Mashrura Musharraf*

*Corresponding author for this work

Research output: Contribution to journal › Review article › peer-review

Abstract

The development of autonomous vessels presents a complex socio-technical challenge in which AI and humans must coexist and cooperate. A crucial aspect of successfully deploying these systems is ensuring trust in the AI-powered autonomy. Our research explores the potential of explanations to enhance trust and its correlated metrics (such as preference, understanding, and anxiety) in autonomous vessels. While the investigation of explainability and its role in increasing end-user trust is still at an elementary level for autonomous vessels, it has already been identified as a key requirement for the successful adoption of self-driving cars and highly automated vehicles in general. We conducted a systematic literature review to investigate how the impact of explainability on trust and its correlated metrics has been studied in the domain of autonomous vehicles. We examined the diverse experimental setups employed to assess trust-building, exploring instruments, explanation modes, types, timings, and additional human factors influencing trust. The study scrutinizes prevalent data collection methods and commonly used questionnaires for measuring trust levels following explanations, and examines the characteristics and theories integral to effective explanations for trust development. Review results indicate that explanations generally have a positive impact on trust and the correlated metric preference, although this impact is not statistically significant in all cases. The effect of explanations on the correlated metric understanding was found to be statistically significant in all cases. For the correlated metric anxiety, a decrease was observed in the presence of explanations in most cases, even though this decrease was not always statistically significant. This study discusses how lessons learned from autonomous vehicles can be applied in the context of autonomous vessels, with the aim of fostering the development of trustworthy autonomous vessels.

Original language: English
Article number: 120753
Journal: Ocean Engineering (peer-reviewed scientific journal)
Volume: 325
ISSN: 0029-8018
DOIs
Publication status: Published - 01.03.2025
MoE publication type: A2 Review article in a scientific journal

Keywords

  • 214 Mechanical engineering
  • 1171 Geosciences
  • Autonomous vessels
  • Explainable AI
  • Systematic literature review
  • Trustworthiness
