Abstract
Government leaders across the globe are grappling with how to harness and integrate artificial intelligence (AI) to enhance public service delivery and efficiency. Yet a key challenge is how to build and maintain stakeholder trust. Trust is critical for the acceptance and sustained adoption of AI technologies, as well as for gaining the requisite funding, resourcing and authorization to implement AI solutions. However, inherent features of AI—its autonomous capabilities, dynamic learning, and inscrutable operating logic—create challenges for trust, particularly in public services that are subject to high expectations of accountability, transparency, and fairness. We present an in-depth case analysis of how an Australian government department deployed a solution that was widely accepted and identified as an exemplar of trustworthy AI use. We identify six trust-supporting approaches: benevolent customer-centricity, radical honesty, diverse input, rigorous development and testing, human discretion in decision-making, and aligning the authorising environment. For each approach, we explain how and why it supports trust, and then contrast it with a prominent but widely distrusted application in the Australian government. We conclude with implications for public sector leaders seeking to engender trust in their use of AI.
| Original language | English |
|---|---|
| Peer-reviewed scientific journal | Information Systems Journal |
| ISSN | 1350-1917 |
| DOI | |
| Status | Published - 10.03.2026 |
| MoE publication type | A1 Original article in a scientific journal |
Keywords
- 512 Business economics
- 113 Computer and information sciences
Title: How to Demonstrate Trustworthy Use of AI in Public Services: A Case Study