Abstract
In this chapter, we discuss how decisions made by artificially intelligent (AI) systems change our norms and practices, and thereby our morality. AI systems are often seen as useful tools for decision-making and other tasks that rely on processing big data; a commonly held belief is that AI systems can reduce human error and bias in decision-making. However, we argue that even when the output of AI systems is error-free and unbiased, their use may still affect and transform human morality, because there is a recursive relationship between morality and decision-making. We develop this argument in four sections: first, we explain the link between morality and AI decision-making; second, we review sources of error and bias in AI systems; third, we propose anthropomorphizing as a mechanism leading to overreliance on AI; and we conclude with insights into how overreliance on AI may affect morality.
| Original language | English |
|---|---|
| Title of host publication | Elgar Business Ethics Encyclopaedia |
| Editors | Kleio Akrivou, César González-Cantón |
| Publisher | Edward Elgar |
| Publication status | Accepted/In press - 2026 |
| MoE publication type | A3 Book chapter |
Keywords
- 512 Business and Management
- anthropomorphizing
- bias
- decision-making
- overreliance
- error
- judgment