Article

Truth machines: synthesizing veracity in AI language models

Details

Citation

Munn L, Magee L & Arora V (2023) Truth machines: synthesizing veracity in AI language models. AI & SOCIETY. https://doi.org/10.1007/s00146-023-01756-4

Abstract
As AI technologies are rolled out into healthcare, academia, human resources, law, and a multitude of other domains, they become de facto arbiters of truth. But truth is highly contested, with many different definitions and approaches. This article discusses the struggle for truth in AI systems and the general responses to date. It then investigates the production of truth in InstructGPT, a large language model, highlighting how data harvesting, model architectures, and social feedback mechanisms weave together disparate understandings of veracity. It conceptualizes this performance as an operationalization of truth, where distinct, often-conflicting claims are smoothly synthesized and confidently presented as truth-statements. We argue that these same logics and inconsistencies play out in Instruct's successor, ChatGPT, reiterating truth as a non-trivial problem. We suggest that enriching sociality and thickening "reality" are two promising vectors for enhancing the truth-evaluating capacities of future language models. We conclude, however, by stepping back to consider AI truth-telling as a social practice: what kind of "truth" do we as listeners desire?

Keywords
Truthfulness; Veracity; AI; Large language model; GPT-3; InstructGPT; ChatGPT

Journal
AI & SOCIETY

Status: In Press
Publication date online: 31/08/2023
Date accepted by journal: 14/08/2023
URL: http://hdl.handle.net/1893/35437
Publisher: Springer Science and Business Media LLC
ISSN: 0951-5666
eISSN: 1435-5655

People (1)

Dr Vanicka Arora

Lecturer in Heritage, History