
ChatGPT frequently produces false information: output that appears plausible but is not factual. Such output is known as a ‘hallucination’. The reason is that large language models (LLMs) are trained to predict strings of words rather than to serve as a repository of ‘facts’. Crucially, an AI does not “know” whether its output is true. Nevertheless, AI tools are increasingly used to provide “information” in professional and private settings. Why are we inclined to rely on such an unreliable source?
In this course, we explore this question from a linguistic angle. We compare the logic and architecture behind LLMs (which underlie AI tools) with the logic and architecture behind human cognition (including the capacity for language). At the root of our “trust” in AI tools is their apparently flawless language output, which invites anthropomorphization and, in turn, leads users to expect these tools to follow the same conversational principles that humans do.
Specifically, we examine several aspects of human language that contribute to our inclination to take AI-generated output at face value:
i) meaning in human language is grounded in truth conditions;
ii) humans mark uncertainty with linguistic means that are conspicuously absent from AI-generated text;
iii) human communication is governed by the cooperative principle, according to which we assume that our interlocutors are reliable.
- Teacher: MARTINA ELISABETH WILTSCHKO