
Is Generative AI Currently Being Misused?

Author: Gerd Doeben-Henisch

Changelog: November 6, 2025 – November 6, 2025

Email: info@uffmm.org

CONTENT TREE

This text is part of the TOPIC Philosophy of Science.



A Growing Awareness

Since generative AI technology began spreading in November 2022, its use, mainly through chatbots that offer a dialogue interface to a frozen language model, has encouraged millions of people to obtain “answers” from these machines, often in the belief that such answers are superior to those produced by humans alone.

In all market-driven industries, where competition and cost efficiency are crucial, a new hope quickly emerged: that this technology could deliver the same or even better services with fewer and fewer people involved.

It did not take long, however, for users to notice a strange phenomenon, the so-called hallucinations: the generated answers were flawed, unusable, or even dangerous, something that should not happen at all.

Attempts to “correct” these hallucinations soon revealed the dark side of the promise: more and more time and skilled human labor were required to replace hallucinated answers with correct ones. This increase in effort and cost often offsets any financial gains from using the technology in the first place.

Hallucinations as a Built-in Feature

In humans, hallucinations are usually described as perceptions occurring in the absence of an external stimulus, yet accompanied by a compelling sense of reality [1]. In the case of generative-AI-based chatbots (gen-Chat-Bots), strictly speaking, all of their answers are hallucinations. This follows directly from the architecture of such systems.

Generative chatbots have a dual structure:

  1. The language model.
    At their foundation lies a frozen language model, extracted from a massive collection of texts in a given human language. This model consists of an enormous number of language elements derived from linguistic expressions. These elements are embedded in a statistical dynamic that reflects the non-random order of human language use. Humans do not produce expressions arbitrarily; they follow patterns linked to the meaning structures within their brains. Thus, certain expressions occur only when they are consistent with the internal organization and dynamics of meaning.
  2. The dialogue component.
    The chatbot’s dialogue system takes a user’s input and generates a possible sequence of language elements using the statistical dynamics of the frozen model. The output typically resembles what a human might say in a similar situation. This input-output process can be repeated indefinitely (see the minimal sketch below).
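
To make this dual structure more concrete, here is a deliberately minimal sketch in Python. It is not the code of any real chatbot; the tiny corpus, the bigram statistics, and the names (frozen_model, reply) are purely hypothetical illustrations. Part (1) freezes simple word-transition statistics extracted from a text collection; part (2) samples from those statistics to produce fluent-looking output, without ever checking whether that output is true of the world.

```python
# Toy sketch of the dual structure described above (not any real chatbot's code).
import random
from collections import defaultdict

# --- (1) The frozen language model: word-transition statistics from a text collection ---
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

bigram_counts = defaultdict(lambda: defaultdict(int))
for current_word, next_word in zip(corpus, corpus[1:]):
    bigram_counts[current_word][next_word] += 1   # the non-random order of language use

# Once extracted, the statistics are frozen: they never change during a dialogue.
frozen_model = {word: dict(followers) for word, followers in bigram_counts.items()}

# --- (2) The dialogue component: continue the user's input along the frozen statistics ---
def reply(user_input: str, max_words: int = 8) -> str:
    """Sample a sequence of words using only the frozen statistics.
    Nothing here checks whether the output is true of the real world."""
    words = user_input.lower().split()
    word = words[-1] if words and words[-1] in frozen_model else "the"
    output = []
    for _ in range(max_words):
        followers = frozen_model.get(word)
        if not followers:
            break
        candidates, weights = zip(*followers.items())
        word = random.choices(candidates, weights=weights)[0]  # plausible, not verified
        output.append(word)
    return " ".join(output)

print(reply("Tell me about the"))   # e.g. "cat sat on the rug ." -- fluent but ungrounded
```

However far real systems scale beyond this toy, the structural point remains the same: the dialogue component can only recombine what the frozen statistics make probable.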

In simple cases, the cumulative outputs of a gen-Chat-Bot may look nice or human-like. Yet if the human user needs these outputs for practical use in the real world, then “nice language” is not enough. The user needs expressions that are seriously connected to meaning—and for practical purposes, meaning must be validated against the real-world domain in which the user operates.

At this critical point of real-world usage, the fundamentally hypothetical nature of a chatbot’s language production becomes visible. Because its architecture is completely void of meaning, the system cannot determine whether its linguistic output corresponds in any way to reality. In this sense, it is always dreaming. Dreaming is its only mode of “thinking.”

Conclusions

At OpenAI, the phenomenon of hallucinations is well known [2]. Reading their publications might give the impression that hallucinations could somehow be fixed. But this is not the case.

In a detailed analysis of OpenAI’s own findings, Gyana Swain [3] reports that hallucinations are mathematically inevitable, not merely engineering flaws. This confirms the preceding argument: hallucinations stem from the radical absence of meaning in such systems.
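
The formal argument belongs to the cited paper and is not reproduced here. The following toy simulation, with entirely made-up numbers and names, only illustrates one underlying intuition in a heavily simplified setting: when the training data mentions a fact rarely or not at all, a purely statistical model cannot distinguish the correct answer from equally plausible alternatives, yet it is still expected to answer.

```python
# Toy simulation (NOT the proof from the cited paper) of why some wrong answers
# are unavoidable for a purely statistical answering machine.
import random

random.seed(0)

DAYS = list(range(1, 366))     # possible "birthdays" a question might ask about
NUM_PEOPLE = 10_000            # hypothetical entities, each with exactly one correct fact

# Ground truth in the real world: one correct answer per person.
truth = {person: random.choice(DAYS) for person in range(NUM_PEOPLE)}

# Training-data coverage: some facts appear often, some exactly once, some never.
seen_counts = {person: random.choice([0, 1, 1, 5]) for person in range(NUM_PEOPLE)}

# A caricature of a statistical model: it reliably reproduces well-attested facts,
# but for poorly attested ones it can only emit a plausible-looking guess.
def model_answer(person: int) -> int:
    if seen_counts[person] >= 2:
        return truth[person]       # the pattern is strong enough to be reproduced
    return random.choice(DAYS)     # fluent guess, not grounded in the world

wrong = sum(model_answer(p) != truth[p] for p in range(NUM_PEOPLE))
poorly_attested = sum(c < 2 for c in seen_counts.values())

print(f"wrong answers:               {wrong} of {NUM_PEOPLE}")
print(f"facts seen fewer than twice: {poorly_attested} of {NUM_PEOPLE}")
# The number of wrong answers tracks the share of poorly attested facts: no amount
# of linguistic fluency can remove it, because the information is not in the statistics.
```

In this caricature, the error rate is fixed by the gaps in the training data, not by any defect in the sampling procedure, which is the intuition behind calling hallucinations inevitable rather than an engineering flaw.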

Humans, by contrast, can correlate empirical experiences with the symbolic space of language within their bodies. Even so, humans can never achieve a perfectly precise correspondence between linguistic expressions, their hypothetical meanings, and the observable real world—especially when communicating with each other.

The final consequence of these considerations is not necessarily that the whole gen-Chat-Bot approach is wrong. In many ways, it is ingenious. But the assumption that this technology could serve as an ideal tool for solving every kind of real-world problem with fewer and fewer humans is likely to lead us toward a serious crash.

Comments

[1] For a more detailed introduction, see the Wikipedia article “Hallucination”.
[2] OpenAI, “Why language models hallucinate,” September 5, 2025 (blog post with accompanying research paper).
[3] Gyana Swain, “OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws,” Computerworld, September 18, 2025.