The Invasion of the Storytellers

Author: Gerd Doeben-Henisch

Changelog: April 30, 2024 – May 3, 2024

May 3, 2024: I added two Epilogs.

Email: info@uffmm.org

TRANSLATION: The following text is a translation from a German version into English. For the translation I am using the software @chatGPT4 with manual modifications.

CONTEXT

Originally I wrote that “this text is not a direct continuation of another text, but various earlier articles by the author exist on similar topics. In this sense, the current text is a kind of ‘further development’ of these ideas”. But, indeed, at least the text “NARRATIVES RULE THE WORLD. CURSE & BLESSING. COMMENTS FROM @CHATGPT4” ( https://www.uffmm.org/2024/02/03/narratives-rule-the-world-curse-blessing-comments-from-chatgpt4/ ) can be understood as a kind of precursor.

In everyday life … magical links …

Almost everyone knows someone, or even several people, who send many emails or other messages that contain only links: links to various videos, of which the internet now provides plenty, or images with a few keywords.

Since time is often short, one would like to know whether it is worth clicking on such a video. But explanatory information is missing.

When asked whether it would not be possible to include a few explanatory words, the sender almost always replies that they cannot put it as well as the video itself.

Interesting: Someone sends a link to a video without being able to express their opinion about it in their own words…

Follow-up questions…

When I click on a link and try to form an opinion, one of the first questions is naturally who published the video (or text). The same set of facts can be narrated quite differently, even in complete contradiction, depending on the observer’s perspective, as everyday life shows. And since what we can perceive with our senses is always very fragmentary, stays at the surface, and is bound to a particular moment, it does not by itself reveal the relationships to other aspects. This vagueness offers plenty of room for interpretation with each observation. Without a thorough consideration of the context and the backstory, interpretation is simply not possible … unless someone already has a ‘finished opinion’ that ‘integrates’ the ‘involuntary fragment of observation’ without hesitation.

So questioning and researching are quite ‘normal’, but our ‘quick brain’ first seeks ‘automatic answers’: this requires little thought, is faster, costs less energy, and still provides a ‘satisfying feeling’, namely that one ‘knows exactly what is presented’. So why question?

Immunizing…

As a scientist, I am trained to clarify all framework conditions, including my own assumptions. Of course, this takes effort and time and is anything but error-free. Hence multiple checks, inquiries with others about their perspectives, and the like are common practice.

However, when I ask the ‘wordless senders of links’ about something that catches my attention, especially when I point out a conflict with the reality I know, the reactions tend in the direction that I have misunderstood or that the author did not mean it that way at all. If I then refer to other sources that are considered ‘strongly verified’, these are labeled ‘lying press’ or their authors are immediately exposed as ‘agents of a dark power’ (there is a whole range of such ‘dark powers’); and if I dare to ask here as well where the information comes from, then I quickly become a naive, stupid person for not knowing all this.

So any attempt to clarify the basis of statements, to trace them back to comprehensible facts, ends in some kind of conflict long before any clarification has been achieved.

Truth, Farewell…

By now, even in philosophy, the topic of ‘truth’ has unfortunately become no more than a repository of competing proposals. And the modern sciences, fundamentally empirical, increasingly entangle themselves in the multitude of their disciplines and methods, so that ‘integrative perspectives’ are rare and the ‘average citizen’ struggles to follow. Not a good starting point for effectively preventing the spread of the ‘cognitive fairy tale virus’.

Democracy and the Internet as a Booster

The bizarre aspect of our current situation is that precisely the two most significant achievements of humanity, the societal form of ‘modern democracy’ (around 250 years old, in a history of about 300,000 years) and the technology of the ‘internet’ (browser-based since about 1993), which for the first time made a maximum of freedom and diversity of expression possible, have now created the conditions under which the cognitive fairy tale virus can spread so unrestrainedly.

Important: today’s cognitive fairy tale virus occurs in the context of ‘freedom’! In previous millennia the cognitive fairy tale virus already existed, but it was under the control of the respective authoritarian rulers, who used it to steer the thoughts and feelings of their subjects in their favor. The ‘ambiguity’ of meanings has always allowed almost any interpretation; and if a previous fairy tale wasn’t enough, a new one was quickly invented. As long as control by reality is not really possible, anything can be told.

With the emergence of democracy, the authoritarian power structures disappeared, but the people who were allowed and supposed to vote were ultimately the same as before in authoritarian regimes. Who really has the time and desire to deal with the complicated questions of the real world, especially if it doesn’t directly affect oneself? That’s what our elected representatives are supposed to do…

In the (seemingly) quiet years since World War II, this division of tasks seemed to work well: here the citizens, delegating everything; there the elected representatives, who do everything right. ‘Control’ of power was supposed to be guaranteed through the constitution, the judiciary, and a functioning public…

But what was not foreseen were such trifles as:

  1. The increase in population and the advancement of technologies induced ever more complex processes with equally complex interactions that could no longer be adequately managed with the usual methods from the past. Errors and conflicts were inevitable.
  2. Delegating to a few elected representatives with ‘normal abilities’ can only work if these few representatives operate within contexts that provide them with all the necessary competencies their office requires. This task seems to be increasingly poorly addressed.
  3. The important ‘functioning public’ has been increasingly fragmented by the tremendous possibilities of the internet: there is no longer ‘the’ public, but many publics. This is not inherently bad, but when the available channels attract the ‘quick and convenient brain’ like light attracts mosquitoes, more and more heads fall prey to ‘cognitive viruses’ that, after only short ‘incubation periods’, take possession of a head and control it from there.

The effects of these three factors have been clearly observable for several years now: the unresolved problems of society, which the existing democratic-political system addresses increasingly poorly, lead individual people in everyday situations to interpret their dissatisfaction and fears more and more exclusively under the influence of the cognitive fairy tale virus, and to act accordingly. This gradually worsens the situation, as the constructive capacities for problem analysis and the collective strength for problem-solving diminish more and more.

No remedies available?

Looking back over the thousands of years of human history, it is evident that ‘opinions’ and ‘views of the world’ have always harmonized with the real world only in limited areas, where survival was at stake. And even in these small areas there were, for millennia, many beliefs that were later found to be ‘wrong’.

Very early on, we humans mastered the art of telling ourselves stories about how everything is connected. These were eagerly listened to, they were believed, and only much later could one sometimes recognize what was entirely or partially wrong about the earlier stories. But in their lifetimes, for those who grew up with these stories, these tales were ‘true’, made ‘sense’, people even went to their deaths for them.

Only at the very end of humanity’s previous development (the life form of Homo sapiens), that is, with 300,000 years mapped onto 24 hours, after about 23 hours and 59 minutes, did humans discover with the empirical sciences a method of obtaining ‘true knowledge’ that not only works for the moment but allows us to look millions, even billions of years ‘back in time’ and, for many factors, billions of years into the future. With this, science can delve into the deepest depths of matter and increasingly understand the complex interplay of all these wonderful factors.

And just at this moment of humanity’s first great triumphs on the planet Earth, the cognitive fairy tale virus breaks out unchecked and threatens even to completely extinguish modern sciences!

Which people on this planet can resist this cognitive fairy tale virus?

Here is a recent message from Uppsala University [1,2], reporting on an experiment by Swedish scientists with students, which showed that it was possible to measurably sharpen students’ awareness of ‘fake news’ (here: the cognitive fairy tale virus).

Yes, we know that young people can, through appropriate education, shape their awareness to be better equipped against the cognitive fairy tale virus. But what happens when official educational institutions cannot provide the necessary education, because either the teachers cannot conduct such knowledge therapy, or the teachers could but the institutions do not allow it? The latter cases are known, even in so-called democracies!

Epilog 1

The following working hypotheses are emerging:

  1. The fairy tale virus, the unrestrained (uncontrolled) inclination to tell stories, is genetically ingrained in humans.
  2. Neither intelligence nor so-called ‘academic education’ automatically protect against it.
  3. ‘Critical thinking’ and ‘empirical science’ are special qualities that people can acquire only with great personal commitment. Minimal conditions for these qualities must exist in a society; without them it is not possible.
  4. Active democracies seem to be able to contain the fairy tale virus to about 15-20% of societal practice (although it is always present in people). As soon as the percentage of active storytellers perceptibly increases, it must be assumed that the concept of ‘democracy’ is increasingly weakening in societal practice — for various reasons.

Epilog 2

Anyone actively affected by the fairy tale virus has a view of the world, of themselves, and of others that has so little to do with the real world ‘out there’, beyond their own thinking, that real events no longer influence their thinking. They live in their own ‘thought bubble’. Those who have learned to think ‘critically and scientifically’ have acquired, and apply, techniques that repeatedly subject the thinking within their own bubble to a ‘reality check’. This check is not limited to specific events or statements… and that is where it gets difficult.

References

[1] Here’s the website of Uppsala University, Sweden, where the researchers come from: https://www.uu.se/en/press/press-releases/2024/2024-04-24-computer-game-in-school-made-students-better-at-detecting-fake-news

[2] And here’s the full scientific article with open access: “Bad News in the civics classroom: How serious gameplay fosters teenagers’ ability to discern misinformation techniques.” Carl-Anton Werner Axelsson, Thomas Nygren, Jon Roozenbeek & Sander van der Linden, Received 26 Sep 2023, Accepted 29 Mar 2024, Published online: 19 Apr 2024: https://doi.org/10.1080/15391523.2024.2338451

chatGPT – How drunk do you have to be …

eJournal: uffmm.org
ISSN 2567-6458, 14.February 2023 – 17.April 2023
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This is a text in the context of ‘Different Findings about chatGPT’ (https://www.uffmm.org/2023/02/23/chatgbt-different-findings/).

Since the release of the chatbot ‘chatGPT’ to the larger public, a kind of ‘earthquake’ has been going through the media, worldwide, in many areas, from individuals to institutions, companies, government agencies… everyone is looking for the ‘chatGPT experience’. These reactions are amazing and frightening at the same time.

Remark: The text of this post represents a later ‘stage’ of my thinking about the usefulness of the chatGPT algorithm, which started with my first reflections in the text entitled “chatGBT about Rationality: Emotions, Mystik, Unconscious, Conscious, …” from 15./16. January 2023. The main text of this version is an English translation from an originally German text, partially generated with the www.DeepL.com/Translator (free version).

FORM

The following lines are only a short note, since it is hardly worthwhile to discuss a ‘surface phenomenon’ so intensively when it is the ‘deep structures’ that need explaining. Somehow the ‘structures behind chatGPT’ seem to interest hardly anybody (I do not mean technical details of the algorithms used).

chatGPT as an object


The chatbot named ‘chatGPT’ is a piece of software, an algorithm, that (i) was invented and programmed by humans. When (ii) people ask it questions, (iii) it searches the collection of documents known to it, which in turn have been created by humans, (iv) for text patterns that are related to the question according to certain formal criteria (partly set by the programmers). These ‘text finds’ are (v) then ‘arranged’ according to certain formal criteria (again partly set by the programmers) into a new text, which (vi) should come close to those text patterns that a human reader is ‘used to’ accepting as ‘meaningful’.
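
As a purely illustrative sketch of steps (iv) to (vi), consider a tiny Markov-chain text generator. This is an assumption for the sake of illustration only, far simpler than the actual GPT architecture: it collects formal patterns from human-written text and rearranges them into new text that merely looks meaningful.

```python
# Minimal sketch (toy Markov chain, NOT the actual chatGPT mechanism):
# (iv) collect formal patterns from human-written text,
# (v)+(vi) rearrange them into new, plausible-looking text.
import random

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat saw the dog .").split()

# (iv) extract a purely formal pattern: which word follows which
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

# (v)+(vi) 'arrange' the finds into a new text; nothing is 'meant' by it
random.seed(1)
word, output = "the", ["the"]
for _ in range(12):
    word = random.choice(follows.get(word, ["."]))
    output.append(word)

print(" ".join(output))  # grammatical-looking, yet produced without meaning
```

The point of the sketch is only this: every step is a formal operation on strings; at no point does anything like ‘meaning’ enter the computation.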

Text surface – text meaning – truthfulness

A normal human being can distinguish, at least ‘intuitively’, between (i) the ‘strings’ used as ‘expressions of a language’ and (ii) those ‘knowledge elements’ (in the mind of the hearer-speaker) which are as such ‘independent’ of the language elements, but which (iii) can be ‘freely associated’ by speakers-hearers of a language, so that the correlated ‘knowledge elements’ become what is usually called the ‘meaning’ of the language elements. [1] Of these knowledge elements (iv), every language participant already ‘knows’ ‘pre-linguistically’, as a learning child [2], that some of them are ‘correlatable’ under certain circumstances with circumstances of the everyday world. And the normal language user also has the ‘intuitive’ (automatic, unconscious) ability to assess such a correlation, in the light of the available knowledge, as (v) ‘possible’, (vi) rather ‘improbable’, or (vii) ‘mere fancifulness’. [3]

The basic ability of a human being to establish a ‘correlation’ of meanings with (intersubjective) environmental facts is called, at least by some philosophers, ‘truth ability’, and in the exercise of this ability one can then also speak of ‘true’ linguistic utterances or of ‘true statements’. [5]

Distinctions like ‘true’, ‘possibly true’, ‘rather not true’ or ‘in no case true’ indicate that the reality reference of human knowledge elements is very diverse and ‘dynamic’. Something that was true a moment ago may not be true the next moment. Something that has long been dismissed as ‘mere fantasy’ may suddenly appear as ‘possible’ or ‘suddenly true’. To move in this ‘dynamically correlated space of meaning’ in such a way that a certain ‘inner and outer consistency’ is preserved, is a complex challenge, which has not yet been fully understood by philosophy and the sciences, let alone even approximately ‘explained’.

The fact is: we humans can do this to a certain extent. Of course, the more complex the knowledge space is, the more diverse the linguistic interactions with other people become, the more difficult it becomes to completely understand all aspects of a linguistic statement in a situation.

‘Air act’ chatGPT

Comparing the chatbot chatGPT with these ‘basic characteristics’ of humans, one can see that chatGPT can do none of these things. (i) It cannot meaningfully ask questions on its own, since there is no reason why it should ask (unless someone induces it to). (ii) Text documents (of people) are for it mere sets of expressions, to which it has no independent assignment of meaning. It could therefore never independently ask or answer the ‘truth question’ with all its dynamic shades. It takes everything at ‘face value’, or one says right away that it is ‘only dreaming’.

If chatGPT, because of its large text database, has a subset of expressions that are somehow classified as ‘true’, then the algorithm can ‘in principle’ indirectly determine ‘probabilities’ that other sets of expressions, not themselves classified as ‘true’, ‘with some probability’ appear to be ‘true’. Whether the current chatGPT algorithm uses such ‘probable truths’ explicitly is unclear. In principle, it translates texts into ‘vector spaces’ that are ‘mapped into each other’ in various ways, and parts of these vector spaces are then output again in the form of a ‘text’. The concept of ‘truth’ does not appear in these mathematical operations, to my current knowledge. If it did, it would only be the formal logical concept of truth [4]; but this lies ‘above’ the vector spaces and forms a ‘meta-concept’ with respect to them. If one wanted to actually apply it to the vector spaces and the operations on them, one would have to completely rewrite the code of chatGPT. If one did this (but nobody will be able to), the code of chatGPT would have the status of a formal theory (as in mathematics) (see remark [5]). Even then, chatGPT would still be miles away from an empirical capability for truth.
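
To make the vector-space point concrete, here is a minimal sketch, assuming a crude bag-of-words representation as a stand-in for the far richer learned embeddings of real systems: texts become vectors, and everything that follows is geometry. Nowhere in these operations does a concept of ‘truth’ occur.

```python
# Minimal sketch (bag-of-words stand-in, NOT chatGPT's learned embeddings):
# texts are mapped to vectors; similarity is pure geometry, not truth.
from collections import Counter
import math

def bag_of_words(text):
    """Map a text to a vector of word counts."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Geometric closeness of two texts in the vector space."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

true_claim = "water boils at 100 degrees celsius at sea level"
false_claim = "water boils at 50 degrees celsius at sea level"

# The false claim is geometrically almost identical to the true one:
# closeness in vector space says nothing about empirical truth.
print(cosine_similarity(bag_of_words(true_claim), bag_of_words(false_claim)))
```

The similarity comes out close to 1.0 although one statement is empirically false, which is exactly the gap between formal operations on expressions and a capability for truth.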

Hybrid illusory truths

In the use case where the algorithm named ‘chatGPT’ works with sets of expressions similar to the texts that humans produce and read, chatGPT navigates purely formally and with probabilities through the space of formal expression elements. However, a human who ‘reads’ the expression sets produced by chatGPT automatically (= unconsciously!) activates his or her ‘linguistic knowledge of meaning’ and projects it into these abstract expression sets. As one can observe (and hears and reads from others), the abstract expression sets produced by chatGPT are, purely formally, so similar to the usual texts of humans that a human can seemingly effortlessly correlate his or her knowledge of meaning with them. As a consequence, the receiving (reading, listening) human has the ‘feeling’ that chatGPT produces ‘meaningful texts’: in the ‘projection’ of the reading/listening human, YES; in the production of chatGPT, NO. chatGPT has only formal expression sets (coded as vector spaces), with which it calculates ‘blindly’. It does not possess ‘meanings’ in the human sense even rudimentarily.

Back to the Human?

(Last change: 27.February 2023)

How easily people are impressed by a ‘fake machine’, to the point of apparently forgetting themselves in the face of the machine by feeling ‘stupid’ and ‘inefficient’, although the machine only makes ‘correlations’ between human questions and human knowledge documents in a purely formal way, is actually frightening [6a,b], [7], at least in a double sense: (i) Instead of better recognizing (and using) one’s own potentials, one stares spellbound like the famous ‘rabbit at the snake’, although the machine is still a ‘product of the human mind’. (ii) This ‘cognitive deception’ makes us miss the chance to better understand the actually immense potential of ‘collective human intelligence’, which could be advanced by at least one evolutionary level by incorporating modern technologies. The challenge of the hour is ‘Collective Human-Machine Intelligence’ in the context of sustainable development, with priority given to human collective intelligence. The current so-called ‘artificial (= machine) intelligence’ is realized so far only by rather primitive algorithms. Integrated into a developed ‘collective human intelligence’, quite different forms of ‘intelligence’ could be realized, ones we currently can at most only dream of.

Commenting on other articles from other authors about chatGPT

(Last change: 14.April 2023)

[7], [8], [9], [11], [12], [13], [14]

Comments

(Last change: 3.April 2023)

wkp-en: en.wikipedia.org

[1] In the many thousands of ‘natural languages’ of this world one can observe how ‘experiential environmental facts’ can become ‘knowledge elements’ via ‘perception’, which are then correlated with different expressions in each language. Linguists (and semioticians) therefore speak here of ‘conventions’, ‘freely agreed assignments’.

[2] Due to physical interaction with the environment, which enables ‘perceptual events’ that are distinguishable from the ‘remembered and known knowledge elements’.

[3] The classification of ‘knowledge elements’ as ‘imaginations/fantasies’ can be wrong, as many examples show; and, vice versa, the classification as ‘probably correlatable’ can be wrong too!

[4] Not the ‘classical (Aristotelian) logic’, since Aristotelian logic did not yet realize a strict separation of ‘form’ (elements of expression) and ‘content’ (meaning).

[5] There are also contexts in which one speaks of ‘true statements’ although there is no relation to any concrete world experience. For example in the field of mathematics, where one likes to say that a statement is ‘true’. But this is a completely ‘different truth’. Here it is about the fact that, in the context of a ‘mathematical theory’, certain ‘basic assumptions’ were made (which need not have anything to do with any concrete reality), and one then ‘derives’ other statements from these basic assumptions with the help of a formal concept of inference (formal logic). A ‘derived statement’ (usually called a ‘theorem’) likewise has no relation to a concrete reality. It is ‘logically true’ or ‘formally true’. If one ‘relates’ the basic assumptions of a mathematical theory to concrete reality by means of, certainly not very simple, ‘interpretations’ (as, e.g., in ‘applied physics’), then it may be, under special conditions, that the formally derived statements of such an ‘empirically interpreted abstract theory’ gain an ‘empirical meaning’ which may be ‘correlatable’ under certain conditions; such statements would then be called not only ‘logically true’ but also ‘empirically true’. As the history of science and philosophy of science shows, however, the ‘transition’ from empirically interpreted abstract theories to empirically interpretable inferences with truth claims is not trivial. The reason lies in the ‘logical inference concept’ used. In modern formal logic almost ‘arbitrarily many’ different formal inference concepts are possible. Whether such a formal inference concept really ‘adequately represents’ the structure of empirical facts via abstract structures with formal inferences is not at all certain! This problem has not really been clarified in the philosophy of science so far!

[6a] Weizenbaum’s 1966 chatbot ‘Eliza’, despite its simplicity, was able to make human users believe that the program ‘understood’ them, even when they were told that it was just a simple algorithm. See the keyword ‘Eliza’ in wkp-en: https://en.wikipedia.org/wiki/ELIZA

[6b] Joseph Weizenbaum, 1966, “ELIZA. A Computer Program For the Study of Natural Language Communication Between Man And Machine”, Communications of the ACM, Vol. 9, No. 1, January 1966, URL: https://cse.buffalo.edu/~rapaport/572/S02/weizenbaum.eliza.1966.pdf . Note: Although the program ‘Eliza’ by Weizenbaum was very simple, users were fascinated by it because they had the feeling “It understands me”, while the program only mirrored the questions and statements of the users. In other words, the users were ‘fascinated by themselves’, with the program as a kind of ‘mirror’.

[7] Ted Chiang, 2023, “ChatGPT Is a Blurry JPEG of the Web. OpenAI’s chatbot offers paraphrases, whereas Google offers quotes. Which do we prefer?”, The New Yorker, February 9, 2023. URL: https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web . Note: Chiang looks at the chatGPT program through the paradigm of a ‘compression algorithm’: the abundance of information is ‘condensed/abstracted’ so that a slightly blurred image of the text volumes is created, not a 1-to-1 copy. This gives the user the impression of understanding at the expense of access to detail and accuracy. The texts of chatGPT are not ‘true’; they merely ‘sound’ true.

[8] Dietmar Hansch, 2023, “The more honest name would be ‘Simulated Intelligence’. Which deficits bots like chatGPT suffer from, and what that must mean for our dealings with them.”, FAZ Frankfurter Allgemeine Zeitung, March 1, 2023, p. N1. Note: While Chiang (see [7]) approaches the phenomenon chatGPT with the concept of a ‘compression algorithm’, Hansch prefers the terms ‘statistical-incremental learning’ and ‘insight learning’. For Hansch, insight learning is tied to ‘mind’ and ‘consciousness’, for which he postulates ‘equivalent structures’ in the brain. Regarding insight learning, Hansch comments: “insight learning is not only faster, but also indispensable for a deep, holistic understanding of the world, which grasps far-reaching connections as well as conveys criteria for truth and truthfulness.” It is not surprising, then, that Hansch writes: “Insight learning is the highest form of learning…”. With reference to this frame of reference, he classifies chatGPT as capable only of ‘statistical-incremental learning’. Further, Hansch postulates for humans: “Human learning is never purely objective, we always structure the world in relation to our needs, feelings, and conscious purposes…”. He calls this the ‘human reference’ in human cognition, and it is precisely this that he denies for chatGPT. For the common designation ‘AI’ (‘Artificial Intelligence’) he postulates that the term ‘intelligence’ in this word combination has nothing to do with the meaning we associate with ‘intelligence’ in the case of humans; in no case does it have anything to do with ‘insight learning’. To give more expression to this mismatch, he would rather use the term ‘simulated intelligence’ (see also [9]). This conceptual strategy seems strange, since the term simulation [10] normally presupposes a clearly given state of affairs, for which one defines a simplified ‘model’, by means of which the behavior of the original system can then be viewed and examined, in simplified form, in important respects. In the present case, however, it is not quite clear what the original system is supposed to be that is simulated in the case of AI. There is so far no unified definition of ‘intelligence’ in the context of ‘AI’! As far as Hansch’s own terminology is concerned, the terms ‘statistical-incremental learning’ and ‘insight learning’ are not clearly defined either; their relation to observable human behavior, let alone to the postulated ‘equivalent brain structures’, remains utterly unclear (which is not improved by relating them to terms like ‘consciousness’ and ‘mind’, which are not defined either).

[9] Severin Tatarczyk, Feb 19, 2023, on ‘Simulated Intelligence’: https://www.severint.net/2023/02/19/kompakt-warum-ich-den-begriff-simulierte-intelligenz-bevorzuge-und-warum-chatbots-so-menschlich-auf-uns-wirken/

[10] See the term ‘simulation’ in wkp-en: https://en.wikipedia.org/wiki/Simulation

[11] Doris Brelowski pointed me to the following article: James Bridle, 16 March 2023, “The stupidity of AI. Artificial intelligence in its current form is based on the wholesale appropriation of existing culture, and the notion that it is actually intelligent could be actively dangerous”, URL: https://www.theguardian.com/technology/2023/mar/16/the-stupidity-of-ai-artificial-intelligence-dall-e-chatgpt?CMP=Share_AndroidApp_Other . Comment: An article that knowledgeably and very subtly describes the interplay between forms of AI that are being ‘unleashed’ on the entire Internet by large corporations, and what this is doing to human culture and then, of course, to humans themselves. Two quotes from this very readable article: Quote 1: “The entirety of this kind of publicly available AI, whether it works with images or words, as well as the many data-driven applications like it, is based on this wholesale appropriation of existing culture, the scope of which we can barely comprehend. Public or private, legal or otherwise, most of the text and images scraped up by these systems exist in the nebulous domain of “fair use” (permitted in the US, but questionable if not outright illegal in the EU). Like most of what goes on inside advanced neural networks, it’s really impossible to understand how they work from the outside, rare encounters such as Lapine’s aside. But we can be certain of this: far from being the magical, novel creations of brilliant machines, the outputs of this kind of AI is entirely dependent on the uncredited and unremunerated work of generations of human artists.” Quote 2: “Now, this didn’t happen because ChatGPT is inherently rightwing. It’s because it’s inherently stupid. It has read most of the internet, and it knows what human language is supposed to sound like, but it has no relation to reality whatsoever. It is dreaming sentences that sound about right, and listening to it talk is frankly about as interesting as listening to someone’s dreams. It is very good at producing what sounds like sense, and best of all at producing cliche and banality, which has composed the majority of its diet, but it remains incapable of relating meaningfully to the world as it actually is. Distrust anyone who pretends that this is an echo, even an approximation, of consciousness. (As this piece was going to publication, OpenAI released a new version of the system that powers ChatGPT, and said it was “less likely to make up facts”.)”

[12] David Krakauer in an interview with Brian Gallagher in Nautilus, March 27, 2023, “Does GPT-4 Really Understand What We’re Saying?”, URL: https://nautil.us/does-gpt-4-really-understand-what-were-saying-291034/?_sp=d9a7861a-9644-44a7-8ba7-f95ee526d468.1680528060130 . David Krakauer, an evolutionary theorist and president of the Santa Fe Institute for complexity science, analyzes the role of GPT-4 models compared to the human language model and offers a more differentiated understanding of what ‘understanding’ and ‘intelligence’ could mean. His main points of criticism are in close agreement with the position in the text above. He points out (i) that one clearly has to distinguish between Shannon’s ‘information concept’ and the concept of ‘meaning’: something can carry a high information load and nevertheless be empty of any meaning. He then points out (ii) that there are several possible variants of the meaning of ‘understanding’: coordinating with human understanding can work, but understanding in a constructive sense: no. Then Krakauer (iii) relates GPT-4 to the standard model of science, which he characterizes as ‘parsimony’; GPT-4 is clearly the opposite. Another point (iv) is the fact that human experience has an ‘emotional’ and a ‘physical’ aspect, based on somato-sensory perceptions within the body; this is missing in GPT-4. This is related (v) to the fact that the human brain with its ‘algorithms’ is the product of millions of years of evolution in a complex environment; the GPT-4 algorithms have nothing comparable, they only have to ‘convince’ humans. Finally, (vi) humans can generate ‘physical models’ inspired by their experience and can quickly argue by using such models. Thus Krakauer concludes: “So the narrative that says we’ve rediscovered human reasoning is so misguided in so many ways. Just demonstrably false. That can’t be the way to go.”

[13] By Marie-José Kolly (text) and Merlin Flügel (illustration), 11.04.2023, “Chatbots like GPT can form wonderful sentences. That’s exactly what makes them a problem.” Artificial intelligence fools us into believing something that is not so. A plea against the general enthusiasm. Online newspaper ‘Republik’ from Switzerland, URL: https://www.republik.ch/2023/04/11/chatbots-wie-gpt-koennen-wunderbare-saetze-bilden-genau-das-macht-sie-zum-problem? Here are some comments:

The text by Marie-José Kolly stands out because the algorithm named chatGPT(4) is characterized here both in its input-output behavior and additionally a comparison to humans is made at least to some extent.

The basic problem of the algorithm chatGPT(4) is (as also pointed out in my text above) that its input data consists exclusively of sets of texts (including those of the users), whose formal properties are analyzed by purely statistical procedures. On the basis of the regularities found, arbitrary text collages can then be generated which are, in form, so similar to human texts that many people take them for ‘human-generated texts’. In fact, however, the algorithm lacks what we humans call ‘world knowledge’, it lacks real ‘thinking’, it lacks ‘own’ value positions, and the algorithm ‘does not understand’ its own text.

Due to this lack of its own reference to the world, the algorithm can be manipulated very easily via the available text volumes. A ‘mass production’ of ‘junk texts’, of ‘disinformation’ is thus very easily possible.

If one considers that modern democracies can only function if the majority of citizens have a common basis of facts that can be assumed to be ‘true’, a common body of knowledge, and reliable media, then the chatGPT(4) algorithm can massively destroy precisely these requirements for a democracy.

The interesting question then is whether chatGPT(4) can actually support a human society, especially a democratic society, in a positive-constructive way?

In any case, it is known that humans learn the use of their language from childhood on in direct contact with a real world, largely playfully, in interaction with other children/people. For humans ‘words’ are never isolated quantities, but they are always dynamically integrated into equally dynamic contexts. Language is never only ‘form’ but always at the same time ‘content’, and this in many different ways. This is only possible because humans have complex cognitive abilities, which include corresponding memory abilities as well as abilities for generalization.

The cultural-historical development from spoken language, via writing, books, and libraries, up to enormous digital data memories has indeed achieved tremendous things concerning the ‘forms’ of language and the knowledge (possibly) encoded in them, but one has the impression that the ‘automation’ of the forms drives them into ‘isolation’, so that the forms lose more and more of their contact with reality, with meaning, with truth. Language, a central means of enabling more complex knowledge and more complex action, is thus increasingly becoming a ‘parasite’ that claims more and more space and in the process destroys more and more meaning and truth.

[14] Gary Marcus, April 2023, “Hoping for the Best as AI Evolves”, Gary Marcus on the systems that “pose a real and imminent threat to the fabric of society.”, Communications of the ACM, Volume 66, Issue 4, April 2023, pp. 6–7, https://doi.org/10.1145/3583078 . Comment: Gary Marcus writes, on the occasion of the effects of systems like chatGPT (OpenAI), DALL-E 2, and Lensa, about the seriously increasing negative effects these tools can have within a society, to an extent that poses a serious threat to every society! These tools are inherently flawed in the areas of reasoning, facts, and hallucinations. At near-zero cost, they can be used to create and execute large-scale disinformation campaigns very quickly. Looking at the globally important website ‘Stack Overflow’ for programmers as an example, one could (and can) see how the inflationary use of chatGPT, due to its many inherent flaws, pushed Stack Overflow’s management team to urge its users to stop using chatGPT entirely in order to prevent the site’s collapse after 14 years. In the case of big players who specifically aim at disinformation, such a measure is ineffective. These players aim to create a data world in which no one will be able to trust anyone. With this in mind, Gary Marcus sets out four postulates that every society should implement: (1) automatically generated, uncertified content should be completely banned; (2) legally effective measures must be adopted that can prevent ‘misinformation’; (3) user accounts must be made tamper-proof; (4) a new generation of AI tools is needed that can verify facts. (Translated with partial support from www.DeepL.com/Translator (free version))