ISSN 2567-6458, 14.February 2023 – 17.March 2023
Author: Gerd Doeben-Henisch
This is an English translation from an originally German text partially generated with the www.DeepL.com/Translator (free version).
Since the release of the chatbot ‘chatGPT’ to the larger public, a kind of ‘earthquake’ has been going through the media worldwide, in many areas, from individuals to institutions, companies, and government agencies… everyone is looking for the ‘chatGPT experience’. These reactions are amazing and frightening at the same time.
Remark: The text of this post represents a later ‘stage’ of my thinking about the usefulness of the chatGPT algorithm, which started with my first reflections in the text entitled “chatGBT about Rationality: Emotions, Mystik, Unconscious, Conscious, …” from 15./16.January 2023.
The following lines form only a short note, since it is hardly worthwhile to discuss a ‘surface phenomenon’ so intensively when it is the ‘deep structures’ that need to be explained. Somehow hardly anybody seems to be interested in the ‘structures behind chatGPT’ (by which I do not mean technical details of the algorithms used).
chatGPT as an object
The chatbot named ‘chatGPT’ is a piece of software, an algorithm, that (i) was invented and programmed by humans. When (ii) people ask it questions, (iii) it searches the database of documents known to it – documents which in turn were created by humans – (iv) for text patterns that relate to the question according to certain formal criteria (partly set by the programmers). These ‘text finds’ are (v) then ‘arranged’, again according to certain formal criteria (partly set by the programmers), into a new text, which (vi) should come close to those text patterns that a human reader is ‘used to’ accepting as ‘meaningful’.
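The steps (ii)–(v) above can be caricatured in a few lines of code. The following is a deliberately naive sketch (my own illustration, not chatGPT’s actual mechanism): it ranks human-written text fragments against a question by a purely formal criterion (word overlap) and ‘arranges’ the best finds into a reply – at no point does anything resembling ‘understanding’ occur.

```python
# Toy sketch only -- NOT how chatGPT actually works. It illustrates the
# idea of purely formal retrieval-and-arrangement over human-made texts.

def score(question: str, fragment: str) -> int:
    """A purely formal criterion: count shared lowercase words."""
    q_words = set(question.lower().split())
    f_words = set(fragment.lower().split())
    return len(q_words & f_words)

def answer(question: str, fragments: list[str]) -> str:
    """'Arrange' the best-matching human-written fragments into a reply."""
    ranked = sorted(fragments, key=lambda f: score(question, f), reverse=True)
    return " ".join(ranked[:2])

corpus = [
    "The moon orbits the earth.",
    "Bread is baked from flour.",
    "The earth orbits the sun.",
]
print(answer("What does the earth orbit?", corpus))
```

Even this toy version produces a reply that a reader can experience as ‘meaningful’, although the program only counted overlapping strings.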
Text surface – text meaning – truthfulness
A normal human being can distinguish – at least ‘intuitively’ – between the (i) ‘strings’ used as ‘expressions of a language’ and those (ii) ‘knowledge elements’ (in the mind of the hearer-speaker) which are as such ‘independent’ of the language elements, but which (iii) can be ‘freely associated’ by speakers-hearers of a language, so that the correlated ‘knowledge elements’ become what is usually called the ‘meaning’ of the language elements. Of these knowledge elements (iv), every language participant already ‘knows’ ‘pre-linguistically’, as a learning child, that some of them are ‘correlatable’ under certain circumstances with circumstances of the everyday world. And the normal language user also ‘intuitively’ (automatically, unconsciously) has the ability to assess such a correlation – in the light of the available knowledge – as (v) ‘possible’, (vi) as rather ‘improbable’, or (vii) as ‘mere fancifulness’.
The basic ability of a human being to establish a ‘correlation’ of meanings with (intersubjective) environmental facts is called – at least by some philosophers – ‘truth ability’, and in the execution of this truth ability one can then also speak of ‘true’ linguistic utterances or of ‘true statements’.
Distinctions like ‘true’, ‘possibly true’, ‘rather not true’ or ‘in no case true’ indicate that the reality reference of human knowledge elements is very diverse and ‘dynamic’. Something that was true a moment ago may not be true the next moment. Something that has long been dismissed as ‘mere fantasy’ may suddenly appear as ‘possible’ or ‘suddenly true’. To move in this ‘dynamically correlated space of meaning’ in such a way that a certain ‘inner and outer consistency’ is preserved, is a complex challenge, which has not yet been fully understood by philosophy and the sciences, let alone even approximately ‘explained’.
The fact is: we humans can do this to a certain extent. Of course, the more complex the knowledge space is, the more diverse the linguistic interactions with other people become, the more difficult it becomes to completely understand all aspects of a linguistic statement in a situation.
‘Hot-air act’ chatGPT
Comparing the chatbot chatGPT with these ‘basic characteristics’ of humans, one can see that chatGPT can do none of these things. (i) It cannot meaningfully ask questions on its own, since there is no reason why it should ask (unless someone induces it to ask). (ii) Text documents (of people) are for it mere sets of expressions, for which it has no independent assignment of meaning. It could therefore never independently ask or answer the ‘truth question’ – with all its dynamic shades. It takes everything at ‘face value’, or one says right away that it is ‘only dreaming’.
If chatGPT, because of its large text database, has a subset of expressions that are somehow classified as ‘true’, then the algorithm can ‘in principle’ indirectly determine ‘probabilities’ that other sets of expressions, not themselves classified as ‘true’, nevertheless appear ‘true with some probability’. Whether the current chatGPT algorithm uses such ‘probable truths’ explicitly is unclear. In principle, it translates texts into ‘vector spaces’ that are ‘mapped into each other’ in various ways, and parts of these vector spaces are then output again in the form of a ‘text’. The concept of ‘truth’ does not appear in these mathematical operations – to my current knowledge. If at all, it would only be the formal-logical concept of truth; but this lies ‘above’ the vector spaces and forms, with respect to them, a ‘meta-concept’. If one wanted to actually apply this concept to the vector spaces and the operations on them, one would have to completely rewrite the code of chatGPT. If one did this – but nobody will be able to – then the code of chatGPT would have the status of a formal theory (as in mathematics) (see remark). From an empirical truth capability chatGPT would then still be miles away.
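The point that ‘truth’ appears nowhere in the vector operations can be made concrete with a toy example – again my own illustration, assuming only the general idea of ‘texts as vectors’, not the actual GPT architecture: texts become count vectors, and ‘closeness’ is a cosine between vectors. A false sentence can be geometrically just as close as a true one.

```python
# Toy sketch only -- a crude stand-in for "texts as vectors". No operation
# below refers to truth; only to geometry (angles between count vectors).
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Represent a text as a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

s1 = vectorize("the earth orbits the sun")
s2 = vectorize("the sun is orbited by the earth")   # true, similar wording
s3 = vectorize("bread is baked from flour")          # true, unrelated wording
# s1 is far more 'similar' to s2 than to s3 -- similarity of form,
# not of truth: replacing 'earth' by 'cheese' would change nothing here.
print(cosine(s1, s2), cosine(s1, s3))
```

Whatever one puts into such vectors, the arithmetic only measures overlap of expression elements; whether a sentence is empirically true is invisible to it.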
Hybrid illusory truths
In the use case where the algorithm named ‘chatGPT’ uses expression sets similar to the texts that humans produce and read, chatGPT navigates purely formally and with probabilities through the space of formal expression elements. However, a human who ‘reads’ the expression sets produced by chatGPT automatically (= unconsciously!) activates his or her ‘linguistic knowledge of meaning’ and projects it into the abstract expression sets of chatGPT. As one can observe (and hears and reads from others), the abstract expression sets produced by chatGPT are – purely formally – so similar to the usual text input of humans that a human can seemingly effortlessly correlate his or her meaning knowledge with these texts. This has the consequence that the receiving (reading, listening) human has the ‘feeling’ that chatGPT produces ‘meaningful texts’. In the ‘projection’ of the reading/listening human YES, but in the production of chatGPT NO. chatGPT has only formal expression sets (coded as vector spaces), with which it calculates ‘blindly’. It does not have ‘meanings’ in the human sense even rudimentarily.
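What ‘blind calculation’ over formal expression sets means can be illustrated by the smallest possible language model, a bigram chain (a hypothetical miniature, vastly simpler than chatGPT): it emits word sequences that look locally plausible, using nothing but co-occurrence frequencies.

```python
# Toy bigram model -- a miniature of 'blind' formal text production.
# It knows which word tends to follow which; it knows nothing about
# what any word means.
import random
from collections import defaultdict

def train(text: str) -> dict:
    """Record, for each word, the list of words observed to follow it."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows: dict, start: str, n: int = 8) -> str:
    """Emit up to n further words by sampling observed successors."""
    out = [start]
    for _ in range(n):
        successors = follows.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

model = train("the sun rises and the sun sets and the moon rises")
print(generate(model, "the"))
```

The output reads like a fragment of language, and the reader’s own meaning knowledge does the rest; the program itself only sampled from frequency tables.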
Back to the Human?
(Last change: 27.February 2023)
How easily people are impressed by a ‘fake machine’, to the point of apparently forgetting themselves in the face of the machine by feeling ‘stupid’ and ‘inefficient’ – although the machine only makes ‘correlations’ between human questions and human knowledge documents in a purely formal way – is actually frightening [6a,b], at least in a double sense: (i) Instead of better recognizing (and using) one’s own potentials, one stares spellbound like the famous ‘rabbit at the snake’, although the machine is still a ‘product of the human mind’. (ii) This ‘cognitive deception’ misses the chance to better understand the actually immense potential of ‘collective human intelligence’, which could then be advanced by at least one evolutionary level by incorporating modern technologies. The challenge of the hour is ‘Collective Human-Machine Intelligence’ in the context of sustainable development, with priority given to human collective intelligence. The current so-called ‘artificial (= machine) intelligence’ is realized only by rather primitive algorithms. Integrated into a developed ‘collective human intelligence’, quite different forms of ‘intelligence’ could be realized – ones we currently can at most dream of.
Commenting on other articles from other authors about chatGPT
(Last change: 17.March 2023)
 In the many thousands of ‘natural languages’ of this world one can observe how ‘experiential environmental facts’ can become ‘knowledge elements’ via ‘perception’, which are then correlated with different expressions in each language. Linguists (and semioticians) therefore speak here of ‘conventions’, ‘freely agreed assignments’.
 Due to physical interaction with the environment, which enables ‘perceptual events’ that are distinguishable from the ‘remembered and known knowledge elements’.
 The classification of ‘knowledge elements’ as ‘imaginations/fantasies’ can be wrong, as many examples show; and vice versa, the classification as ‘probably correlatable’ can be wrong too!
 Not the ‘classical (Aristotelian) logic’, since Aristotelian logic did not yet realize a strict separation of ‘form’ (elements of expression) and ‘content’ (meaning).
 There are also contexts in which one speaks of ‘true statements’ although there is no relation to a concrete world experience, for example in the field of mathematics, where one likes to say that a statement is ‘true’. But this is a completely ‘different truth’. Here it is a matter of the fact that in the context of a ‘mathematical theory’ certain ‘basic assumptions’ are made (which need not have anything to do with a concrete reality), and one then ‘derives’ other statements from these basic assumptions with the help of a formal concept of inference (formal logic). A ‘derived statement’ (usually called a ‘theorem’) also has no relation to a concrete reality. It is ‘logically true’ or ‘formally true’. If one were to ‘relate’ the basic assumptions of a mathematical theory to concrete reality by – certainly not very simple – ‘interpretations’ (as e.g. in ‘applied physics’), then it may be, under special conditions, that the formally derived statements of such an ‘empirically interpreted abstract theory’ gain an ‘empirical meaning’ which may be ‘correlatable’ under certain conditions; then such statements would be called not only ‘logically true’ but also ‘empirically true’. As the history of science and philosophy of science shows, however, the ‘transition’ from empirically interpreted abstract theories to empirically interpretable inferences with truth claims is not trivial. The reason lies in the ‘logical inference concept’ used. In modern formal logic almost ‘arbitrarily many’ different formal inference concepts are possible. Whether such a formal inference concept really ‘adequately represents’ the structure of empirical facts via abstract structures with formal inferences is not at all certain! This problem has not really been clarified in the philosophy of science so far!
[6a] Weizenbaum’s 1966 chatbot ‘Eliza’, despite its simplicity, was able to make human users believe that the program ‘understood’ them, even when they were told that it was just a simple algorithm. See the keyword ‘Eliza’ in wkp-en: https://en.wikipedia.org/wiki/ELIZA
[6b] Joseph Weizenbaum, 1966, “ELIZA – A Computer Program For the Study of Natural Language Communication Between Man And Machine”, Communications of the ACM, Vol. 9, No. 1, January 1966, URL: https://cse.buffalo.edu/~rapaport/572/S02/weizenbaum.eliza.1966.pdf . Note: Although the program ‘Eliza’ by Weizenbaum was very simple, users were fascinated by it because they had the feeling “It understands me”, while the program only mirrored the questions and statements of the users. In other words, the users were ‘fascinated by themselves’, with the program as a kind of ‘mirror’.
 Ted Chiang, 2023, “ChatGPT Is a Blurry JPEG of the Web. OpenAI’s chatbot offers paraphrases, whereas Google offers quotes. Which do we prefer?”, The NEW YORKER, February 9, 2023. URL: https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web . Note: Chiang looks at the chatGPT program through the paradigm of a ‘compression algorithm’: the abundance of information is ‘condensed/abstracted’ so that a slightly blurred image of the text volumes is created, not a 1-to-1 copy. This gives the user the impression of understanding, at the expense of access to detail and accuracy. The texts of chatGPT are not ‘true’, but they ‘feel’ true.
 Dietmar Hansch, 2023, “The more honest name would be ‘Simulated Intelligence’. Which deficits bots like chatGPT suffer from and what that must mean for our dealings with them.”, FAZ Frankfurter Allgemeine Zeitung, March 1, 2023, p. N1. Note: While Chiang (see above) approaches the phenomenon chatGPT with the concept ‘compression algorithm’, Hansch prefers the terms ‘statistical-incremental learning’ and ‘insight learning’. For Hansch, insight learning is tied to ‘mind’ and ‘consciousness’, for which he postulates ‘equivalent structures’ in the brain. Regarding insight learning, Hansch further comments: “insight learning is not only faster, but also indispensable for a deep, holistic understanding of the world, which grasps far-reaching connections as well as conveys criteria for truth and truthfulness.” It is not surprising, then, when Hansch writes “Insight learning is the highest form of learning…”. Within this frame of reference, he classifies chatGPT as capable only of ‘statistical-incremental learning’. Further, Hansch postulates for humans: “Human learning is never purely objective, we always structure the world in relation to our needs, feelings, and conscious purposes…”. He calls this the ‘human reference’ in human cognition, and it is precisely this that he also denies for chatGPT. For the common designation ‘AI’ as ‘Artificial Intelligence’ he postulates that the term ‘intelligence’ in this word combination has nothing to do with the meaning we associate with ‘intelligence’ in the case of humans; in no case does the term have anything to do with ‘insight learning’, as he stated before. To give more expression to this mismatch he would rather use the term ‘simulated intelligence’ (see also below).
This conceptual strategy seems strange, since the term simulation normally presupposes that there is a clear state of affairs, for which one defines a simplified ‘model’, by means of which the behavior of the original system can then be viewed and examined – in simplified form – in important respects. In the present case, however, it is not quite clear what the original system should be that is to be simulated in the case of AI. There is so far no unified definition of ‘intelligence’ in the context of ‘AI’! As far as Hansch’s own terminology is concerned, the terms ‘statistical-incremental learning’ and ‘insight learning’ are not clearly defined either; their relation to observable human behavior, let alone to the postulated ‘equivalent brain structures’, remains largely unclear (which is not improved by relating them to terms like ‘consciousness’ and ‘mind’, which are not defined yet).
 Severin Tatarczyk, Feb 19, 2023, on ‘Simulated Intelligence’: https://www.severint.net/2023/02/19/kompakt-warum-ich-den-begriff-simulierte-intelligenz-bevorzuge-und-warum-chatbots-so-menschlich-auf-uns-wirken/
 See the term ‘simulation’ in wkp-en: https://en.wikipedia.org/wiki/Simulation
 Doris Brelowski pointed me to the following article: James Bridle, 16.March 2023, „The stupidity of AI. Artificial intelligence in its current form is based on the wholesale appropriation of existing culture, and the notion that it is actually intelligent could be actively dangerous“, URL: https://www.theguardian.com/technology/2023/mar/16/the-stupidity-of-ai-artificial-intelligence-dall-e-chatgpt?CMP=Share_AndroidApp_Other . Comment: An article that knowledgeably and very sophisticatedly describes the interplay between forms of AI that are being ‘unleashed’ on the entire Internet by large corporations, and what this is doing to human culture and then, of course, to humans themselves. Two quotes from this very readable article: Quote 1: „The entirety of this kind of publicly available AI, whether it works with images or words, as well as the many data-driven applications like it, is based on this wholesale appropriation of existing culture, the scope of which we can barely comprehend. Public or private, legal or otherwise, most of the text and images scraped up by these systems exist in the nebulous domain of “fair use” (permitted in the US, but questionable if not outright illegal in the EU). Like most of what goes on inside advanced neural networks, it’s really impossible to understand how they work from the outside, rare encounters such as Lapine’s aside. But we can be certain of this: far from being the magical, novel creations of brilliant machines, the outputs of this kind of AI is entirely dependent on the uncredited and unremunerated work of generations of human artists.“ Quote 2: „Now, this didn’t happen because ChatGPT is inherently rightwing. It’s because it’s inherently stupid. It has read most of the internet, and it knows what human language is supposed to sound like, but it has no relation to reality whatsoever. It is dreaming sentences that sound about right, and listening to it talk is frankly about as interesting as listening to someone’s dreams. 
It is very good at producing what sounds like sense, and best of all at producing cliche and banality, which has composed the majority of its diet, but it remains incapable of relating meaningfully to the world as it actually is. Distrust anyone who pretends that this is an echo, even an approximation, of consciousness. (As this piece was going to publication, OpenAI released a new version of the system that powers ChatGPT, and said it was “less likely to make up facts”.)“