
chatGPT – How drunk do you have to be …

eJournal: uffmm.org
ISSN 2567-6458, 14.February 2023 – 17.April 2023
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This is a text in the context of ‘Different Findings about chatGPT’ (https://www.uffmm.org/2023/02/23/chatgbt-different-findings/).

Since the release of the chatbot ‘chatGPT’ to the larger public, a kind of ‘earthquake’ has been going through the media, worldwide, in many areas, from individuals to institutions, companies, government agencies … everyone is looking for the ‘chatGPT experience’. These reactions are amazing and frightening at the same time.

Remark: The text of this post represents a later ‘stage’ of my thinking about the usefulness of the chatGPT algorithm, which started with my first reflections in the text entitled “chatGBT about Rationality: Emotions, Mystik, Unconscious, Conscious, …” from 15./16. January 2023. The main text of this version is an English translation of an originally German text, partially generated with www.DeepL.com/Translator (free version).

FORM

The following lines form only a short note: it is hardly worthwhile to discuss a ‘surface phenomenon’ so intensively when it is the ‘deep structures’ that need to be explained. Yet the ‘structures behind chatGPT’ seem to interest hardly anybody (I do not mean the technical details of the algorithms used).

chatGPT as an object


The chatbot named ‘chatGPT’ is a piece of software, an algorithm, that (i) was invented and programmed by humans. When (ii) people ask it questions, (iii) it searches the collection of documents known to it – which in turn have been created by humans – (iv) for text patterns that relate to the question according to certain formal criteria (partly set by the programmers). These ‘text finds’ are (v) then ‘arranged’, again according to certain formal criteria (partly set by the programmers), into a new text, which (vi) should come close to those text patterns that a human reader is ‘used’ to accepting as ‘meaningful’.
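Merely as an illustration of this simplified picture (a toy sketch in Python, invented here for clarity; it is not how chatGPT is actually implemented), steps (ii)–(vi) can be caricatured as a purely formal retrieval-and-recombination procedure:

```python
# Toy caricature of steps (ii)-(vi): purely formal retrieval and recombination.
# Illustrative sketch only - NOT the actual implementation of chatGPT.
import re

DOCUMENTS = [  # (iii) texts that were created by humans
    "Rationality involves conscious reasoning.",
    "Emotions influence human decisions.",
    "The weather today is sunny.",
]

def words(text: str) -> set:
    """Lowercase a text and split it into words, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def overlap(question: str, sentence: str) -> int:
    """A purely formal criterion (iv): the number of shared words."""
    return len(words(question) & words(sentence))

def answer(question: str, k: int = 2) -> str:
    """(v)-(vi): arrange the best-matching 'text finds' into a new text."""
    ranked = sorted(DOCUMENTS, key=lambda s: overlap(question, s), reverse=True)
    return " ".join(ranked[:k])

# (ii) a human asks a question; the 'answer' is only a formal recombination
# of stored human sentences, without any understanding of their content.
print(answer("What role do emotions play in rationality?"))
```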

Text surface – text meaning – truthfulness

A normal human being can distinguish – at least ‘intuitively’ – between (i) the ‘strings’ used as ‘expressions of a language’ and (ii) those ‘knowledge elements’ (in the mind of the hearer-speaker) which are as such ‘independent’ of the language elements, but which (iii) can be ‘freely associated’ by speakers-hearers of a language, so that the correlated ‘knowledge elements’ become what is usually called the ‘meaning’ of the language elements. [1] Of these knowledge elements (iv), every language participant already ‘knows’ ‘pre-linguistically’, as a learning child [2], that some of them are ‘correlatable’ with circumstances of the everyday world under certain conditions. And the normal language user also ‘intuitively’ (automatically, unconsciously) has the ability to assess such a correlation – in the light of the available knowledge – as (v) ‘possible’, (vi) as rather ‘improbable’, or (vii) as ‘mere fancifulness’. [3]

The basic ability of a human being to establish a ‘correlation’ of meanings with (intersubjective) environmental facts is called – at least by some philosophers – ‘truth capability’, and in the exercise of this truth capability one can then also speak of ‘true’ linguistic utterances or of ‘true statements’. [5]

Distinctions like ‘true’, ‘possibly true’, ‘rather not true’ or ‘in no case true’ indicate that the reality reference of human knowledge elements is very diverse and ‘dynamic’. Something that was true a moment ago may not be true the next moment. Something that has long been dismissed as ‘mere fantasy’ may suddenly appear as ‘possible’ or ‘suddenly true’. To move in this ‘dynamically correlated space of meaning’ in such a way that a certain ‘inner and outer consistency’ is preserved, is a complex challenge, which has not yet been fully understood by philosophy and the sciences, let alone even approximately ‘explained’.

The fact is: we humans can do this to a certain extent. Of course, the more complex the knowledge space is and the more diverse the linguistic interactions with other people become, the more difficult it becomes to completely understand all aspects of a linguistic statement in a given situation.

‘Hot-air act’ chatGPT

Comparing the chatbot chatGPT with these ‘basic characteristics’ of humans, one can see that chatGPT can do none of these things. (i) It cannot meaningfully ask questions on its own, since there is no reason why it should ask (unless someone induces it to ask). (ii) Text documents (written by people) are for it merely sets of expressions, for which it has no independent assignment of meaning. It could therefore never independently ask or answer the ‘truth question’ – with all its dynamic shades. It takes everything at ‘face value’, or one says right away that it is ‘only dreaming’.

If chatGPT, because of its large text database, has a subset of expressions that are somehow classified as ‘true’, then the algorithm can ‘in principle’ indirectly determine ‘probabilities’ that other sets of expressions, not classified as ‘true’, nevertheless appear to be ‘true’ ‘with some probability’. Whether the current chatGPT algorithm uses such ‘probable truths’ explicitly is unclear. In principle it translates texts into ‘vector spaces’ that are ‘mapped into each other’ in various ways, and parts of these vector spaces are then output again in the form of a ‘text’. The concept of ‘truth’ does not appear in these mathematical operations – to my current knowledge. If at all, it would only be the formal-logical concept of truth [4]; but this lies ‘above’ the vector spaces and forms a ‘meta-concept’ with respect to them. If one wanted to actually apply it to the vector spaces and the operations on these vector spaces, one would have to completely rewrite the code of chatGPT. If one did this – but nobody will be able to do this – then the code of chatGPT would have the status of a formal theory (as in mathematics) (see remark [5]). Even then, chatGPT would still be miles away from an empirical truth capability.
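To make this point tangible, here is a minimal sketch (plain Python, constructed only for this post; it has nothing to do with chatGPT’s real code or real embedding models): texts are mapped to numerical vectors and compared by purely formal operations, and nowhere in these operations does a concept of ‘truth’ appear.

```python
# Minimal, hypothetical sketch: texts become vectors, vectors are compared.
# The operations measure only formal similarity; no 'truth' predicate appears.
from collections import Counter
from math import sqrt

def to_vector(text: str) -> Counter:
    """Map a text to a bag-of-words vector (word -> frequency)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-frequency vectors."""
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

s1 = "the cat sat on the mat"
s2 = "the cat lay on the mat"
s3 = "stock prices fell sharply today"

print(cosine(to_vector(s1), to_vector(s2)))  # high value: formally similar
print(cosine(to_vector(s1), to_vector(s3)))  # low value: formally dissimilar
# Note: neither number says anything about whether any of these sentences is 'true'.
```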

Hybrid illusory truths

In the use case where the algorithm named ‘chatGPT’ works with expression sets similar to the texts that humans produce and read, chatGPT navigates purely formally and with probabilities through the space of formal expression elements. However, a human who ‘reads’ the expression sets produced by chatGPT automatically (= unconsciously!) activates his or her ‘linguistic knowledge of meaning’ and projects it into the abstract expression sets of chatGPT. As one can observe (and hear and read from others), the abstract expression sets produced by chatGPT are – purely formally – so similar to the usual text input of humans that a human can seemingly effortlessly correlate his meaning knowledge with these texts. This has the consequence that the receiving (reading, listening) human has the ‘feeling’ that chatGPT produces ‘meaningful texts’. In the ‘projection’ of the reading/listening human: YES; in the production of chatGPT: NO. chatGPT has only formal expression sets (coded as vector spaces), with which it calculates ‘blindly’. It does not have ‘meanings’ in the human sense, not even rudimentarily.

Back to the Human?

(Last change: 27.February 2023)

How easily people are impressed by a ‘fake machine’, to the point of apparently forgetting themselves in the face of the machine by feeling ‘stupid’ and ‘inefficient’, although the machine only establishes ‘correlations’ between human questions and human knowledge documents in a purely formal way, is actually frightening [6a,b], [7], at least in a double sense: (i) Instead of better recognizing (and using) one’s own potentials, one stares spellbound like the famous ‘rabbit at the snake’, although the machine is still a ‘product of the human mind’. (ii) This ‘cognitive deception’ makes us miss the chance to better understand the actually immense potential of ‘collective human intelligence’, which could then be advanced by at least one evolutionary level by incorporating modern technologies. The challenge of the hour is ‘collective human-machine intelligence’ in the context of sustainable development, with priority given to human collective intelligence. The current so-called ‘artificial (= machine) intelligence’ is realized only by rather primitive algorithms. Integrated into a developed ‘collective human intelligence’, quite different forms of ‘intelligence’ could be realized – forms we currently can at most only dream of.

Commenting on other articles from other authors about chatGPT

(Last change: 14.April 2023)

[7], [8], [9], [11], [12], [13], [14]

Comments

(Last change: 3.April 2023)

wkp-en: en.wikipedia.org

[1] In the many thousands of ‘natural languages’ of this world one can observe how ‘experiential environmental facts’ can become ‘knowledge elements’ via ‘perception’, which are then correlated with different expressions in each language. Linguists (and semioticians) therefore speak here of ‘conventions’, ‘freely agreed assignments’.

[2] Due to physical interaction with the environment, which enables ‘perceptual events’ that are distinguishable from the ‘remembered and known knowledge elements’.

[3] The classification of ‘knowledge elements’ as ‘imaginations/ fantasies’ can be wrong, as many examples show, like vice versa, the classification as ‘probably correlatable’ can be wrong too!

[4] Not the ‘classical (Aristotelian) logic’, since Aristotelian logic did not yet realize a strict separation of ‘form’ (elements of expression) and ‘content’ (meaning).

[5] There are also contexts in which one speaks of ‘true statements’ although there is no relation to a concrete world experience, for example in the field of mathematics, where one likes to say that a statement is ‘true’. But this is a completely ‘different truth’. Here it is about the fact that in the context of a ‘mathematical theory’ certain ‘basic assumptions’ were made (which need not have anything to do with a concrete reality), and one then ‘derives’ other statements starting from these basic assumptions with the help of a formal concept of inference (formal logic). A ‘derived statement’ (usually called a ‘theorem’) also has no relation to a concrete reality. It is ‘logically true’ or ‘formally true’. If one were to ‘relate’ the basic assumptions of a mathematical theory to concrete reality by – certainly not very simple – ‘interpretations’ (as e.g. in ‘applied physics’), then it may be, under special conditions, that the formally derived statements of such an ’empirically interpreted abstract theory’ gain an ’empirical meaning’, which may be ‘correlatable’ under certain conditions; such statements would then be called not only ‘logically true’ but also ’empirically true’. As the history of science and philosophy of science shows, however, the ‘transition’ from empirically interpreted abstract theories to empirically interpretable inferences with truth claims is not trivial. The reason lies in the ‘logical inference concept’ used. In modern formal logic almost ‘arbitrarily many’ different formal inference concepts are possible. Whether such a formal inference concept really ‘adequately represents’ the structure of empirical facts via abstract structures with formal inferences is not at all certain! This problem has not really been clarified in the philosophy of science so far!

[6a] Weizenbaum’s 1966 chatbot ‘Eliza’, despite its simplicity, was able to make human users believe that the program ‘understood’ them, even when they were told that it was just a simple algorithm. See the keyword ‘Eliza’ in wkp-en: https://en.wikipedia.org/wiki/ELIZA

[6b] Joseph Weizenbaum, 1966, “ELIZA – A Computer Program For the Study of Natural Language Communication Between Man And Machine”, Communications of the ACM, Vol. 9, No. 1, January 1966, URL: https://cse.buffalo.edu/~rapaport/572/S02/weizenbaum.eliza.1966.pdf . Note: Although the program ‘Eliza’ by Weizenbaum was very simple, all users were fascinated by the program because they had the feeling “It understands me”, while the program only mirrored the questions and statements of the users. In other words, the users were ‘fascinated by themselves’, with the program acting as a kind of ‘mirror’.

[7] Ted Chiang, 2023, “ChatGPT Is a Blurry JPEG of the Web. OpenAI’s chatbot offers paraphrases, whereas Google offers quotes. Which do we prefer?”, The NEW YORKER, February 9, 2023. URL: https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web . Note: Chiang looks at the chatGPT program using the paradigm of a ‘compression algorithm’: the abundance of information is ‘condensed/abstracted’ so that a slightly blurred image of the text volumes is created, not a 1-to-1 copy. This gives the user the impression of understanding, at the expense of access to detail and accuracy. The texts of chatGPT are not ‘true’, but they ‘appear’ to be.

[8] Dietmar Hansch, 2023, “The more honest name would be ‘Simulated Intelligence’. Which deficits bots like chatGPT suffer from and what that must mean for our dealings with them.”, FAZ Frankfurter Allgemeine Zeitung, March 1, 2023, p. N1. Note: While Chiang (see [7]) approaches the phenomenon chatGPT with the concept of a ‘compression algorithm’, Hansch prefers the terms ‘statistical-incremental learning’ and ‘insight learning’. For Hansch, insight learning is tied to ‘mind’ and ‘consciousness’, for which he postulates ‘equivalent structures’ in the brain. Regarding insight learning, Hansch further comments that “insight learning is not only faster, but also indispensable for a deep, holistic understanding of the world, which grasps far-reaching connections as well as conveys criteria for truth and truthfulness.” It is not surprising, then, that Hansch writes “Insight learning is the highest form of learning…”. Within this frame of reference established by Hansch, he classifies chatGPT as being capable only of ‘statistical-incremental learning’. Further, Hansch postulates for humans: “Human learning is never purely objective, we always structure the world in relation to our needs, feelings, and conscious purposes…”. He calls this the ‘human reference’ in human cognition, and it is precisely this that he also denies for chatGPT. For the common designation ‘AI’ (‘Artificial Intelligence’) he postulates that the term ‘intelligence’ in this word combination has nothing to do with the meaning we associate with ‘intelligence’ in the case of humans; in no case does the term have anything to do with ‘insight learning’, as he stated before. To give more expression to this mismatch he would rather use the term ‘simulated intelligence’ (see also [9]). This conceptual strategy seems strange, since the term simulation [10] normally presupposes that there is a clear state of affairs for which one defines a simplified ‘model’, by means of which the behavior of the original system can then be viewed and examined – in simplified form – in important respects. In the present case, however, it is not quite clear what the original system should be that is to be simulated in the case of AI. There is so far no unified definition of ‘intelligence’ in the context of ‘AI’! As far as Hansch’s own terminology is concerned, the terms ‘statistical-incremental learning’ and ‘insight learning’ are not clearly defined either; their relation to observable human behavior, let alone to the postulated ‘equivalent brain structures’, remains quite unclear (which is not improved by relating them to terms like ‘consciousness’ and ‘mind’, which are not yet defined either).

[9] Severin Tatarczyk, Feb 19, 2023, on ‘Simulated Intelligence’: https://www.severint.net/2023/02/19/kompakt-warum-ich-den-begriff-simulierte-intelligenz-bevorzuge-und-warum-chatbots-so-menschlich-auf-uns-wirken/

[10] See the term ‘simulation’ in wkp-en: https://en.wikipedia.org/wiki/Simulation

[11] Doris Brelowski pointed me to the following article: James Bridle, 16.March 2023, „The stupidity of AI. Artificial intelligence in its current form is based on the wholesale appropriation of existing culture, and the notion that it is actually intelligent could be actively dangerous“, URL: https://www.theguardian.com/technology/2023/mar/16/the-stupidity-of-ai-artificial-intelligence-dall-e-chatgpt?CMP=Share_AndroidApp_Other . Comment: An article that knowledgeably and very sophisticatedly describes the interplay between forms of AI that are being ‘unleashed’ on the entire Internet by large corporations, and what this is doing to human culture and then, of course, to humans themselves. Two quotes from this very readable article: Quote 1: „The entirety of this kind of publicly available AI, whether it works with images or words, as well as the many data-driven applications like it, is based on this wholesale appropriation of existing culture, the scope of which we can barely comprehend. Public or private, legal or otherwise, most of the text and images scraped up by these systems exist in the nebulous domain of “fair use” (permitted in the US, but questionable if not outright illegal in the EU). Like most of what goes on inside advanced neural networks, it’s really impossible to understand how they work from the outside, rare encounters such as Lapine’s aside. But we can be certain of this: far from being the magical, novel creations of brilliant machines, the outputs of this kind of AI is entirely dependent on the uncredited and unremunerated work of generations of human artists.“ Quote 2: „Now, this didn’t happen because ChatGPT is inherently rightwing. It’s because it’s inherently stupid. It has read most of the internet, and it knows what human language is supposed to sound like, but it has no relation to reality whatsoever. It is dreaming sentences that sound about right, and listening to it talk is frankly about as interesting as listening to someone’s dreams. It is very good at producing what sounds like sense, and best of all at producing cliche and banality, which has composed the majority of its diet, but it remains incapable of relating meaningfully to the world as it actually is. Distrust anyone who pretends that this is an echo, even an approximation, of consciousness. (As this piece was going to publication, OpenAI released a new version of the system that powers ChatGPT, and said it was “less likely to make up facts”.)“

[12] David Krakauer in an interview with Brian Gallagher in Nautilus, March 27, 2023, Does GPT-4 Really Understand What We’re Saying?, URL: https://nautil.us/does-gpt-4-really-understand-what-were-saying-291034/?_sp=d9a7861a-9644-44a7-8ba7-f95ee526d468.1680528060130 . David Krakauer, an evolutionary theorist and president of the Santa Fe Institute for complexity science, analyzes the role of GPT-4 models compared to the human language model and calls for a more differentiated understanding of what ‘understanding’ and ‘intelligence’ could mean. His main points of criticism are in close agreement with the position in the text above. He points out (i) that one has to distinguish clearly between Shannon’s ‘information concept’ and the concept of ‘meaning’: something can carry a high information load and nevertheless be empty of any meaning. He then points out (ii) that there are several possible variants of the meaning of ‘understanding’: coordinating with human understanding can work, but understanding in a constructive sense: no. Then Krakauer (iii) relates GPT-4 to the standard model of science, which he characterizes as ‘parsimony’; GPT-4 is clearly the opposite. Another point (iv) is the fact that human experience has an ’emotional’ and a ‘physical’ aspect, based on somato-sensory perceptions within the body; this is missing in GPT-4. This is related (v) to the fact that the human brain with its ‘algorithms’ is the product of millions of years of evolution in a complex environment. The GPT-4 algorithms have nothing comparable; they only have to ‘convince’ humans. Finally, (vi) humans can generate ‘physical models’ inspired by their experience and can quickly argue by using such models. Thus Krakauer concludes: “So the narrative that says we’ve rediscovered human reasoning is so misguided in so many ways. Just demonstrably false. That can’t be the way to go.”

[13] Marie-José Kolly (text) and Merlin Flügel (illustration), 11.04.2023, “Chatbots like GPT can form wonderful sentences. That’s exactly what makes them a problem.” Artificial intelligence fools us into believing something that is not there. A plea against the general enthusiasm. Online newspaper ‘Republik’ from Switzerland, URL: https://www.republik.ch/2023/04/11/chatbots-wie-gpt-koennen-wunderbare-saetze-bilden-genau-das-macht-sie-zum-problem? Here are some comments:

The text by Marie-José Kolly stands out because the algorithm named chatGPT(4) is characterized both in its input-output behavior and, additionally, compared – at least to some extent – to humans.

The basic problem of the algorithm chatGPT(4) is (as also pointed out in my text above) that its only input data are sets of texts (including those of the users), which are analyzed in their formal properties by purely statistical procedures. On the basis of the regularities thus found, arbitrary text collages can then be generated which are so similar in form to human texts that many people take them for ‘human-generated texts’. In fact, however, the algorithm lacks what we humans call ‘world knowledge’, it lacks real ‘thinking’, it lacks ‘own’ value positions, and the algorithm ‘does not understand’ its own text.
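As a deliberately primitive illustration of what ‘purely statistical procedures over formal properties’ can mean (a toy sketch in Python, orders of magnitude simpler than the actual chatGPT(4) model, but making the same point): a word-bigram generator learns only which word tends to follow which and can then produce fluent-looking collages without any world knowledge behind them.

```python
# Toy word-bigram text generator: it learns only formal regularities (which
# word follows which) from input texts and recombines them at random.
# Deliberately primitive; the real chatGPT(4) model is far larger, but the
# point is the same: the procedure works on form alone, not on meaning.
import random
from collections import defaultdict

corpus = (
    "the chatbot produces fluent text . "
    "the chatbot has no world knowledge . "
    "fluent text impresses many readers ."
)

# Learn bigram statistics: word -> list of observed successor words.
successors = defaultdict(list)
tokens = corpus.split()
for current, nxt in zip(tokens, tokens[1:]):
    successors[current].append(nxt)

def generate(start: str, length: int = 12) -> str:
    """Produce a text collage by repeatedly picking a statistically observed successor."""
    word, out = start, [start]
    for _ in range(length):
        if word not in successors:
            break
        word = random.choice(successors[word])  # purely statistical choice
        out.append(word)
    return " ".join(out)

# Prints a fluent-looking but meaning-blind recombination of the corpus.
print(generate("the"))
```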

Because it lacks any reference of its own to the world, the algorithm can be manipulated very easily via the text volumes made available to it. A ‘mass production’ of ‘junk texts’ and ‘disinformation’ thus becomes very easy.

If one considers that modern democracies can only function if the majority of citizens have a common basis of facts that can be assumed to be ‘true’, a common body of knowledge, and reliable media, then the chatGPT(4) algorithm can massively destroy precisely these requirements for a democracy.

The interesting question, then, is whether chatGPT(4) can actually support a human society, especially a democratic one, in a positive and constructive way.

In any case, it is known that humans learn the use of their language from childhood on in direct contact with a real world, largely playfully, in interaction with other children and people. For humans, ‘words’ are never isolated entities; they are always dynamically integrated into equally dynamic contexts. Language is never only ‘form’ but always at the same time ‘content’, and this in many different ways. This is only possible because humans have complex cognitive abilities, which include corresponding memory abilities as well as abilities for generalization.

The cultural-historical development from spoken language via writing, books, and libraries up to enormous digital data memories has indeed achieved tremendous things concerning the ‘forms’ of language and the knowledge – possibly – encoded therein. But one gets the impression that the ‘automation’ of the forms drives them into ‘isolation’, so that the forms lose more and more of their contact with reality, with meaning, with truth. Language, as a central means of enabling more complex knowledge and more complex action, is thus increasingly becoming a ‘parasite’ that claims more and more space and in the process destroys more and more meaning and truth.

[14] Gary Marcus, April 2023, Hoping for the Best as AI Evolves. Gary Marcus on the systems that “pose a real and imminent threat to the fabric of society.” Communications of the ACM, Volume 66, Issue 4, April 2023, pp. 6–7, https://doi.org/10.1145/3583078 . Comment: Gary Marcus writes, on the occasion of the effects of systems like chatGPT (OpenAI), DALL-E 2 and Lensa, about the seriously increasing negative effects these tools can have within a society, to an extent that poses a serious threat to every society. These tools are inherently flawed with respect to reasoning, facts, and hallucinations. At near-zero cost they can be used to create and execute large-scale disinformation campaigns very quickly. Taking the globally important programmers’ website ‘Stack Overflow’ as an example, one could (and can) see how the inflationary use of chatGPT, with its many inherent flaws, forced Stack Overflow’s management team to urge its users to stop using chatGPT entirely in order to prevent the collapse of the site after 14 years. In the case of big players who specifically aim at disinformation, such a measure is ineffective. These players aim to create a data world in which no one will be able to trust anyone. With this in mind, Gary Marcus sets out four postulates that every society should implement: (1) automatically generated, uncertified content should be completely banned; (2) legally effective measures must be adopted that can prevent ‘misinformation’; (3) user accounts must be made tamper-proof; (4) a new generation of AI tools is needed that can verify facts. (Translated with partial support from www.DeepL.com/Translator (free version))

chatGBT about Rationality: Emotions, Mystik, Unconscious, Conscious, …

eJournal: uffmm.org
ISSN 2567-6458, 15.January 2023 – 16.January 2023
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text reflects some ideas following the documented chat No.4 as part of the uffmm.org blog.

Remark: For a follow-up reflection see the text of my post “chatGPT – How drunk do you have to be …” from 15./16. February 2023.

Chatbots as Interfaces to the Human Knowledge Cloud?

Already at the end of the documented chat No.4 I had the impression that an interaction with the chatbot chatGPT is somehow different from what most people until now have reported or stated in some reflected way about chatGPT.

In a first – and still a little bit vague – conclusion I stated: “Apropos ‘rational’: that is a point which really surprised me: as a kind of summary it came out “that human rationality is composed of emotions, spiritual experience as well as conscious as well as unconscious cognitive processes.” This is clearly not what most philosophers today would say. But it follows from the ‘richness of the facts’ which came as a resonance out of this chat. Not that the chatbot had given this summary in advance as an important characterization of rationality, but as the human counterpart I could summarize all these properties out of the different separate statements [of chatGPT].”

And, indeed, the millions, if not billions, of documents on the world wide web reflect fragments of knowledge generated by humans, which as a whole form a ‘cloud of knowledge’. The reflected echo of the real world through the medium of human brains is distributed in libraries and in the digital space. No individual person as such can make use of it; it is by far too big to be comprehensible.

Meanwhile, search algorithms can help us to make fragments of this ‘human knowledge cloud’ visible, but the search results so far are ‘transformed’ in a way which is only of very limited use.

My encounter with chatGPT revealed some new glimpses of a possible new paradigm which perhaps wasn’t intended by OpenAI themselves, but which now seems to be within reach: the individual brain has a limited capacity for ‘many documents’, but it still has an incredible ability to ‘transform’ billions of events into complex abstract patterns, inducing complex networks of relations, complex models, even complex theories.

If one looks at a chatbot like chatGPT as an ‘interface’ between a single human person and the ‘human knowledge cloud’, and this by using only ‘everyday language’, then – depending on the quality of the chatbot – this individual human can, with just a few ideas and questions, ‘trigger’ those documents in the huge ‘human knowledge cloud’ which somehow ‘fit’ the triggering words. Thus this individual human person step by step encounters those fragments of the ‘human knowledge cloud’ which are in ‘resonance’ with his own words. In a bad case this interaction resembles those ‘echo chambers’ of the internet and the real world, where people only get back what they put in.

But in a creative case the input of the individual human person can trigger fragments which are partly confirming and partly non-confirming. This can induce the individual human person to generate some new ideas and feed them back again into the chatbot interface. While the ‘human knowledge cloud’ at a certain moment of time is as such ‘given’ and will not create ‘something new’, an individual person, as an ever-running learning process, could (in principle) generate new aspects and thereby transform the feedback out of the ‘human knowledge cloud’ into some ‘new configurations’.

This principle as such is not new. Human brains have been working for millions of years with a mechanism whereby the billions of abstract-concrete cognitive entities derived from perception and thinking are ‘stored’ in a more or less ‘unconscious space’ called memory, and only if some actual perception or thought ‘touches’ some of these stored entities by some ‘kind of similarity’ are they re-activated and become part of the actual perceiving and actual thinking. Thus ‘actual new things’ can become merged with ‘past stored things’, forming some new-past construct. This inherent duality of ‘new-past’ is the basis for the important human capability of transcending the almighty present and escaping its grip.

If we switch to the level of the human population, then the ‘human knowledge cloud’ plays the role of the ‘unconscious memory system’ collecting ‘past things’, and some kinds of chatbots (if they were good enough) could play the role of the ‘merger mechanism’ bringing together the ‘past’ (given in the ‘human knowledge cloud’) and the ‘present’ (given in the actually living human person); it is then up to the human person to generate some useful new-past patterns of thoughts and possible actions.

In the case of chat No.4 the human actor triggered lots of questions and suggestions to get out of the ‘human knowledge cloud’ those fragments which somehow correspond to the triggering words. And after about 8000 characters the chat ended with these paragraphs:

Human: OK, this all sums up to the conclusion that human rationality is composed of emotions, spiritual experience as well as conscious as well as unconscious cognitive processes. Do I miss something important?

Machine (citing human documents): No, you have accurately summed up the components of human rationality. Additionally, human rationality is also composed of social and cultural influences, which can shape and inform our understanding of the world and our decisions. Additionally, human rationality is often impacted by our values and beliefs, which can provide a moral foundation for our behavior and decision making. Finally, human rationality is also impacted by our environment, which can provide a source of information and insight.

By itself the chatbot would not have been able to start with such a characterization. It can ‘reach out’ into the ‘human knowledge cloud’, but the bot algorithm as such has not – as far as research can see at the moment – any kind of ‘creative rational power’ to transform the ‘given knowledge’ into some new ‘helpful’ knowledge. But on the other side, human persons would not be able either to make use of ‘all the available knowledge’ on their own.

In the course of the interaction the human questions could ‘drive’ the bot to collect ‘interesting facts’ which could then become ‘accepted’ by the bot because they had become ‘part of the chat’. Thus at the end of the chat the bot could accept that human rationality is composed of emotions, spiritual experience as well as conscious and unconscious cognitive processes. A human person ‘helped it’ to state this. The bot algorithm as such does not know anything and cannot understand anything. Because chatbots – until now – do not possess real emotions, no real mystical experience, no unconscious or conscious human-like cognitive processes, they have no intelligence in the human format.

It is an open question what kind of ‘intelligence’ they have at all. Until now there is a great number of ‘definitions’ around. None is accepted as ‘the’ definition; especially the relationship between the ‘collection of machine intelligence definitions’ and the possible – also not really existing – collection of ‘human intelligence definitions’ is more or less unclear. Thus we are somehow ‘dreaming’ of intelligence, but nobody can really explain what it is … We could seriously try, if we wanted to … but who wants it?

Talking with chatGPT. A Philosophical Encounter …


eJournal: uffmm.org
ISSN 2567-6458, 14.January 2023 – 15.January 2023
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This is a special thought experiment as part of the uffmm.org blog.

INTRODUCTION

Since its recent ‘coming out’ (November 2022) the chatbot chatGPT from openai.com has gained growing public attention: posts in blogs, journals, newspapers, TV … Even I noticed this new presence.

One way to understand it is to ‘look behind the scenes’, to look ‘into the system’, which is in principle possible. But just as a direct inspection of the human brain would tell you nearly nothing about its fantastic properties (not even in the light of the interpretations of the brain sciences), and just as looking into the ‘chips of computer hardware’ would likewise tell you nearly nothing about what a computer system is able to do, it will in a first step be of no great help to fill many pages with the algorithms of chatGPT. An isolated algorithm as such does not tell much, and a whole network of such algorithms does not either.

What can perhaps be interesting is a ‘comparison’ of our human behavior (and understanding) with the behavior of chatGPT and the ‘implicit meaning’ embedded in this behavior.

Thus, curious about what this chatGPT is, I logged into the system and started interacting with the chatGPT software in the offered ‘playground’. The first two chats were a bit like ‘swaying back and forth’; my next two chats became quite interesting.

After this experience I decided to document these chats in this blog in a 1-to-1 fashion, thus enabling further reflections about them later. [3] Rather quickly one learns that this adventure has many different dimensions, from ‘simply being impressed’ to ‘deep philosophical thinking’.

OK, let’s start with doing it.

The chat No.4, comprising about 8000 characters, ends with the following paragraphs:

Human: OK, this all sums up to the conclusion that human rationality is composed of emotions, spiritual experience as well as conscious as well as unconscious cognitive processes. Do I miss something important?

Machine (citing human documents): No, you have accurately summed up the components of human rationality. Additionally, human rationality is also composed of social and cultural influences, which can shape and inform our understanding of the world and our decisions. Additionally, human rationality is often impacted by our values and beliefs, which can provide a moral foundation for our behavior and decision making. Finally, human rationality is also impacted by our environment, which can provide a source of information and insight.

For more comments see:

COMMENT

wkp := wikipedia

[1] wkp en: chatGPT, URL: https://en.wikipedia.org/wiki/ChatGPT, is a chatbot with the technology of a ‘Generative Pre-trained Transformer’. See more there.

[2] wkp en: chatbot, URL: https://en.wikipedia.org/wiki/Chatbot, a piece of software which is able to interact in the format of a dialogue (a definition which says everything and nothing …)

[3] During the first two chats I didn’t save the dialogues. I started doing this with chat No.3.