chatGPT about Rationality: Emotions, Mysticism, Unconscious, Conscious, …

ISSN 2567-6458, 15 January 2023 – 16 January 2023
Author: Gerd Doeben-Henisch


This text reflects some ideas following the documented chat part of the blog.

Remark: For a follow-up reflection, see my post “chatGPT – How drunk do you have to be …” from 15/16 February 2023.

Chatbots as Interfaces to the Human Knowledge Cloud?

Already at the end of the documented chat No. 4 I had the impression that an interaction with the chatbot chatGPT is somehow different from what most people until now have reported, or stated in some reflected way, about chatGPT.

In a first, still somewhat vague, conclusion I stated: “Apropos ‘rational’: that is a point which really surprised me: as a kind of summary it came out that human rationality is composed of emotions, spiritual experience, and both conscious and unconscious cognitive processes. This is clearly not what most philosophers today would say. But it follows from the ‘richness of the facts’ which came as a resonance out of this chat. Not that the chatbot had given this summary in advance as an important characterization of rationality; rather, as the human counterpart I could summarize all these properties out of the different, separate statements [of chatGPT].”

And, indeed, the millions, if not billions, of documents in the World Wide Web reflect fragments of knowledge generated by humans, which as a whole form a ‘cloud of knowledge’. The reflected echo of the real world through the medium of human brains is distributed across libraries and the digital space. No individual person as such can make use of it; it is by far too big to be comprehensible.

Meanwhile, search algorithms can help us make fragments of this ‘human knowledge cloud’ visible, but the search results so far are ‘transformed’ in a way that is only of very limited use.

My encounter with chatGPT revealed some new glimpses of a possible new paradigm which perhaps wasn’t intended by OpenAI itself, but which now seems within reach: the individual brain has a limited capacity for ‘many documents’, but it still has an incredible ability to ‘transform’ billions of events into complex abstract patterns, inducing complex networks of relations, complex models, even complex theories.

If one looks at a chatbot like chatGPT as an ‘interface’ between a single human person and the ‘human knowledge cloud’, using only everyday language, then, depending on the quality of the chatbot, this individual human can, with some ideas and questions, ‘trigger’ those documents in the huge ‘human knowledge cloud’ which somehow ‘fit’ the triggering words. Thus this individual human person step-wise encounters those fragments of the ‘human knowledge cloud’ which are in ‘resonance’ with his own words. In a bad case this interaction resembles those ‘echo chambers’ of the internet and the real world where people only get back what they put in.
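The ‘resonance’ idea above can be pictured, in a deliberately simplified way, as retrieval by word overlap over a tiny toy ‘knowledge cloud’. This is only an illustrative sketch under my own assumptions (the documents, the Jaccard scoring, and the function name are all invented here); real chatbots work with learned representations, not word sets:

```python
# Toy illustration of 'resonance': trigger words pull out those
# fragments of a small 'knowledge cloud' that overlap with them.
# Purely illustrative -- not how chatGPT actually retrieves anything.

def resonance(trigger: str, cloud: list) -> list:
    """Score each document by its word overlap with the trigger."""
    t = set(trigger.lower().split())
    scored = []
    for doc in cloud:
        d = set(doc.lower().split())
        overlap = len(t & d) / len(t | d)  # Jaccard similarity
        scored.append((overlap, doc))
    # Highest resonance first; zero-overlap documents stay 'silent'.
    return sorted([s for s in scored if s[0] > 0], reverse=True)

cloud = [
    "rationality involves emotions and unconscious processes",
    "spiritual experience shapes human decision making",
    "the weather tomorrow will be sunny",
]
for score, doc in resonance("emotions and rationality", cloud):
    print(f"{score:.2f}  {doc}")
```

Only the first document ‘resonates’ with the trigger; the other two return nothing, which is exactly the echo-chamber risk: the cloud answers only in the vocabulary it is asked in.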

But in a creative case the input of the individual human person can trigger fragments which are partly confirming and partly non-confirming. This can prompt the individual human person to generate some new ideas and feed them back into the chatbot interface. While the ‘human knowledge cloud’ at a certain moment of time is ‘given’ as such and will not create ‘something new’, an individual person, as an ever-running learning process, could (in principle) generate new aspects and thereby transform the feedback out of the ‘human knowledge cloud’ into some ‘new configurations’.

This principle as such is not new. Human brains have been working for millions of years with a mechanism whereby the billions of abstract-concrete cognitive entities derived from perception and thinking are ‘stored’ in a more or less ‘unconscious space’ called memory; only if some actual perception or thought ‘touches’ some of these stored entities through some ‘kind of similarity’ are they re-activated and become part of the actual perceiving and thinking. Thus ‘actually new things’ can become merged with ‘past stored things’, forming some new-past construct. This inherent duality of ‘new-past’ is the basis for the important human capability to transcend the overpowering present and escape its grip.
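This memory mechanism can be sketched minimally, under the assumption that ‘kind of similarity’ can be approximated by shared features. The feature sets, the threshold, and the merge operation are all hypothetical choices of mine, not a claim about how brains actually work:

```python
# Sketch of an associative memory: stored 'past' entities are
# re-activated only when a new perception is sufficiently similar,
# and the two are merged into a 'new-past' construct.
# The feature-set representation is an illustrative assumption.

memory = [
    {"fire", "hot", "danger"},
    {"water", "cold", "wet"},
]

def perceive(new: set, threshold: float = 0.25) -> set:
    """Merge a new perception with every re-activated stored entity."""
    merged = set(new)
    for past in memory:
        sim = len(new & past) / len(new | past)  # Jaccard similarity
        if sim >= threshold:          # 'touched' by similarity
            merged |= past            # new and past fuse
    memory.append(frozenset(new))     # the present becomes past
    return merged

construct = perceive({"stove", "hot"})
print(sorted(construct))
```

Here the new perception re-activates the ‘fire’ memory via the shared feature ‘hot’ and inherits ‘danger’ from it: a new-past construct that goes beyond what was actually perceived.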

If we switch to the level of the human population, then the ‘human knowledge cloud’ plays the role of the ‘unconscious memory system’ collecting ‘past things’, and some kinds of chatbots (if they were good enough) could play the role of the ‘merger mechanism’ bringing together the ‘past’ (given in the ‘human knowledge cloud’) and the ‘present’ (given in the actually living human person); it is then up to the human person to generate some useful new-past patterns of thoughts and possible actions.

In the case of chat No. 4 the human actor triggered lots of questions and suggestions to get ‘out of the human knowledge cloud’ those fragments which somehow correspond to the triggering words. After about 8,000 characters the chat ended with these paragraphs:

Human: OK, this all sums up to the conclusion that human rationality is composed of emotions, spiritual experience as well as conscious as well as unconscious cognitive processes. Do I miss something important?

Machine (citing human documents): No, you have accurately summed up the components of human rationality. Additionally, human rationality is also composed of social and cultural influences, which can shape and inform our understanding of the world and our decisions. Additionally, human rationality is often impacted by our values and beliefs, which can provide a moral foundation for our behavior and decision making. Finally, human rationality is also impacted by our environment, which can provide a source of information and insight.

By itself the chatbot would not have been able to start with such a characterization. It can ‘reach out’ into the ‘human knowledge cloud’, but the bot algorithm as such has, as far as research can see at the moment, no kind of ‘creative rational power’ to transform the ‘given knowledge’ into some new ‘helpful’ knowledge. On the other side, however, human persons alone would not be able to make use of ‘all the available knowledge’ either.

In the course of the interaction the human questions could ‘drive’ the bot to collect ‘interesting facts’ which could then become ‘accepted’ by the bot because they have become ‘part of the chat’. Thus at the end of the chat the bot could accept that human rationality is composed of emotions, spiritual experience, and both conscious and unconscious cognitive processes. A human person ‘helped it’ to state this. The bot algorithm as such does not know anything and cannot understand anything. Because chatbots, until now, possess no real emotions, no real mystical experience, and no unconscious or conscious human-like cognitive processes, they have no intelligence in the human format.

It is an open question what kind of ‘intelligence’ they have at all. Until now a great number of ‘definitions’ has been around, but none is accepted as ‘the’ definition; in particular, the relationship between the ‘collection of machine intelligence definitions’ and the (also not really existing) collection of ‘human intelligence definitions’ is more or less unclear. Thus we are somehow ‘dreaming’ of intelligence, but nobody can really explain what it is … We could seriously try, if we want … but who wants to?