Category Archives: Knowledge

Self-destruction is an option. Note

This text is part of the text “Rebooting Humanity”

(The German Version can be found HERE)

Author No. 1 (Gerd Doeben-Henisch)

Contact: info@uffmm.org

(Start: July 14, 2024, Last change: July 14, 2024)

Starting Point

We are still here on this planet. However, the fact that our ‘human system’ largely programs itself without our ‘voluntary participation’ can become our downfall within the framework of a democracy and in the age of the internet; not necessarily, but highly likely…

Self-programming

Unlike today’s machines, all living beings — including humans — are designed to ‘self-program’: whatever we perceive of ourselves, of others, and of the environment is automatically carried into our interior, where it is also largely automatically structured, arranged, evaluated, and more. No one can resist this. One can only steer this ‘self-programming’ by shaping one’s environment so that certain perceptions become less likely and others more likely. Educational processes presuppose this capacity for self-programming, and they also decide what should happen in the course of education.

Dictatorship of the ‘Is There’

What has arrived within us is there for the time being. It forms our primary reality. When we want to act, we first rely on what is there. What is there is somehow ‘true’ for us, shaping our further perception and understanding. Something ‘around us’ that is ‘different’ is indeed ‘different’ and ‘does not fit’ with our inner truth.

Which points of comparison?

Suppose the majority of what is ‘inside us’, what we initially assume to be ‘true’, were ‘false’, ‘inappropriate’, or ‘inaccurate’ in the real world outside; we would then have little chance of recognizing our own ‘untruth’ as long as most people around us share the same ‘untruth’.[1] Recognizing ‘untruth’ presupposes that one somehow has ‘examples of truth’ that are suitable for being ‘compared’ with the prevailing untruth. However, the presence of examples of truth does not guarantee the recognition of untruth; it only increases the likelihood that it might happen.[2]

[1] Throughout the known history of humanity, we can observe how certain ‘untruths’ were able to dominate entire peoples, not only in autocratic systems.

[2] Systems with state-promoted untruth can be identified, among other things, by the suppression of diversity and of examples of truth, and by the allowance of only certain forms of opinion.

Modern Untruths

Unlike autocratic systems, democratic systems officially have ‘freedom of speech’, which allows for great diversity. In democratic systems of the Democracy 1.0 format, it is assumed that this guaranteed freedom is not abused.[1]

With the advent of modern media, especially media in conjunction with the internet, it is possible to make money on a large scale with the distribution of media content. The temptation is obvious to distribute media over the internet in such a way as to earn the maximum amount of money. A popular method is ‘advertising’: the longer and more often a user stays in front of content, the more advertising revenue flows. The temptation is great enough to offer users only what arouses their ‘automatic interest’. That ‘automatic interest’ is a very strong motive, and that it correlates specifically with content that does not require much thought, is confirmed daily. It is now known and increasingly well described that large parts of a population can be ‘specially programmed’ in this way.

In the ‘struggle of power-driven systems’, this possibility of externally programming people via the internet is massively exploited in so-called ‘hybrid warfare’. While autocratic systems are largely ‘closed’, modern democracies in the 1.0 format are almost an Eldorado for hybrid-warfare methods. Like the money-driven media industry, hybrid warfare also uses ‘light content’, mixing fragments of the ‘true’ with fragments of the ‘false’, particularly those that excite easily, and within a short time the ‘flock of believers’ in these messages grows.[2] The ‘congregation of these propaganda believers’ can usually not be reached by ‘arguments’: their convictions are programmed in such a way that all sources which could represent critical alternatives are ‘outlawed’ from the start.[3]

And unfortunately, it is true that democracies in the 1.0 format have so far appeared ‘weak’ and ‘helpless’ against this kind of use of freedom, although the recognition is slowly growing that there is such a thing as abuse through the ‘false programming of people’.

[1] Preventing systematic abuse of freedom in Democracies 1.0 is difficult to impossible without changing freedom of speech itself.

[2] The broad coverage of this propaganda is easy to recognize when one talks to people in different places in Germany (and abroad!) who do not know each other but who, in the course of conversation, tell more or less the same stories in a tone of conviction. Many (most?) of them even have a higher education. This raises the question of why an academic education apparently does so little to promote a ‘critical spirit’.

[3] A popular term is ‘lying press’. Anything that could become ‘dangerous’ is ‘a lie’, although those who speak of the ‘lying press’ do not seriously engage with that press at all.

What is the probability of survival of truth?

Anyone who has ever delved deeply into the question of ‘truth’ in their life, and who knows that it takes various research, investigations, considerations, and even experiments to ‘look behind the apparent phenomena’, along with much communication with other people, sometimes in other languages, knows that truth is not automatic; truth does not just happen; truth cannot be obtained for ‘free’. The use of truth for beneficial technologies, new forms of agriculture, new transportation systems, etc., appears desirable in retrospect, at the end of a long journey, when it somehow becomes ‘obvious’ what all this is good for, but at the beginning of the path, this is almost unrecognizable to everyone. The pitiable state of the education system in many countries is a telling testament to the low regard for education as a training process for truth.

Given the rapid spread of unscrupulous internet media businesses accompanied by a worldwide surge in hybrid warfare, the survival probability of truth seems to be decreasing. Democracies, as the actual ‘bastions of truth’, are experiencing themselves as a place of accelerated ‘evaporation of truth’.

Collective Knowledge: Generative AI in chatbot format as a helper

This text is part of the text “Rebooting Humanity”

(The German Version can be found HERE)

Author No. 1 (Gerd Doeben-Henisch)

Contact: info@uffmm.org

(Start: July 10, 2024, Last change: July 10, 2024)

Starting Point

As the texts of the book will gradually show, the term ‘collective knowledge’ is a crucial keyword for a characteristic that deeply defines humans, the life form of ‘Homo sapiens’. For an individual, ‘collective knowledge’ is hardly perceivable directly, and yet without this collective knowledge no single human would have any knowledge at all. Yes, this not only sounds like a paradox, it is a paradox. While a ‘contradiction’ between two different statements represents a factual incompatibility, a ‘paradox’ also conveys the impression of a ‘contradiction’, but in terms of the matter itself it is not an ‘incompatibility’.

The ‘knowledge of us individuals’ is real knowledge, but due to the finiteness of our bodies, our perception system, and our memory, we can in fact gather only a very small amount of knowledge ‘in us’. However, the more people there are, and the more ‘knowledge’ each person ‘produces’ daily — whether in analog or in digital form — the greater grows the amount of knowledge that we humans ‘deposit’ in our world. Newspapers, books, libraries, and databases can collect and sort this knowledge to a limited extent, but an individual can find and ‘process’ only small fractions of this ‘collected knowledge’. The gap between the ‘available collected knowledge’ and the ‘individually processable knowledge’ is constantly growing.

In such a situation, the availability of generative artificial intelligence in the format of chatbots (GAI-ChaBo) is almost an ‘evolutionary event’! This new technology does not solve all questions, but it can help the individual gain, ‘in principle’, a novel direct access to what we should call the ‘collective knowledge of humanity’.

Before the digitalization of the world …

Before the digitalization of the world, it was indeed laborious to convey thoughts and knowledge in a way that others could become aware of them: initially only through oral tradition, inscriptions in rock, and stones and clay with inscriptions, then parchment and papyrus. With the availability of paper, writing became easier (though there was the problem of durability); this led to the collection of texts, to books, and to the first libraries with books (libraries existed even for cuneiform clay tablets). Great libraries like the ‘Library of Alexandria’ became precious ‘collection points of knowledge’, but during their existence they were also subjected to various destructive events, which could lead to great losses of recorded knowledge.

A ‘mechanized production of books’ has been around since the 8th century, and modern book printing began in the 15th century. The development of libraries, however, progressed slowly for a long time, often only on a private basis. It was not until the 19th century that there was a stronger development of the library system, now including public libraries.

Despite this development, it remained difficult for an individual to access knowledge through a library, and even where this (usually privileged) access existed, obtaining specific texts, inspecting them, and making notes — or later copies — was cumbersome and time-consuming. The access of the individual reader resembled small ‘samples’ which, even within the framework of scientific work, remained very limited over the years. The language problem should not be overlooked either: the holdings of a library in a country A were predominantly restricted to texts in the language of country A, with only a small proportion of ‘foreign-language books’.

‘Acquisition of knowledge’ was therefore laborious, time-consuming, and very fragmented for an individual.

An increasingly important alternative to this hard-to-access field of library knowledge were modern magazines and journals, in many languages, with ever shorter ‘knowledge cycles’. However, the more such journals there are, the more painfully the natural limitations of the individual are felt in the face of the swelling journal knowledge. Currently (2024), it is hardly possible to estimate the exact number of scientific journals. In the field of computer science alone there are an estimated 2,000 journals with an average of about 25,000 (or more) articles per year, and the number of scientific journals in Chinese alone is stated to be over 10,000.[1]

[1] For more detailed information on the collection of Chinese journals, you can visit the East View page on China Academic Journals (CAJ) here.

With digitalization

Since the availability of the World Wide Web (WWW) in the 1990s, a unified information space has emerged that has continued to spread globally. Although we are currently witnessing an increasing ‘isolation’ of the WWW among countries, the development of a common information space is unstoppable.

Alongside this information space, technologies for ‘collecting,’ ‘storing,’ ‘retrieving,’ and ‘analyzing’ data have also evolved, making it increasingly possible to find ‘answers’ to ‘questions’ from ever more sources.

With the advent of so-called ‘Generative Artificial Intelligence in the format of Chatbots’ (GAI-ChaBo) since 2022, this ‘data utilization technology’ has reached a level that not only finds ‘raw data’ but also allows an individual user with their limited knowledge direct access to ‘collective human knowledge,’ provided it has been digitized.

For the ‘evolution of life on this planet,’ this availability of collective knowledge to the individual may be the most significant event since the appearance of Homo sapiens itself about 300,000 years ago. Why?

The next level?

The sustainability debate over the last approximately 50 years has contributed to an important realization. Alongside the rather individual perspective on life, and alongside strong regional or national interests and perspectives of success, perspectives have gradually (though not universally) come into consciousness which suggest, now substantiated by diverse data and models, that there are problems exceeding the event horizon of individual particular groups — and these groups can be entire nations. Many first think of ‘resources’ that are becoming scarce (e.g., fish stocks), polluted (the world’s oceans), or dangerously reduced (forest systems, raw materials, life forms, etc.). What has hardly been discussed so far, although it should be the most important topic, is the factor that produces all these problems: Homo sapiens himself, who by his behavior, indeed even by his sheer numbers, causes all the known ‘problems’. And this does not happen ‘automatically’, but because the behavior of Homo sapiens on this planet is ‘controlled’ by his ‘inner states’ in such a way that he seems ‘incapable’ of changing his behavior, because he does not have his ‘inner states’ under control.

These inner states, roughly considered, consist of needs, different emotions, and collected experiences linked with knowledge. Knowledge provides the ‘images’ of oneself, of others, and of the world as a Homo sapiens sees it. Needs and emotions can block, ‘bind’, or change knowledge. The knowledge that is currently available, however, has great power: ultimately, a Homo sapiens can only do what his current knowledge tells him — if he listens to his knowledge and not to ‘others’ who matter to him because of his life situation.

If one is now interested in a ‘possible future,’ or — even more specifically — in a ‘possible future that is as good as possible for as many people as possible,’ and sustainable, then the challenge arises as to how people in the situation of everyday life, a certain form of the present, can ‘mentally’ surpass this present in such a way that, despite the current present, they can ‘somehow’ think of a piece of ‘possible future.’

‘Generative Artificial Intelligence in the format of chatbots’ (GAI-ChaBo) can help make the (approximate) entirety of past knowledge accessible, albeit only punctually, based on questions. But the ‘knowledge of the past’ yields ‘nothing new’ out of itself, and, above all, the past does not necessarily contain those ‘goals’ and ‘values’ that are needed in the current present in order to ‘want’ precisely that possible future that will matter.

With this challenge, Homo sapiens collides with full force ‘with himself’, and he will not be able to ‘hide behind GAI-ChaBo’. A GAI-ChaBo only ever delivers what people have previously said and done, albeit in a breadth that an individual could never achieve; ultimately, a GAI-ChaBo functions only like a kind of ‘mirror of the human collective’. A GAI-ChaBo cannot replace humanity itself. A GAI-ChaBo is the product of collective human intelligence and can make the entirety of this collective intelligence visible in outline (an incredibly great achievement), but no more.

For the next level, Homo sapiens must somehow manage to ‘get a grip on himself’ in a completely different way than before. There are hardly any usable role models in history. What will Homo sapiens do? GAI-ChaBo is an extraordinary success, but it is not the last level. We can be curious…

Automation of Human Tasks: Typology with Examples of ‘Writing Text’, ‘Calculating’, and ‘Planning’

This text is part of the text “Rebooting Humanity”

The German version can be found HERE.

Author No. 1: Gerd Doeben-Henisch
Contact: cagent@cognitiveagent.org

(Start: June 5, 2024, Last updated: June 6, 2024)

Starting Point

In the broader spectrum of human activities, there are three ‘types’ of action patterns that are common in everyday life and crucial for shared communication and coordination: (i) writing texts, (ii) describing (calculating) quantitative relationships, and (iii) planning possible states in an assumed future. All three types have been present since the beginning of documented ‘cultural life’ of humans. The following attempts a rough typology along with known forms of implementation.

Types and their implementation formats

(a) MANUAL: We write texts ‘manually’ using writing tools and surfaces. Similarly, in quantitative matters, there is manual manipulation of objects that represent quantitative relationships. In planning, there is the problem of how to represent a ‘new’ state: if the ‘new’ is ‘already known’, one can revert to ‘images/symbols’ of the known; if it is ‘truly new’, it becomes difficult; there is no ‘automatism of the new’. How do you describe something that has never existed before? Added to this is the — often overlooked — problem that ‘objects of planning’ are usually ‘value and goal dependent’; needs, intentions, expectations, factual necessities can play a role. The latter can be ‘socially standardized’, but given the ‘radical openness of the future’, history has shown that ‘too strong standardizations’ can be a shortcut to failure.

(b) MACHINE SUPPORT: Skipping the phase of ‘partially mechanical’ support and moving on to the early phase of ‘computer support’, there are machines that can be ‘programmed’ using ‘programming languages’ so that both the writing instrument and the writing surface are represented by the programmed machine, which allows many additional functions (correcting texts, saving, multiple versions, automatic corrections, etc.). However, one still has to do the writing oneself: letter by letter, word by word, etc. In ‘calculating’, writing things down is still very laborious, but the ‘calculation’ then takes place partially ‘automatically’. Planning is similar to writing texts: the ‘writing down’ is supported (with all the additional functions), but ‘what’ one writes down is left to the user. Apart from ‘quantitative calculating’, a ‘projection’ or ‘prediction’ is generally not supported; an ‘evaluation’ is not supported either.

(c) LANGUAGE-BASED SUPPORT: In the phase of ‘language-based support’, manual input is replaced by speaking. For selected areas of texts, this is becoming increasingly successful; for ‘quantitative matters’ (calculating, mathematics, etc.), it hardly works at all; for planning, too, only to a very limited extent, where already formulated texts are concerned.

(d) ARTIFICIAL INTELLIGENCE ENVIRONMENTS: The Artificial Intelligence (AI) environment is considered here in the context of dialogue formats: the user can ask questions or issue commands, and the system responds. The relevant formats of AI here are the so-called ‘generative AIs’ in chatbot format. Given the ‘existing knowledge’ of humans in the format of ‘stored documents/images/…’, and given the ‘dialogue formats’ of humans (also through explicit training), these generative AIs can process questions and commands in ‘formal proximity’ to the known material within a dialogue, so that one does not have to intervene oneself. Corrections and changes in detail are possible. Both in ‘text creation’ and in ‘calculating’, this can function reasonably well within the realm of the ‘known’. However, actual accuracy in the ‘real world’ is never guaranteed. In ‘planning’, the specific problem remains that, for the AI, the ‘truly new’ is possible only to a limited extent, within the framework of combinatorics. The truth reservation remains, but it also applies in the ‘manual case’, where the human plans himself. The evaluation problem is likewise limited to already known evaluation patterns: the future is not the same as the past; the future is more or less ‘different’. Where should the necessary evaluations come from?

An interesting question remains: in what sense does the advancing support by generative AIs actually support communication and coordination among people, and specifically, whether and to what extent the central challenge of ‘future planning’ can be additionally supported by it. The fact is that we humans find future planning in the areas of social life, community life, and larger contexts such as counties, states, etc., difficult to very difficult. But for a sustainable future, successful planning seems indispensable.

A ‘Logic of Life’?

This text is part of the text “Rebooting Humanity”

(The German Version can be found HERE)

Author No. 1 (Gerd Doeben-Henisch)

Contact: info@uffmm.org

(Start: June 25, 2024, Last change: June 28, 2024)

Starting Point

The excerpt discusses the concept of ‘collective human intelligence (CHI)’ and reflects on the foundational schema of all life: reproduction of Generation 1, birth of Generation 2, growth of Generation 2, followed by the onset of Generation 2’s behaviors accompanied by learning processes, then reproduction of Generation 2, and so on. It highlights how genetic predispositions and ‘free adapting’, commonly referred to as ‘learning’, alternate in phases. While genetic guidelines enable structures with typical functionalities that open up ‘possible action spaces’, the filling of these spaces is not genetically determined. This makes sense because the real ‘biological carrier system’ is not isolated but exists in an ‘open environment’ whose specific configuration and dynamics constantly change. From the perspective of ‘sustainable survival’, it is crucial that the biological carrier system has the ability not only to grasp the nuances of the environment at specific moments but also to represent, combine, and test them in the context of space and time. These simple words point to a highly complex process that has become known as ‘learning’, but the simplicity of this term may hide the fact that we are dealing with an ‘evolutionary miracle of the highest order’. The common concept of ‘evolution’ is too limited in this perspective; it describes only a fragment.

A ‘Logic of Life’?

Basic Pattern of All Life

The ‘basic pattern of all life’ provokes many considerations. It is striking how phases of genetic change, which imply new structures and functionality, ultimately transform the ‘initial space’ of genetic changes into new, significantly more complex spaces, not just once, but repeatedly, and the more often, the more complexity comes within reach.

The life form of ‘Homo sapiens’—us, who call ourselves ‘humans’—represents a provisional peak of complexity in the temporal view of history so far, but already suggests from within itself a possible ‘next evolutionary stage’.

Even viewed closely, the individual human—with his structured cell galaxy, with the possible functions here, with his individual learning—represents an extraordinary event—relative to the entire known universe—, but this ‘individual’ human in his current state is already fully designed for a ‘plurality of people’, for ‘collective behavior’, for ‘collective learning’, and certainly also for ‘collective achievements’.

[1] The world of ‘molecules’ is transformed into the world of ‘individual cells’; the world of ‘individual cells’ is transformed into the world of ‘many cells (cellular complexes)’; the world of ‘cell complexes’ is transformed into the world of ‘structured cell complexes’, …, the world of structured ‘cell galaxies’ is transformed into the world of ‘cooperating structured cell galaxies with individual and collective learning’, …

Temporal Classification

Not only have the last few millennia shown what many people can achieve together; the ‘modern engineering achievements’ in particular involve the collaboration of many thousands, if not tens of thousands, of experts, distributed globally, over extended periods (months, years, many years), simultaneously in many different languages, dealing with highly complex materials and production processes — processes in which meta-reflection and feedback loops are taken for granted. These processes, initiated globally since the great war in the mid-20th century, have since become more and more the everyday standard worldwide.[2] The invention of programmable machines, information networks, and highly complex storage systems, and the provision of ever more ‘human-compatible interfaces’ (visual, acoustic, tactile, …), up to those formats that make it appear to the human user as if ‘behind the interface’ there is another living person (even if it is ‘just’ a machine), have all occurred within just about 70 years.

While it took a considerable amount of time from the first evidence of biological life on planet Earth (around -3.4 billion years) to the first proven appearance of Homo sapiens in North Africa (around -300,000 years), the development of the complex ‘mental’ and ‘communicative’ abilities of Homo sapiens from around -300,000 years onward was initially slow (invention of writing around -6000), but then accelerated significantly over the last approximately 150 years: the complexity of events is almost overwhelming. However, considering the entire time since the presumed formation of the universe about 13.7 billion years ago, a rough time schema emerges:

After about 75% of the total time of the existence of the universe, the first signs of biological life.

After about 99.998% of the total time of the existence of the universe, the first signs of Homo sapiens.

After about 99.999998% of the total time of the existence of the universe, the first signs of complex collective human-technical intelligence achievements.

This means that, in relation to the total time, the periods for the ‘latest’ leaps in complexity are so ‘short’ that they can no longer be distinguished on a large scale. This can also be interpreted as ‘acceleration’. It raises the question of whether this ‘acceleration’ in the creation of increasingly complex collective intelligence achievements reveals a ‘logic of the process’ that would enable further considerations.
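The percentages in the rough time schema above can be checked with a few lines of arithmetic. A minimal sketch in Python; the third milestone is not dated in the text, so placing it roughly 270 years ago (the early industrial era) is this example’s own assumption:

```python
# Fraction of the universe's age that had elapsed when each milestone occurred,
# using the age of the universe assumed in the text (about 13.7 billion years).
UNIVERSE_AGE = 13.7e9  # years

def elapsed_fraction(years_ago: float) -> float:
    """Share of cosmic time already passed at the given point in the past."""
    return (UNIVERSE_AGE - years_ago) / UNIVERSE_AGE

print(f"First biological life (~3.4 bn years ago): {elapsed_fraction(3.4e9):.0%}")
print(f"First Homo sapiens (~300,000 years ago):   {elapsed_fraction(3e5):.3%}")
# Placing the milestone ~270 years ago is an assumption of this sketch;
# it reproduces the 99.999998% figure quoted in the schema.
print(f"Collective human-technical achievements:   {elapsed_fraction(270):.6%}")
```

The exact percentage for the third milestone depends entirely on where one dates the onset of ‘complex collective human-technical intelligence achievements’; the point of the exercise is only that, on the cosmic scale, all recent dates collapse toward 100%.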

[2] Here began the career of the modern form of ‘Systems Engineering’, a quasi-standard of problem solving, at least in the English-speaking world.

Complexity Level: Biological Cell

With the description of a ‘basic pattern of all life’, a pattern emerges that is describable at least from the complexity level of a biological cell onward.

The complexity level preceding the biological cell is that of ‘molecules’, which can be involved in different process chains.

In the case of the biological cell, we have, among other things, the case where molecules of type 1 are used by molecules of type 2 as if the type 1 molecules were ‘strings’ that ‘represent’ molecules of type 3, which are then ‘produced’ through certain chemical processes. Put differently, there are material structures that interpret other material structures as ‘strings’, possessing a ‘meaning assignment’ that leads to the creation of new material structures.

Thus, biological cells already demonstrate the use of ‘meaning assignment’ as we know it, structurally, from the symbolic languages of complex cell galaxies. This is extremely astonishing: how can ‘ordinary molecules’ of type 2 have a ‘meaning assignment’ that allows them to interpret other molecules of type 1 as ‘strings’ in such a way that they — according to the meaning assignment — lead to the organization of other molecules of type 3, which ultimately form a structure with functional properties that cannot be derived ‘purely materially’ from the type 1 molecules?
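A well-known concrete instance of molecules being read as ‘strings’ is the genetic code: an mRNA molecule (type 1) is read in triplets (codons) by the translation machinery (type 2), and each codon is mapped to an amino acid of the resulting protein (type 3). A minimal sketch in Python; the tiny table uses a few real codon assignments, but the function names and the encoding are this example’s own simplification, not something from the text:

```python
# The mapping table is the 'meaning assignment': it relates triplets of the
# 'string' to something entirely different, namely amino acids.
CODON_TABLE = {
    "AUG": "Met",   # methionine (also the start codon)
    "UUU": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "UAA": "STOP",  # stop codon: end of the 'sentence'
}

def translate(mrna: str) -> list:
    """Read the string three symbols at a time and apply the meaning assignment."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']
```

The point of the sketch is structural: nothing in the chemistry of the symbols ‘A’, ‘U’, ‘G’, ‘C’ themselves determines the amino acids; the ‘meaning’ resides in the reading machinery, just as the text describes.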

… !! New text in preparation !!..

[3] In this context, the term ‘information’ (or ‘biological information’) is commonly used in the literature. If this usage refers to the terminology of Claude Shannon, then it would be difficult to apply, as in the specific case it is not about the transmission of ‘signal elements’ through a signal channel to ‘received signal elements’ (a structural 1-to-1 mapping), but about an assignment of ‘signs (= signal elements)’ to something ‘completely different’ than the original signal elements.

A ‘Logic’?

When the main title tentatively (‘hypothetically’) mentions a ‘Logic of Life’, it is important to clarify what specifically is meant by the term ‘logic’ as a possible concept.

The term ‘logic’ dates back to Aristotle, who introduced it around 2,400 years ago in Greece. Around 1000 AD it was transmitted, via Islamic culture, into the Latin of the Christian Middle Ages, profoundly influencing the intellectual life of Europe until the late Middle Ages. In contrast to ‘modern formal logic’, which emerged from the late 19th century onward, ‘Aristotelian logic’ is also referred to as ‘classical logic’.

If one disregards many details, classical and modern logic differ fundamentally in one aspect: in classical logic, the ‘linguistic meaning’ of the expressions used plays an important role, whereas in modern logic, linguistic meaning is completely excluded. ‘Mutilated remnants’ of meaning can still be found in the concept of an ‘abstract truth’, which is reflected in ‘abstract truth values’, but their ‘meaning content’ is completely empty.

Despite all their differences, classical and modern logic are united by the concept of ‘logical reasoning’: suppose one has a set of expressions that the users of logic deem ‘somehow true’; then there are ‘rules of application’ for generating, from this set of ‘assumed true expressions’, other expressions that can then also be considered ‘true expressions’. This ‘generation’ of new expressions from existing ones is called ‘reasoning’ or ‘inference’, and the ‘result’ of the reasoning is then a ‘conclusion’ or an ‘inference’.

A more modern—formulaically abbreviated—notation for this matter would be:

A ⊢Tr B

Here, the symbol ‘A’ represents a set of expressions assumed to be true, ‘Tr’ stands for a set of transformation instructions (usually called ‘rules of inference or inference rules’), ‘B’ stands for a generated (derived) expression, and ‘⊢’ refers to an ‘action context’ within which users of logic use transformation rules to ‘generate B based on A’.

A ‘normal’ logician, in the case of the symbol ‘⊢’, does not speak of an ‘action context’ but usually just of a ‘concept of inference’ or — with an eye to the widespread use of computers — of an ‘inference mechanism’. However, this way of speaking should not obscure the fact that what actually exists are, for one thing, concrete ‘objects’ in the form of the expressions ‘A’ and ‘B’, and also in the form of the expressions ‘Tr’. These expressions as such have no ‘meaning’, nor can they ‘generate anything by themselves’. For concrete expressions ‘B’ to be classified as an ‘inference’ from the expressions ‘A’, ‘really generated’ by means of ‘Tr’, a real ‘process’ must take place in which ‘B’ is ‘really generated’ from ‘A’ ‘in the sense of Tr’.

A process is a real event ‘in time’. There is a real state that contains the object ‘A’, and a real logic user who has a ‘concept = model’ of ‘logical reasoning’ in his head, in which the ‘expressions’ of the generation rules Tr are linked with concrete process steps (the meaning of the expressions Tr). The logic user can thereby identify the expressions belonging to A as part of the generation rules, in such a way that the generation rules can assign a new expression B to the expressions A. If this assignment ‘in the mind of the logic user’ (commonly referred to as ‘thinking’) is successful, he can then, in a ‘subsequent situation’, write down a new expression B with reference to the concrete expressions Tr. Another logic user will only accept this new expression ‘B’ if he too has a ‘concept = model’ of logical reasoning in his head that leads to the same result ‘B’ in his mind. If the other logic user comes to a different result than ‘B’, he will object.
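The schema A ⊢Tr B, read as a real process, can be illustrated with a toy ‘inference mechanism’. A minimal sketch in Python, where Tr contains a single rule (modus ponens); the encoding of expressions as strings and tuples is this example’s own choice, not the author’s notation:

```python
# Toy illustration of A |-Tr B: from a set A of expressions assumed true,
# the transformation rules Tr really generate new expressions B.
# Tr here is one rule, modus ponens: from P and ("if", P, Q), infer Q.

def modus_ponens(assumed: set) -> set:
    """One application pass of the rule over all matching expressions."""
    derived = set()
    for expr in assumed:
        if isinstance(expr, tuple) and expr[0] == "if":
            _, p, q = expr
            if p in assumed and q not in assumed:
                derived.add(q)
    return derived

def infer(assumed: set) -> set:
    """Apply the rule repeatedly until no new expression can be generated."""
    closure = set(assumed)
    while True:
        new = modus_ponens(closure)
        if not new:
            return closure
        closure |= new

A = {"it_rains", ("if", "it_rains", "street_wet"), ("if", "street_wet", "slippery")}
B = infer(A)
print("street_wet" in B, "slippery" in B)  # both derivable from A via Tr
```

Note that the program makes the point of the passage above concrete: the expressions themselves generate nothing; only the running `infer` process, which links the rule expressions to concrete process steps, actually produces ‘B’ from ‘A’.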

–!! Not finished yet! —

Blind’s World One (1995!)


(Last modified: June 14, 2024)

Starting Point

How can one philosophically conceive of an artificial intelligence (AI) interacting with real people, an AI that learns real language with real meaning on its own? Prompted by an offer from Ars Electronica ’95 to introduce a philosophically inspired art project, I spent intense months with an ad hoc software team (the team was wonderful!) designing the interactive network version of a small artificial world based on philosophical considerations, which the software team then implemented. In this world lived ‘blind Knowbots’ that could communicate with the outside world, using their other sensory experiences and basic bodily states as the basis for assigning meanings to their language. Whatever language the users (mostly children!) used, the Knowbots could link it with their real-world experiences. This experiment has shaped me for many years, in fact up to today.

Blind’s World One

(The text was copied from the Ars Electronica ’95 book since the text is no longer accessible)

Humans and machines that can generate sound


(Start: June 14, 2024, Last Modification: June 14, 2024)

Starting Point

Since September 2015, I have been repeatedly trying, both theoretically and practically, to understand what sound art really is: What is sound? What does it do to us? One consideration led to another; in between were real experiments and live performances. There were also long periods of ‘standstill’… At a sound art concert on June 11, 2024, at Mousonturm in Frankfurt, something clicked in my mind regarding a fundamental question, and suddenly the uniqueness of ‘collective human intelligence’ in confrontation with so-called ‘intelligent machines’ became newly clear to me.

XEROX EXOTIQUE #090 – IMPRESSIONS

This post on an associated blog is about people and machines that can generate sound.

The trigger was a sound art event at the Mousonturm in Frankfurt am Main on June 11, 2024.

The translation follows:

A Hint from a Friend

Following a tip from Tobias (PiC, Xerox Exotique, …), I made a trip yesterday to the sound art event #090, organized by Xerox Exotique at the Mousonturm in Frankfurt am Main.

Impressions

SKETCH: Mousonturm, a small event area to the right of the entrance with a small stage. Some participants are highlighted. Detailed information about the event can be found on the XEROX EXOTIQUE website (xeroxex.de).

What to Talk About?


A sound art event like this offers numerous starting points for discussion…

Since the beginning of Philosophy in Concert (PiC), I have been driven by the question of how to situate soundscapes in the life-world of people so that they do not appear as ‘foreign bodies,’ somehow ‘detached’ from the process of humans on this planet, but are made visible as a ‘living part’ of this very real and dynamic process.

At concerts based on written music (scores…), everything revolves around the sets of symbols that someone has produced, which others convert into sounds, and perhaps around the person who holds the ‘office of the interpreter’ and tells the other implementers how they should convert. The ‘typically human’ aspect may then be recognized in the ‘background of the notation’, in the way of ‘converting’ or ‘interpreting’, and then in the effect of the sound cloud in the room on the people who sit, listen, and experience various emotions…

How much of the human process is revealed in such a form of event?

There is almost never any talking, and if there is, what is there to talk about? About one’s own feelings? About the technical intricacies of the written score? About the skill of the converters? About the beauty of a voice? Yes, it is not easy to integrate the sound event into the life process… and yet it affects us somehow; one remembers it, talks about it later, may rave or complain…

The Hidden Human


Let’s briefly change the context and dive directly into the current global euphoria that many people feel over the new chatbots, products of ‘generative Artificial Intelligence’ (chatGPT & Co), which fascinate more and more people in everyday life.

The algorithms behind the interface are comparatively simple (although their global deployment is an impressive feat of engineering). What fascinates the people in front of the interface is ‘how human the algorithms appear in the interface’. They use everyday language just as ‘we humans’ do, ultimately even better than most of those who sit in front of them. And, almost irresistibly, many see in this language and in the accessible knowledge ‘behind the interface’ not a simple machine but something ‘profoundly human’. What is ‘human’ about this appearance, however, are the words, sentences, and texts that the simple algorithm has compiled from millions of documents, all of which come from humans. On its own, this algorithm could not generate a single sentence; it lacks the fundamental prerequisites. The ‘actual’ wonder sits in front of the ‘apparent’ wonder: it is we humans who have, are, and represent something of which we ourselves are barely aware (we are ‘blind through ourselves’), and we marvel when simple algorithms show us what we are… ultimately a mirror of humanity. But most do not notice; we get excited about simple algorithms and forget that we ourselves are exactly the wonder that has produced all this and continues to produce it… we become blind to the real wonder that we ourselves are, each of us, all of us together.

Collective Intelligence – Collective ‘Spirit’…

In the case of algorithms, the term ‘artificial intelligence (AI)’ has long been in use, and, more modestly, ‘machine learning (ML)’. However, the concept of intelligence has never been truly standardized, even though psychology has been developing and experimentally researching interesting concepts of ‘intelligence’ for humans (e.g., the ‘Intelligence Quotient (IQ)’) for about 120 years. Communication between psychology and computer science, however, has never been very systematic; rather, each side does ‘its own thing’. Precisely determining the relationship between ‘human intelligence (HI)’ and ‘artificial intelligence (AI)’ has therefore been quite difficult so far; the terms are too vague and not standardized. Matters are further complicated by the fact that the ‘actually impressive achievements’ of humans are not their ‘individual achievements’ (important as these are), but everything that ‘many people together over a long time’ have accomplished and are accomplishing. The term ‘Collective Human Intelligence (CHI)’ points in this direction but is probably still too narrow, as it is not just about ‘intellect’ but also about ‘communication’, ‘emotions’, and ‘goals’. Unfortunately, research on Collective Human Intelligence still lags far behind. The focus on the individual runs deep, and in times of artificial intelligence, when individual machines achieve remarkable feats (on the basis of the collective achievements of humans!), even the study of individual human intelligence has fallen into the shadows of attention.

How do we get out of this impasse?

Sound Art as a Case Study?


I hadn’t attended a sound art concert in many years. But there were still memories, various aspects swirling through my mind.

The tip from Tobias catapulted me out of my usual daily routines into such a sound art event at the Mousonturm on June 11, 2024, at 8:00 pm.

As I said, there is a lot to talk about here. For a long time, I have been preoccupied with the question of the ‘collective’ dimension in human interaction. The ‘synchronization’ of people by algorithms is nothing unusual. In a way, humans have always been ‘standardized’ and ‘aligned’ by the prevailing ‘narratives,’ and the rapid spread of modern ‘narratives’ and the speed with which millions of people worldwide join a narrative is a fact. Most people (despite education) are apparently defenseless against the emergence of ‘narratives’ at first, and then very soon so strongly ‘locked-in’ that they reproduce the narratives like marionettes.

What role can ‘sound art’ play against such a backdrop? Sound art, where there is nothing ‘written’, no ‘central interpreter’, no ‘converters of the written’, but, yes, what?

That evening, the first group, ‘Art Ensemble Neurotica’, seemed to me to most broadly illustrate the profound characteristics of sound art. In the two following solo acts, where the individual performer interacted with sound they themselves produced, the special dimension of sound art was also present, in my view, but more concealed due to the arrangement.

In the case of Neurotica: Four people generated sound, live, each individually: Dirk Hülstrunk (narrator) – Michael Junck (digital devices) – Johannes Aeppli (percussion) – Guido Braun (strings & conductor). Each person on stage was a ’cause’, equipped with instruments that allowed all sorts of sound effects. There were no written notes; there hadn’t been a real rehearsal beforehand, but some arrangements (according to Guido).

Anyone who knows how diversely each individual can generate sound under these conditions can imagine that this seemingly infinite space gives rise to tension about what will happen next.

Describing in detail the totality of sound that emanated from the four performers over 45 minutes is nearly impossible. At no point did it seem that one sound source drowned out or overwhelmed another (I exchanged views immediately afterwards with Roland, who sat next to me by coincidence and is incorrectly identified as Robert in the sketch; we did not know each other); everything appeared side by side and intertwined in a somehow ‘fitting form’, appealing and stimulating. Patterns from all four individual sources could be recognized interacting with each other over extended phases, yet they remained supple, changing shape. Effects such as volume shifts, echo, reverb, and distortion did not feel out of place but seemed ‘harmonic’, giving each source a ‘character’ that combined with the others into an overall impression…

Can such an arrangement of sounds be taken ‘purely abstractly’, detached from their creators? Could software generate such a complex sound event?

While the listener initially hears only the produced sound, and from this perspective might not immediately be able to decide whether it matters who produces this sound and how, from the perspective of creation it quickly becomes clear that these sounds cannot be isolated from the producer, from the ‘inner states’ of the producer. Ultimately, the sound is created in the moment, in the interplay of many moments inside each individual actor (human). And this individual is not ‘alone’: through his perception and through many jointly experienced sound processes, each possesses a ‘sound knowledge’ that he more or less ‘shares internally’ with the others, and so each can bring his current inner states into a ‘dialogue’ with this ‘shared sound knowledge’. It is precisely this inner dialogue (largely unconscious) that enables complex synchronizations which an individual alone, without a shared history of sound, could not achieve. The resulting complex sounds are therefore not just ‘sound’; they are manifestations of the internal structures and processes of their creators, whose internal meaning is linked with the external sound. Sound art sound is thus not just sound one hears; it is also fully a kind of ‘communication’ of ‘inner human states’, spread over several collaborating individuals, and thus a truly collective event that presupposes the individual but extends far beyond him in the happening. In this form of distributed sound art, the individual can experience himself as part of a ‘WE’ that would otherwise remain invisible.

Postscript


So, I now have this strange feeling that participating in this sound art event has led me deeper into the great mystery of us humans, who we are, that we have a special dimension of our existence in our ability to ‘collectively feel, think, and act,’ which somewhat liberates us from ‘individuality’ towards a very special ‘We’.

While a soundscape is ‘authentic’ and as such not ‘underminable’, ‘narrative spaces’—the use of language with an assumed, but not easily controllable potential meaning—are extremely ‘dangerous’ spaces: assumed meanings can be false and—as we can see today on a global scale—are predominantly wrong with correspondingly devastating consequences. Moving in distributed sound spaces has its ‘meaning’ ‘within itself’; the ‘Self in sound together’ is not underminable; it is mercilessly direct. Perhaps we need more of this…

Changes


(Start: June 14, 2024, Last Modification: June 14, 2024)

Starting Point

In both the section “Talking about the World” and the section “Verifiable Statements,” the theme of ‘change’ continuously emerges: our everyday world is characterized by everything we know being capable of ‘changing,’ including ourselves, constantly, often unconsciously; it just happens. In the context of people trying to collectively create an understanding of the world, perhaps also attempting to ‘plan’ what should be done together to achieve the best possible living situation for as many as possible in the future, the phenomenon of ‘change’ presents an ambivalent challenge: if there were no change, there would be no future, only ‘present’; but with change occurring, it becomes difficult to ‘look into the future’. How can we know into what future state all these changes will lead us? Do we even have a chance?

Changes

Motivation


In the current scenario, we assume a context of people trying to collectively form a picture of the world, who may also be attempting to ‘plan’ joint actions. It’s essential to recognize that the ‘relevant’ topics of interest are influenced by ‘which people’ one is working with, as each group within a society can and often does have its ‘own perspectives’. It is not only in ‘autocratic’ societal systems that citizens’ perspectives can be easily overlooked; there are plenty of examples in officially ‘democratic’ systems where citizens’ concerns are also overlooked, warranting closer analysis.

This discussion initially focuses on the fundamental mechanisms of ‘change’, specifically the ‘collective description’ of changes. The motivation for this emphasis stems from the fact that different people can only ‘coordinate (align) their actions’ if they first manage to ‘communicate and agree’ on the ‘contents of their actions’ through ‘communication processes’.

While simple situations or small groups may manage with verbal communication alone, most scenarios require ‘written texts’ (documents). However, written text has a disadvantage compared to direct speech: a ‘text’ can be ‘read’ in a situation where the ‘reader’ is not currently in the situation being described. In terms of ‘verifiability of statements’, this presents a real challenge: every text, due to ‘learned meaning relationships’, automatically has a ‘meaning’ that is activated ‘in the mind of the reader’, but it is crucial to verify whether there is a ‘real verifiable correspondence’ to the situation ‘described’ in the text.

If we assume that a group of people seriously contemplates a ‘future’ that they believe is ‘more likely to occur than not’—not just ‘theoretically’ but ‘actually’—then there must be a way to design the description of a ‘starting situation’ such that all participants have a chance to verify its accuracy in their shared everyday life.

Verifiable Statements


(Start: June 7, 2024, Last change: June 9, 2024)

Starting Point

Speaking in everyday life entails that, through our manner of speaking, we organize our perceptions of the environment. This organization occurs through thinking, which manifests itself in speaking. As previously described, while the ability to speak is innate to us humans, the way we use our speech is not. In speaking we automatically create an order, but whether this order actually corresponds to the realities of our everyday world requires additional verification. This verification does not happen automatically; we must explicitly want it and carry it out concretely.

Verifiable Statements

If one accepts the starting point that linguistic expressions, which enable our thinking, are initially ‘only thought’ and require additional ‘verification in everyday life’ to earn a minimal ‘claim to validity in practice’, then this basic idea can be used as a starting point for the concept of ’empirical verifiability’, which is seen here as one of several ‘building blocks’ for the more comprehensive concept of an ’empirical theory (ET)’.

Language Without Number Words


Here are some everyday examples that can illustrate some aspects of the concept of ’empirical verifiability’:

Case 1: There is an object with certain properties that the involved persons can perceive sensorily. Then one person, A, can say: ‘There is an object X with properties Y.’ And another person, B, can say: ‘Yes, I agree.’

Case 2: A specific object X with properties Y cannot be sensorily perceived by the involved persons. Then person A can say: ‘The object X with properties Y is not here.’ And another person, B, can say: ‘Yes, I agree.’

Case 3: There is an object with certain properties that the involved persons can sensorily perceive, which they have never seen before. Then person A can say: ‘There is an object with properties that I do not recognize. This is new to me.’ And another person, B, can then say: ‘Yes, I agree.’

The common basic structure of all three cases is that there are at least two people who ‘speak the same language’ and are in a ‘shared situation’ in everyday life. One person—let’s call him A—initiates a conversation with a ‘statement about an object with properties,’ where the statement varies depending on the situation. In all cases, the person addressed—let’s call him B—can ‘agree’ to A’s statements.

The three cases differ, for example, in how the object ‘appears’: In case 1, an object is ‘simply there,’ one can ‘perceive’ it, and the object appears as ‘familiar.’ In case 2, the object is known, but not present. In case 3, there is also an object, it can be perceived, but it is ‘not known.’

For the constructive success of determining an agreement that finds approval among several people, the following elements are assumed based on the three cases:

The participants possess:

  • ‘Sensory perception’, which makes events in the environment recognizable to the perceiver.
  • ‘Memory’, which can store what is perceived.
  • ‘Decision-making ability’ to decide whether (i) the perceived has been perceived before, (ii) the perceived is something ‘new,’ or (iii) an object ‘is no longer there,’ which ‘was there before.’
  • A sufficiently similar ‘meaning relationship’, which enables people to activate an active relationship between the elements of spoken language and the elements of both perception and memory, whereby language elements can refer to contents and vice versa.

Only if all these four components [2] are present in each person involved in the situation can one convey something linguistically about their perception of the world in a way that the other can agree or disagree. If one of the mentioned components (perception, memory, decision-making ability, meaning relationship) is missing, the procedure of determining an agreement using a linguistic expression is not possible.
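The interplay of the four components can be made concrete in a toy model. The following sketch is my own illustration, not the author’s formalism: a `Person` with a `memory` (past objects), confronted with a shared `situation` (perception), uses a decision procedure to classify a mentioned object as present and known (case 1), absent (case 2), or present but new (case 3); the shared word acts as the ‘meaning relationship’.

```python
# Toy model (hypothetical, for illustration only) of the four components:
# perception (the shared situation), memory, decision-making ability
# (the classify method), and meaning relation (the shared word used).

from dataclasses import dataclass, field

@dataclass
class Person:
    memory: set = field(default_factory=set)  # objects known from the past

    def classify(self, perceived, mentioned):
        """Decision-making ability: compare a statement about 'mentioned'
        with current perception and with memory."""
        if mentioned in perceived:
            if mentioned in self.memory:
                return "known object present"   # case 1
            return "new object present"         # case 3
        return "object absent"                  # case 2

# Shared everyday situation: a cup is perceivable, a clock is not.
situation = {"cup"}

a = Person(memory={"cup", "clock"})
b = Person(memory={"cup", "clock"})

# Case 1: A states that the cup is there; B verifies and can agree,
# because both share perception, memory, and the word "cup".
assert a.classify(situation, "cup") == b.classify(situation, "cup")
# Case 2: the clock is known but absent; both agree it is not here.
assert a.classify(situation, "clock") == "object absent"
```

If any component is removed (e.g., an empty `memory`, or no shared `situation`), the two persons can no longer reach the same classification, mirroring the claim that agreement then becomes impossible.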

[1] There are many different cases!

[2] These four concepts (perception, memory, decision-making ability, meaning relationship) are ‘incomprehensible on their own.’ They must be explained in a suitable context later on. They are used here in the current concept of ‘verifiable statements’ in a functional context, which characterizes the concept of ‘verifiable statement’.

Language with Numerals


Typically, everyday languages today include numerals (e.g., one, two, 33, 4400, …, 1/2, 1/4), although they vary in scope.

Such numerals usually refer to some ‘objects’ (e.g., three eggs, 5 roses, 33 potatoes, 4400 inhabitants, … 1/2 pound of flour, 44 liters of rainfall in an hour, …) located in a specific area.

A comprehensible verification then depends on the following factors:

  • Can the specified number or quantity be determined directly in this area (a definite number must result)?
  • If the number or amount is too large to estimate directly in the area, is there a comprehensible procedure by which it can be determined?
  • How much time is required to make the determination in the area (e.g., minutes, hours, days, weeks, …)? The longer the determination takes, the more difficult it becomes to tie the statement to a specific point in time (e.g., the number of residents in a city).

These examples show that the question of verification quickly encompasses more and more aspects that must be met for the verifiability of a statement to be understood and accepted by all involved.

Language with Abstractions


Another pervasive feature of everyday languages is the phenomenon that, in the context of perception and memory (storing and recalling), abstract structures automatically form, which are also reflected in the language. Here are some simple examples:

IMAGE: Four types of objects, each seen as concrete examples of an abstract type (class).


In everyday life, we have a word for the perceived objects of types 1-4, even though the concrete variety makes each object look different: for objects of group 1 we can speak of a ‘clock,’ for group 2 of a ‘cup,’ for group 3 of ‘pens,’ and for group 4 of ‘computer mice,’ or simply ‘mice,’ where everyone knows from the context that ‘mouse’ here does not mean a biological mouse but a technical device related to computers. Although we ‘sensorily’ see something ‘different’ each time, we use the ‘same word.’ The ‘one word’ then stands for potentially ‘many concrete objects,’ with the peculiarity that we ‘implicitly know’ which concrete object is to be linked with which word. If we were not able to name many different concrete objects with ‘one word,’ we would not only be unable to invent as many different words as we would need; coordination among ourselves would also break down completely: how could two different people agree on what they ‘perceive in the same way’ if every detail of perception counted? The same object can look very different depending on the angle and the lighting.[1]

The secret of this assignment of one word to many sensorially different objects lies not in the assignment of words to elements of knowledge; rather, it lies one level deeper, where the events of perception are transformed into events of memory. Simplifying, one can say that the multitude of sensory events (visual, auditory, gustatory (taste), tactile, …), after their conversion into chemical-physical states of nerve cells, become parts of neuronal signal flows, which undergo multiple stages of ‘processing’. As a result, the ‘diversity of signals’ is condensed into ‘abstract structures’ that function as a kind of ‘prototype’ connected to many concrete ‘variants.’ There are thus something like ‘core properties’ that are ‘common’ to different perception events such as ‘cup,’ and then many ‘secondary properties’ that can occur as well, but, unlike the core properties, not always. In the case of the ‘clock,’ for example, the two hands together with the circular arrangement of marks could be such ‘core properties.’ Everything else can vary greatly. Moreover, the ‘patterns of core and secondary properties’ are not formed once and for all, but as part of processes with diverse aspects (e.g., possible changes, possible simultaneous events, etc.), which can function as ‘contexts’ (e.g., the difference between ‘technical’ and ‘biological’ in the case of the term ‘mouse’).

Thus, the use of a word like ‘clock’ or ‘cup’ involves, as previously discussed, reference to memory contents, to perceptual contents, and to learned meaning relationships, as well as the ability to ‘decide’ which of the concrete perception patterns belongs to which learned ‘prototype.’ Depending on how this decision turns out, we then say ‘clock’ or ‘cup’ or something else. This ability of our brain to ‘abstract,’ by automatically generating prototypical ‘patterns’ that can stand for many sensorially different individual objects, is fundamental for our thinking and speaking in everyday life. Only because of this ability to abstract can our language work.
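This condensation of many variants into a prototype, and the subsequent decision which prototype a new perception belongs to, can be sketched in a few lines. The example below is a deliberately crude illustration of my own (not a model of the neuronal mechanisms the text refers to): instances of a class are reduced to a mean feature vector, and a new perception is named by its nearest prototype.

```python
# Toy illustration of 'prototype' formation: many sensorially different
# instances of a class are condensed into one abstract pattern (here the
# mean feature vector), and a new perception is then assigned to the
# word whose prototype lies closest. The feature names are invented.

def mean(vectors):
    """Condense many variants into one prototype vector."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(u, v):
    """Euclidean distance between a perception and a prototype."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

# Perceived instances as crude feature vectors, e.g. (roundness, has_handle).
examples = {
    "cup":   [[0.7, 1.0], [0.6, 0.9], [0.8, 1.0]],
    "clock": [[1.0, 0.0], [0.9, 0.1], [0.95, 0.0]],
}

prototypes = {word: mean(vs) for word, vs in examples.items()}

def name_of(perception):
    """The 'decision': assign the one word whose prototype is nearest."""
    return min(prototypes, key=lambda w: distance(prototypes[w], perception))

print(name_of([0.65, 0.95]))  # a never-seen, cup-like object -> "cup"
```

The ‘core properties’ of the text correspond here to feature dimensions with little variance within a class; ‘secondary properties’ would be dimensions that vary freely without changing the nearest prototype.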

It is no less impressive that this basic ‘ability to abstract’ of our brain is not limited to the relationship between the two levels ‘sensory perception’ and ‘storage in memory,’ but works everywhere in memory between any levels. Thus, we have no problem grouping various individual clocks based on properties into ‘wristwatches’ and ‘wall clocks.’ We know that cups can be seen as part of ‘drinking vessels’ or as part of ‘kitchenware.’ Pens are classified as ‘writing instruments,’ and ‘computer mice’ are part of ‘computer accessories,’ etc.

Often, such abstraction achievements are also referred to as ‘categorizations’ or ‘class formation,’ and the objects that are assigned to such class designations then form the ‘class content,’ where the ‘scope’ of a class is ‘fluid.’ New objects can constantly appear that the brain assigns to one class or another.

Given this diversity of ‘abstractions,’ it is not surprising that the assignment of individual objects to one of these classes is ‘fluid,’ ‘fuzzy.’ With the hundreds or more different shapes of chairs and tables that now exist, it is sometimes difficult to decide whether something is still a ‘chair’ or a ‘table’ in the ‘original sense’ [2], or rather a ‘design product’ in search of a new form.

For the guiding question of the verifiability of linguistic expressions that contain abstractions (and these are almost all of them), it follows from the preceding considerations that the ‘meaning of a word’, or indeed the ‘meaning of a linguistic expression’, can never be determined by the words alone, but almost always only by the ‘context’ in which the linguistic expression occurs. Just as the examples with the numerals suggest, one must know, in a request like “Can you pass me my cup”, which of the various cups is the ‘speaker’s cup.’ This presupposes the situation and ‘knowledge of the past of this situation’: which of the possible objects had he used as his cup?[3]

The same holds when people try to describe a street, a neighborhood, a single house, and the like in language. Based on the general structures of meaning, each reader can form a ‘reasonably clear picture’ ‘in his head’ while reading, but almost all details that were not explicitly described (and describing them all is normally nearly impossible) are then also missing from the reconstructed ‘picture in the head’ of the reader. Based on the ‘experiential knowledge’ of the language participants, of course, everyone can additionally ‘color in’ his ‘picture in the head.’[4]

If a group of people wants to be sure that a description is ‘sufficiently clear,’ additional information must be provided for all important elements of the report that are ‘ambiguous.’ One can, for example, jointly inspect and investigate the described objects and/or create additional special descriptions, possibly supplemented by pictures, sound recordings, or other hints.

When it comes to details, everyday language alone is not enough. Additional special measures are required.[5]

[1] A problem that machine image recognition has struggled with from the beginning and continues to struggle with to this day.

[2] The ‘original’ sense, i.e., the principle underlying the abstraction performance, is to be found in those neuronal mechanisms responsible for this prototype formation. The ‘inner logic’ of these neuronal processes has not yet been fully researched, but their ‘effect’ can be observed and analyzed. Psychology has been trying to approximate this behavior with many model formations since the 1960s, with considerable success.

[3] Algorithms of generative artificial intelligence (like chatGPT), which have no real context and no ‘body-based knowledge,’ attempt to solve the problem by analyzing extremely large amounts of text: they break documents down into their word components along with the possible contexts of each word, so that they can deduce possible ‘formal contexts,’ which then function as ‘quasi-meaning contexts.’ To a certain extent this meanwhile works quite well, but only within a closed word space (closed world).

[4] A well-known example from everyday life is the difference that can arise when someone reads a novel, forms ideas of it in their head, and eventually someone produces a movie of the novel: to what extent do the ideas one has formed of the individual characters correspond with those in the movie?

[5] Some may still know texts from so-called ‘holy scriptures’ of a religion (e.g., the ‘Bible’). The fundamental problem of the ‘ambiguity’ of language is of course intensified in the case of historical texts. With the passage of time, the knowledge of the everyday world in which a text was created is lost. With older texts there is often also a language problem: the original texts, such as those of the Bible, were written in ancient Hebrew (‘Old Testament’) or ancient Greek (‘New Testament’), whose usage is often no longer known. In addition, these texts were written in different text forms, in the case of the Old Testament also at different times, and the text has been repeatedly revised (which is often also connected with the fact that it is not clear who exactly the authors were). Under these conditions, deducing an ‘exact’ meaning is more or less restricted or impossible. This may explain why, in roughly 2000 years of ‘Bible interpretation,’ interpretations have differed widely at all times.

The Invasion of the Storytellers

Author: Gerd Doeben-Henisch

Changelog: April 30, 2024 – May 3, 2024

May 3, 2024: I added two Epilogs

Email: info@uffmm.org

TRANSLATION: The following text is a translation from a German version into English. For the translation I am using the software @chatGPT4 with manual modifications.

CONTEXT

Originally I wrote that “this text is not a direct continuation of another text, but there exist various earlier articles by the author on similar topics. In this sense, the current text is a kind of ‘further development’ of these ideas”. But, indeed, at least the text “NARRATIVES RULE THE WORLD. CURSE & BLESSING. COMMENTS FROM @CHATGPT4” ( https://www.uffmm.org/2024/02/03/narratives-rule-the-world-curse-blessing-comments-from-chatgpt4/ ) can be understood as a kind of precursor.

In everyday life … magical links …

Almost everyone knows someone—or even several people—who send many emails—or other messages—that only contain links, links to various videos, of which the internet provides plenty nowadays, or images with a few keywords.

Since time is often short, one would like to know if it’s worth clicking on this video. But explanatory information is missing.

When asked whether it would not be possible to include a few explanatory words, the sender almost always replies that they cannot put it as well as the video itself.

Interesting: Someone sends a link to a video without being able to express their opinion about it in their own words…

Follow-up questions…

When I click on a link and try to form an opinion, one of the first questions naturally is who published the video (or text). The same set of facts can be narrated quite differently, even in complete contradiction, depending on the observer’s perspective, as everyday life shows and lets us verify. And since what we can perceive with our senses is always only very fragmentary, attached to surfaces and tied to a particular moment, it does not necessarily allow us to recognize relationships to other aspects. This vagueness offers plenty of room for interpretation with each observation. Without a thorough consideration of the context and the backstory, interpretation is simply not possible … unless someone already has a ‘finished opinion’ that ‘integrates’ the ‘involuntary fragment of observation’ without hesitation.

So questioning and researching are quite ‘normal’, but our ‘quick brain’ first seeks ‘automatic answers’: this doesn’t require much thought, is faster, needs less energy, and despite everything this ‘automatic interpretation’ still provides a ‘satisfying feeling’: one ‘knows exactly what is presented’. So why question?

Immunizing…

As a scientist, I am trained to clarify all framework conditions, including my own assumptions. Of course, this takes effort and time and is anything but error-free. Hence, multiple checks, inquiries with others about their perspectives, etc. are a common practice.

However, when something catches my attention and I ask the ‘wordless senders of links’ about it, especially when I point out a conflict with the reality I know, the reactions tend in the direction that I have misunderstood, or that the author did not mean it that way at all. If I then refer to other sources that are considered ‘strongly verified’, they are labeled ‘lying press’ or their authors are immediately exposed as ‘agents of a dark power’ (there is a whole range of such ‘dark powers’); and if I dare to inquire here as well where the information comes from, then I quickly become a naive, stupid person for not knowing all this.

So, any attempt to clarify the basics of statements, to trace them back to comprehensible facts, ends in some kind of conflict long before any clarification has been realized.

Truth, Farewell…

Now, the topic of ‘truth’ has unfortunately become, even in philosophy, little more than a repository of competing proposals. And even the modern sciences, fundamentally empirical, increasingly entangle themselves in the multitude of their disciplines and methods, so that ‘integrative perspectives’ are rare and the ‘average citizen’ tends to have a problem understanding them. Not a good starting point for effectively preventing the spread of the ‘cognitive fairy tale virus’.

Democracy and the Internet as a Booster

The bizarre aspect of our current situation is that precisely the two most significant achievements of humanity, the societal form of ‘modern democracy’ (around 250 years old, in a history of about 300,000 years) and the technology of the ‘internet’ (browser-based since about 1993), which for the first time have made a maximum of freedom and diversity of expression possible, are now the very conditions under which the cognitive fairy tale virus can spread so unrestrainedly.

Important: today’s cognitive fairy tale virus occurs in the context of ‘freedom’! In previous millennia, the cognitive fairy tale virus already existed, but it was under the control of the respective authoritarian rulers, who used it to steer the thoughts and feelings of their subjects in their favor. The ‘ambiguities’ of meanings have always allowed almost all interpretations; and if a previous fairy tale wasn’t enough, a new one was quickly invented. As long as control by reality is not really possible, anything can be told.

With the emergence of democracy, the authoritarian power structures disappeared, but the people who were allowed and supposed to vote were ultimately the same as before in authoritarian regimes. Who really has the time and desire to deal with the complicated questions of the real world, especially if it doesn’t directly affect oneself? That’s what our elected representatives are supposed to do…

In the (seemingly) quiet years since World War II, the division of tasks seemed to work well: here the citizens delegating everything, and there the elected representatives who do everything right. ‘Control’ of power was supposed to be guaranteed through constitution, judiciary, and through a functioning public…

But what was not foreseen were such trifles as:

  1. The increase in population and the advancement of technologies induced ever more complex processes with equally complex interactions that could no longer be adequately managed with the usual methods from the past. Errors and conflicts were inevitable.
  2. Delegating to a few elected representatives with ‘normal abilities’ can only work if these few representatives operate within contexts that provide them with all the necessary competencies their office requires. This task seems to be increasingly poorly addressed.
  3. The important ‘functioning public’ has been increasingly fragmented by the tremendous possibilities of the internet: there is no longer ‘the’ public but many publics. This is not inherently bad, but when the available channels attract the ‘quick and convenient brain’ as light attracts mosquitoes, then heads increasingly fall into the realm of ‘cognitive viruses’ that, after only short ‘incubation periods’, take possession of a head and control it from there.

The effects of these three factors have been clearly observable for several years now: the unresolved problems of society, which are increasingly poorly addressed by the existing democratic-political system, lead individual people in everyday situations to interpret their dissatisfaction and fears more and more exclusively under the influence of the cognitive fairy tale virus and to act accordingly. This gradually worsens the situation, as the constructive capacities for problem analysis and the collective strength for problem-solving diminish more and more.

No remedies available?

Looking back over the thousands of years of human history, it’s evident that ‘opinions’, ‘views of the world’, have always only harmonized with the real world in limited areas, where it was important to survive. But even in these small areas, for millennia, there were many beliefs that were later found to be ‘wrong’.

Very early on, we humans mastered the art of telling ourselves stories about how everything is connected. These were eagerly listened to, they were believed, and only much later could one sometimes recognize what was entirely or partially wrong about the earlier stories. But in their lifetimes, for those who grew up with these stories, these tales were ‘true’, made ‘sense’, people even went to their deaths for them.

Only at the very end of humanity’s previous development (the life form of Homo sapiens), that is, with 300,000 years mapped onto 24 hours, after about 23 hours and 58 minutes, did humans discover with the empirical sciences a method of obtaining ‘true knowledge’ that not only works for the moment but allows us to look millions, even billions of years ‘back in time’, and, for many factors, billions of years into the future. With this, science can delve into the deepest depths of matter and increasingly understand the complex interplay of all the wonderful factors.
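The clock analogy above can be checked with a few lines of arithmetic. This is only a sketch: the figure of roughly 500 years for empirical science is an assumption (taken from the later remark on the ‘verifiable narrative’), and the 300,000-year age of Homo sapiens is itself a round estimate.

```python
# Map the age of Homo sapiens (~300,000 years) onto a 24-hour day
# and ask: at what "time of day" do the empirical sciences appear?

SPECIES_AGE_YEARS = 300_000   # rough age of Homo sapiens
SCIENCE_AGE_YEARS = 500       # rough age of empirical science (assumption)

DAY_MINUTES = 24 * 60

# Minutes of the "day" corresponding to the age of science
science_minutes = SCIENCE_AGE_YEARS / SPECIES_AGE_YEARS * DAY_MINUTES

# Clock time at which science appears (science_minutes before midnight)
elapsed = DAY_MINUTES - science_minutes
hours, minutes = divmod(elapsed, 60)

print(f"Science occupies the last {science_minutes:.1f} minutes of the day.")
print(f"It appears at about {int(hours)}:{minutes:04.1f} on the 24-hour clock.")
```

With these assumptions, all of empirical science fits into the last two and a half minutes of the 24-hour day.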

And just at this moment of humanity’s first great triumphs on the planet Earth, the cognitive fairy tale virus breaks out unchecked and threatens even to completely extinguish modern sciences!

Which people on this planet can resist this cognitive fairy tale virus?

Here is a recent message from Uppsala University [1,2], reporting on an experiment by Swedish scientists with students, which showed that it was possible to measurably sharpen students’ awareness of ‘fake news’ (here: the cognitive fairy tale virus).

Yes, we know that young people can, through appropriate education, shape their awareness to be better equipped against the cognitive fairy tale virus. But what happens when official educational institutions are not able to provide the necessary education, because either the teachers cannot conduct such knowledge therapy, or the teachers themselves could but the institutions do not allow it? The latter cases are known, even in so-called democracies!

Epilog 1

The following working hypotheses are emerging:

  1. The fairy tale virus, the unrestrained inclination to tell stories (uncontrolled), is genetically ingrained in humans.
  2. Neither intelligence nor so-called ‘academic education’ automatically protect against it.
  3. ‘Critical thinking’ and ‘empirical science’ are special qualities that people can only acquire with great personal commitment. Minimal conditions for these qualities must exist in a society; without them, acquiring these qualities is not possible.
  4. Active democracies seem to be able to contain the fairy tale virus to about 15-20% of societal practice (although it is always present in people). As soon as the percentage of active storytellers perceptibly increases, it must be assumed that the concept of ‘democracy’ is increasingly weakening in societal practice — for various reasons.

Epilog 2

Anyone actively affected by the fairy tale virus has a view of the world, of themselves, and of others, that has so little to do with the real world ‘out there’, beyond their own thinking, that real events no longer influence their own thinking. They live in their own ‘thought bubble’. Those who have learned to think ‘critically and scientifically’ have acquired techniques and apply them that repeatedly subject their thinking within their own bubble to a ‘reality check’. This check is not limited to specific events or statements… and that’s where it gets difficult.

References

[1] Here’s the website of Uppsala University, Sweden, where the researchers come from: https://www.uu.se/en/press/press-releases/2024/2024-04-24-computer-game-in-school-made-students-better-at-detecting-fake-news

[2] And here’s the full scientific article with open access: “Bad News in the civics classroom: How serious gameplay fosters teenagers’ ability to discern misinformation techniques.” Carl-Anton Werner Axelsson, Thomas Nygren, Jon Roozenbeek & Sander van der Linden, Received 26 Sep 2023, Accepted 29 Mar 2024, Published online: 19 Apr 2024: https://doi.org/10.1080/15391523.2024.2338451

TRUTH AND MEANING – As a collective achievement

Author: Gerd Doeben-Henisch

Time: Jan 8, 2024 – Jan 8, 2024 (10:00 a.m. CET)

Email: gerd@doeben-henisch.de

TRANSLATION: The following text is a translation from a German version into English. For the translation I am using the software deepL.com as well as chatGPT 4.

CONTEXT

This text is a direct continuation of the text There exists only one big Problem for the Future of Human Mankind: The Belief in false Narratives.

INTRODUCTION

There exists only one big Problem for the Future of Human Mankind: The Belief in false Narratives

Author: Gerd Doeben-Henisch

Time: Jan 5, 2024 – Jan 8, 2024 (09:45 a.m. CET)

Email: gerd@doeben-henisch.de

TRANSLATION: The following text is a translation from a German version into English. For the translation I am using the software deepL.com as well as chatGPT 4. The English version is a slightly revised version of the German text.

This blog entry will be completed today. However, it has laid the foundations for considerations that will be pursued further in a new blog entry.

CONTEXT

This text belongs to the topic Philosophy (of Science).

Introduction

Triggered by several occasions, I started investigating the phenomenon of ‘propaganda’ to sharpen my understanding. My strategy was first to characterize the phenomenon of ‘general communication’ in order to find some ‘harder criteria’ that would allow the concept of ‘propaganda’ to stand out against this general background in a somewhat comprehensible way.

The realization of this goal then actually led to an ever more fundamental examination of our normal (human) communication, so that forms of propaganda become recognizable as ‘special cases’ of our communication. The worrying thing about this is that even so-called ‘normal communication’ contains numerous elements that can make it very difficult to recognize and pass on ‘truth’ (*). ‘Massive cases of propaganda’ therefore have their ‘home’ where we communicate with each other every day. So if we want to prevent propaganda, we have to start in everyday life.

(*) The concept of ‘truth’ is examined and explained in great detail in the following long text below. Unfortunately, I have not yet found a ‘short formula’ for it. In essence, it is about establishing a connection to ‘real’ events and processes in the world – including one’s own body – in such a way that they can, in principle, be understood and verified by others.

DICTATORIAL CONTEXT

However, it becomes difficult when there is enough political power that can set the social framework conditions in such a way that for the individual in everyday life – the citizen! – general communication is more or less prescribed – ‘dictated’. Then ‘truth’ becomes less and less or even non-existent. A society is then ‘programmed’ for its own downfall through the suppression of truth. ([3], [6]).

EVERYDAY LIFE AS A DICTATOR?
The hour of narratives

But – and this is the far more dangerous form of ‘propaganda’ ! – even if there is not a nationwide apparatus of power that prescribes certain forms of ‘truth’, a mutilation or gross distortion of truth can still take place on a grand scale. Worldwide today, in the age of mass media, especially in the age of the internet, we can see that individuals, small groups, special organizations, political groups, entire religious communities, in fact all people and their social manifestations, follow a certain ‘narrative’ [*11] when they act.

Typical for acting according to a narrative is that those who do so individually believe that it is ‘their own decision’ and that their narrative is ‘true’, and that they are therefore ‘in the right’ when they act accordingly. This ‘feeling of being right’ can go as far as claiming the right to kill others because they ‘act wrongly’ in the light of one’s own ‘narrative’. We should therefore speak here of a ‘narrative truth’: within the framework of the narrative, a picture of the world is drawn that ‘as a whole’ enables a perspective that the followers of the narrative ‘find good’, that ‘makes sense’ to them. Normally, the effect of a narrative that is experienced as ‘meaningful’ is so great that its ‘truth content’ is no longer examined in detail.

RELIGIOUS NARRATIVES

This has existed at all times in the history of mankind. Narratives that appeared as ‘religious beliefs’ were particularly effective. It is therefore no coincidence that almost all governments of the last millennia have adopted religious beliefs as state doctrines; an essential component of religious beliefs is that they are ‘unprovable’, i.e. ‘incapable of truth’. This makes a religious narrative a wonderful tool in the hands of the powerful to motivate people to behave in certain ways without the threat of violence.

POPULAR NARRATIVES

In recent decades, however, we have experienced new, ‘modern forms’ of narratives that do not come across as religious narratives, but which nevertheless have a very similar effect: People perceive these narratives as ‘giving meaning’ in a world that is becoming increasingly confusing and therefore threatening for everyone today. Individual people, the citizens, also feel ‘politically helpless’, so that – even in a ‘democracy’ – they have the feeling that they cannot directly influence anything: the ‘people up there’ do what they want. In such a situation, ‘simplistic narratives’ are a blessing for the maltreated soul; you hear them and have the feeling: yes, that’s how it is; that’s exactly how I ‘feel’!

Such ‘popular narratives’, which enable ‘good feelings’, are gaining ever greater power. What they have in common with religious narratives is that the ‘followers’ of popular narratives no longer ask the ‘question of truth’; most of them are also not sufficiently ‘trained’ to be able to clarify the truth of a narrative at all. It is typical for supporters of narratives that they are generally hardly able to explain their own narrative to others. They typically send each other links to texts or videos that they find ‘good’ because these texts or videos somehow seem to support the popular narrative, and they tend not to check the authors and sources, because in the eyes of the followers these are such ‘decent people’, who always say exactly the ‘same thing’ that the ‘popular narrative’ dictates.

NARRATIVES ARE SEXY FOR POWER

If you now take into account that the ‘world of narratives’ is an extremely tempting offer for all those who have power over people or would like to gain power over people, then it should come as no surprise that many governments in this world, and many other power groups, are doing just that today: they do not try to coerce people ‘directly’, but they ‘produce’ popular narratives or ‘monitor’ already existing popular narratives in order to gain power over the hearts and minds of more and more people via the detour of these narratives. Some speak here of ‘hybrid warfare’, others of ‘modern propaganda’, but ultimately, I guess, these terms miss the core of the problem.

THE NARRATIVE AS A BASIC CULTURAL PATTERN
The ‘irrational’ defends itself against the ‘rational’

The core of the problem is the way in which human communities have always organized their collective action, namely through narratives; we humans have no other option. However, such narratives, as the considerations further down in the text will show, are extremely susceptible to ‘falsity’, to a ‘distortion of the picture of the world’. In the context of the development of legal systems, approaches have been developed over at least the last 7000 years to mitigate the abuse of power in a society by supporting truth-preserving mechanisms. Gradually this has certainly helped, with all the deficits that still exist today. Additionally, about 500 years ago a real revolution took place: with the concept of a ‘verifiable narrative (empirical theory)’, humanity found a format that optimized the ‘preservation of truth’ and minimized the slide into untruth. This new concept of ‘verifiable truth’ has enabled great insights that were previously beyond imagination.

The ‘aura of the scientific’ has meanwhile permeated almost all of human culture; almost! But we have to realize that although scientific thinking has comprehensively shaped the world of practical life through modern technologies, the scientific way of thinking has not overridden all other narratives. On the contrary, the ‘non-truth narratives’ have become so strong again that they are pushing back the ‘scientific’ in more and more areas of our world, patronizing it, forbidding it, eradicating it. The ‘irrationality’ of religious and popular narratives is stronger than ever before. ‘Irrational narratives’ are so appealing to many because they spare the individual from having to ‘think for themselves’. Real thinking is exhausting, unpopular, annoying, and hinders the dream of a simple solution.

THE CENTRAL PROBLEM OF HUMANITY

Against this backdrop, the widespread inability of people to recognize and overcome ‘irrational narratives’ appears to be the central problem facing humanity in mastering the current global challenges. Before we need more technology (we certainly do), we need more people who are able and willing to think more and better, and who are also able to solve ‘real problems’ together with others. Real problems can be recognized by the fact that they are largely ‘new’, that there are no ‘simple off-the-shelf’ solutions for them, that you really have to ‘struggle’ together for possible insights; in principle, the ‘old’ is not enough to recognize and implement the ‘true new’, and the future is precisely the space with the greatest amount of ‘unknown’, with lots of ‘genuinely new’ things.

The following text examines this view in detail.

MAIN TEXT FOR EXPLANATION

MODERN PROPAGANDA ?

As mentioned in the introduction, the trigger for me to write this text was the confrontation with a popular book that appeared to me as a piece of ‘propaganda’. When I tried to describe my opinion in my own words, I noticed that I had some difficulties: what is the difference between ‘propaganda’ and ‘everyday communication’? This forced me to think a little more about the ingredients of ‘everyday communication’ and about where and why a ‘communication’ differs from our ‘everyday communication’. As usual at the beginning of such a discussion, I took a first look at the various entries in Wikipedia (German and English). The entry in the English Wikipedia on ‘Propaganda’ [1b] attempts a very similar strategy: to look at ‘normal communication’ and, against this background, at the phenomenon of ‘propaganda’, albeit with not quite sharp contours. However, it provides a broad overview of various forms of communication, including those forms that are ‘special’ (‘biased’), i.e. that do not reflect the content to be communicated in the way one would reproduce it according to ‘objective, verifiable criteria’.[*0] The variety of examples suggests, however, that it is not easy to distinguish between ‘special’ and ‘normal’ communication: What then are these ‘objective verifiable criteria’? Who defines them?

Assuming for a moment that it is clear what these ‘objectively verifiable criteria’ are, one can tentatively attempt a working definition for the general (normal?) case of communication as a starting point:

Working Definition:

The general case of communication could be tentatively described as a simple attempt by one person – let’s call them the ‘author’ – to ‘bring something to the attention’ of another person – let’s call them the ‘interlocutor’. We tentatively call what is to be brought to their attention ‘the message’. We know from everyday life that an author can have numerous ‘characteristics’ that can affect the content of his message.

Here is a short list of properties that characterize the author’s situation in a communication, followed by corresponding properties for the interlocutor.

The Author:

  1. The available knowledge of the author — both conscious and unconscious — determines the kind of message the author can create.
  2. His ability to discern truth determines whether and to what extent he can differentiate what in his message is verifiable in the real world — present or past — as ‘accurate’ or ‘true’.
  3. His linguistic ability determines whether and how much of his available knowledge can be communicated linguistically.
  4. The world of emotions decides whether he wants to communicate anything at all, for example, when, how, to whom, how intensely, how conspicuously, etc.
  5. The social context can affect whether he holds a certain social role, which dictates when he can and should communicate what, how, and with whom.
  6. The real conditions of communication determine whether a suitable ‘medium of communication’ is available (spoken sound, writing, sound, film, etc.) and whether and how it is accessible to potential interlocutors.
  7. The author’s physical constitution decides how far and to what extent he can communicate at all.

The Interlocutor:

  1. In general, the characteristics that apply to the author also apply to the interlocutor. However, some points can be particularly emphasized for the role of the interlocutor:
  2. The available knowledge of the interlocutor determines which aspects of the author’s message can be understood at all.
  3. The ability of the interlocutor to discern truth determines whether and to what extent he can also differentiate what in the conveyed message is verifiable as ‘accurate’ or ‘true’.
  4. The linguistic ability of the interlocutor affects whether and how much of the message he can absorb purely linguistically.
  5. Emotions decide whether the interlocutor wants to take in anything at all, for example, when, how, how much, with what inner attitude, etc.
  6. The social context can also affect whether the interlocutor holds a certain social role, which dictates when he can and should communicate what, how, and with whom.
  7. Furthermore, it can be important whether the communication medium is so familiar to the interlocutor that he can use it sufficiently well.
  8. The physical constitution of the interlocutor can also determine how far and to what extent the interlocutor can communicate at all.

Even this small selection of factors shows how diverse the situations can be in which ‘normal communication’ can take on a ‘special character’ due to the ‘effect of different circumstances’. For example, an actually ‘harmless greeting’ can, in certain roles, lead to a social problem with many different consequences. A seemingly ‘normal report’ can become a problem because the interlocutor misunderstands the message purely linguistically. A ‘factual report’ can have an emotional impact on the interlocutor due to the way it is presented, leading them to enthusiastically accept the message or, on the contrary, to vehemently reject it. Or, if the author has a tangible interest in persuading the interlocutor to behave in a certain way, a situation may not be presented in a ‘purely factual’ way; instead, many aspects are communicated that seem suitable to the author for persuading the interlocutor to perceive the situation in a certain way and to adopt it accordingly. These ‘additional’ aspects can refer to many real circumstances of the communication situation beyond the pure message.

Types of communication …

Given this potential ‘diversity’, the question arises whether it is even possible to define something like ‘normal communication’.

In order to answer this question meaningfully, one would need a kind of ‘overview’ of all possible combinations of the properties of the author (1-7) and the interlocutor (1-8), and one would also have to be able to evaluate each of these possible combinations with a view to ‘normality’.

It should be noted that the two lists of properties author (1-7) and interlocutor (1-8) have a certain ‘arbitrariness’ attached to them: you can build the lists as they have been constructed here, but you don’t have to.

This is related to the general way in which we humans think: on one hand, we have ‘individual events that happen’ — or that we can ‘remember’ —, and on the other hand, we can ‘set’ ‘arbitrary relationships’ between ‘any individual events’ in our thinking. In science, this is called ‘hypothesis formation’. Whether or not such formation of hypotheses is undertaken, and which ones, is not standardized anywhere. Events as such do not enforce any particular hypothesis formations. Whether they are ‘sensible’ or not is determined solely in the later course of their ‘practical use’. One could even say that such hypothesis formation is a rudimentary form of ‘ethics’: the moment one adopts a hypothesis regarding a certain relationship between events, one minimally considers it ‘important’, otherwise, one would not undertake this hypothesis formation.

In this respect, it can be said that ‘everyday life’ is the primary place for possible working hypotheses and possible ‘minimum values’.

The following diagram demonstrates a possible arrangement of the characteristics of the author and the interlocutor:

FIGURE: Overview of the possible overlaps of knowledge between the author and the interlocutor, if each can have any knowledge at their disposal.

What is easy to recognize is the fact that an author can naturally have a constellation of knowledge that draws on an almost ‘infinite number of possibilities’. The same applies to the interlocutor. In purely abstract terms, the number of possible combinations is ‘virtually infinite’ due to the assumptions about the properties Author 1 and Interlocutor 2, which ultimately makes the question of ‘normality’ at the abstract level undecidable.
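The ‘virtually infinite’ space of combinations can be made concrete with a toy calculation. This is only a sketch under invented assumptions: the text lists 7 author properties and 8 interlocutor properties, but the number of distinguishable ‘values’ per property is of course not given anywhere, so the figure of 10 below is purely hypothetical.

```python
# Toy estimate of the space of communication situations:
# 7 author properties and 8 interlocutor properties (as listed above),
# each crudely discretized into a handful of distinguishable "values".

AUTHOR_PROPERTIES = 7
INTERLOCUTOR_PROPERTIES = 8
VALUES_PER_PROPERTY = 10  # hypothetical coarse discretization

combinations = VALUES_PER_PROPERTY ** (AUTHOR_PROPERTIES + INTERLOCUTOR_PROPERTIES)
print(f"{combinations:,} possible situations")  # 10^15
```

Even this crude discretization yields 10^15 distinct situations, which illustrates why singling one of them out as ‘the norm’ is hopeless at the abstract level.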


However, since both authors and interlocutors are not spherical beings from some abstract angle of possibilities, but are usually ‘concrete people’ with a ‘concrete history’ in a ‘concrete life-world’ at a ‘specific historical time’, the quasi-infinite abstract space of possibilities is narrowed down to a finite, manageable set of concretes. Yet, even these can still be considerably large when related to two specific individuals. Which person, with their life experience from which area, should now be taken as the ‘norm’ for ‘normal communication’?


It seems more likely that individual people are somehow ‘typified’, for example, by age and learning history, although a ‘learning history’ may not provide a clear picture either. Graduates from the same school can — as we know — possess very different knowledge afterwards, even though commonalities may be ‘minimally typical’.

Overall, the approach based on the characteristics of the author and the interlocutor does not seem to provide really clear criteria for a norm, even though a specification such as ‘the humanistic high school in Hadamar (a small German town) 1960 – 1968’ would suggest rudimentary commonalities.


One could now try to include the further characteristics of Author 2-7 and Interlocutor 3-8 in the considerations, but the ‘construction of normal communication’ seems to lead more and more into an unclear space of possibilities based on the assumptions of Author 1 and Interlocutor 2.

What does this mean for the typification of communication as ‘propaganda’? Isn’t ultimately every communication also a form of propaganda, or is there a possibility to sufficiently accurately characterize the form of ‘propaganda’, although it does not seem possible to find a standard for ‘normal communication’? … or will a better characterization of ‘propaganda’ indirectly provide clues for ‘non-propaganda’?

TRUTH and MEANING: Language as Key

The spontaneous attempt to clarify the meaning of the term ‘propaganda’ to the extent that one gets a few constructive criteria for being able to characterize certain forms of communication as ‘propaganda’ or not, gets into ever ‘deeper waters’. Are there now ‘objective verifiable criteria’ that one can work with, or not? And: Who determines them?

Let us temporarily stick to working hypothesis 1, that we are dealing with an author who articulates a message for an interlocutor, and let us expand this working hypothesis by the following addition 1: such communication always takes place in a social context. This means that the perception and knowledge of the individual actors (author, interlocutor) can continuously interact with this social context or ‘automatically interacts’ with it. The latter is because we humans are built in such a way that our body with its brain just does this, without ‘us’ having to make ‘conscious decisions’ for it.[*1]

For this section, I would like to extend the previous working hypothesis 1 together with supplement 1 by a further working hypothesis 2 (localization of language) [*4]:

  1. Every medium (language, sound, image, etc.) can contain a ‘potential meaning’.
  2. When creating the media event, the ‘author’ may attempt to ‘connect’ possible ‘contents’ that are to be ‘conveyed’ by him with the medium (‘putting into words/sound/image’, ‘encoding’, etc.). This ‘assignment’ of meaning occurs both ‘unconsciously/automatically’ and ‘(partially) consciously’.
  3. In perceiving the media event, the ‘interlocutor’ may try to assign a ‘possible meaning’ to this perceived event. This ‘assignment’ of meaning also happens both ‘unconsciously/automatically’ and ‘(partially) consciously’.
  4. The assignment of meaning requires both the author and the interlocutor to have undergone ‘learning processes’ (usually years, many years) that have made it possible to link certain ‘events of the external world’ as well as ‘internal states’ with certain media events.
  5. The ‘learning of meaning relationships’ always takes place in social contexts, since a medial structure that is meant to ‘convey meaning’ between people must be shared by everyone involved in the communication process.[*2]
  6. Those medial elements that are actually used for the ‘exchange of meanings’ together form what is called a ‘language’: the ‘medial elements themselves’ form the ‘surface structure’ of the language, its ‘sign dimension’, while the ‘inner states’ of each participating ‘actor’ form the ‘individual-subjective space of possible meanings’. This inner subjective space comprises two components: (i) the internally available elements as potential meaning content and (ii) a dynamic ‘meaning relationship’ that ‘links’ perceived elements of the surface structure with the potential meaning content.[*3]
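The sign–meaning link described in point 6 can be sketched in code. The following is a minimal, purely illustrative sketch (the class, method names, and example contents are my assumptions, not the author’s): the ‘meaning relationship’ as a flexible mapping from surface-structure signs to an actor’s internal meaning elements, readjusted by a simple learning step.

```python
# Illustrative sketch only: a flexible 'meaning relationship' between the
# surface structure of a language (signs) and an actor's internal,
# subjective meaning elements. Names and contents are assumptions.

class MeaningRelation:
    def __init__(self):
        # sign (surface structure) -> set of potential internal meanings
        self.relation = {}

    def learn(self, sign, meaning):
        """Link a sign with an internal meaning element (a learning step)."""
        self.relation.setdefault(sign, set()).add(meaning)

    def interpret(self, sign):
        """Assign possible meanings to a perceived sign (may be empty)."""
        return self.relation.get(sign, set())

actor = MeaningRelation()
actor.learn("rain", "water falling from the sky")
print(actor.interpret("rain"))  # {'water falling from the sky'}
print(actor.interpret("snow"))  # set() -- no meaning learned yet
```

The point of the sketch is that the mapping is not fixed: every call to `learn` modifies the relation, mirroring the idea that the real meaning relation is continuously ‘readjusted’ by learning in a social context.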


To answer the guiding question of whether one can “characterize certain forms of communication as ‘propaganda’ or not,” one needs ‘objective, verifiable criteria’ [*0] on the basis of which a statement can be formulated. This in turn raises the question of whether there are ‘objective criteria’ in ‘normal everyday dialogue’ that we use in everyday life to decide collectively whether a ‘claimed fact’ is ‘true’ or not; it is in this context that the word ‘true’ is used here. Can this be defined a bit more precisely?

For this I propose an additional working hypothesis 3:

  1. At least two actors can agree that a certain meaning, associated with the media construct, exists as a sensibly perceivable fact in such a way that they can agree that the ‘claimed fact’ is indeed present. Such a specific occurrence should be called ‘true 1’ or ‘Truth 1.’ A ‘specific occurrence’ can change at any time and quickly due to the dynamics of the real world (including the actors themselves), for example: the rain stops, the coffee cup is empty, the car from before is gone, the empty sidewalk is occupied by a group of people, etc.
  2. At least two actors can agree that a certain meaning, associated with the media construct, is currently not present as a real fact. Referring to the current situation of ‘non-occurrence,’ one would say that the statement is ‘false 1’; the claimed fact does not actually exist contrary to the claim.
  3. At least two actors can agree that a certain meaning, associated with the media construct, is currently not present, but based on previous experience, it is ‘quite likely’ to occur in a ‘possible future situation.’ This aspect shall be called ‘potentially true’ or ‘true 2’ or ‘Truth 2.’ Should the fact then ‘actually occur’ at some point in the future, Truth 2 would transform into Truth 1.
  4. At least two actors can agree that a certain meaning associated with the media construct does not currently exist and that, based on previous experience, it is ‘fairly certain that it is unclear’ whether the intended fact could actually occur in a ‘possible future situation’. This aspect should be called ‘speculative true’ or ‘true 3’ or ‘Truth 3.’ Should the fact then ‘actually occur’ at some point in the future, Truth 3 would transform into Truth 1.
  5. At least two actors can agree that a certain meaning associated with the medial construct does not currently exist, and on the basis of previous experience ‘it is fairly certain’ that the intended fact could never occur in a ‘possible future situation’. This aspect should be called ‘speculative false’ or ‘false 2’.

A closer look at these 5 assumptions of working hypothesis 3 reveals that there are two ‘poles’ in all these distinctions, which stand in certain relationships to each other: on the one hand, there are real facts as poles, which are ‘currently perceived or not perceived by all participants’ and, on the other hand, there is a ‘known meaning’ in the minds of the participants, which can or cannot be related to a current fact. This results in the following distribution of values:

REAL FACTS    CASE   RELATIONSHIP TO MEANING
Given          1     Fits (true 1)
Given          2     Doesn’t fit (false 1)
Not given      3     Assumed that it will fit in the future (true 2)
Not given      4     Unclear whether it would fit in the future (true 3)
Not given      5     Assumed that it would not fit in the future (false 2)

In this — still somewhat rough — scheme, ‘the meaning of thoughts’ can be qualified in relation to something currently present as ‘fitting’ or ‘not fitting’, or in the absence of something real as ‘might fit’ or ‘unclear whether it can fit’ or ‘certain that it cannot fit’.
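Under the assumption that the five qualifications can be read as a function of two questions (is the fact currently given? and, if not, what does previous experience suggest about its future occurrence?), the scheme can be sketched as a small classifier. This is a minimal illustration; the function and parameter names are mine, not the author’s.

```python
# Illustrative sketch only: the five truth qualifications of working
# hypothesis 3 as a classification of (fact currently given?, fit/expectation).

from enum import Enum

class Truth(Enum):
    TRUE_1 = "true 1"    # fact currently given, meaning fits
    FALSE_1 = "false 1"  # fact checkable now, meaning does not fit
    TRUE_2 = "true 2"    # not given, likely to occur in the future
    TRUE_3 = "true 3"    # not given, unclear whether it could occur
    FALSE_2 = "false 2"  # not given, fairly certain it can never occur

def qualify(fact_given, fits=False, expectation="unclear"):
    """Map the two questions onto one of the five qualifications."""
    if fact_given:
        return Truth.TRUE_1 if fits else Truth.FALSE_1
    return {"likely": Truth.TRUE_2,
            "unclear": Truth.TRUE_3,
            "never": Truth.FALSE_2}[expectation]

print(qualify(True, fits=True))              # Truth.TRUE_1
print(qualify(False, expectation="likely"))  # Truth.TRUE_2
```

The sketch also makes the later point about error-proneness concrete: the inputs `fits` and `expectation` are themselves assessments by the actors, so a wrong assessment silently yields the wrong qualification.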

However, it is important to note that these qualifications are ‘assessments’ made by the actors based on their ‘own knowledge’. As we know, such an assessment is always prone to error! In addition to errors in perception [*5], there can be errors in one’s own knowledge [*6]. So contrary to the belief of an actor, ‘true 1’ might actually be ‘false 1’ or vice versa, ‘true 2’ could be ‘false 2’ and vice versa.

From all this, it follows that a ‘clear qualification’ of truth and falsehood is ultimately always error-prone. For a community of people who think ‘positively’, this is not a problem: they are aware of this situation and they strive to keep their ‘natural susceptibility to error’ as small as possible through conscious methodical procedures [*7]. People who — for various reasons — tend to think negatively feel motivated in this situation to see only errors or even malice everywhere. They find it difficult to deal with their ‘natural error-proneness’ in a positive and constructive manner.

TRUTH and MEANING: Process of Processes

In the previous section, the various terms (‘true 1’, ‘true 2’, ‘true 3’, ‘false 1’, ‘false 2’) are still rather disconnected and not yet really located in a tangible context. This will be attempted here with the help of working hypothesis 4 (sketch of a process space).

FIGURE 1 Process : The process space in the real world and in thinking, including possible interactions

The basic elements of working hypothesis 4 can be characterized as follows:

  1. There is the real world with its continuous changes, and within it an actor who includes a virtual space for processes with elements such as perceptions, memories, and imagined concepts.
  2. The link between real space and virtual space occurs through perceptual achievements that represent specific properties of the real world for the virtual space, in such a way that ‘perceived contents’ and ‘imagined contents’ are distinguishable. In this way, a ‘mental comparison’ of perceived and imagined is possible.
  3. Changes in the real world do not show up explicitly but are manifested only indirectly through the perceivable changes they cause.
  4. It is the task of ‘cognitive reconstruction’ to ‘identify’ changes and to describe them linguistically in such a way that it becomes comprehensible from which properties of a given state a possible subsequent state can arise.
  5. In addition to distinguishing between ‘states’ and ‘changes’ between states, it must also be clarified how a given description of change is ‘applied’ to a given state in such a way that a ‘subsequent state’ arises. This is called here the ‘successor generation rule’ (symbolically: ⊢). An expression like Z ⊢V Z’ would then mean that using the successor generation rule ⊢ and employing the change rule V, one can generate the subsequent state Z’ from the state Z. However, more than one change rule V can be used, for example, ⊢{V1, V2, …, Vn} with the change rules V1, …, Vn.
  6. When formulating change rules, errors can always occur. If certain change rules have proven successful in the past in derivations, one would tend to assume for the ‘thought subsequent state’ that it will probably also occur in reality. In this case, we would be dealing with the situation ‘true 2’. If a change rule is new and there are no experiences with it yet, we would be dealing with the ‘true 3’ case for the thought subsequent state. If a certain change rule has failed repeatedly in the past, then the case ‘false 2’ might apply.
  7. The outlined process model also shows that the previous cases (1-5 in the table) only ever describe partial aspects. Suppose a group of actors manages to formulate a rudimentary process theory with many states and many change rules, including a successor generation instruction. Then it is naturally of interest how the ‘theory as a whole’ ‘proves itself’. This means that every ‘mental construction’ of a sequence of possible states according to the applied change rules must, under the assumption of the process theory, ‘prove itself’ in all cases of application for the theory to be called ‘generically true’. While the case ‘true 1’ refers to only a single state, the case ‘generically true’ refers to ‘very many’ states: as many as are needed until an ‘end state’ is reached that is supposed to count as a ‘target state’. The case ‘generically contradicted’ occurs when there is at least one sequence of generated states that repeatedly produces an end state that is false 1. As long as a process theory has not yet been confirmed as true 1 for an end state in all possible cases, there remains a ‘remainder of cases’ that are unclear. Such a process theory would be called ‘generically unclear’, although it may be considered ‘generically true’ for the set of cases successfully tested so far.
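Point 5 can be made concrete with a purely illustrative sketch. Only the symbols Z, V, and ⊢ come from the text; the set-based representation of states as collections of facts and of a change rule as (preconditions, additions, deletions) is my assumption.

```python
# Illustrative sketch only: a state Z as a set of facts, a change rule V
# as (preconditions, additions, deletions), and the successor generation
# rule applying every applicable rule to Z.

def successors(state, rules):
    """Z |- {V1..Vn}: generate all subsequent states Z' reachable in one step."""
    result = []
    for preconditions, additions, deletions in rules:
        if preconditions <= state:  # rule V is applicable in state Z
            result.append((state - deletions) | additions)
    return result

# One state Z and one change rule V1, with purely illustrative content:
Z = {"it is raining", "street is dry"}
V1 = ({"it is raining"}, {"street is wet"}, {"street is dry"})
for z_next in successors(Z, [V1]):
    print(sorted(z_next))  # ['it is raining', 'street is wet']
```

Seen this way, point 6 becomes a statement about rules: a rule whose generated successor states have repeatedly matched reality supports ‘true 2’ predictions, an untested rule yields ‘true 3’, and a repeatedly failing rule suggests ‘false 2’.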

FIGURE 2 Process : The individual extended process space with an indication of the dimension ‘META-THINKING’ and ‘EVALUATION’.

If someone already finds the first figure of the process space quite ‘challenging’, they will certainly ‘break into a sweat’ with this second figure of the ‘expanded process space’.

Everyone can check for himself that we humans have the ability — regardless of what we are thinking — to turn our thinking at any time back onto our own thinking shortly before, a kind of ‘thinking about thinking’. This opens up an ‘additional level of thinking’ – here called the ‘meta-level’ – on which we thinkers ‘thematize’ everything that is noticeable and important to us in the preceding thinking. [*8] In addition to ‘thinking about thinking’, we also have the ability to ‘evaluate’ what we perceive and think. These ‘evaluations’ are fueled by our ’emotions’ [*9] and ‘learned preferences’. This enables us to ‘learn’ with the help of our emotions and learned preferences: If we perform certain actions and suffer ‘pain’, we will likely avoid these actions next time. If we go to restaurant X to eat because someone ‘recommended’ it to us, and the food and/or service were really bad, then we will likely not consider this suggestion in the future. Therefore, our thinking (and our knowledge) can ‘make possibilities visible’, but it is the emotions that comment on what happens to be ‘good’ or ‘bad’ when implementing knowledge. But beware, emotions can also be mistaken, and massively so.[*10]

TRUTH and MEANING: As a Collective Achievement

The previous considerations on the topic of ‘truth and meaning’ in the context of individual processes have outlined how ‘language’ plays a central role in enabling meaning and, based on this, truth. Furthermore, they have outlined how truth and meaning must be placed in a dynamic context, a ‘process model’, as it unfolds in an individual in close interaction with the environment. This process model includes the dimension of ‘thinking’ (also ‘knowledge’) as well as the dimension of ‘evaluations’ (emotions, preferences); within thinking there are potentially many ‘levels of consideration’ that can relate to each other (they can of course also run ‘in parallel’ without direct contact with each other, though this unconnected parallelism is the less interesting case).

As fascinating as the dynamic emotional-cognitive structure within an individual actor can be, the ‘true power’ of explicit thinking only becomes apparent when different people begin to coordinate their actions by means of communication. When individual action is transformed into collective action in this way, a dimension of ‘society’ becomes visible which in a way makes one ‘forget’ the ‘individual actors’, because the ‘overall performance’ of the ‘collectively connected individuals’ can be orders of magnitude more complex and sustainable than anything a single individual could ever realize. While a single person can at most make a contribution within their individual lifetime, collectively connected people can accomplish achievements that span many generations.

On the other hand, we know from history that collective achievements do not automatically have to bring about ‘only good’; the well-known history of oppression, bloody wars and destruction is extensive and can be found in all periods of human history.

This points to the fact that the question of ‘truth’ and ‘being good’ is not only a question for the individual process, but also a question for the collective process, and here, in the collective case, this question is even more important, since in the event of an error not only individuals have to suffer negative effects, but rather very many; in the worst case, all of them.

To be continued …

COMMENTS

[*0] The meaning of the terms ‘objective, verifiable’ will be explained in more detail below.

[*1] In a system-theoretical view of the ‘human body’ system, one can formulate the working hypothesis that far more than 99% of the events in a human body are not conscious. You can find this frightening or reassuring. I tend towards the latter, towards ‘reassurance’. Because when you see what a human body as a ‘system’ is capable of doing on its own, every second, for many years, even decades, then this seems extremely reassuring in view of the many mistakes, even gross ones, that we can make with our small ‘consciousness’. In cooperation with other people, we can indeed dramatically improve our conscious human performance, but this is only ever possible if the system performance of a human body is maintained. After all, it contains 3.5 billion years of development work of the BIOM on this planet; the building blocks of this BIOM, the cells, function like a gigantic parallel computer, compared to which today’s technical supercomputers (including the much-vaunted ‘quantum computers’) look so small and weak that it is practically impossible to express this relationship.

[*2] An ‘everyday language’ always presupposes ‘the many’ who want to communicate with each other. One person alone cannot have a language that others should be able to understand.

[*3] A meaning relation actually does what is mathematically called a ‘mapping’: Elements of one kind (elements of the surface structure of the language) are mapped to elements of another kind (the potential meaning elements). While a mathematical mapping is normally fixed, the ‘real meaning relation’ can constantly change; it is ‘flexible’, part of a higher-level ‘learning process’ that constantly ‘readjusts’ the meaning relation depending on perception and internal states.

[*4] The contents of working hypothesis 2 originate from the findings of modern cognitive sciences (neuroscience, psychology, biology, linguistics, semiotics, …) and philosophy; they refer to many thousands of articles and books. Working hypothesis 2 therefore represents a highly condensed summary of all this. Direct citation is not possible in purely practical terms.

[*5] As is known from research on witness statements and from general perception research, in addition to all kinds of direct perception errors, there are many errors in the ‘interpretation of perception’ that are largely unconscious/automated. The actors are normally powerless against such errors; they simply do not notice them. Only methodically conscious controls of perception can partially draw attention to these errors.

[*6] Human knowledge is ‘notoriously prone to error’. There are many reasons for this. One lies in the way the brain itself works. ‘Correct’ knowledge is only possible if the current knowledge processes are repeatedly ‘compared’ and ‘checked’ so that they can be corrected. Anyone who does not regularly check the correctness will inevitably confirm incomplete and often incorrect knowledge. As we know, this does not prevent people from believing that everything they carry around in their heads is ‘true’. If there is a big problem in this world, then this is one of them: ignorance about one’s own ignorance.

[*7] In the cultural history of mankind to date, it was only very late (about 500 years ago?) that a format of knowledge was discovered that enables any number of people to build up fact-based knowledge that, compared to all other known knowledge formats, enables the ‘best results’ (which of course does not completely rule out errors, but extremely minimizes them). This still revolutionary knowledge format has the name ’empirical theory’, which I have since expanded to ‘sustainable empirical theory’. On the one hand, we humans are the main source of ‘true knowledge’, but at the same time we ourselves are also the main source of ‘false knowledge’. At first glance, this seems like a ‘paradox’, but it has a ‘simple’ explanation, which at its root is ‘very profound’ (comparable to the cosmic background radiation, which is currently simple, but originates from the beginnings of the universe).

[*8] In terms of its architecture, our brain can open up any number of such meta-levels, but due to its concrete finiteness, it only offers a limited number of neurons for different tasks. For example, it is known (and has been experimentally proven several times) that our ‘working memory’ (also called ‘short-term memory’) is only limited to approx. 6-9 ‘units’ (whereby the term ‘unit’ must be defined depending on the context). So if we want to solve extensive tasks through our thinking, we need ‘external aids’ (sheet of paper and pen or a computer, …) to record the many aspects and write them down accordingly. Although today’s computers are not even remotely capable of replacing the complex thought processes of humans, they can be an almost irreplaceable tool for carrying out complex thought processes to a limited extent. But only if WE actually KNOW what we are doing!

[*9] The word ’emotion’ is a ‘collective term’ for many different phenomena and circumstances. Despite extensive research for over a hundred years, the various disciplines of psychology are still unable to offer a uniform picture, let alone a uniform ‘theory’ on the subject. This is not surprising, as much of the assumed emotions takes place largely ‘unconsciously’ or is only directly available as an ‘internal event’ in the individual. The only thing that seems to be clear is that we as humans are never ’emotion-free’ (this also applies to so-called ‘cool’ types, because the apparent ‘suppression’ or ‘repression’ of emotions is itself part of our innate emotionality).

[*10] Of course, emotions can also lead us seriously astray or even to our downfall (being wrong about other people, being wrong about ourselves, …). It is therefore not only important to ‘sort out’ the factual things in the world in a useful way through ‘learning’, but we must also actually ‘keep an eye on our own emotions’ and check when and how they occur and whether they actually help us. Primary emotions (such as hunger, sex drive, anger, addiction, ‘crushes’, …) are selective, situational, can develop great ‘psychological power’ and thus obscure our view of the possible or very probable ‘consequences’, which can be considerably damaging for us.

[*11] The term ‘narrative’ is increasingly used today to describe the fact that a group of people use a certain ‘image’, a certain ‘narrative’ in their thinking for their perception of the world in order to be able to coordinate their joint actions. Ultimately, this applies to all collective action, even for engineers who want to develop a technical solution. In this respect, the description in the German Wikipedia is a bit ‘narrow’: https://de.wikipedia.org/wiki/Narrativ_(Sozialwissenschaften)

REFERENCES

The following sources are just a tiny selection from the many hundreds, if not thousands, of articles, books, audio documents and films on the subject. Nevertheless, they may be helpful for an initial introduction. The list will be expanded from time to time.

[1a] Propaganda, in the German Wikipedia https://de.wikipedia.org/wiki/Propaganda

[1b] Propaganda in the English Wikipedia: https://en.wikipedia.org/wiki/Propaganda (The English version appears more systematic, covers larger periods of time, and more different areas of application.)

[3] Propaganda der Russischen Föderation, hier: https://de.wikipedia.org/wiki/Propaganda_der_Russischen_F%C3%B6deration (German source)

[6] Mischa Gabowitsch, Mai 2022, Von »Faschisten« und »Nazis«, https://www.blaetter.de/ausgabe/2022/mai/von-faschisten-und-nazis#_ftn4 (German source)

DOWNSIZING TEXT GENERATORS – UPGRADING HUMANS. A Thought Experiment

Author: Gerd Doeben-Henisch

Time: Nov 12, 2023 — Nov 12, 2023

Email: gerd@doeben-henisch.de

–!! This is not yet finished !!–

CONTEXT

This text belongs to the topic Philosophy (of Science).

INTRODUCTION

The ‘coming out’ of a new type of ‘text generator’ in November 2022 called ‘chatGPT’ — it is not the only one around — caused an explosion of publications and usage around the world. The author has been working in this field for about 40 years, and nothing like this has ever happened before. What has happened? Is this the beginning of the end of humans being the main actors on this planet (yes, we know, not really the best until now), or is there something around and in between that we overlook, captivated as we are by these new text generators?

Reading many papers since that event, talking with people, experimenting directly with chatGPT4, continuing to work with theories, and working with people in a city on new forms of ‘citizens at work in their community’, a picture slowly formed in my head of how it could perhaps be possible to ‘benchmark’ text generators directly against human activities.

After several first trials, everything came together when I was able to give a talk at Goethe University in Frankfurt on Friday, Nov 10. [1] There was a wonderful audience of elderly people from the so-called University of the 3rd Age … a bit different from young students 🙂

An idea hit me like a bolt of lightning when I wrote the talk down afterwards: it is the fundamental role of literature for our understanding of the world and of people that will be completely eliminated by the use of text generators. The number of written texts will explode in the near future, but the meaning of the world will vanish more and more at the same time. You will see letters, but there will no longer be any meaning behind them. And with the meaning, the world of humans will disappear. You won’t even be able to know yourself anymore.

Clearly, this can only happen if we completely replace our own thinking and writing with text generators.

Is the author of this text a bit ‘ill’ to write down such ideas, or are there arguments that make it clear why this could be the fate of humans after the year 2023?

SET-UP OF AN EXPERIMENT

[1] See the text written down after the speech: https://www.cognitiveagent.org/2023/11/06/kollektive-mensch-maschine-intelligenz-im-kontext-nach

Pain does not replace the truth …

Time: Oct 18, 2023 — Oct 24, 2023
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This post is part of the uffmm science blog. It is a translation from the German source: https://www.cognitiveagent.org/2023/10/18/schmerz-ersetzt-nicht-die-wahrheit/. For the translation I used chatGPT4 and deepl.com. Because the word ‘Hamas’ occurs in the text, chatGPT refused to translate a long paragraph containing it. The algorithm is thus somehow ‘biased’ by a certain kind of training. This is really bad, because the following text offers some reflections about a situation where someone ‘hates’ others. This is one of our biggest ‘diseases’ today.

Preface

The Hamas terrorist attack on Israeli citizens on October 7, 2023, has shaken the world. For years, terrorist acts have been shaking our world. Before our eyes, a state has been attempting since 2022 (actually since 2014) to brutally eradicate the entire Ukrainian population. Similar events have been and are taking place in many other regions of the world…

… Pain does not replace the truth [0]…

Truth is not automatic. Making truth available requires significantly more effort than remaining in a state of partial truth.

The probability that a person knows the truth or seeks the truth is smaller than staying in a state of partial truth or outright falsehood.

Whether falsehood or truth predominates in a democracy depends on how that democracy shapes the process of truth-finding and the communication of truth. There is no automatic path to truth.

In a dictatorship, the likelihood of truth being available is extremely dependent on those who exercise centralized power. Absolute power, however, has already fundamentally broken with the truth (which does not exclude the possibility that this power can have significant effects).

The course of human history on planet Earth thus far has shown that there is evidently no simple, quick path that uniformly leads all people to a state of happiness. This must have to do with humans themselves—with us.

The interest in seeking truth, in cultivating truth, in a collective process of truth, has never been strong enough to overcome the everyday exclusions, falsehoods, hostilities, atrocities…

One’s own pain is terrible, but it does not help us to move forward…

Who even wants a future for all of us?????

[0] There is an overview article by the author from 2018, in which he presents 15 major texts from the blog “Philosophie Jetzt” ( “Philosophy Now”) ( “INFORMAL COSMOLOGY. Part 3a. Evolution – Truth – Society. Synopsis of previous contributions to truth in this blog” ( https://www.cognitiveagent.org/2018/03/20/informelle-kosmologie-teil-3a-evolution-wahrheit-gesellschaft-synopse-der-bisherigen-beitraege-zur-wahrheit-in-diesem-blog/ )), in which the matter of truth is considered from many points of view. In the 5 years since, society’s treatment of truth has continued to deteriorate dramatically.

Hate cancels the truth


Truth is related to knowledge. However, in humans, knowledge most often is subservient to emotions. Whatever we may know or wish to know, when our emotions are against it, we tend to suppress that knowledge.

One form of emotion is hatred. The destructive impact of hatred has accompanied human history like a shadow, leaving a trail of devastation everywhere it goes: in the hater themselves and in their surroundings.

The event of the inhumane attack on October 7, 2023 in Israel, claimed by Hamas, is unthinkable without hatred.

If one traces the history of Hamas since its founding in 1987 [1,2], then one can see that hatred is already laid down as an essential moment in its founding. This hatred is joined by the moment of a religious interpretation, which calls itself Islamic, but which represents a special, very radicalized and at the same time fundamentalist form of Islam.

The history of the state of Israel is complex, and the history of Judaism is no less so. The fact that today’s Judaism also contains strong components that are clearly fundamentalist, and to which hatred is not alien, leads (among many other factors) at the core to a constellation of fundamentalist antagonisms on both sides that in themselves reveal no approaches to a solution. The many other people ‘around’ in Israel and Palestine are part of these ‘fundamentalist force fields’, which simply make humanity and truth evaporate in their vicinity. The trail of blood makes this reality visible.

Both Judaism and Islam have produced wonderful things, but what does all this mean in the face of a burning hatred that pushes everything aside, that sees only itself?

[1] Jeffrey Herf, Sie machen den Hass zum Weltbild, FAZ 20.Okt. 23, S.11 (Abriss der Geschichte der Hamas und ihr Weltbild, als Teil der größeren Geschichte) (Translation:They make hatred their worldview, FAZ Oct. 20, 23, p.11 (outlining the history of Hamas and its worldview, as part of the larger story)).

[2] Joachim Krause, Die Quellen des Arabischen Antisemitismus, FAZ, 23.10.2023,p.8 (This text “The Sources of Arab Anti-Semitism” complements the account by Jeffrey Herf. According to Krause, Arab anti-Semitism has been widely disseminated in the Arab world since the 1920s/ 30s via the Muslim Brotherhood, founded in 1928).

A society in decline

When truth diminishes and hatred grows (and, indirectly, trust evaporates), a society is in free fall. There is no remedy for this; the use of force cannot heal it, only worsen it.

The mere fact that we believe that lack of truth, dwindling trust, and above all, manifest hatred can only be eradicated through violence, shows how seriously we regard these phenomena and at the same time, how helpless we feel in the face of these attitudes.

In a world whose survival is linked to the availability of truth and trust, it is a piercing alarm signal to observe how difficult it is for us as humans to deal with the absence of truth and face hatred.

Is Hatred Incurable?

When we observe how tenaciously hatred persists in humanity, how unimaginably cruel actions driven by hatred can be, and how helpless we humans seem in the face of hatred, one might wonder if hatred is ultimately not a kind of disease—one that threatens the hater themselves and, particularly, those who are hated with severe harm, ultimately death.

With typical diseases, we have learned to search for remedies that can free us from the illness. But what about a disease like hatred? What helps here? Does anything help? Must we, like in earlier times with people afflicted by deadly diseases (like the plague), isolate, lock away, or send away those who are consumed by hatred to some no man’s land? … but everyone knows that this isn’t feasible… What is feasible? What can combat hatred?

After approximately 300,000 years of Homo sapiens on this planet, we seem strangely helpless in the face of the disease of hatred.

What’s even worse is that there are other people who see in every hater a potential tool to redirect that hatred toward goals they want to damage or destroy, using suitable manipulation. Thus, hatred does not disappear; on the contrary, it feels justified, and new injustices fuel the emergence of new hatred… the disease continues to spread.

One of the greatest events in the entire known universe—the emergence of mysterious life on this planet Earth—has a vulnerable point where this life appears strangely weak and helpless. Throughout history, humans have demonstrated their capability for actions that endure for many generations, that enable more people to live fulfilling lives, but in the face of hatred, they appear oddly helpless… and the one consumed by hatred is left incapacitated, incapable of anything else… plummeting into their dark inner abyss…


Instead of hatred, we need (minimally and in outline):

  1. Water: To sustain human life, along with the infrastructure to provide it, and individuals to maintain that infrastructure. These individuals also require everything they need for their own lives to fulfill this task.
  2. Food: To sustain human life, along with the infrastructure for its production, storage, processing, transportation, distribution, and provision. Individuals are needed to oversee this infrastructure, and they, too, require everything they need for their own lives to fulfill this task.
  3. Shelter: To provide a living environment, including the infrastructure for its creation, provisioning, maintenance, and distribution. Individuals are needed to manage this provision, and they, too, require everything they need for their own lives to fulfill this task.
  4. Energy: For heating, cooling, daily activities, and life itself, along with the infrastructure for its generation, provisioning, maintenance, and distribution. Individuals are needed to oversee this infrastructure, and they, too, require everything they need for their own lives to fulfill this task.
  5. Authorization and Participation: To access water, food, shelter, and energy. This requires an infrastructure of agreements, and individuals to manage these agreements. These individuals also require everything they need for their own lives to fulfill this task.
  6. Education: To be capable of undertaking and successfully completing tasks in real life. This necessitates individuals with enough experience and knowledge to offer and conduct such education. These individuals also require everything they need for their own lives to fulfill this task.
  7. Medical Care: To help with injuries, accidents, and illnesses. This requires individuals with sufficient experience and knowledge to offer and provide medical care, as well as the necessary facilities and equipment. These individuals also require everything they need for their own lives to fulfill this task.
  8. Communication Facilities: So that everyone can receive helpful information needed to navigate their world effectively. This requires suitable infrastructure and individuals with enough experience and knowledge to provide such information. These individuals also require everything they need for their own lives to fulfill this task.
  9. Transportation Facilities: So that people and goods can reach the places they need to go. This necessitates suitable infrastructure and individuals with enough experience and knowledge to offer such infrastructure. These individuals also require everything they need for their own lives to fulfill this task.
  10. Decision Structures: To mediate the diverse needs and necessary services in a way that ensures most people have access to what they need for their daily lives. This requires suitable infrastructure and individuals with enough experience and knowledge to offer such infrastructure. These individuals also require everything they need for their own lives to fulfill this task.
  11. Law Enforcement: To ensure disruptions and damage to the infrastructure necessary for daily life are resolved without creating new disruptions. This requires suitable infrastructure and individuals with enough experience and knowledge to offer such services. These individuals also require everything they need for their own lives to fulfill this task.
  12. Sufficient Land: To provide enough space for all these requirements, along with suitable soil (for water, food, shelter, transportation, storage, production, etc.).
  13. A suitable climate.
  14. A functioning ecosystem.
  15. A capable scientific community to explore and understand the world.
  16. Suitable technology to accomplish everyday tasks and support scientific endeavors.
  17. Knowledge in the minds of people to understand daily events and make responsible decisions.
  18. Goal orientations (preferences, values, etc.) in the minds of people to make informed decisions.
  19. Ample time and peace to allow these processes to occur and produce results.
  20. Strong and lasting relationships with other population groups pursuing the same goals.
  21. Sufficient commonality among all population groups on Earth to address their shared needs where they are affected.
  22. A sustained positive and constructive competition for those goal orientations that make life possible and viable for as many people on this planet (in this solar system, in this galaxy, etc.) as possible.
  23. The freedom present within the experiential world, included within every living being, especially within humans, should be given as much room as possible, as it is this freedom that can overcome false ideas from the past in the face of a constantly changing world, enabling us to potentially thrive in the world of the future.

READING A BOOK, LEARNING, BEING SCIENTIFIC, WIKIPEDIA. A dialogue with chatGPT4 bringing you ‘back to earth’

Author: Gerd Doeben-Henisch

Aug 30, 2023 – Aug 30, 2023

Email: info@uffmm.org

CONTEXT

This text belongs to a series of experiments with chatGPT4. While there are meanwhile some experiments demonstrating the low quality of chatGPT4 in many qualitative tests [1], the author of this text wants to check the ability of the chatGPT4 software to reproduce human-like behavior associated with the production and reception of texts at a higher level. I started this series with a first post here.[2]

SUMMARY

In the following series of dialogues with chatGPT4, the software stated at the beginning that it cannot read a book like a human person. Asked about a specific book with the title “Das Experiment sind wir” by the author Christian Stöcker, the software answered that it does not know the book. Then I asked the software whether it can nevertheless ‘learn’ something based on the documents given to it up to the year 2021. The answer was quite revealing: “No, I don’t “learn” in the same way humans do. I don’t actively process new information, form new connections, or change my understanding over time. Instead, my responses are based on patterns in the data I was trained on. My knowledge is static and doesn’t update with new information unless a new version of me is trained by OpenAI with more recent data.” I didn’t give up trying to understand what the software is able to do if it can neither ‘read a book’ nor ‘learn’. I asked: “What can you extract from your text base related to the question what can happen in the head of people if they read a book?” I got a list of general properties summarized from different sources (without mentioning these sources). I continued asking the software not about the learning inside itself but about what the software knows about learning in human persons who read a book: “How do you describe the process of learning in human persons while reading a book?” The software mentioned some aspects which have to be considered while a human person reads a book. But this gives only a ‘static view’ of the structures active in the process of reading. More interesting is perhaps a ‘dynamic aspect’ of reading a book, which can be circumscribed as ‘learning’: which kinds of change are typical for human persons while reading a book?
The software gives a detailed breakdown of the learning process in human persons while reading, but again these are statements without any ‘backing up’: no sources, no contexts. In the everyday world, however, especially in science, it is necessary that a reader can learn the sources of statements claiming to be serious: are the presented facts only ‘fakes’, purely wrong, or are there serious people with serious methods of measurement who can provide some certainty that these facts are really ‘true’? The author therefore asked the software: “Your answer consist of many interesting facts. Is it possible to clarify the sources you are using for these statements?” The answer again was very simple: “I’m glad you found the information interesting. My design is based on information from a wide range of books, articles, and other educational resources up to my last update in September 2021. However, I don’t directly cite individual sources in my responses. Instead, my answers are generated based on patterns in the data I was trained on.” Because it is known that the Wikipedia encyclopedia always provides explicit sources for its texts, the author asked directly: “Do you know the textbase wikipedia?” Clearly, chatGPT4 knows Wikipedia; besides many positive remarks it answered: “The structure of Wikipedia articles, with their citations, can serve as a good starting point for research on a topic. One can follow the cited sources at the bottom of each article to get more detailed or primary information on a subject.” This sounds quite good for Wikipedia. Thus it is interesting how chatGPT4 compares itself to Wikipedia. Question: “Wikipedia has cited sources at the bottom of each article. Does this mean that wikipedia is more trustful as you?” chatGPT4 doesn’t give a clear answer; it summarizes points for Wikipedia and for itself, but leaves the question open.
Thus the author continued asking for a more ‘general criterion’ that should be valid for both Wikipedia and chatGPT4: “Which criteria do you know for a good scientific description?” As usual, chatGPT4 gives a long list of criteria. The author asks back: “Your overview about criteria for a scientific description includes terms like ‘Replicability’, ‘Citing Sources’, and ‘Transparency’. From the preceding self-description of you it is not easily to see, whether these criteria can also applied to the output of chatGPT4.” The answer chatGPT4 gives makes it nearly unusable for any serious scientific work. Yes, you can use chatGPT4 for some first general information, but without knowing more yourself, you are really lost. It is surely more fruitful to consult Wikipedia directly and to use the sources cited there for further investigation.

Reading a book

I started spontaneously to talk with chatGPT4 about a book by Christian Stöcker with the title “Das Experiment sind wir”, published in 2020.

Here is the dialogue:

Author:

Can you read a book?

ChatGPT

I don’t “read” books in the same way humans do. Instead, I was trained on vast amounts of text data, which includes information from books, articles, websites, and other written sources up to my last update in September 2021.

You can ask me about the content, summaries, themes, and other information from many books, especially those that are well-known or that were published before my last training cut-off. If the book was published after September 2021 or is particularly obscure, I might not have information about it.

If you have a particular book or topic in mind, feel free to ask, and I’ll do my best to provide relevant information!

Author:

What can you say about the book entitled “Das Experiment sind wir” by the author Christian Stöcker?

ChatGPT

As of my last update in September 2021, “Das Experiment sind wir” by Christian Stöcker hasn’t been one of the primary sources I was trained on. However, Christian Stöcker is known as a journalist and professor in the field of digital communication.

If “Das Experiment sind wir” was published or gained significant attention after my last update, I may not have detailed information about its content, themes, or specific insights.

For a detailed overview or analysis of the book, I would recommend reading reviews, summaries, or author interviews, or accessing the book directly. If you have other questions or need information on a related topic, I’m here to help!

RESULT

OK, chatGPT4 doesn’t know this book. This is fine; even a high-performance system cannot know everything.

But its confession that it cannot read a book as a human person would do is interesting. In this sense chatGPT4 is not a ‘typical reader’ but, as we indeed know, a piece of software, managed by human persons who decide which kinds of texts chatGPT4 has to ‘store’ in its memory and which are the basis for its ‘algorithmic processing’.

But the question remains whether chatGPT4 can at least learn on its own, starting from the text base given to it.

Author

Thus if you have a text base as a point of reference, are you able to learn in some sense by your own?

ChatGPT

No, I don’t “learn” in the same way humans do. I don’t actively process new information, form new connections, or change my understanding over time. Instead, my responses are based on patterns in the data I was trained on. My knowledge is static and doesn’t update with new information unless a new version of me is trained by OpenAI with more recent data.

Here’s an analogy: Imagine an extremely comprehensive book. This book can answer a vast array of questions based on its contents, but it can’t write new chapters or update its pages on its own. In a similar manner, I can generate answers based on the data I was trained on, but I can’t independently update that data or “learn” new things.

So, while I can generate responses based on a broad range of existing knowledge, I can’t assimilate or understand new information or events after my last training cut-off in September 2021.

RESULT

As expected, chatGPT4 cannot learn as we humans can learn. Generating summaries can be of some help, but this cannot replace ‘true learning’.

But let us stay with the topic of ‘reading a book’:

Author

What can you extract from your text base related to the question what can happen in the head of people if they read a book?

ChatGPT

Reading a book can have various cognitive, emotional, and psychological effects on individuals. Here are some insights based on literature, cognitive science, and psychological studies up to my last training cut-off:

  1. Cognitive Stimulation: Engaging with the written word can stimulate the brain. Studies have suggested that mental stimulation can slow the progress (or possibly even prevent) Alzheimer’s and dementia, as keeping your brain active and engaged prevents it from losing power.
  2. Mental Escape: For many, reading a book can act as an escape from the real world. Immersing oneself in a different world, whether it’s a fantasy realm or an historical setting, provides a break from everyday stressors.
  3. Increased Knowledge: Naturally, books inform and educate. This builds a knowledge base that can be applied in real-life situations.
  4. Vocabulary Expansion: Reading regularly can expand one’s vocabulary more than direct teaching or verbal communications.
  5. Improved Focus and Concentration: Unlike skimming short web articles or watching quick videos, reading a book requires a longer attention span and can help improve one’s focus and concentration.
  6. Stress Reduction: A well-written novel or non-fiction book can transport the reader to other realms, distracting them and helping them to de-stress more effectively than other methods.
  7. Improved Analytical Skills: Engaging with complex narratives and plots can help readers improve their analytical skills as they ponder plot developments and character motivations.
  8. Empathy Development: Reading fiction, in particular, allows readers to live vicariously through characters, experiencing a vast range of emotions and situations. This can help in developing a deeper understanding and empathy for others.
  9. Mental Tranquility: Reading subjects like poetry or certain religious texts can provide tranquility, help lower blood pressure, and reduce stress.
  10. Memory Improvement: Reading involves remembering various characters, their backgrounds, histories, and nuances. This can create new brain pathways and strengthen existing ones.
  11. Entertainment: Of course, one of the primary reasons people read is for entertainment. A good story can be incredibly engaging, offering hours of enjoyment.

It’s worth noting that the impact of reading can vary based on the type of book (e.g., fiction vs. non-fiction), the style of writing, the personal experiences and preferences of the reader, and the context in which they’re reading.

RESULT

chatGPT4 mentions some aspects which have to be considered while a human person reads a book. This gives some ‘static view’ of the structures active in the process of reading.

More interesting is perhaps a ‘dynamic aspect’ of reading a book, which can be circumscribed as ‘learning’: which kinds of change are typical for human persons while reading a book?

Author

How do you describe the process of learning in human persons while reading a book?

ChatGPT

Reading a book is a complex cognitive process that involves multiple levels of mental engagement and comprehension. When a person reads, several processes related to learning occur simultaneously. Here’s a detailed breakdown of the learning process while reading:

  1. Perceptual Processing: This is the basic recognition of words and letters. As your eyes scan a page, the visual cortex interprets the shapes of letters and groups of letters as words. This process becomes automatic with practice and training.
  2. Decoding: This involves understanding the phonetic value of words, which is especially crucial for unfamiliar terms or names. This skill is emphasized heavily in early reading education when children sound out words.
  3. Semantic Processing: Once the words are recognized, the brain works to understand their meaning. This involves linking the words to their associated concepts in the brain.
  4. Syntax and Grammar Processing: The brain understands the structure of sentences and how words relate to each other, determining the relationships among verbs, nouns, adjectives, etc.
  5. Working Memory: This is essential for maintaining an understanding of a sentence or paragraph. As you read, you keep the beginning of a sentence in mind as you progress to the end, allowing you to grasp the entire concept.
  6. Building Connections: Learning often involves connecting new information with prior knowledge. When reading, you constantly relate what you are consuming to what you already know, which can deepen comprehension.
  7. Critical Thinking: Beyond just understanding the content, readers often evaluate, analyze, and critique what they’re reading. They may ask themselves if they agree with the author, if the information is credible, or how it relates to other things they know.
  8. Visualization: Good readers often create mental images of what they are reading. This is especially true for narrative texts where imagining scenes, characters, or events can enhance comprehension.
  9. Emotional Engagement: The content in books can evoke emotions. Whether it’s a novel that tugs at the heartstrings or a non-fiction book that arouses anger or passion about a topic, emotional engagement can increase retention and internalization of content.
  10. Metacognition: This involves thinking about one’s own thinking. Effective readers often monitor their understanding as they read, recognizing when they’re confused and employing strategies to improve comprehension.
  11. Retention & Long-Term Memory: Once the book is done, the process of shifting information from working memory to long-term memory begins. Repetition, discussion, and application of knowledge can aid this transfer.
  12. Application: The ultimate goal of learning is often to apply knowledge. After reading, a person may take actions, make decisions, or change beliefs based on what they’ve learned.

It’s important to note that everyone’s reading process might be a bit different. Some people naturally visualize more, while others might focus on critical analysis. Additionally, background knowledge, context, and the reader’s purpose (e.g., reading for pleasure versus studying for an exam) can influence how these processes unfold.

RESULT

chatGPT4 presents some facts responding to the question which are, as such, really interesting. But in the everyday world, especially in science, it is necessary that a reader can learn the sources of statements claiming to be serious: are the presented facts only ‘fakes’, purely wrong, or are there serious people with serious methods of measurement who can provide some certainty that these facts are really ‘true’?

Let us ask chatGPT4:

Author

Your answer consist of many interesting facts. Is it possible to clarify the sources you are using for these statements?

ChatGPT

I’m glad you found the information interesting. My design is based on information from a wide range of books, articles, and other educational resources up to my last update in September 2021. However, I don’t directly cite individual sources in my responses. Instead, my answers are generated based on patterns in the data I was trained on.

The steps and processes I outlined for learning while reading a book are based on widely accepted theories and research in the fields of cognitive psychology, education, and linguistics. If you’re interested in diving deeper into these topics, you might consider exploring:

  1. Cognitive psychology textbooks: These delve into the processes of perception, working memory, long-term memory, and more.
  2. Reading and literacy research: Scholars in this field have conducted in-depth studies on the processes of decoding, comprehension, and the strategies good readers use.
  3. Educational psychology: This field explores topics related to learning, including metacognition, critical thinking, and the application of knowledge.

If you’d like specific book or article recommendations, or more in-depth explanations on any of these topics, I can certainly provide them.

Author

Do you know the textbase wikipedia?

ChatGPT

Yes, I’m familiar with Wikipedia. Wikipedia is a free online encyclopedia that is collaboratively edited by volunteers from around the world. It’s one of the most visited websites globally and provides information on a vast array of topics.

Wikipedia is known for its “wiki” nature, which means that almost any article can be edited by any user with internet access. This collaborative approach has made Wikipedia a rich and ever-evolving source of information. However, because of this open-editing system, it’s always a good idea to verify critical information from Wikipedia with other reliable sources.

The structure of Wikipedia articles, with their citations, can serve as a good starting point for research on a topic. One can follow the cited sources at the bottom of each article to get more detailed or primary information on a subject.

Would you like more information or have a specific question related to Wikipedia?

RESULT

chatGPT4 knows Wikipedia and gives a quite fair description of it (probably of the English version). Of interest is how chatGPT4 describes its characteristic differences from Wikipedia.

Author

Wikipedia has cited sources at the bottom of each article. Does this mean that wikipedia is more trustful as you?

ChatGPT

Wikipedia and I serve different purposes, and the trustworthiness of information from either source depends on the context and specific use-case.

Advantages of Wikipedia:

  1. Citations: Wikipedia articles often have extensive lists of references, which allow readers to trace back the origin of specific pieces of information. This provides a level of transparency.
  2. Collaboration: Wikipedia is collectively edited by a vast number of individuals, which can mean many eyes reviewing and refining content.
  3. Current Events: Wikipedia is often updated rapidly in the face of current events, so for very recent information, it can sometimes be more up-to-date than my last training data.

Advantages of ChatGPT (me):

  1. Broad Training Data: I’ve been trained on diverse datasets, including books, articles, and other content. This allows me to provide a wide range of information.
  2. Consistent and Neutral: While Wikipedia aims for a neutral point of view, individual articles might sometimes reflect biases of the contributors. My design aims to minimize bias and provide consistent responses.
  3. Conversational Interface: Users can have a back-and-forth dialogue with me, making it easier to ask follow-up questions or dive deeper into topics.

Considerations:

  1. Vandalism: Wikipedia can be edited by anyone, which sometimes leads to misinformation due to vandalism. However, popular pages often have vigilant editors and protective measures in place.
  2. Knowledge Cutoff: My last training data is from September 2021, so events or advancements after that date are not within my current knowledge.

Ultimately, neither Wikipedia nor I should be seen as infallible. Both can serve as starting points for research, but critical information should be cross-referenced with primary sources or authoritative references. It’s always a good practice to be discerning and critical when interpreting information from any source.

RESULT

The characterization of Wikipedia is still quite fair. But in some sense chatGPT4 ‘downplays’ its own lack of real sources, which are the basis of any serious argument.

Author

Which criteria do you know for a good scientific description?

ChatGPT

Scientific descriptions are meant to be precise, clear, and replicable. To ensure a high-quality scientific description, several criteria should be met:

  1. Accuracy: The description should correctly represent observations, data, or results without exaggeration or misrepresentation.
  2. Clarity: The language used should be clear and free from ambiguity. It should be easily understood by those with appropriate background knowledge in the field.
  3. Completeness: All relevant details should be included, especially those that would be necessary for replication or to understand the context of the study.
  4. Consistency: Descriptions should be consistent in terms of units of measurement, terminology, and methodology.
  5. Objectivity: The description should be free from personal bias. Instead of using subjective terms, rely on objective measurements and observations.
  6. Replicability: Enough detail should be provided so that another scientist with the same background knowledge and resources can replicate the study or experiment.
  7. Relevance: Only pertinent details should be included. While completeness is important, overloading a description with irrelevant information can detract from its usefulness.
  8. Use of Standardized Terminology: Using accepted and standardized terms in the field ensures that the description is understood universally among experts.
  9. Citing Sources: Any claim, idea, or method that isn’t original to the current study should be appropriately cited, giving credit to the original sources and allowing readers to delve deeper into the background if they wish.
  10. Avoiding Jargon (when possible): While specialized terms can be necessary, over-reliance on jargon can alienate those slightly outside the immediate field. When jargon is used, it should be defined or explained.
  11. Providing Context: A good scientific description often includes background information or a literature review to give context to the study’s significance and position within the broader field.
  12. Transparency: Limitations, potential sources of error, and uncertainties should be clearly stated. This honesty allows readers to gauge the reliability of the findings and understand any potential caveats.

A good scientific description is essentially a balance between precision and clarity. It should be detailed enough to be of value to experts in the field while still being accessible to those with a general background in the discipline.

RESULT

The overview of criteria for a scientific description includes terms like ‘Replicability’, ‘Citing Sources’, and ‘Transparency’. From the preceding self-description of chatGPT4 it is not easy to see whether these criteria can also be applied to the output of chatGPT4.

Let’s transform this into a question:

Author

Your overview about criteria for a scientific description includes terms like ‘Replicability’, ‘Citing Sources’, and ‘Transparency’. From the preceding self-description of you it is not easily to see, whether these criteria can also applied to the output of chatGPT4.

ChatGPT

That’s an insightful observation. Let’s discuss the applicability of some of these criteria to ChatGPT (like me) and its outputs:

  1. Replicability: While the underlying research and development process of models like mine can be replicated in theory, the specific outputs for each query might vary slightly, even with similar inputs. Therefore, individual responses from me aren’t precisely replicable.
  2. Citing Sources: I don’t provide direct citations for specific statements in my responses. My training data comes from diverse sources, and I generate responses based on patterns in that data. Thus, I can’t directly attribute my statements to specific sources like a scholarly article or a book. This is indeed a limitation if you’re considering the output as a “scientific description.”
  3. Transparency: OpenAI, the organization behind me, has released details about the architecture, training methodology, and intentions behind models like mine. However, the specifics of the training data or exact processes that lead to a particular response aren’t transparently available in each interaction.

In light of this:

  • For scholarly or academic purposes, my responses should be considered as starting points or general overviews. Critical information should always be cross-referenced with authoritative and primary sources.
  • For general inquiries or casual information, my design is to provide accurate and coherent answers based on the vast amount of data I’ve been trained on.

It’s crucial to recognize the intended use and context when considering the applicability of scientific criteria to any tool or source, including ChatGPT.

COMMENTS

[1] There was a conference at the Technical University of Darmstadt with the title “KI – Text und Geltung. Wie verändern KI-Textgeneratoren wissenschaftliche Diskurse?” (Translated: AI – Text and Validity. How do AI text generators change scientific discourses?), https://zevedi.de/en/topics/ki-text-2/ . One lecturer, Thomas Arnold, gave a speech entitled “Herausforderungen in der Forschung: Mangelnde Reproduzierbarkeit und Erklärbarkeit” (Translated: Challenges in research: lack of reproducibility and explainability), which will be published by de Gruyter at the end of 2023 or the beginning of 2024.

[2] Gerd Doeben-Henisch, STATE CHANGE FROM NON-WRITING TO WRITING. WORKING WITH CHATGPT4 IN PARALLEL. AN EXPLORATION, https://www.uffmm.org/2023/08/28/state-change-from-non-writing-to-writing-working-with-chatgpt4-in-parallel/

Homo Sapiens: empirical and sustained-empirical theories, emotions, and machines. A sketch

Author: Gerd Doeben-Henisch

Email: info@uffmm.org

Aug 24, 2023 — Aug 29, 2023 (10:48h CET)

Attention: This text has been translated from a German source by using the software deepL for nearly 97 – 99% of the text! The diagrams of the German version have been left out.

CONTEXT

This text represents the outline of a talk given at the conference “AI – Text and Validity. How do AI text generators change scientific discourse?” (August 25/26, 2023, TU Darmstadt). [1] A publication of all lectures is planned by the publisher Walter de Gruyter by the end of 2023/beginning of 2024. This publication will be announced here then.

Start of the Lecture

Dear Auditorium,

This conference entitled “AI – Text and Validity. How do AI text generators change scientific discourses?” is centrally devoted to scientific discourses and the possible influence of AI text generators on these. However, the hot core ultimately remains the phenomenon of text itself, its validity.

In this conference many different views are presented that are possible on this topic.

TRANSDISCIPLINARY

My contribution to the topic tries to define the role of so-called AI text generators by embedding the properties of ‘AI text generators’ in a ‘structural conceptual framework’ within a ‘transdisciplinary view’. This helps to highlight the specifics of scientific discourses. This can then further result in better ‘criteria for an extended assessment’ of AI text generators in their role for scientific discourses.

An additional aspect is the question of the structure of ‘collective intelligence’ using humans as an example, and how this can possibly unite with an ‘artificial intelligence’ in the context of scientific discourses.

‘Transdisciplinary’ in this context means to span a ‘meta-level’ from which it should be possible to describe today’s ‘diversity of text productions’ in a way that is expressive enough to distinguish ‘AI-based’ text production from ‘human’ text production.

HUMAN TEXT GENERATION

The formulation ‘scientific discourse’ is a special case of the more general concept ‘human text generation’.

This change of perspective is meta-theoretically necessary, since at first sight it is not the ‘text as such’ that decides about ‘validity and non-validity’, but the ‘actors’ who ‘produce and understand texts’. And with the occurrence of ‘different kinds of actors’ – here ‘humans’, there ‘machines’ – one cannot avoid addressing exactly those differences – if there are any – that play a weighty role in the ‘validity of texts’.

TEXT CAPABLE MACHINES

With the distinction between two different kinds of actors – here ‘humans’, there ‘machines’ – a first ‘fundamental asymmetry’ immediately strikes the eye: so-called ‘AI text generators’ are entities that have been ‘invented’ and ‘built’ by humans; it is furthermore humans who ‘use’ them, and the essential material used by so-called AI generators consists again of ‘texts’ that are considered a ‘human cultural property’.

In the case of so-called ‘AI-text-generators’, we shall first state only this much, that we are dealing with ‘machines’, which have ‘input’ and ‘output’, plus a minimal ‘learning ability’, and whose input and output can process ‘text-like objects’.

BIOLOGICAL — NON-BIOLOGICAL

On the meta-level, then, we assume, on the one hand, actors which are minimally ‘text-capable machines’ – completely human products – and, on the other hand, actors we call ‘humans’. Humans, as a ‘homo-sapiens population’, belong to the set of ‘biological systems’, while ‘text-capable machines’ belong to the set of ‘non-biological systems’.

BLANK INTELLIGENCE TERM

The transformation of the term ‘AI text generator’ into the term ‘text-capable machine’ undertaken here is intended to additionally illustrate that the widespread use of the term ‘AI’ for ‘artificial intelligence’ is rather misleading. So far, no general concept of ‘intelligence’ exists in any scientific discipline that can be applied and accepted beyond individual disciplines. There is no real justification for the almost inflationary use of the term AI today other than that the term has been so drained of meaning that it can be used anytime, anywhere, without saying anything wrong. Something that has no meaning can be neither ‘true’ nor ‘false’.

PREREQUISITES FOR TEXT GENERATION

If the homo-sapiens population is identified as the original actor of ‘text generation’ and ‘text comprehension’, it shall first be examined which ‘special characteristics’ enable a homo-sapiens population to generate and comprehend texts and to ‘use them successfully in the everyday life process’.

VALIDITY

A connecting point for the investigation of the special characteristics of a homo-sapiens text generation and a text understanding is the term ‘validity’, which occurs in the conference topic.

In the primary arena of biological life, in everyday processes, in everyday life, the ‘validity’ of a text has to do with ‘being correct’, with being ‘applicable’. If a text is not planned from the beginning with a ‘fictional character’, but with a ‘reference to everyday events’, which everyone can ‘check’ in the context of his ‘perception of the world’, then ‘validity in everyday life’ has to do with the fact that the ‘correctness of a text’ can be checked. If the ‘statement of a text’ is ‘applicable’ in everyday life, if it is ‘correct’, then one also says that this statement is ‘valid’, one grants it ‘validity’, one also calls it ‘true’. Against this background, one might be inclined to continue and say: ‘if’ the statement of a text ‘does not apply’, then it has ‘no validity’; simplified to the formulation that the statement is ‘not true’ or simply ‘false’.

In ‘real everyday life’, however, the world is rarely ‘black’ and ‘white’: it is not uncommon that we are confronted with texts to which we are inclined to ascribe ‘a possible validity’ because of their ‘learned meaning’, although it may not be at all clear whether there is – or will be – a situation in everyday life in which the statement of the text actually applies. In such a case, the validity would then be ‘indeterminate’; the statement would be ‘neither true nor false’.

ASYMMETRY: APPLICABLE – NOT APPLICABLE

One can recognize a certain asymmetry here: The ‘applicability’ of a statement, its actual validity, is comparatively clear. The ‘not being applicable’, i.e. a ‘merely possible’ validity, on the other hand, is difficult to decide.

With this phenomenon of the ‘current non-decidability’ of a statement we touch both the problem of the ‘meaning’ of a statement — how far is at all clear what is meant? — as well as the problem of the ‘unfinishedness of our everyday life’, better known as ‘future’: whether a ‘current present’ continues as such, whether exactly like this, or whether completely different, depends on how we understand and estimate ‘future’ in general; what some take for granted as a possible future, can be simply ‘nonsense’ for others.

MEANING

This tension between ‘currently decidable’ and ‘currently not yet decidable’ additionally clarifies an ‘autonomous’ aspect of the phenomenon of meaning: if a certain knowledge has been formed in the brain and has been made usable as ‘meaning’ for a ‘language system’, then this ‘associated’ meaning gains its own ‘reality’ for the scope of knowledge: it is not the ‘reality beyond the brain’, but the ‘reality of one’s own thinking’, whereby this reality of thinking ‘seen from outside’ has something like ‘being virtual’.

If one wants to talk about this ‘special reality of meaning’ in the context of the ‘whole system’, then one has to resort to far-reaching assumptions in order to be able to install a ‘conceptual framework’ on the meta-level which is able to sufficiently describe the structure and function of meaning. For this, the following components are minimally assumed (‘knowledge’, ‘language’ as well as ‘meaning relation’):

KNOWLEDGE: There is the totality of ‘knowledge’ that ‘builds up’ in the homo-sapiens actor in the course of time in the brain: both due to continuous interactions of the ‘brain’ with the ‘environment of the body’, as well as due to interactions ‘with the body itself’, as well as due to interactions ‘of the brain with itself’.

LANGUAGE: To be distinguished from knowledge is the dynamic system of ‘potential means of expression’, here simplistically called ‘language’, which can unfold over time in interaction with ‘knowledge’.

MEANING RELATIONSHIP: Finally, there is the dynamic ‘meaning relation’, an interaction mechanism that can link any knowledge elements to any language means of expression at any time.

Each of these mentioned components ‘knowledge’, ‘language’ as well as ‘meaning relation’ is extremely complex; no less complex is their interaction.
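
The three components can be illustrated with a minimal sketch. All names used here (`MeaningRelation`, `link`, `meanings`, the example concepts) are assumptions of this illustration, not notation used in the text: ‘knowledge’ is reduced to internal concept labels, ‘language’ to expression strings, and the ‘meaning relation’ to a dynamic many-to-many mapping that can link any knowledge element to any means of expression at any time.

```python
# Hedged sketch (illustrative names): the 'meaning relation' as a dynamic
# mapping between means of expression ('language') and knowledge elements.
from collections import defaultdict

class MeaningRelation:
    def __init__(self) -> None:
        # expression -> set of associated knowledge elements
        self._links = defaultdict(set)

    def link(self, expression: str, concept: str) -> None:
        """Dynamically associate a means of expression with a knowledge element."""
        self._links[expression].add(concept)

    def meanings(self, expression: str) -> set:
        """Return the knowledge elements currently linked to an expression."""
        return self._links[expression]

m = MeaningRelation()
m.link("tree", "TREE-CONCEPT")   # one knowledge element ...
m.link("Baum", "TREE-CONCEPT")   # ... linked to two different expressions
```

The point of the sketch is only that the relation is dynamic and many-to-many: new links can be added at any time, and nothing forces one expression to have exactly one meaning.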

FUTURE AND EMOTIONS

In addition to the phenomenon of meaning, it also became apparent in the phenomenon of being applicable that the decision of being applicable also depends on an ‘available everyday situation’ in which a current correspondence can be ‘concretely shown’ or not.

If, in addition to a ‘conceivable meaning’ in the mind, we do not currently have any everyday situation that sufficiently corresponds to this meaning in the mind, then there are always two possibilities: We can give the ‘status of a possible future’ to this imagined construct despite the lack of reality reference, or not.

If we would decide to assign the status of a possible future to a ‘meaning in the head’, then there arise usually two requirements: (i) Can it be made sufficiently plausible in the light of the available knowledge that the ‘imagined possible situation’ can be ‘transformed into a new real situation’ in the ‘foreseeable future’ starting from the current real situation? And (ii) Are there ‘sustainable reasons’ why one should ‘want and affirm’ this possible future?

The first requirement calls for a powerful ‘science’ that sheds light on whether it can work at all. The second demand goes beyond this and brings the seemingly ‘irrational’ aspect of ’emotionality’ into play under the garb of ‘sustainability’: it is not simply about ‘knowledge as such’, it is also not only about a ‘so-called sustainable knowledge’ that is supposed to contribute to supporting the survival of life on planet Earth — and beyond –, it is rather also about ‘finding something good, affirming something, and then also wanting to decide it’. These last aspects are so far rather located beyond ‘rationality’; they are assigned to the diffuse area of ’emotions’; which is strange, since any form of ‘usual rationality’ is exactly based on these ’emotions’.[2]

SCIENTIFIC DISCOURSE AND EVERYDAY SITUATIONS

In the context of ‘rationality’ and ’emotionality’ just indicated, it is not uninteresting that in the conference topic ‘scientific discourse’ is thematized as a point of reference to clarify the status of text-capable machines.

The question is to what extent a ‘scientific discourse’ can serve as a reference point for a successful text at all.

For this purpose it can help to be aware of the fact that life on this planet earth takes place at every moment in an inconceivably large amount of ‘everyday situations’, which all take place simultaneously. Each ‘everyday situation’ represents a ‘present’ for the actors. And in the heads of the actors there is an individually different knowledge about how a present ‘can change’ or will change in a possible future.

This ‘knowledge in the heads’ of the actors involved can generally be ‘transformed into texts’ which in different ways ‘linguistically represent’ some of the aspects of everyday life.

The crucial point is that it is not enough for everyone to produce a text ‘for himself’ alone, quite ‘individually’, but that everyone must produce a ‘common text’ together ‘with everyone else’ who is also affected by the everyday situation. A ‘collective’ performance is required.

Nor is it a question of ‘any’ text, but one that is such that it allows for the ‘generation of possible continuations in the future’, that is, what is traditionally expected of a ‘scientific text’.

From the extensive discussion — since the times of Aristotle — of what ‘scientific’ should mean, what a ‘theory’ is, what an ’empirical theory’ should be, I sketch what I call here the ‘minimal concept of an empirical theory’.

  1. The starting point is a ‘group of people’ (the ‘authors’) who want to create a ‘common text’.
  2. This text is supposed to have the property that it allows ‘justifiable predictions’ for possible ‘future situations’, to which then ‘sometime’ in the future a ‘validity can be assigned’.
  3. The authors are able to agree on a ‘starting situation’ which they transform by means of a ‘common language’ into a ‘source text’ [A].
  4. It is agreed that this initial text may contain only ‘such linguistic expressions’ which can be shown to be ‘true’ ‘in the initial situation’.
  5. In another text, the authors compile a set of ‘rules of change’ [V] that put into words ‘forms of change’ for a given situation.
  6. Also in this case it is considered as agreed that only ‘such rules of change’ may be written down, of which all authors know that they have proved to be ‘true’ in ‘preceding everyday situations’.
  7. The text with the rules of change V is on a ‘meta-level’ compared to the text A about the initial situation, which is on an ‘object-level’ relative to the text V.
  8. The ‘interaction’ between the text V with the change rules and the text A with the initial situation is described in a separate ‘application text’ [F]: Here it is described when and how one may apply a change rule (in V) to a source text A and how this changes the ‘source text A’ to a ‘subsequent text A*’.
  9. The application text F is thus on a next higher meta-level relative to the two texts A and V and describes how the application of the change rules transforms the source text A.
  10. The moment a new subsequent text A* exists, the subsequent text A* becomes the new initial text A.
  11. If the new initial text A is such that a change rule from V can be applied again, the generation of a new subsequent text A* is repeated.
  12. This ‘repeatability’ of the application can lead to the generation of many subsequent texts <A*1, …, A*n>.
  13. A series of many subsequent texts <A*1, …, A*n> is usually called a ‘simulation’.
  14. Depending on the nature of the source text A and the nature of the change rules in V, possible simulations ‘can go quite differently’. The set of possible scientific simulations thus represents ‘future’ not as a single, definite course, but as an ‘arbitrarily large set of possible courses’.
  15. The factors on which different courses depend are manifold. One factor is the authors themselves. Every author is, with his corporeality, himself part of that very empirical world which is to be described in a scientific theory. And, as is well known, any human actor can change his mind at any moment. He can literally do, in the next moment, exactly the opposite of what he thought before. And thus the world is already no longer the same as previously assumed in the scientific description.
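
The steps above can be condensed into a minimal executable sketch. All names here (`State`, `Rule`, `simulate`, the toy `toggle` rule) are assumptions of this illustration: the source text A is modeled as a set of statements, each change rule in V as a function on such sets, and the application text F as a loop that produces the subsequent texts A*1, …, A*n of a ‘simulation’.

```python
# Hedged sketch of the A/V/F machinery described above (illustrative names).
from typing import Callable, FrozenSet, List

State = FrozenSet[str]           # a 'source text' A: a set of true statements
Rule = Callable[[State], State]  # a 'change rule' from the rule text V

def simulate(a: State, rules: List[Rule], steps: int) -> List[State]:
    """The 'application text' F: repeatedly apply the rules to A,
    collecting the sequence of subsequent texts (the 'simulation')."""
    trace = [a]
    for _ in range(steps):
        a_next = a
        for rule in rules:
            a_next = rule(a_next)
        if a_next == a:          # no rule changes the text any more
            break
        trace.append(a_next)
        a = a_next               # the subsequent text A* becomes the new A
    return trace

# Toy change rule: a lamp that toggles its state in each step.
def toggle(a: State) -> State:
    return frozenset({"lamp off"}) if "lamp on" in a else frozenset({"lamp on"})

trace = simulate(frozenset({"lamp on"}), [toggle], steps=3)
```

With a different source text A or a different rule set V the same `simulate` loop produces different traces, which is exactly the sense in which the set of possible simulations spans many possible courses of the future rather than a single one.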

Even this simple example shows that the emotionality of ‘finding good, wanting, and deciding’ lies ahead of the rationality of scientific theories. This continues in the so-called ‘sustainability discussion’.

SUSTAINABLE EMPIRICAL THEORY

With the ‘minimal concept of an empirical theory (ET)’ just introduced, a ‘minimal concept of a sustainable empirical theory (NET)’ can also be introduced directly.

While an empirical theory can span an arbitrarily large space of grounded simulations that make the space of many possible futures visible, everyday actors are left with the question of what, out of all this, they want as ‘their future’. In the present we experience a situation in which mankind gives the impression of agreeing to destroy, ever more lastingly, the life beyond the human population, with the expected effect of ‘self-destruction’.

However, this self-destruction effect, which can be predicted in outline, is only one variant in the space of possible futures. Empirical science can indicate it in outline. To distinguish this variant before others, to accept it as ‘good’, to ‘want’ it, to ‘decide’ for this variant, lies in that so far hardly explored area of emotionality as root of all rationality.[2]

If everyday actors have decided in favor of a certain rationally lightened variant of possible future, then they can evaluate at any time with a suitable ‘evaluation procedure (EVAL)’ how much ‘percent (%) of the properties of the target state Z’ have been achieved so far, provided that the favored target state is transformed into a suitable text Z.
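
A minimal sketch of such an evaluation procedure, under the simplifying assumption (made only for this illustration) that both the current situation and the target text Z are modeled as sets of statement strings; the name `eval_progress` and the example statements are hypothetical:

```python
# Hedged sketch of EVAL: percent of the properties of target state Z
# that are already present in the current situation text.

def eval_progress(current: frozenset, target: frozenset) -> float:
    """Return how many percent of the target properties are achieved."""
    if not target:
        return 100.0            # an empty target is trivially fulfilled
    achieved = len(current & target)
    return 100.0 * achieved / len(target)

# Toy example: two of the four target properties hold in the current state.
current = frozenset({"emissions reduced", "forest restored", "lamp on"})
target = frozenset({"emissions reduced", "forest restored",
                    "rivers clean", "soil healthy"})
print(eval_progress(current, target))  # → 50.0
```

The substantive work, of course, lies not in this arithmetic but in transforming the favored target state into a suitable text Z in the first place, which is exactly the emotional, pre-rational step the text describes.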

In other words, the moment we have transformed everyday scenarios into a rationally tangible state via suitable texts, things take on a certain clarity and thereby become — in a sense — simple. That we make such transformations and on which aspects of a real or possible state we then focus is, however, antecedent to text-based rationality as an emotional dimension.[2]

MAN-MACHINE

After these preliminary considerations, the final question is whether and how the main question of this conference, “How do AI text generators change scientific discourse?”, can be answered at all.

My previous remarks have attempted to show what it means for humans to collectively generate texts that meet the criteria of scientific discourse and, beyond that, the requirements of empirical or even sustainable-empirical theories.

In doing so, it becomes apparent that both in the generation of a collective scientific text and in its application in everyday life, a close interrelation with both the shared experiential world and the dynamic knowledge and meaning components in each actor plays a role.

The aspect of ‘validity’ is part of a dynamic world reference whose assessment as ‘true’ is constantly in flux; while one actor may tend to say “Yes, can be true”, another actor may just tend to the opposite. While some may tend to favor possible future option X, others may prefer future option Y. Rational arguments are absent; emotions speak. While one group has just decided to ‘believe’ and ‘implement’ plan Z, the others turn away, reject plan Z, and do something completely different.

This unsteady, uncertain character of interpreting and acting on the future has accompanied the homo-sapiens population from the very beginning. The poorly understood emotional complex constantly accompanies everyday life like a shadow.

Where and how can ‘text-enabled machines’ make a constructive contribution in this situation?

Assuming that there is a source text A, a change text V and an instruction F, today’s algorithms could calculate all possible simulations faster than humans could.

Assuming that there is also a target text Z, today’s algorithms could also compute an evaluation of the relationship between a current situation as A and the target text Z.

In other words: if an empirical or a sustainable-empirical theory were formulated with its necessary texts, then a present-day algorithm could automatically compute all possible simulations and the degree of target fulfillment faster than any human alone.

But what about (i) the elaboration of a theory or (ii) the pre-rational decision for a certain empirical or even sustainable-empirical theory?

A clear answer to both questions seems hardly possible to me at the present time, since we humans still understand too little how we ourselves collectively form, select, check, compare and also reject theories in everyday life.

My working hypothesis on the subject is that we will very much need learning-capable machines in order to fulfill the task of developing useful sustainable empirical theories for our common everyday life in the future. But when this will happen in reality, and to what extent, seems largely unclear to me at this point in time.[2]

COMMENTS

[1] https://zevedi.de/en/topics/ki-text-2/

[2] Talking about ’emotions’ in the sense of ‘factors in us’ that move us to go from the state ‘before the text’ to the state ‘written text’ hints at very many aspects. In a small exploratory text “State Change from Non-Writing to Writing. Working with chatGPT4 in parallel” ( https://www.uffmm.org/2023/08/28/state-change-from-non-writing-to-writing-working-with-chatgpt4-in-parallel/ ) the author has tried to address some of these aspects. While writing, it becomes clear that very many ‘individually subjective’ aspects play a role here, which of course do not appear ‘isolated’ but always flash up a reference to concrete contexts linked to the topic. Nevertheless, it is not the ‘objective context’ that forms the core statement, but the ‘individually subjective’ component that appears in the process of ‘putting into words’. This individually subjective component is tentatively used here as a criterion for ‘authentic texts’ in comparison to ‘automated texts’ like those that can be generated by all kinds of bots. In order to make this difference more tangible, the author decided to create an ‘automated text’ on the same topic at the same time as the quoted authentic text. For this purpose he used chatGPT4 from OpenAI. This is the beginning of a philosophical-literary experiment, perhaps to make the possible difference more visible in this way. For purely theoretical reasons, it is clear that chatGPT4 can never generate ‘authentic texts’ in origin, unless it uses as a template an authentic text that it can modify. But then this would be a clear ‘fake document’. To prevent such an abuse, the author writes the authentic text first and then asks chatGPT4 to write something about the given topic, without chatGPT4 knowing the authentic text, because it has not yet found its way into the database of chatGPT4 via the Internet.