Category Archives: Knowledge

The Invasion of the Storytellers

Author: Gerd Doeben-Henisch

Changelog: April 30, 2024 – May 3, 2024

May 3, 2024: I added two epilogs.


TRANSLATION: The following text is a translation from a German version into English. For the translation I am using the software @chatGPT4 with manual modifications.


Originally I wrote that "this text is not a direct continuation of another text, but various earlier articles by the author exist on similar topics. In this sense, the current text is a kind of 'further development' of these ideas." But, indeed, at least the text "NARRATIVES RULE THE WORLD. CURSE & BLESSING. COMMENTS FROM @CHATGPT4" ( ) can be understood as a kind of precursor.

In everyday life … magical links …

Almost everyone knows someone (or even several people) who sends many emails or other messages that contain only links: links to the countless videos the internet now provides, or images with a few keywords.

Since time is often short, one would like to know whether it is worth clicking on the video. But explanatory information is missing.

When asked whether it would not be possible to include a few explanatory words, the sender almost always replies that they cannot put it as well as the video itself.

Interesting: Someone sends a link to a video without being able to express their opinion about it in their own words…

Follow-up questions…

When I click on a link and try to form an opinion, one of the first questions is naturally who published the video (or text). The same set of facts can be narrated quite differently, even in complete contradiction, depending on the observer's perspective, as everyday life makes evident and verifiable. And since what we can perceive with our senses is always only very fragmentary, stays attached to surfaces, and is tied to a particular moment in time, it does not by itself reveal its relationships to other aspects. This vagueness offers plenty of room for interpretation with each observation. Without a thorough consideration of the context and the backstory, interpretation is simply not possible, unless someone already has a 'finished opinion' that 'integrates' the 'involuntary fragment of observation' without hesitation.

So questioning and researching are quite 'normal', but our 'quick brain' first seeks 'automatic answers': this requires little thought, is faster, uses less energy, and despite everything this 'automatic interpretation' still provides a 'satisfying feeling': yes, one 'knows exactly what is presented'. So why question?


As a scientist, I am trained to clarify all framework conditions, including my own assumptions. Of course, this takes effort and time and is anything but error-free. Hence, multiple checks, inquiries with others about their perspectives, etc. are a common practice.

However, when I ask the 'wordless senders of links' about something that catches my attention, especially when I point out a conflict with the reality I know, the reactions vary: I have misunderstood, or the author did not mean it that way at all. If I then refer to other sources that are considered 'strongly verified', these are labeled 'lying press', or their authors are immediately exposed as 'agents of a dark power' (there is a whole range of such 'dark powers'). And if I dare to ask here as well where the information comes from, I quickly become a naive, stupid person for not knowing all this.

So, any attempt to clarify the basics of statements, to trace them back to comprehensible facts, ends in some kind of conflict long before any clarification has been realized.

Truth, Farewell…

Even in philosophy, the topic of 'truth' has unfortunately become no more than a repository of competing proposals. And the modern sciences, though fundamentally empirical, increasingly entangle themselves in the multitude of their disciplines and methods, so that 'integrative perspectives' are rare and the 'average citizen' struggles to follow. Not a good starting point for effectively preventing the spread of the 'cognitive fairy tale virus'.

Democracy and the Internet as a Booster

The bizarre aspect of our current situation is that precisely the two most significant achievements of humanity, the societal form of 'modern democracy' (about 250 years old, in a history of about 300,000 years) and the technology of the 'internet' (browser-based since about 1993), which for the first time have made a maximum of freedom and diversity of expression possible, are the very achievements that have now created the conditions for the cognitive fairy tale virus to spread so unrestrainedly.

Important: today’s cognitive fairy tale virus occurs in the context of ‘freedom’! In previous millennia, the cognitive fairy tale virus already existed, but it was under the control of the respective authoritarian rulers, who used it to steer the thoughts and feelings of their subjects in their favor. The ‘ambiguities’ of meanings have always allowed almost all interpretations; and if a previous fairy tale wasn’t enough, a new one was quickly invented. As long as control by reality is not really possible, anything can be told.

With the emergence of democracy, the authoritarian power structures disappeared, but the people who were allowed and supposed to vote were ultimately the same as before in authoritarian regimes. Who really has the time and desire to deal with the complicated questions of the real world, especially if it doesn’t directly affect oneself? That’s what our elected representatives are supposed to do…

In the (seemingly) quiet years since World War II, the division of tasks seemed to work well: here the citizens delegating everything, and there the elected representatives who do everything right. ‘Control’ of power was supposed to be guaranteed through constitution, judiciary, and through a functioning public…

But what was not foreseen were such trifles as:

  1. The increase in population and the advancement of technologies induced ever more complex processes with equally complex interactions that could no longer be adequately managed with the usual methods from the past. Errors and conflicts were inevitable.
  2. Delegating to a few elected representatives with ‘normal abilities’ can only work if these few representatives operate within contexts that provide them with all the necessary competencies their office requires. This task seems to be increasingly poorly addressed.
  3. The important ‘functioning public’ has been increasingly fragmented by the tremendous possibilities of the internet: there is no longer ‘the’ public, but many publics. This is not inherently bad, but when the available channels are attracting the ‘quick and convenient brain’ like light attracts mosquitoes, then heads increasingly fall into the realm of ‘cognitive viruses’ that, after only short ‘incubation periods,’ take possession of a head and control it from there.

The effects of these three factors have been clearly observable for several years now: the unresolved problems of society, increasingly poorly addressed by the existing democratic-political system, lead individual people in everyday situations to interpret their dissatisfaction and fears more and more exclusively under the influence of the cognitive fairy tale virus and to act accordingly. This gradually worsens the situation, as the constructive capacities for problem analysis and the collective strength for problem-solving diminish more and more.

No remedies available?

Looking back over the thousands of years of human history, it’s evident that ‘opinions’, ‘views of the world’, have always only harmonized with the real world in limited areas, where it was important to survive. But even in these small areas, for millennia, there were many beliefs that were later found to be ‘wrong’.

Very early on, we humans mastered the art of telling ourselves stories about how everything is connected. These were eagerly listened to, they were believed, and only much later could one sometimes recognize what was entirely or partially wrong about the earlier stories. But in their lifetimes, for those who grew up with these stories, these tales were ‘true’, made ‘sense’, people even went to their deaths for them.

Only at the very end of humanity's previous development (the life form of Homo sapiens), that is, with 300,000 years compressed into 24 hours, after roughly 23 hours and 58 minutes, did humans discover with the empirical sciences a method of obtaining 'true knowledge' that not only works for the moment but allows us to look millions, even billions of years 'back in time', and for many factors billions of years into the future. With this, science can delve into the deepest depths of matter and increasingly understand the complex interplay of all the wonderful factors.

And just at this moment of humanity’s first great triumphs on the planet Earth, the cognitive fairy tale virus breaks out unchecked and threatens even to completely extinguish modern sciences!

Which people on this planet can resist this cognitive fairy tale virus?

Here is a recent message from Uppsala University [1,2], reporting an experiment by Swedish scientists with students, which showed that it is possible to measurably sharpen students' awareness of 'fake news' (here: the cognitive fairy tale virus).

Yes, we know that young people can, through appropriate education, shape their awareness to be better equipped against the cognitive fairy tale virus. But what happens when official educational institutions are not able to provide the necessary education, either because the teachers cannot conduct such knowledge therapy, or because the teachers could do it but the institutions do not allow it? The latter cases are known, even in so-called democracies!

Epilog 1

The following working hypotheses are emerging:

  1. The fairy tale virus, the unrestrained inclination to tell stories (uncontrolled), is genetically ingrained in humans.
  2. Neither intelligence nor so-called ‘academic education’ automatically protect against it.
  3. ‘Critical thinking’ and ‘empirical science’ are special qualities that people can acquire only with great personal commitment. A society must offer minimal conditions for these qualities; without them, acquiring them is not possible.
  4. Active democracies seem to be able to contain the fairy tale virus to about 15-20% of societal practice (although it is always present in people). As soon as the percentage of active storytellers perceptibly increases, it must be assumed that the concept of ‘democracy’ is increasingly weakening in societal practice — for various reasons.

Epilog 2

Anyone actively affected by the fairy tale virus has a view of the world, of themselves, and of others, that has so little to do with the real world ‘out there’, beyond their own thinking, that real events no longer influence their own thinking. They live in their own ‘thought bubble’. Those who have learned to think ‘critically and scientifically’ have acquired techniques and apply them that repeatedly subject their thinking within their own bubble to a ‘reality check’. This check is not limited to specific events or statements… and that’s where it gets difficult.


[1] Here’s the website of Uppsala University, Sweden, where the researchers come from:

[2] And here’s the full scientific article with open access: “Bad News in the civics classroom: How serious gameplay fosters teenagers’ ability to discern misinformation techniques.” Carl-Anton Werner Axelsson, Thomas Nygren, Jon Roozenbeek & Sander van der Linden, Received 26 Sep 2023, Accepted 29 Mar 2024, Published online: 19 Apr 2024:

TRUTH AND MEANING – As a collective achievement

Author: Gerd Doeben-Henisch

Time: Jan 8, 2024 – Jan 8, 2024 (10:00 a.m. CET)


TRANSLATION: The following text is a translation from a German version into English. For the translation I am using translation software as well as chatGPT 4.


This text is a direct continuation of the text There exists only one big Problem for the Future of Human Mankind: The Belief in false Narratives.


There exists only one big Problem for the Future of Human Mankind: The Belief in false Narratives

Author: Gerd Doeben-Henisch

Time: Jan 5, 2024 – Jan 8, 2024 (09:45 a.m. CET)


TRANSLATION: The following text is a translation from a German version into English. For the translation I am using translation software as well as chatGPT 4. The English version is a slightly revised version of the German text.

This blog entry will be completed today. However, it has laid the foundations for considerations that will be pursued further in a new blog entry.


This text belongs to the topic Philosophy (of Science).


For several reasons I began investigating the phenomenon of 'propaganda' in order to sharpen my understanding. My strategy was first to characterize the phenomenon of 'general communication' so as to find 'harder criteria' that would allow the concept of 'propaganda' to stand out against this general background in a somewhat comprehensible way.

The realization of this goal then actually led to an ever more fundamental examination of our normal (human) communication, so that forms of propaganda become recognizable as ‘special cases’ of our communication. The worrying thing about this is that even so-called ‘normal communication’ contains numerous elements that can make it very difficult to recognize and pass on ‘truth’ (*). ‘Massive cases of propaganda’ therefore have their ‘home’ where we communicate with each other every day. So if we want to prevent propaganda, we have to start in everyday life.

(*) The concept of ‘truth’ is examined and explained in great detail in the following long text below. Unfortunately, I have not yet found a ‘short formula’ for it. In essence, it is about establishing a connection to ‘real’ events and processes in the world – including one’s own body – in such a way that they can, in principle, be understood and verified by others.


However, it becomes difficult when there is enough political power to set the social framework conditions in such a way that, for the individual in everyday life, the citizen, general communication is more or less prescribed, 'dictated'. Then 'truth' becomes ever scarcer or even non-existent. A society is then 'programmed' for its own downfall through the suppression of truth. ([3], [6])

The hour of narratives

But – and this is the far more dangerous form of ‘propaganda’ ! – even if there is not a nationwide apparatus of power that prescribes certain forms of ‘truth’, a mutilation or gross distortion of truth can still take place on a grand scale. Worldwide today, in the age of mass media, especially in the age of the internet, we can see that individuals, small groups, special organizations, political groups, entire religious communities, in fact all people and their social manifestations, follow a certain ‘narrative’ [*11] when they act.

Typical of acting according to a narrative is that those who do so individually believe that it is 'their own decision', that their narrative is 'true', and that they are therefore 'in the right' when they act accordingly. This 'feeling of being right' can go as far as claiming the right to kill others because they 'act wrongly' in the light of one's own 'narrative'. We should therefore speak here of a 'narrative truth': within the framework of the narrative, a picture of the world is drawn that 'as a whole' enables a perspective that its followers 'find good', that 'makes sense' to them. Normally the effect of a narrative experienced as 'meaningful' is so great that its 'truth content' is no longer examined in detail.


This has existed at all times in the history of mankind. Narratives that appeared as ‘religious beliefs’ were particularly effective. It is therefore no coincidence that almost all governments of the last millennia have adopted religious beliefs as state doctrines; an essential component of religious beliefs is that they are ‘unprovable’, i.e. ‘incapable of truth’. This makes a religious narrative a wonderful tool in the hands of the powerful to motivate people to behave in certain ways without the threat of violence.


In recent decades, however, we have experienced new, ‘modern forms’ of narratives that do not come across as religious narratives, but which nevertheless have a very similar effect: People perceive these narratives as ‘giving meaning’ in a world that is becoming increasingly confusing and therefore threatening for everyone today. Individual people, the citizens, also feel ‘politically helpless’, so that – even in a ‘democracy’ – they have the feeling that they cannot directly influence anything: the ‘people up there’ do what they want. In such a situation, ‘simplistic narratives’ are a blessing for the maltreated soul; you hear them and have the feeling: yes, that’s how it is; that’s exactly how I ‘feel’!

Such 'popular narratives', which enable 'good feelings', are gaining ever greater power. What they have in common with religious narratives is that the 'followers' of popular narratives no longer ask the 'question of truth'; most are also not sufficiently 'trained' to be able to clarify the truth of a narrative at all. It is typical of supporters of narratives that they are generally hardly able to explain their own narrative to others. They typically send each other links to texts or videos that they find 'good' because these somehow seem to support the popular narrative, and they tend not to check the authors and sources because, in the eyes of the followers, these are such 'decent people' who always say exactly the 'same thing' as the 'popular narrative' dictates.


If you now take into account that the 'world of narratives' is an extremely tempting offer for all those who have, or would like to gain, power over people, then it should come as no surprise that many governments and many other power groups in this world are doing just that today: they do not try to coerce people 'directly', but 'produce' popular narratives, or 'monitor' already existing ones, in order to gain power over the hearts and minds of more and more people via the detour of these narratives. Some speak here of 'hybrid warfare', others of 'modern propaganda', but ultimately, I suspect, these terms miss the core of the problem.

The ‘irrational’ defends itself against the ‘rational’

The core of the problem is the way in which human communities have always organized their collective action, namely through narratives; we humans have no other option. However, such narratives, as the considerations further down in the text will show, are extremely susceptible to 'falsity', to a 'distortion of the picture of the world'. In the context of the development of legal systems, approaches have been developed over at least the last 7,000 years to curb the abuse of power in a society by supporting truth-preserving mechanisms. Gradually, this has certainly helped, with all the deficits that still exist today. Additionally, about 500 years ago, a real revolution took place: with the concept of a 'verifiable narrative (empirical theory)', humanity managed to find a format that optimized the 'preservation of truth' and minimized the slide into untruth. This new concept of 'verifiable truth' has enabled great insights that were previously beyond imagination.

The ‘aura of the scientific’ has meanwhile permeated almost all of human culture, almost! But we have to realize that although scientific thinking has comprehensively shaped the world of practicality through modern technologies, the way of scientific thinking has not overridden all other narratives. On the contrary, the ‘non-truth narratives’ have become so strong again that they are pushing back the ‘scientific’ in more and more areas of our world, patronizing it, forbidding it, eradicating it. The ‘irrationality’ of religious and popular narratives is stronger than ever before. ‘Irrational narratives’ are for many so appealing because they spare the individual from having to ‘think for themselves’. Real thinking is exhausting, unpopular, annoying and hinders the dream of a simple solution.


Against this backdrop, the widespread inability of people to recognize and overcome ‘irrational narratives’ appears to be the central problem facing humanity in mastering the current global challenges. Before we need more technology (we certainly do), we need more people who are able and willing to think more and better, and who are also able to solve ‘real problems’ together with others. Real problems can be recognized by the fact that they are largely ‘new’, that there are no ‘simple off-the-shelf’ solutions for them, that you really have to ‘struggle’ together for possible insights; in principle, the ‘old’ is not enough to recognize and implement the ‘true new’, and the future is precisely the space with the greatest amount of ‘unknown’, with lots of ‘genuinely new’ things.

The following text examines this view in detail.



As mentioned in the introduction, the trigger for me to write this text was the confrontation with a popular book that appeared to me as a piece of 'propaganda'. When I tried to describe my opinion in my own words, I discovered that I had some difficulties: what is the difference between 'propaganda' and 'everyday communication'? This forced me to think a little more about the ingredients of 'everyday communication' and about where and why a 'communication' differs from our 'everyday communication'. As usual at the beginning of such a discussion, I took a first look at the various entries in Wikipedia (German and English). The entry in the English Wikipedia on 'Propaganda' [1b] attempts a very similar strategy: looking at 'normal communication' and, against this background, at the phenomenon of 'propaganda', albeit with not quite sharp contours. However, it provides a broad overview of various forms of communication, including those forms that are 'special' ('biased'), i.e. that do not reflect the content to be communicated in the way one would reproduce it according to 'objective, verifiable criteria'.[*0] The variety of examples, however, suggests that it is not easy to distinguish between 'special' and 'normal' communication: what then are these 'objective, verifiable criteria'? Who defines them?

Assuming for a moment that it is clear what these ‘objectively verifiable criteria’ are, one can tentatively attempt a working definition for the general (normal?) case of communication as a starting point:

Working Definition:

The general case of communication could be tentatively described as a simple attempt by one person – let’s call them the ‘author’ – to ‘bring something to the attention’ of another person – let’s call them the ‘interlocutor’. We tentatively call what is to be brought to their attention ‘the message’. We know from everyday life that an author can have numerous ‘characteristics’ that can affect the content of his message.

Here is a short list of properties that characterize the author’s situation in a communication. Then corresponding properties for the interlocutor.

The Author:

  1. The available knowledge of the author — both conscious and unconscious — determines the kind of message the author can create.
  2. His ability to discern truth determines whether and to what extent he can differentiate what in his message is verifiable in the real world — present or past — as ‘accurate’ or ‘true’.
  3. His linguistic ability determines whether and how much of his available knowledge can be communicated linguistically.
  4. The world of emotions decides whether he wants to communicate anything at all, for example, when, how, to whom, how intensely, how conspicuously, etc.
  5. The social context can affect whether he holds a certain social role, which dictates when he can and should communicate what, how, and with whom.
  6. The real conditions of communication determine whether a suitable ‘medium of communication’ is available (spoken sound, writing, sound, film, etc.) and whether and how it is accessible to potential interlocutors.
  7. The author’s physical constitution decides how far and to what extent he can communicate at all.

The Interlocutor:

  1. In general, the characteristics that apply to the author also apply to the interlocutor. However, some points can be particularly emphasized for the role of the interlocutor:
  2. The available knowledge of the interlocutor determines which aspects of the author’s message can be understood at all.
  3. The ability of the interlocutor to discern truth determines whether and to what extent he can also differentiate what in the conveyed message is verifiable as ‘accurate’ or ‘true’.
  4. The linguistic ability of the interlocutor affects whether and how much of the message he can absorb purely linguistically.
  5. Emotions decide whether the interlocutor wants to take in anything at all, for example, when, how, how much, with what inner attitude, etc.
  6. The social context can also affect whether the interlocutor holds a certain social role, which dictates when he can and should communicate what, how, and with whom.
  7. Furthermore, it can be important whether the communication medium is so familiar to the interlocutor that he can use it sufficiently well.
  8. The physical constitution of the interlocutor can also determine how far and to what extent the interlocutor can communicate at all.

Even this small selection of factors shows how diverse the situations are in which 'normal communication' can take on a 'special character' due to the 'effect of different circumstances'. For example, an actually 'harmless greeting' can, in certain roles, lead to a social problem with many different consequences. A seemingly 'normal report' can become a problem because the interlocutor misunderstands the message purely linguistically. A 'factual report' can have an emotional impact on the interlocutor due to the way it is presented, which can lead to them enthusiastically accepting the message or, on the contrary, vehemently rejecting it. Or, if the author has a tangible interest in persuading the interlocutor to behave in a certain way, a situation may not be presented in a 'purely factual' way; instead, many aspects may be communicated that seem suitable to the author to persuade the interlocutor to perceive the situation in a certain way and to adopt it accordingly. These 'additional' aspects can refer to many real circumstances of the communication situation beyond the pure message.

Types of communication …

Given this potential 'diversity', the question arises whether it is even possible to define something like 'normal communication'.

In order to answer this question meaningfully, one would need a kind of 'overview' of all possible combinations of the properties of the author (1-7) and the interlocutor (1-8), and one would also have to be able to evaluate each of these possible combinations with a view to 'normality'.

It should be noted that the two lists of properties author (1-7) and interlocutor (1-8) have a certain ‘arbitrariness’ attached to them: you can build the lists as they have been constructed here, but you don’t have to.

This is related to the general way in which we humans think: on one hand, we have ‘individual events that happen’ — or that we can ‘remember’ —, and on the other hand, we can ‘set’ ‘arbitrary relationships’ between ‘any individual events’ in our thinking. In science, this is called ‘hypothesis formation’. Whether or not such formation of hypotheses is undertaken, and which ones, is not standardized anywhere. Events as such do not enforce any particular hypothesis formations. Whether they are ‘sensible’ or not is determined solely in the later course of their ‘practical use’. One could even say that such hypothesis formation is a rudimentary form of ‘ethics’: the moment one adopts a hypothesis regarding a certain relationship between events, one minimally considers it ‘important’, otherwise, one would not undertake this hypothesis formation.

In this respect, it can be said that ‘everyday life’ is the primary place for possible working hypotheses and possible ‘minimum values’.

The following diagram demonstrates a possible arrangement of the characteristics of the author and the interlocutor:

FIGURE: Overview of the possible overlaps of knowledge between the author and the interlocutor, if each can have any knowledge at their disposal.

What is easy to recognize is the fact that an author can naturally have a constellation of knowledge that draws on an almost ‘infinite number of possibilities’. The same applies to the interlocutor. In purely abstract terms, the number of possible combinations is ‘virtually infinite’ due to the assumptions about the properties Author 1 and Interlocutor 2, which ultimately makes the question of ‘normality’ at the abstract level undecidable.
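The 'virtually infinite' number of combinations can be made concrete with a toy calculation. This is purely illustrative: the assumption that each property takes only a small number of discrete values is mine, not the author's; the real properties (knowledge, emotions, social context, etc.) are of course continuous and far richer.

```python
# Toy illustration of the combinatorial explosion of communication
# constellations. Hypothetical assumption: each of the author's 7
# properties and the interlocutor's 8 properties can take only
# k = 10 discrete values, a gross simplification of reality.

k = 10                       # hypothetical number of values per property
author_props = 7             # properties Author 1-7
interlocutor_props = 8       # properties Interlocutor 1-8

constellations = k ** (author_props + interlocutor_props)
print(constellations)        # prints 1000000000000000
```

Even under this crude discretization there are already 10^15 possible constellations, which is why no single constellation can plausibly serve as 'the norm' of communication.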

However, since both authors and interlocutors are not spherical beings from some abstract angle of possibilities, but are usually ‘concrete people’ with a ‘concrete history’ in a ‘concrete life-world’ at a ‘specific historical time’, the quasi-infinite abstract space of possibilities is narrowed down to a finite, manageable set of concretes. Yet, even these can still be considerably large when related to two specific individuals. Which person, with their life experience from which area, should now be taken as the ‘norm’ for ‘normal communication’?

It seems more likely that individual people are somehow ‘typified’, for example, by age and learning history, although a ‘learning history’ may not provide a clear picture either. Graduates from the same school can — as we know — possess very different knowledge afterwards, even though commonalities may be ‘minimally typical’.

Overall, the approach based on the characteristics of the author and the interlocutor does not seem to provide really clear criteria for a norm, even though a specification such as ‘the humanistic high school in Hadamar (a small German town) 1960 – 1968’ would suggest rudimentary commonalities.

One could now try to include the further characteristics of Author 2-7 and Interlocutor 3-8 in the considerations, but the ‘construction of normal communication’ seems to lead more and more into an unclear space of possibilities based on the assumptions of Author 1 and Interlocutor 2.

What does this mean for the typification of communication as ‘propaganda’? Isn’t ultimately every communication also a form of propaganda, or is there a possibility to sufficiently accurately characterize the form of ‘propaganda’, although it does not seem possible to find a standard for ‘normal communication’? … or will a better characterization of ‘propaganda’ indirectly provide clues for ‘non-propaganda’?

TRUTH and MEANING: Language as Key

The spontaneous attempt to clarify the meaning of the term ‘propaganda’ to the extent that one gets a few constructive criteria for being able to characterize certain forms of communication as ‘propaganda’ or not, gets into ever ‘deeper waters’. Are there now ‘objective verifiable criteria’ that one can work with, or not? And: Who determines them?

Let us temporarily stick to working hypothesis 1, that we are dealing with an author who articulates a message for an interlocutor, and let us expand this working hypothesis by the following addition 1: such communication always takes place in a social context. This means that the perception and knowledge of the individual actors (author, interlocutor) can continuously interact with this social context or ‘automatically interacts’ with it. The latter is because we humans are built in such a way that our body with its brain just does this, without ‘us’ having to make ‘conscious decisions’ for it.[*1]

For this section, I would like to extend the previous working hypothesis 1 together with supplement 1 by a further working hypothesis 2 (localization of language) [*4]:

  1. Every medium (language, sound, image, etc.) can contain a ‘potential meaning’.
  2. When creating the media event, the ‘author’ may attempt to ‘connect’ possible ‘contents’ that are to be ‘conveyed’ by him with the medium (‘putting into words/sound/image’, ‘encoding’, etc.). This ‘assignment’ of meaning occurs both ‘unconsciously/automatically’ and ‘(partially) consciously’.
  3. In perceiving the media event, the ‘interlocutor’ may try to assign a ‘possible meaning’ to this perceived event. This ‘assignment’ of meaning also happens both ‘unconsciously/automatically’ and ‘(partially) consciously’.
  4. The assignment of meaning requires both the author and the interlocutor to have undergone ‘learning processes’ (usually years, many years) that have made it possible to link certain ‘events of the external world’ as well as ‘internal states’ with certain media events.
  5. The ‘learning of meaning relationships’ always takes place in social contexts, since a medial structure that is meant to ‘convey meaning’ between people must be shared by everyone involved in the communication process.
  6. Those medial elements that are actually used for the ‘exchange of meanings’ all together form what is called a ‘language’: the ‘medial elements themselves’ form the ‘surface structure’ of the language, its ‘sign dimension’, and the ‘inner states’ in each ‘actor’ involved, form the ‘individual-subjective space of possible meanings’. This inner subjective space comprises two components: (i) the internally available elements as potential meaning content and (ii) a dynamic ‘meaning relationship’ that ‘links’ perceived elements of the surface structure and the potential meaning content.
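
The ‘surface structure’ and the ‘individual-subjective space of possible meanings’ from point 6 can be sketched in a few lines of Python. This is only a toy illustration under working hypothesis 2, not a cognitive model; all names and example contents are invented:

```python
class Actor:
    """Toy model of an actor's individual-subjective meaning space."""

    def __init__(self):
        # (i) internally available elements as potential meaning contents
        self.contents: set[str] = set()
        # (ii) a dynamic meaning relation linking surface signs to contents
        self.meaning: dict[str, str] = {}

    def learn(self, sign: str, content: str) -> None:
        """A learning episode links a surface element with an inner content.
        Later episodes may readjust the link: the relation stays dynamic."""
        self.contents.add(content)
        self.meaning[sign] = content

    def interpret(self, sign: str):
        """Assign a possible meaning to a perceived surface element
        (None if the sign was never learned)."""
        return self.meaning.get(sign)


# Two actors share only the surface element "rain"; their inner spaces differ.
a, b = Actor(), Actor()
a.learn("rain", "water falling from clouds")
b.learn("rain", "reason to take an umbrella")
```

Here `a.interpret("rain")` and `b.interpret("rain")` yield different inner contents: communication via the shared surface structure succeeds only insofar as the two learning histories overlap.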

To answer the guiding question of whether one can “characterize certain forms of communication as ‘propaganda’ or not,” one needs ‘objective, verifiable criteria’ on the basis of which a statement can be formulated. This question in turn raises the question of whether there are ‘objective criteria’ in ‘normal everyday dialogue’ that we can use in everyday life to decide collectively whether a ‘claimed fact’ is ‘true’ or not; it is in this context that the word ‘true’ is used. Can this be defined a bit more precisely?

For this I propose an additional working hypothesis 3:

  1. At least two actors can agree that a certain meaning, associated with the media construct, exists as a sensorily perceivable fact in such a way that they can agree that the ‘claimed fact’ is indeed present. Such a specific occurrence should be called ‘true 1’ or ‘Truth 1’. A ‘specific occurrence’ can change at any time and quickly due to the dynamics of the real world (including the actors themselves), for example: the rain stops, the coffee cup is empty, the car from before is gone, the empty sidewalk is occupied by a group of people, etc.
  2. At least two actors can agree that a certain meaning, associated with the media construct, is currently not present as a real fact. Referring to the current situation of ‘non-occurrence,’ one would say that the statement is ‘false 1’; the claimed fact does not actually exist contrary to the claim.
  3. At least two actors can agree that a certain meaning, associated with the media construct, is currently not present, but based on previous experience, it is ‘quite likely’ to occur in a ‘possible future situation.’ This aspect shall be called ‘potentially true’ or ‘true 2’ or ‘Truth 2.’ Should the fact then ‘actually occur’ at some point in the future, Truth 2 would transform into Truth 1.
  4. At least two actors can agree that a certain meaning associated with the media construct does not currently exist and that, based on previous experience, it is ‘fairly certain that it is unclear’ whether the intended fact could actually occur in a ‘possible future situation’. This aspect should be called ‘speculative true’ or ‘true 3’ or ‘truth 3’. Should the situation then ‘actually occur’ at some point, truth 3 would change into truth 1.
  5. At least two actors can agree that a certain meaning associated with the medial construct does not currently exist, and on the basis of previous experience ‘it is fairly certain’ that the intended fact could never occur in a ‘possible future situation’. This aspect should be called ‘speculative false’ or ‘false 2’.

A closer look at these 5 assumptions of working hypothesis 3 reveals that there are two ‘poles’ in all these distinctions, which stand in certain relationships to each other: on the one hand, there are real facts as poles, which are ‘currently perceived or not perceived by all participants’ and, on the other hand, there is a ‘known meaning’ in the minds of the participants, which can or cannot be related to a current fact. This results in the following distribution of values:

REAL FACT   CASE   RELATIONSHIP TO MEANING
Given        1     Fits (true 1)
Given        2     Doesn’t fit (false 1)
Not given    3     Assumed that it will fit in the future (true 2)
Not given    4     Unclear whether it would fit in the future (true 3)
Not given    5     Assumed that it would not fit in the future (false 2)

In this — still somewhat rough — scheme, ‘the meaning of thoughts’ can be qualified in relation to something currently present as ‘fitting’ or ‘not fitting’, or in the absence of something real as ‘might fit’ or ‘unclear whether it can fit’ or ‘certain that it cannot fit’.
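
The five cases of working hypothesis 3, as summarized in the table, can be restated as a small decision function. This is a minimal sketch, not part of the original working hypothesis; the parameter names are invented for the illustration:

```python
from enum import Enum

class Qualification(Enum):
    TRUE_1 = "true 1"    # fact is given and the meaning fits
    FALSE_1 = "false 1"  # fact is given but does not fit the claim
    TRUE_2 = "true 2"    # not given; assumed to fit in the future
    TRUE_3 = "true 3"    # not given; unclear whether it could fit
    FALSE_2 = "false 2"  # not given; assumed never to fit

def qualify(fact_given: bool, fits_now: bool = False,
            expectation: str = "unclear") -> Qualification:
    """Map the two 'poles' (real fact vs. known meaning) to the five cases."""
    if fact_given:
        return Qualification.TRUE_1 if fits_now else Qualification.FALSE_1
    if expectation == "will_fit":
        return Qualification.TRUE_2
    if expectation == "will_never_fit":
        return Qualification.FALSE_2
    return Qualification.TRUE_3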

However, it is important to note that these qualifications are ‘assessments’ made by the actors based on their ‘own knowledge’. As we know, such an assessment is always prone to error! In addition to errors in perception [*5], there can be errors in one’s own knowledge [*6]. So contrary to the belief of an actor, ‘true 1’ might actually be ‘false 1’ or vice versa, ‘true 2’ could be ‘false 2’ and vice versa.

From all this, it follows that a ‘clear qualification’ of truth and falsehood is ultimately always error-prone. For a community of people who think ‘positively’, this is not a problem: they are aware of this situation and strive to keep their ‘natural susceptibility to error’ as small as possible through conscious methodical procedures [*7]. People who, for various reasons, tend to think negatively feel motivated in this situation to see only errors or even malice everywhere. They find it difficult to deal with their ‘natural error-proneness’ in a positive and constructive manner.

TRUTH and MEANING: Process of Processes

In the previous section, the various terms (‘true 1’, ‘true 2’, ‘true 3’, ‘false 1’, ‘false 2’) are still rather disconnected and not yet really located in a tangible context. This will be attempted here with the help of working hypothesis 4 (sketch of a process space).

FIGURE 1 Process : The process space in the real world and in thinking, including possible interactions

The basic elements of working hypothesis 4 can be characterized as follows:

  1. There is the real world with its continuous changes, and within each actor a virtual space for processes with elements such as perceptions, memories, and imagined concepts.
  2. The link between real space and virtual space occurs through perceptual achievements that represent specific properties of the real world for the virtual space, in such a way that ‘perceived contents’ and ‘imagined contents’ are distinguishable. In this way, a ‘mental comparison’ of perceived and imagined is possible.
  3. Changes in the real world do not show up explicitly but are manifested only indirectly through the perceivable changes they cause.
  4. It is the task of ‘cognitive reconstruction’ to ‘identify’ changes and to describe them linguistically in such a way that it becomes comprehensible from which properties of a given state a possible subsequent state can arise.
  5. In addition to distinguishing between ‘states’ and ‘changes’ between states, it must also be clarified how a given description of change is ‘applied’ to a given state in such a way that a ‘subsequent state’ arises. This is called here ‘successor generation rule’ (symbolically: ⊢). An expression like Z ⊢V Z’ would then mean that using the successor generation rule ⊢ and employing the change rule V, one can generate the subsequent state Z’ from the state Z. However, more than one change rule V can be used, for example, ⊢{V1, V2, …, Vn} with the change rules V1, …, Vn.
  6. When formulating change rules, errors can always occur. If certain change rules have proven successful in the past in derivations, one would tend to assume for the ‘thought subsequent state’ that it will probably also occur in reality. In this case, we would be dealing with the situation ‘true 2’. If a change rule is new and there are no experiences with it yet, we would be dealing with the ‘true 3’ case for the thought subsequent state. If a certain change rule has failed repeatedly in the past, then the case ‘false 2’ might apply.
  7. The outlined process model also shows that the previous cases (1–5 in the table) only ever describe partial aspects. Suppose a group of actors manages to formulate a rudimentary process theory with many states and many change rules, including a successor generation instruction. Then it is naturally of interest how the ‘theory as a whole’ ‘proves itself’. This means that every ‘mental construction’ of a sequence of possible states according to the applied change rules must ‘prove itself’ in all cases of application for the theory to be called ‘generically true’. For example, while the case ‘true 1’ refers to only a single state, the case ‘generically true’ refers to ‘very many’ states: as many as it takes until an ‘end state’ is reached that is supposed to count as a ‘target state’. The case ‘generically contradicted’ occurs when there is at least one sequence of generated states that leads to an end state that is ‘false 1’. As long as a process theory has not yet been confirmed as ‘true 1’ for an end state in all possible cases, there remains a ‘remainder of cases’ that are unclear. Such a process theory would be called ‘generically unclear’, although it may be considered ‘generically true’ for the set of cases successfully tested so far.
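
The state/change-rule machinery of points 5 and 6 can be made concrete in a short sketch. The encoding below (states as sets of facts, rules as condition/remove/add triples) is one possible reading of the hypothesis, not the author’s own formalism:

```python
from typing import FrozenSet, List, NamedTuple

State = FrozenSet[str]  # a state Z, given as the set of facts holding in it

class ChangeRule(NamedTuple):
    """A change rule V: applicable when 'condition' holds in the state."""
    condition: FrozenSet[str]
    remove: FrozenSet[str]
    add: FrozenSet[str]

def successors(z: State, rules: List[ChangeRule]) -> List[State]:
    """Successor generation rule ⊢: for each applicable V, generate Z'."""
    out: List[State] = []
    for v in rules:
        if v.condition <= z:                    # V is applicable in Z
            out.append((z - v.remove) | v.add)  # Z ⊢V Z'
    return out

# Invented example rule: if it rains, the street becomes wet.
wet = ChangeRule(condition=frozenset({"it rains"}),
                 remove=frozenset(),
                 add=frozenset({"the street is wet"}))
z0: State = frozenset({"it rains"})
```

`successors(z0, [wet])` yields the single subsequent state `{"it rains", "the street is wet"}`; a theory would then be tested by repeatedly applying `successors` and checking the reached end states against observation (‘true 1’ / ‘false 1’).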

FIGURE 2 Process : The individual extended process space with an indication of the dimension ‘META-THINKING’ and ‘EVALUATION’.

If someone already finds the first figure of the process space quite ‘challenging’, they will certainly ‘break into a sweat’ with this second figure of the ‘expanded process space’.

Everyone can check for themselves that we humans have the ability — regardless of what we are thinking — to turn our thinking back, at any time, onto our own immediately preceding thinking: a kind of ‘thinking about thinking’. This opens up an ‘additional level of thinking’ — here called the ‘meta-level’ — on which we thinkers ‘thematize’ everything that is noticeable and important to us in the preceding thinking. [*8] In addition to ‘thinking about thinking’, we also have the ability to ‘evaluate’ what we perceive and think. These ‘evaluations’ are fueled by our ‘emotions’ [*9] and ‘learned preferences’. This enables us to ‘learn’ with the help of our emotions and learned preferences: if we perform certain actions and suffer ‘pain’, we will likely avoid these actions next time. If we go to restaurant X to eat because someone ‘recommended’ it to us, and the food and/or service were really bad, then we will likely not follow this suggestion in the future. Our thinking (and our knowledge) can thus ‘make possibilities visible’, but it is the emotions that comment on whether an outcome is ‘good’ or ‘bad’ when implementing knowledge. But beware: emotions can also be mistaken, and massively so.[*10]

TRUTH AND MEANING – As a collective achievement

The previous considerations on the topic of ‘truth and meaning’ in the context of individual processes have outlined how ‘language’ plays a central role in enabling meaning and, based on this, truth. Furthermore, it was outlined how truth and meaning must be placed in a dynamic context, in a ‘process model’, as it takes place in an individual in close interaction with the environment. This process model includes the dimension of ‘thinking’ (also ‘knowledge’) as well as the dimension of ‘evaluations’ (emotions, preferences); within thinking there are potentially many ‘levels of consideration’ that can relate to each other (they can, of course, also run ‘in parallel’ without direct contact with each other, though this unconnected parallelism is the less interesting case).

As fascinating as the dynamic emotional-cognitive structure within an individual actor can be, the ‘true power’ of explicit thinking only becomes apparent when different people begin to coordinate their actions by means of communication. When individual action is transformed into collective action in this way, a dimension of ‘society’ becomes visible which, in a way, makes the ‘individual actors’ fade from view, because the ‘overall performance’ of the ‘collectively connected individuals’ can be orders of magnitude more complex and more sustainable than anything a single individual could ever realize. While a single person can at most make a contribution within their individual lifetime, collectively connected people can accomplish achievements that span many generations.

On the other hand, we know from history that collective achievements do not automatically have to bring about ‘only good’; the well-known history of oppression, bloody wars and destruction is extensive and can be found in all periods of human history.

This points to the fact that the question of ‘truth’ and ‘being good’ is not only a question for the individual process, but also a question for the collective process, and here, in the collective case, this question is even more important, since in the event of an error not only individuals have to suffer negative effects, but rather very many; in the worst case, all of them.

To be continued …


[*0] The meaning of the terms ‘objective, verifiable’ will be explained in more detail below.

[*1] In a system-theoretical view of the ‘human body’ system, one can formulate the working hypothesis that far more than 99% of the events in a human body are not conscious. You can find this frightening or reassuring. I tend towards the latter, towards ‘reassurance’. Because when you see what a human body as a ‘system’ is capable of doing on its own, every second, for many years, even decades, then this seems extremely reassuring in view of the many mistakes, even gross ones, that we can make with our small ‘consciousness’. In cooperation with other people, we can indeed dramatically improve our conscious human performance, but this is only ever possible if the system performance of a human body is maintained. After all, it contains 3.5 billion years of development work of the BIOM on this planet; the building blocks of this BIOM, the cells, function like a gigantic parallel computer, compared to which today’s technical supercomputers (including the much-vaunted ‘quantum computers’) look so small and weak that it is practically impossible to express this relationship.

[*2] An ‘everyday language’ always presupposes ‘the many’ who want to communicate with each other. One person alone cannot have a language that others should be able to understand.

[*3] A meaning relation actually does what is mathematically called a ‘mapping’: Elements of one kind (elements of the surface structure of the language) are mapped to elements of another kind (the potential meaning elements). While a mathematical mapping is normally fixed, the ‘real meaning relation’ can constantly change; it is ‘flexible’, part of a higher-level ‘learning process’ that constantly ‘readjusts’ the meaning relation depending on perception and internal states.

[*4] The contents of working hypothesis 2 originate from the findings of modern cognitive sciences (neuroscience, psychology, biology, linguistics, semiotics, …) and philosophy; they refer to many thousands of articles and books. Working hypothesis 2 therefore represents a highly condensed summary of all this. Direct citation is not possible in purely practical terms.

[*5] As is known from research on witness statements and from general perception research, in addition to all kinds of direct perception errors, there are many errors in the ‘interpretation of perception’ that are largely unconscious/automated. The actors are normally powerless against such errors; they simply do not notice them. Only methodically conscious controls of perception can partially draw attention to these errors.

[*6] Human knowledge is ‘notoriously prone to error’. There are many reasons for this. One lies in the way the brain itself works. ‘Correct’ knowledge is only possible if the current knowledge processes are repeatedly ‘compared’ and ‘checked’ so that they can be corrected. Anyone who does not regularly check the correctness will inevitably confirm incomplete and often incorrect knowledge. As we know, this does not prevent people from believing that everything they carry around in their heads is ‘true’. If there is a big problem in this world, then this is one of them: ignorance about one’s own ignorance.

[*7] In the cultural history of mankind to date, it was only very late (about 500 years ago?) that a format of knowledge was discovered that enables any number of people to build up fact-based knowledge that, compared to all other known knowledge formats, enables the ‘best results’ (which of course does not completely rule out errors, but extremely minimizes them). This still revolutionary knowledge format has the name ‘empirical theory’, which I have since expanded to ‘sustainable empirical theory’. On the one hand, we humans are the main source of ‘true knowledge’, but at the same time we ourselves are also the main source of ‘false knowledge’. At first glance, this seems like a ‘paradox’, but it has a ‘simple’ explanation, which at its root is ‘very profound’ (comparable to the cosmic background radiation, which is currently simple, but originates from the beginnings of the universe).

[*8] In terms of its architecture, our brain can open up any number of such meta-levels, but due to its concrete finiteness, it only offers a limited number of neurons for different tasks. For example, it is known (and has been experimentally proven several times) that our ‘working memory’ (also called ‘short-term memory’) is only limited to approx. 6-9 ‘units’ (whereby the term ‘unit’ must be defined depending on the context). So if we want to solve extensive tasks through our thinking, we need ‘external aids’ (sheet of paper and pen or a computer, …) to record the many aspects and write them down accordingly. Although today’s computers are not even remotely capable of replacing the complex thought processes of humans, they can be an almost irreplaceable tool for carrying out complex thought processes to a limited extent. But only if WE actually KNOW what we are doing!

[*9] The word ‘emotion’ is a ‘collective term’ for many different phenomena and circumstances. Despite extensive research for over a hundred years, the various disciplines of psychology are still unable to offer a uniform picture, let alone a uniform ‘theory’, on the subject. This is not surprising, as much of what is assumed to be emotion takes place largely ‘unconsciously’ or is only directly available as an ‘internal event’ within the individual. The only thing that seems clear is that we as humans are never ‘emotion-free’ (this also applies to so-called ‘cool’ types, because the apparent ‘suppression’ or ‘repression’ of emotions is itself part of our innate emotionality).

[*10] Of course, emotions can also lead us seriously astray or even to our downfall (being wrong about other people, being wrong about ourselves, …). It is therefore not only important to ‘sort out’ the factual things in the world in a useful way through ‘learning’, but we must also actually ‘keep an eye on our own emotions’ and check when and how they occur and whether they actually help us. Primary emotions (such as hunger, sex drive, anger, addiction, ‘crushes’, …) are selective, situational, can develop great ‘psychological power’ and thus obscure our view of the possible or very probable ‘consequences’, which can be considerably damaging for us.

[*11] The term ‘narrative’ is increasingly used today to describe the fact that a group of people use a certain ‘image’, a certain ‘narrative’ in their thinking for their perception of the world in order to be able to coordinate their joint actions. Ultimately, this applies to all collective action, even for engineers who want to develop a technical solution. In this respect, the description in the German Wikipedia is a bit ‘narrow’:


The following sources are just a tiny selection from the many hundreds, if not thousands, of articles, books, audio documents and films on the subject. Nevertheless, they may be helpful for an initial introduction. The list will be expanded from time to time.

[1a] Propaganda, in the German Wikipedia

[1b] Propaganda in the English Wikipedia : /*The English version appears more systematic, covers larger periods of time and more different areas of application */

[3] Propaganda der Russischen Föderation, here: (German source)

[6] Mischa Gabowitsch, Mai 2022, Von »Faschisten« und »Nazis«, (German source)


Author: Gerd Doeben-Henisch

Time: Nov 12, 2023 — Nov 12, 2023


–!! This is not yet finished !!–


This text belongs to the topic Philosophy (of Science).


The ‘coming out’ of a new type of ‘text generator’ called ‘chatGPT’ in November 2022 (it is not the only one around) caused an explosion of publications and usages around the world. The author has been working in this field for about 40 years, and nothing like this has ever happened before. What has happened? Is this the beginning of the end of humans being the main actors on this planet (yes, we know, not really the best until now), or is there something around and in-between which we overlook, captivated by these new text generators?

Reading many papers since that event, talking with people, experimenting directly with chatGPT4, continuing to work with theories, and working with people in a city trying out new forms of ‘citizens at work in their community’, a picture slowly formed in my head of how it might be possible to ‘benchmark’ text generators directly against human activities.

After several first trials, everything came together when I was able to give a speech at the Goethe University in Frankfurt on Friday, Nov 10. [1] There was a wonderful audience of elderly people from the so-called University of the 3rd Age … a bit different from young students 🙂

There was an idea that hit me like a bolt of lightning when I wrote it down afterwards: it is the fundamental role of literature for our understanding of world and people which will be completely eliminated by using text generators. The amount of written text will explode in the near future, but the meaning of the world will vanish more and more at the same time. You will see letters, but there will be no more meaning behind them. And with the meaning, the world of humans will disappear. You won’t even be able to know yourself anymore.

Clearly, this can only happen if we replace our own thinking and writing entirely with text generators.

Is the author of this text a bit ‘ill’ to write down such ideas, or are there arguments that make it clear why this could be the fate of humans after the year 2023?


[1] See the text written down after the speech:

Pain does not replace the truth …

Time: Oct 18, 2023 — Oct 24, 2023
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.d


This post is part of the uffmm science blog. It is a translation from the German source: For the translation I have used chatGPT4 and . Because the word ‘hamas’ occurs in the text, chatGPT didn’t translate a long paragraph containing this word. Thus the algorithm is somehow ‘biased’ by a certain kind of training. This is really bad, because the following text offers some reflections about a situation where someone ‘hates’ others. This is one of our biggest ‘diseases’ today.


The Hamas terrorist attack on Israeli citizens on October 7, 2023, has shaken the world. For years, terrorist acts have been shaking our world. In front of our eyes, a is attempting, since 2022 (actually since 2014), to brutally eradicate the entire Ukrainian population. Similar events have been and are taking place in many other regions of the world…

… Pain does not replace the truth [0]…

Truth is not automatic. Making truth available requires significantly more effort than remaining in a state of partial truth.

The probability that a person knows the truth or seeks the truth is smaller than the probability of remaining in a state of partial truth or outright falsehood.

Whether falsehood or truth predominates in a democracy depends on how that democracy shapes the process of truth-finding and the communication of truth. There is no automatic path to truth.

In a dictatorship, the likelihood of truth being available is extremely dependent on those who exercise centralized power. Absolute power, however, has already fundamentally broken with the truth (which does not exclude the possibility that this power can have significant effects).

The course of human history on planet Earth thus far has shown that there is evidently no simple, quick path that uniformly leads all people to a state of happiness. This must have to do with humans themselves—with us.

The interest in seeking truth, in cultivating truth, in a collective process of truth, has never been strong enough to overcome the everyday exclusions, falsehoods, hostilities, atrocities…

One’s own pain is terrible, but it does not help us to move forward…

Who even wants a future for all of us?????

[0] There is an overview article by the author from 2018, in which he presents 15 major texts from the blog “Philosophie Jetzt” ( “Philosophy Now”) ( “INFORMAL COSMOLOGY. Part 3a. Evolution – Truth – Society. Synopsis of previous contributions to truth in this blog” ( )), in which the matter of truth is considered from many points of view. In the 5 years since, society’s treatment of truth has continued to deteriorate dramatically.

Hate cancels the truth

Truth is related to knowledge. However, in humans, knowledge most often is subservient to emotions. Whatever we may know or wish to know, when our emotions are against it, we tend to suppress that knowledge.

One form of emotion is hatred. The destructive impact of hatred has accompanied human history like a shadow, leaving a trail of devastation everywhere it goes: in the hater themselves and in their surroundings.

The event of the inhumane attack on October 7, 2023 in Israel, claimed by Hamas, is unthinkable without hatred.

If one traces the history of Hamas since its founding in 1987 [1,2], then one can see that hatred is already laid down as an essential moment in its founding. This hatred is joined by the moment of a religious interpretation, which calls itself Islamic, but which represents a special, very radicalized and at the same time fundamentalist form of Islam.

The history of the state of Israel is complex, and the history of Judaism is no less so. The fact that today’s Judaism also contains strong components that are clearly fundamentalist, and to which hatred is not alien, leads, among many other factors, at its core to a constellation of fundamentalist antagonisms on both sides that in themselves reveal no approaches to a solution. The many other people ‘around’ in Israel and Palestine are part of these ‘fundamentalist force fields’, which simply evaporate humanity and truth in their vicinity. The trail of blood makes this reality visible.

Both Judaism and Islam have produced wonderful things, but what does all this mean in the face of a burning hatred that pushes everything aside, that sees only itself?

[1] Jeffrey Herf, Sie machen den Hass zum Weltbild, FAZ 20.Okt. 23, S.11 (Abriss der Geschichte der Hamas und ihr Weltbild, als Teil der größeren Geschichte) (Translation:They make hatred their worldview, FAZ Oct. 20, 23, p.11 (outlining the history of Hamas and its worldview, as part of the larger story)).

[2] Joachim Krause, Die Quellen des Arabischen Antisemitismus, FAZ, 23.10.2023,p.8 (This text “The Sources of Arab Anti-Semitism” complements the account by Jeffrey Herf. According to Krause, Arab anti-Semitism has been widely disseminated in the Arab world since the 1920s/ 30s via the Muslim Brotherhood, founded in 1928).

A society in decline

When truth diminishes and hatred grows (and, indirectly, trust evaporates), a society is in free fall. There is no remedy for this; the use of force cannot heal it, only worsen it.

The mere fact that we believe that lack of truth, dwindling trust, and above all, manifest hatred can only be eradicated through violence, shows how seriously we regard these phenomena and at the same time, how helpless we feel in the face of these attitudes.

In a world whose survival is linked to the availability of truth and trust, it is a piercing alarm signal to observe how difficult it is for us as humans to deal with the absence of truth and face hatred.

Is Hatred Incurable?

When we observe how tenaciously hatred persists in humanity, how unimaginably cruel actions driven by hatred can be, and how helpless we humans seem in the face of hatred, one might wonder if hatred is ultimately not a kind of disease—one that threatens the hater themselves and, particularly, those who are hated with severe harm, ultimately death.

With typical diseases, we have learned to search for remedies that can free us from the illness. But what about a disease like hatred? What helps here? Does anything help? Must we, like in earlier times with people afflicted by deadly diseases (like the plague), isolate, lock away, or send away those who are consumed by hatred to some no man’s land? … but everyone knows that this isn’t feasible… What is feasible? What can combat hatred?

After approximately 300,000 years of Homo sapiens on this planet, we seem strangely helpless in the face of the disease of hatred.

What’s even worse is that there are other people who see in every hater a potential tool to redirect that hatred toward goals they want to damage or destroy, using suitable manipulation. Thus, hatred does not disappear; on the contrary, it feels justified, and new injustices fuel the emergence of new hatred… the disease continues to spread.

One of the greatest events in the entire known universe—the emergence of mysterious life on this planet Earth—has a vulnerable point where this life appears strangely weak and helpless. Throughout history, humans have demonstrated their capability for actions that endure for many generations, that enable more people to live fulfilling lives, but in the face of hatred, they appear oddly helpless… and the one consumed by hatred is left incapacitated, incapable of anything else… plummeting into their dark inner abyss…

Instead of hatred, we need (minimally and in outline):

  1. Water: To sustain human life, along with the infrastructure to provide it, and individuals to maintain that infrastructure. These individuals also require everything they need for their own lives to fulfill this task.
  2. Food: To sustain human life, along with the infrastructure for its production, storage, processing, transportation, distribution, and provision. Individuals are needed to oversee this infrastructure, and they, too, require everything they need for their own lives to fulfill this task.
  3. Shelter: To provide a living environment, including the infrastructure for its creation, provisioning, maintenance, and distribution. Individuals are needed to manage this provision, and they, too, require everything they need for their own lives to fulfill this task.
  4. Energy: For heating, cooling, daily activities, and life itself, along with the infrastructure for its generation, provisioning, maintenance, and distribution. Individuals are needed to oversee this infrastructure, and they, too, require everything they need for their own lives to fulfill this task.
  5. Authorization and Participation: To access water, food, shelter, and energy. This requires an infrastructure of agreements, and individuals to manage these agreements. These individuals also require everything they need for their own lives to fulfill this task.
  6. Education: To be capable of undertaking and successfully completing tasks in real life. This necessitates individuals with enough experience and knowledge to offer and conduct such education. These individuals also require everything they need for their own lives to fulfill this task.
  7. Medical Care: To help with injuries, accidents, and illnesses. This requires individuals with sufficient experience and knowledge to offer and provide medical care, as well as the necessary facilities and equipment. These individuals also require everything they need for their own lives to fulfill this task.
  8. Communication Facilities: So that everyone can receive helpful information needed to navigate their world effectively. This requires suitable infrastructure and individuals with enough experience and knowledge to provide such information. These individuals also require everything they need for their own lives to fulfill this task.
  9. Transportation Facilities: So that people and goods can reach the places they need to go. This necessitates suitable infrastructure and individuals with enough experience and knowledge to offer such infrastructure. These individuals also require everything they need for their own lives to fulfill this task.
  10. Decision Structures: To mediate the diverse needs and necessary services in a way that ensures most people have access to what they need for their daily lives. This requires suitable infrastructure and individuals with enough experience and knowledge to offer such infrastructure. These individuals also require everything they need for their own lives to fulfill this task.
  11. Law Enforcement: To ensure disruptions and damage to the infrastructure necessary for daily life are resolved without creating new disruptions. This requires suitable infrastructure and individuals with enough experience and knowledge to offer such services. These individuals also require everything they need for their own lives to fulfill this task.
  12. Sufficient Land: To provide enough space for all these requirements, along with suitable soil (for water, food, shelter, transportation, storage, production, etc.).
  13. A suitable climate.
  14. A functioning ecosystem.
  15. A capable scientific community to explore and understand the world.
  16. Suitable technology to accomplish everyday tasks and support scientific endeavors.
  17. Knowledge in the minds of people to understand daily events and make responsible decisions.
  18. Goal orientations (preferences, values, etc.) in the minds of people to make informed decisions.
  19. Ample time and peace to allow these processes to occur and produce results.
  20. Strong and lasting relationships with other population groups pursuing the same goals.
  21. Sufficient commonality among all population groups on Earth to address their shared needs where they are affected.
  22. A sustained positive and constructive competition for those goal orientations that make life possible and viable for as many people on this planet (in this solar system, in this galaxy, etc.) as possible.
  23. The freedom present within the experiential world, included within every living being, especially within humans, should be given as much room as possible, as it is this freedom that can overcome false ideas from the past in the face of a constantly changing world, enabling us to potentially thrive in the world of the future.

READING A BOOK, LEARNING, BEING SCIENTIFIC, WIKIPEDIA. A dialogue with chatGPT4 bringing you ‘back to earth’

Author: Gerd Doeben-Henisch

Aug 30, 2023 – Aug 30, 2023



This text belongs to a series of experiments with chatGPT4. While some experiments have meanwhile demonstrated the low quality of chatGPT4 in many qualitative tests [1], the author of this text wants to check the ability of the chatGPT4 software to reproduce human-like behavior associated with the production and reception of texts on a higher level. I started this series with a first post here.[2]


In the following series of dialogues with chatGPT4, the software stated at the beginning that it cannot read a book like a human person. Asked about a specific book with the title “Das Experiment sind wir” by the author Christian Stöcker, the software answered that it doesn’t know the book. Then I asked the software whether it can nevertheless ‘learn’ something based on the documents given to it up to the year 2021. The answer was quite revealing: “No, I don’t “learn” in the same way humans do. I don’t actively process new information, form new connections, or change my understanding over time. Instead, my responses are based on patterns in the data I was trained on. My knowledge is static and doesn’t update with new information unless a new version of me is trained by OpenAI with more recent data.” I didn’t give up trying to understand what the software is able to do, if not ‘reading a book’ or ‘learning’. I asked: “What can you extract from your text base related to the question what can happen in the head of people if they read a book?” I got a list of general properties summarized from different sources (without mentioning these sources). I continued by asking the software not about the learning inside itself but about what it knows about the learning inside human persons reading a book: “How do you describe the process of learning in human persons while reading a book?” The software mentioned some aspects which have to be considered while a human person reads a book. But this gives only a ‘static view’ of the structures active in the process of reading. More interesting is perhaps a ‘dynamic aspect’ of reading a book which can be circumscribed as ‘learning’: which kinds of change are typical for human persons while reading a book?
The software gives a detailed breakdown of the learning process in human persons while reading, but again these are statements without any ‘backing up’: no sources, no contexts. But in the everyday world, and especially in science, it is necessary that a reader can learn the sources of statements claiming to be serious: are the presented facts only ‘fakes’, plainly wrong, or are there serious people with serious methods of measurement who can provide some certainty that these facts are really ‘true’? The author therefore asked the software: “Your answer consists of many interesting facts. Is it possible to clarify the sources you are using for these statements?” The answer again was very simple: “I’m glad you found the information interesting. My design is based on information from a wide range of books, articles, and other educational resources up to my last update in September 2021. However, I don’t directly cite individual sources in my responses. Instead, my answers are generated based on patterns in the data I was trained on.” Because it is known that the Wikipedia encyclopedia always provides explicit sources for its texts, the author asked directly: Do you know the text base Wikipedia? Clearly, chatGPT4 knows Wikipedia; besides many positive remarks it answered: “The structure of Wikipedia articles, with their citations, can serve as a good starting point for research on a topic. One can follow the cited sources at the bottom of each article to get more detailed or primary information on a subject.” This sounds quite good for Wikipedia. Thus it is interesting how chatGPT4 compares itself to Wikipedia. Question: “Wikipedia has cited sources at the bottom of each article. Does this mean that Wikipedia is more trustworthy than you?” chatGPT4 doesn’t give a clear answer; it summarizes points for Wikipedia and for itself, but leaves the question open.
Thus the author continued asking for a more ‘general criterion’ which should be valid for both Wikipedia and chatGPT4: “Which criteria do you know for a good scientific description?” As usual, chatGPT4 gives a long list of criteria. The author asks back: “Your overview about criteria for a scientific description includes terms like ‘Replicability’, ‘Citing Sources’, and ‘Transparency’. From the preceding self-description of you it is not easy to see whether these criteria can also be applied to the output of chatGPT4.” The answer chatGPT4 gives renders it nearly unusable for any serious scientific work. Yes, you can use chatGPT4 for some first general information, but without knowing more by yourself, you are really lost. It is surely more fruitful to check Wikipedia directly and to use the sources there for further investigations.
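The advice to go back to Wikipedia’s cited sources can even be partly automated. The following is a minimal sketch (the helper name, the regular expressions, and the sample wikitext are invented for illustration) of how one might collect the URLs cited inside the `<ref>` tags of an article’s wikitext, e.g. as delivered by the MediaWiki API:

```python
import re

def extract_reference_urls(wikitext: str) -> list[str]:
    """Collect the URLs cited inside <ref>...</ref> tags of an article's wikitext."""
    urls = []
    # Find every <ref ...>...</ref> block (non-greedy, across lines).
    for ref in re.findall(r"<ref[^>/]*>(.*?)</ref>", wikitext, flags=re.DOTALL):
        # Pull plain URLs as well as the url= parameter of citation templates.
        urls.extend(re.findall(r"https?://[^\s|\]}]+", ref))
    return urls

# Illustrative snippet of wikitext (invented for this sketch):
sample = (
    "Reading improves focus.<ref>{{cite web |url=https://example.org/study1 "
    "|title=A study}}</ref> It also builds vocabulary."
    "<ref>[https://example.org/study2 Another source]</ref>"
)

print(extract_reference_urls(sample))
```

A real script would first fetch the wikitext over the network and would have to cope with many more citation formats; the point here is only that Wikipedia’s explicit sourcing makes such follow-up investigations mechanically possible, which chatGPT4’s pattern-based answers do not.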

Reading a book

I started spontaneously to talk with chatGPT4 about a book by Christian Stöcker with the title “Das Experiment sind wir”, published in 2020.

Here is the dialogue:


Can you read a book?


I don’t “read” books in the same way humans do. Instead, I was trained on vast amounts of text data, which includes information from books, articles, websites, and other written sources up to my last update in September 2021.

You can ask me about the content, summaries, themes, and other information from many books, especially those that are well-known or that were published before my last training cut-off. If the book was published after September 2021 or is particularly obscure, I might not have information about it.

If you have a particular book or topic in mind, feel free to ask, and I’ll do my best to provide relevant information!


What can you say about the book entitled “Das Experiment sind wir” by the author Christian Stöcker?


As of my last update in September 2021, “Das Experiment sind wir” by Christian Stöcker hasn’t been one of the primary sources I was trained on. However, Christian Stöcker is known as a journalist and professor in the field of digital communication.

If “Das Experiment sind wir” was published or gained significant attention after my last update, I may not have detailed information about its content, themes, or specific insights.

For a detailed overview or analysis of the book, I would recommend reading reviews, summaries, or author interviews, or accessing the book directly. If you have other questions or need information on a related topic, I’m here to help!


OK, chatGPT4 doesn’t know this book. This is OK; even a high-performance system cannot know everything.

But its confession is interesting: chatGPT4 cannot read a book as a human person would do. In this sense chatGPT4 is not a ‘typical reader’ but, as we indeed know, a piece of software, managed by human persons who decide which kinds of texts chatGPT4 has to ‘store’ in its memory and which form the basis for its ‘algorithmic processing’.

But the question remains whether chatGPT4 can at least learn on its own, starting with the text base given to it.


Thus, if you have a text base as a point of reference, are you able to learn in some sense on your own?


No, I don’t “learn” in the same way humans do. I don’t actively process new information, form new connections, or change my understanding over time. Instead, my responses are based on patterns in the data I was trained on. My knowledge is static and doesn’t update with new information unless a new version of me is trained by OpenAI with more recent data.

Here’s an analogy: Imagine an extremely comprehensive book. This book can answer a vast array of questions based on its contents, but it can’t write new chapters or update its pages on its own. In a similar manner, I can generate answers based on the data I was trained on, but I can’t independently update that data or “learn” new things.

So, while I can generate responses based on a broad range of existing knowledge, I can’t assimilate or understand new information or events after my last training cut-off in September 2021.


As expected, chatGPT4 cannot learn as we humans can learn. Generating summaries can be of some help, but this cannot replace ‘true learning’.

But let us stay with the topic of ‘reading a book’:


What can you extract from your text base related to the question what can happen in the head of people if they read a book?


Reading a book can have various cognitive, emotional, and psychological effects on individuals. Here are some insights based on literature, cognitive science, and psychological studies up to my last training cut-off:

  1. Cognitive Stimulation: Engaging with the written word can stimulate the brain. Studies have suggested that mental stimulation can slow the progress (or possibly even prevent) Alzheimer’s and dementia, as keeping your brain active and engaged prevents it from losing power.
  2. Mental Escape: For many, reading a book can act as an escape from the real world. Immersing oneself in a different world, whether it’s a fantasy realm or an historical setting, provides a break from everyday stressors.
  3. Increased Knowledge: Naturally, books inform and educate. This builds a knowledge base that can be applied in real-life situations.
  4. Vocabulary Expansion: Reading regularly can expand one’s vocabulary more than direct teaching or verbal communications.
  5. Improved Focus and Concentration: Unlike skimming short web articles or watching quick videos, reading a book requires a longer attention span and can help improve one’s focus and concentration.
  6. Stress Reduction: A well-written novel or non-fiction book can transport the reader to other realms, distracting them and helping them to de-stress more effectively than other methods.
  7. Improved Analytical Skills: Engaging with complex narratives and plots can help readers improve their analytical skills as they ponder plot developments and character motivations.
  8. Empathy Development: Reading fiction, in particular, allows readers to live vicariously through characters, experiencing a vast range of emotions and situations. This can help in developing a deeper understanding and empathy for others.
  9. Mental Tranquility: Reading subjects like poetry or certain religious texts can provide tranquility, help lower blood pressure, and reduce stress.
  10. Memory Improvement: Reading involves remembering various characters, their backgrounds, histories, and nuances. This can create new brain pathways and strengthen existing ones.
  11. Entertainment: Of course, one of the primary reasons people read is for entertainment. A good story can be incredibly engaging, offering hours of enjoyment.

It’s worth noting that the impact of reading can vary based on the type of book (e.g., fiction vs. non-fiction), the style of writing, the personal experiences and preferences of the reader, and the context in which they’re reading.


chatGPT4 mentions some aspects which have to be considered while a human person reads a book. This gives some ‘static view’ of the structure being active in the process of reading.

More interesting is perhaps a ‘dynamic aspect’ of reading a book which can be circumscribed as ‘learning’: which kinds of changing are typical for human persons while reading a book?


How do you describe the process of learning in human persons while reading a book?


Reading a book is a complex cognitive process that involves multiple levels of mental engagement and comprehension. When a person reads, several processes related to learning occur simultaneously. Here’s a detailed breakdown of the learning process while reading:

  1. Perceptual Processing: This is the basic recognition of words and letters. As your eyes scan a page, the visual cortex interprets the shapes of letters and groups of letters as words. This process becomes automatic with practice and training.
  2. Decoding: This involves understanding the phonetic value of words, which is especially crucial for unfamiliar terms or names. This skill is emphasized heavily in early reading education when children sound out words.
  3. Semantic Processing: Once the words are recognized, the brain works to understand their meaning. This involves linking the words to their associated concepts in the brain.
  4. Syntax and Grammar Processing: The brain understands the structure of sentences and how words relate to each other, determining the relationships among verbs, nouns, adjectives, etc.
  5. Working Memory: This is essential for maintaining an understanding of a sentence or paragraph. As you read, you keep the beginning of a sentence in mind as you progress to the end, allowing you to grasp the entire concept.
  6. Building Connections: Learning often involves connecting new information with prior knowledge. When reading, you constantly relate what you are consuming to what you already know, which can deepen comprehension.
  7. Critical Thinking: Beyond just understanding the content, readers often evaluate, analyze, and critique what they’re reading. They may ask themselves if they agree with the author, if the information is credible, or how it relates to other things they know.
  8. Visualization: Good readers often create mental images of what they are reading. This is especially true for narrative texts where imagining scenes, characters, or events can enhance comprehension.
  9. Emotional Engagement: The content in books can evoke emotions. Whether it’s a novel that tugs at the heartstrings or a non-fiction book that arouses anger or passion about a topic, emotional engagement can increase retention and internalization of content.
  10. Metacognition: This involves thinking about one’s own thinking. Effective readers often monitor their understanding as they read, recognizing when they’re confused and employing strategies to improve comprehension.
  11. Retention & Long-Term Memory: Once the book is done, the process of shifting information from working memory to long-term memory begins. Repetition, discussion, and application of knowledge can aid this transfer.
  12. Application: The ultimate goal of learning is often to apply knowledge. After reading, a person may take actions, make decisions, or change beliefs based on what they’ve learned.

It’s important to note that everyone’s reading process might be a bit different. Some people naturally visualize more, while others might focus on critical analysis. Additionally, background knowledge, context, and the reader’s purpose (e.g., reading for pleasure versus studying for an exam) can influence how these processes unfold.


chatGPT4 presents some facts in response to the question which are, as such, really interesting. But in the everyday world, especially in science, it is necessary that a reader can learn the sources of statements claiming to be serious: are the presented facts only ‘fakes’, plainly wrong, or are there serious people with serious methods of measurement who can provide some certainty that these facts are really ‘true’?

Let us ask chatGPT4:


Your answer consists of many interesting facts. Is it possible to clarify the sources you are using for these statements?


I’m glad you found the information interesting. My design is based on information from a wide range of books, articles, and other educational resources up to my last update in September 2021. However, I don’t directly cite individual sources in my responses. Instead, my answers are generated based on patterns in the data I was trained on.

The steps and processes I outlined for learning while reading a book are based on widely accepted theories and research in the fields of cognitive psychology, education, and linguistics. If you’re interested in diving deeper into these topics, you might consider exploring:

  1. Cognitive psychology textbooks: These delve into the processes of perception, working memory, long-term memory, and more.
  2. Reading and literacy research: Scholars in this field have conducted in-depth studies on the processes of decoding, comprehension, and the strategies good readers use.
  3. Educational psychology: This field explores topics related to learning, including metacognition, critical thinking, and the application of knowledge.

If you’d like specific book or article recommendations, or more in-depth explanations on any of these topics, I can certainly provide them.


Do you know the text base Wikipedia?


Yes, I’m familiar with Wikipedia. Wikipedia is a free online encyclopedia that is collaboratively edited by volunteers from around the world. It’s one of the most visited websites globally and provides information on a vast array of topics.

Wikipedia is known for its “wiki” nature, which means that almost any article can be edited by any user with internet access. This collaborative approach has made Wikipedia a rich and ever-evolving source of information. However, because of this open-editing system, it’s always a good idea to verify critical information from Wikipedia with other reliable sources.

The structure of Wikipedia articles, with their citations, can serve as a good starting point for research on a topic. One can follow the cited sources at the bottom of each article to get more detailed or primary information on a subject.

Would you like more information or have a specific question related to Wikipedia?


chatGPT4 knows Wikipedia and gives a quite fair description of it (probably of the English version). Of interest is how chatGPT4 describes the characteristic differences between itself and Wikipedia.


Wikipedia has cited sources at the bottom of each article. Does this mean that Wikipedia is more trustworthy than you?


Wikipedia and I serve different purposes, and the trustworthiness of information from either source depends on the context and specific use-case.

Advantages of Wikipedia:

  1. Citations: Wikipedia articles often have extensive lists of references, which allow readers to trace back the origin of specific pieces of information. This provides a level of transparency.
  2. Collaboration: Wikipedia is collectively edited by a vast number of individuals, which can mean many eyes reviewing and refining content.
  3. Current Events: Wikipedia is often updated rapidly in the face of current events, so for very recent information, it can sometimes be more up-to-date than my last training data.

Advantages of ChatGPT (me):

  1. Broad Training Data: I’ve been trained on diverse datasets, including books, articles, and other content. This allows me to provide a wide range of information.
  2. Consistent and Neutral: While Wikipedia aims for a neutral point of view, individual articles might sometimes reflect biases of the contributors. My design aims to minimize bias and provide consistent responses.
  3. Conversational Interface: Users can have a back-and-forth dialogue with me, making it easier to ask follow-up questions or dive deeper into topics.

Limitations of both:

  1. Vandalism: Wikipedia can be edited by anyone, which sometimes leads to misinformation due to vandalism. However, popular pages often have vigilant editors and protective measures in place.
  2. Knowledge Cutoff: My last training data is from September 2021, so events or advancements after that date are not within my current knowledge.

Ultimately, neither Wikipedia nor I should be seen as infallible. Both can serve as starting points for research, but critical information should be cross-referenced with primary sources or authoritative references. It’s always a good practice to be discerning and critical when interpreting information from any source.


The characterization of Wikipedia is still quite fair. But in some sense chatGPT4 ‘downplays’ its lack of real sources, which are the basis of any serious argument.


Which criteria do you know for a good scientific description?


Scientific descriptions are meant to be precise, clear, and replicable. To ensure a high-quality scientific description, several criteria should be met:

  1. Accuracy: The description should correctly represent observations, data, or results without exaggeration or misrepresentation.
  2. Clarity: The language used should be clear and free from ambiguity. It should be easily understood by those with appropriate background knowledge in the field.
  3. Completeness: All relevant details should be included, especially those that would be necessary for replication or to understand the context of the study.
  4. Consistency: Descriptions should be consistent in terms of units of measurement, terminology, and methodology.
  5. Objectivity: The description should be free from personal bias. Instead of using subjective terms, rely on objective measurements and observations.
  6. Replicability: Enough detail should be provided so that another scientist with the same background knowledge and resources can replicate the study or experiment.
  7. Relevance: Only pertinent details should be included. While completeness is important, overloading a description with irrelevant information can detract from its usefulness.
  8. Use of Standardized Terminology: Using accepted and standardized terms in the field ensures that the description is understood universally among experts.
  9. Citing Sources: Any claim, idea, or method that isn’t original to the current study should be appropriately cited, giving credit to the original sources and allowing readers to delve deeper into the background if they wish.
  10. Avoiding Jargon (when possible): While specialized terms can be necessary, over-reliance on jargon can alienate those slightly outside the immediate field. When jargon is used, it should be defined or explained.
  11. Providing Context: A good scientific description often includes background information or a literature review to give context to the study’s significance and position within the broader field.
  12. Transparency: Limitations, potential sources of error, and uncertainties should be clearly stated. This honesty allows readers to gauge the reliability of the findings and understand any potential caveats.

A good scientific description is essentially a balance between precision and clarity. It should be detailed enough to be of value to experts in the field while still being accessible to those with a general background in the discipline.


The overview of criteria for a scientific description includes terms like ‘Replicability’, ‘Citing Sources’, and ‘Transparency’. From the preceding self-description of chatGPT4 it is not easy to see whether these criteria can also be applied to the output of chatGPT4.

Let’s transform this into a question:


Your overview about criteria for a scientific description includes terms like ‘Replicability’, ‘Citing Sources’, and ‘Transparency’. From the preceding self-description of you it is not easy to see whether these criteria can also be applied to the output of chatGPT4.


That’s an insightful observation. Let’s discuss the applicability of some of these criteria to ChatGPT (like me) and its outputs:

  1. Replicability: While the underlying research and development process of models like mine can be replicated in theory, the specific outputs for each query might vary slightly, even with similar inputs. Therefore, individual responses from me aren’t precisely replicable.
  2. Citing Sources: I don’t provide direct citations for specific statements in my responses. My training data comes from diverse sources, and I generate responses based on patterns in that data. Thus, I can’t directly attribute my statements to specific sources like a scholarly article or a book. This is indeed a limitation if you’re considering the output as a “scientific description.”
  3. Transparency: OpenAI, the organization behind me, has released details about the architecture, training methodology, and intentions behind models like mine. However, the specifics of the training data or exact processes that lead to a particular response aren’t transparently available in each interaction.

In light of this:

  • For scholarly or academic purposes, my responses should be considered as starting points or general overviews. Critical information should always be cross-referenced with authoritative and primary sources.
  • For general inquiries or casual information, my design is to provide accurate and coherent answers based on the vast amount of data I’ve been trained on.

It’s crucial to recognize the intended use and context when considering the applicability of scientific criteria to any tool or source, including ChatGPT.


[1] There was a conference at the Technical University of Darmstadt entitled “KI – Text und Geltung. Wie verändern KI-Textgeneratoren wissenschaftliche Diskurse?” (Translated: AI – Text and Validity. How do AI text generators change scientific discourses?). One lecturer, Thomas Arnold, gave a talk entitled “Herausforderungen in der Forschung: Mangelnde Reproduzierbarkeit und Erklärbarkeit” (Translated: Challenges in research: lack of reproducibility and explainability), which will be published by de Gruyter at the end of 2023 or the beginning of 2024.


Homo Sapiens: empirical and sustained-empirical theories, emotions, and machines. A sketch

Author: Gerd Doeben-Henisch


Aug 24, 2023 — Aug 29, 2023 (10:48h CET)

Attention: This text has been translated from a German source by using the software deepL for roughly 97–99% of the text! The diagrams of the German version have been left out.


This text represents the outline of a talk given at the conference “AI – Text and Validity. How do AI text generators change scientific discourse?” (August 25/26, 2023, TU Darmstadt). [1] A publication of all lectures is planned by the publisher Walter de Gruyter by the end of 2023/beginning of 2024. This publication will be announced here then.

Start of the Lecture

Dear Auditorium,

This conference entitled “AI – Text and Validity. How do AI text generators change scientific discourses?” is centrally devoted to scientific discourses and the possible influence of AI text generators on these. However, the hot core ultimately remains the phenomenon of text itself, its validity.

In this conference many different views are presented that are possible on this topic.


My contribution to the topic tries to define the role of the so-called AI text generators by embedding the properties of ‘AI text generators’ in a ‘structural conceptual framework’ within a ‘transdisciplinary view’. This helps to highlight the specifics of scientific discourses. This can then further result in better ‘criteria for an extended assessment’ of AI text generators in their role for scientific discourses.

An additional aspect is the question of the structure of ‘collective intelligence’ using humans as an example, and how this can possibly unite with an ‘artificial intelligence’ in the context of scientific discourses.

‘Transdisciplinary’ in this context means to span a ‘meta-level’ from which it should be possible to describe today’s ‘diversity of text productions’ in a way that is expressive enough to distinguish ‘AI-based’ text production from ‘human’ text production.


The formulation ‘scientific discourse’ is a special case of the more general concept ‘human text generation’.

This change of perspective is meta-theoretically necessary, since at first sight it is not the ‘text as such’ that decides about ‘validity and non-validity’, but the ‘actors’ who ‘produce and understand texts’. And with the occurrence of ‘different kinds of actors’ – here ‘humans’, there ‘machines’ – one cannot avoid addressing exactly those differences – if there are any – that play a weighty role in the ‘validity of texts’.


With the distinction between two different kinds of actors – here ‘humans’, there ‘machines’ – a first ‘fundamental asymmetry’ immediately strikes the eye: so-called ‘AI text generators’ are entities that have been ‘invented’ and ‘built’ by humans; it is furthermore humans who ‘use’ them; and the essential material used by so-called AI generators consists again of ‘texts’ that are considered a ‘human cultural property’.

In the case of so-called ‘AI text generators’, we shall first state only this much: we are dealing with ‘machines’ which have ‘input’ and ‘output’, plus a minimal ‘learning ability’, and which can process ‘text-like objects’ as input and output.


On the meta-level, then, we assume, on the one hand, actors which are minimally ‘text-capable machines’ – completely human products – and, on the other hand, actors we call ‘humans’. Humans, as a ‘homo-sapiens population’, belong to the set of ‘biological systems’, while ‘text-capable machines’ belong to the set of ‘non-biological systems’.


The transformation of the term ‘AI text generator’ into the term ‘text-capable machine’ undertaken here is intended to additionally illustrate that the widespread use of the term ‘AI’ for ‘artificial intelligence’ is rather misleading. To this day there exists no general concept of ‘intelligence’ in any scientific discipline that can be applied and accepted beyond individual disciplines. There is no real justification for the almost inflationary use of the term AI today other than that the term has been so drained of meaning that it can be used anytime, anywhere, without saying anything wrong. Something that has no meaning can be neither ‘true’ nor ‘false’.


If now the homo-sapiens population is identified as the original actor for ‘text generation’ and ‘text comprehension’, it shall now first be examined which are ‘those special characteristics’ that enable a homo-sapiens population to generate and comprehend texts and to ‘use them successfully in the everyday life process’.


A connecting point for the investigation of the special characteristics of a homo-sapiens text generation and a text understanding is the term ‘validity’, which occurs in the conference topic.

In the primary arena of biological life, in everyday processes, in everyday life, the ‘validity’ of a text has to do with ‘being correct’, with being ‘applicable’. If a text is not planned from the beginning with a ‘fictional character’, but with a ‘reference to everyday events’ which everyone can ‘check’ in the context of his ‘perception of the world’, then ‘validity in everyday life’ has to do with the fact that the ‘correctness of a text’ can be checked. If the ‘statement of a text’ is ‘applicable’ in everyday life, if it is ‘correct’, then one also says that this statement is ‘valid’; one grants it ‘validity’, one also calls it ‘true’. Against this background, one might be inclined to continue and say: ‘if’ the statement of a text ‘does not apply’, then it has ‘no validity’; simplified to the formulation that the statement is ‘not true’ or simply ‘false’.

In ‘real everyday life’, however, the world is rarely ‘black’ and ‘white’: it is not uncommon that we are confronted with texts to which we are inclined to ascribe ‘a possible validity’ because of their ‘learned meaning’, although it may not be at all clear whether there is – or will be – a situation in everyday life in which the statement of the text actually applies. In such a case, the validity would then be ‘indeterminate’; the statement would be ‘neither true nor false’.


One can recognize a certain asymmetry here: the ‘applicability’ of a statement – its actual validity – is comparatively easy to determine. A ‘merely possible’ validity of a statement that is not currently applicable is, by contrast, difficult to decide.

With this phenomenon of the ‘current non-decidability’ of a statement we touch both the problem of the ‘meaning’ of a statement — how far is at all clear what is meant? — as well as the problem of the ‘unfinishedness of our everyday life’, better known as ‘future’: whether a ‘current present’ continues as such, whether exactly like this, or whether completely different, depends on how we understand and estimate ‘future’ in general; what some take for granted as a possible future, can be simply ‘nonsense’ for others.


This tension between ‘currently decidable’ and ‘currently not yet decidable’ additionally clarifies an ‘autonomous’ aspect of the phenomenon of meaning: if a certain knowledge has been formed in the brain and has been made usable as ‘meaning’ for a ‘language system’, then this ‘associated’ meaning gains its own ‘reality’ for the scope of knowledge: it is not the ‘reality beyond the brain’, but the ‘reality of one’s own thinking’, whereby this reality of thinking ‘seen from outside’ has something like ‘being virtual’.

If one wants to talk about this ‘special reality of meaning’ in the context of the ‘whole system’, then one has to resort to far-reaching assumptions in order to be able to install a ‘conceptual framework’ on the meta-level which is able to sufficiently describe the structure and function of meaning. For this, the following components are minimally assumed (‘knowledge’, ‘language’ as well as ‘meaning relation’):

KNOWLEDGE: There is the totality of ‘knowledge’ that ‘builds up’ in the homo-sapiens actor in the course of time in the brain: both due to continuous interactions of the ‘brain’ with the ‘environment of the body’, as well as due to interactions ‘with the body itself’, as well as due to interactions ‘of the brain with itself’.

LANGUAGE: To be distinguished from knowledge is the dynamic system of ‘potential means of expression’, here simplistically called ‘language’, which can unfold over time in interaction with ‘knowledge’.

MEANING RELATIONSHIP: Finally, there is the dynamic ‘meaning relation’, an interaction mechanism that can link any knowledge elements to any language means of expression at any time.

Each of these mentioned components ‘knowledge’, ‘language’ as well as ‘meaning relation’ is extremely complex; no less complex is their interaction.


Besides the phenomenon of meaning, the phenomenon of applicability also showed that the decision about applicability depends on an ‘available everyday situation’ in which a current correspondence can be ‘concretely shown’ or not.

If, in addition to a ‘conceivable meaning’ in the mind, we do not currently have any everyday situation that sufficiently corresponds to this meaning in the mind, then there are always two possibilities: We can give the ‘status of a possible future’ to this imagined construct despite the lack of reality reference, or not.

If we decide to assign the status of a possible future to a ‘meaning in the head’, then two requirements usually arise: (i) Can it be made sufficiently plausible, in the light of the available knowledge, that the ‘imagined possible situation’ can be ‘transformed into a new real situation’ in the ‘foreseeable future’, starting from the current real situation? And (ii) are there ‘sustainable reasons’ why one should ‘want and affirm’ this possible future?

The first requirement calls for a powerful ‘science’ that sheds light on whether it can work at all. The second demand goes beyond this and brings the seemingly ‘irrational’ aspect of ’emotionality’ into play under the garb of ‘sustainability’: it is not simply about ‘knowledge as such’, it is also not only about a ‘so-called sustainable knowledge’ that is supposed to contribute to supporting the survival of life on planet Earth — and beyond –, it is rather also about ‘finding something good, affirming something, and then also wanting to decide it’. These last aspects are so far rather located beyond ‘rationality’; they are assigned to the diffuse area of ’emotions’; which is strange, since any form of ‘usual rationality’ is exactly based on these ’emotions’.[2]


In the context of ‘rationality’ and ’emotionality’ just indicated, it is not uninteresting that in the conference topic ‘scientific discourse’ is thematized as a point of reference to clarify the status of text-capable machines.

The question is to what extent a ‘scientific discourse’ can serve as a reference point for a successful text at all.

For this purpose it can help to be aware of the fact that life on this planet earth takes place at every moment in an inconceivably large amount of ‘everyday situations’, which all take place simultaneously. Each ‘everyday situation’ represents a ‘present’ for the actors. And in the heads of the actors there is an individually different knowledge about how a present ‘can change’ or will change in a possible future.

This ‘knowledge in the heads’ of the actors involved can generally be ‘transformed into texts’ which in different ways ‘linguistically represent’ some of the aspects of everyday life.

The crucial point is that it is not enough for everyone to produce a text ‘for himself’ alone, quite ‘individually’, but that everyone must produce a ‘common text’ together ‘with everyone else’ who is also affected by the everyday situation. A ‘collective’ performance is required.

Nor is it a question of ‘any’ text, but one that is such that it allows for the ‘generation of possible continuations in the future’, that is, what is traditionally expected of a ‘scientific text’.

From the extensive discussion — since the times of Aristotle — of what ‘scientific’ should mean, what a ‘theory’ is, what an ’empirical theory’ should be, I sketch what I call here the ‘minimal concept of an empirical theory’.

  1. The starting point is a ‘group of people’ (the ‘authors’) who want to create a ‘common text’.
  2. This text is supposed to have the property that it allows ‘justifiable predictions’ for possible ‘future situations’, to which then ‘sometime’ in the future a ‘validity can be assigned’.
  3. The authors are able to agree on a ‘starting situation’ which they transform by means of a ‘common language’ into a ‘source text’ [A].
  4. It is agreed that this initial text may contain only ‘such linguistic expressions’ which can be shown to be ‘true’ ‘in the initial situation’.
  5. In another text, the authors compile a set of ‘rules of change’ [V] that put into words ‘forms of change’ for a given situation.
  6. Also in this case it is considered as agreed that only ‘such rules of change’ may be written down, of which all authors know that they have proved to be ‘true’ in ‘preceding everyday situations’.
  7. The text with the rules of change V is on a ‘meta-level’ compared to the text A about the initial situation, which is on an ‘object-level’ relative to the text V.
  8. The ‘interaction’ between the text V with the change rules and the text A with the initial situation is described in a separate ‘application text’ [F]: Here it is described when and how one may apply a change rule (in V) to a source text A and how this changes the ‘source text A’ to a ‘subsequent text A*’.
  9. The application text F is thus on a next higher meta-level relative to the two texts A and V and controls how applying the change rules transforms the source text A.
  10. The moment a new subsequent text A* exists, the subsequent text A* becomes the new initial text A.
  11. If the new initial text A is such that a change rule from V can be applied again, then the generation of a new subsequent text A* is repeated.
  12. This ‘repeatability’ of the application can lead to the generation of many subsequent texts <A*1, …, A*n>.
  13. A series of many subsequent texts <A*1, …, A*n> is usually called a ‘simulation’.
  14. Depending on the nature of the source text A and the nature of the change rules in V, possible simulations ‘can run quite differently’. The set of possible scientific simulations thus represents ‘future’ not as a single, definite course, but as an ‘arbitrarily large set of possible courses’.
  15. The factors on which different courses depend are manifold. One factor is the authors themselves. Every author is, with his corporeality, entirely part of that very empirical world which is to be described in a scientific theory. And, as is well known, any human actor can change his mind at any moment. He can literally, in the next moment, do exactly the opposite of what he thought before. And thus the world is already no longer the same as previously assumed in the scientific description.
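The steps above describe, in effect, a small rewrite system: an initial text A, a set of change rules V, and an application procedure F that repeatedly produces subsequent texts A*. The following Python sketch is only an illustration under simplifying assumptions (statements are modeled as plain strings, and the first applicable rule is always chosen); the names A, V, F follow the letters used in the text, but the code is the editor's toy model, not an implementation given by the author.

```python
from typing import FrozenSet, List, Tuple

# A 'text' A is modeled as a set of statements held to be true; a
# 'change rule' in V pairs a condition with an effect (statements
# removed and statements added).
State = FrozenSet[str]
Rule = Tuple[FrozenSet[str], FrozenSet[str], FrozenSet[str]]  # (condition, remove, add)

def applicable(rule: Rule, a: State) -> bool:
    """The application text F first checks whether a rule's condition
    holds in the current state A."""
    condition, _, _ = rule
    return condition <= a

def apply_rule(rule: Rule, a: State) -> State:
    """F then transforms the source text A into a subsequent text A*."""
    _, remove, add = rule
    return (a - remove) | add

def simulate(a: State, v: List[Rule], steps: int) -> List[State]:
    """Repeated application yields the series <A*1, ..., A*n>,
    i.e. a 'simulation' (here: the first applicable rule wins)."""
    course = [a]
    for _ in range(steps):
        rule = next((r for r in v if applicable(r, course[-1])), None)
        if rule is None:
            break  # no rule applies: the simulation halts
        course.append(apply_rule(rule, course[-1]))
    return course
```

Choosing a different rule at a branch point would produce a different course; enumerating all such choices is what spans the ‘set of possible courses’ mentioned in step 14.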

Even this simple example shows that the emotionality of ‘finding good, wanting, and deciding’ precedes the rationality of scientific theories. This continues in the so-called ‘sustainability discussion’.


With the ‘minimal concept of an empirical theory (ET)’ just introduced, a ‘minimal concept of a sustainable empirical theory (NET)’ can also be introduced directly.

While an empirical theory can span an arbitrarily large space of grounded simulations that make the space of many possible futures visible, everyday actors are left with the question of what, out of all this, they want as ‘their future’. At present we experience a situation in which humankind gives the impression of having agreed to destroy, ever more lastingly, the life beyond the human population – with the expected effect of ‘self-destruction’.

However, this self-destruction effect, which can be predicted in outline, is only one variant in the space of possible futures. Empirical science can indicate it in outline. To distinguish this variant before others, to accept it as ‘good’, to ‘want’ it, to ‘decide’ for this variant, lies in that so far hardly explored area of emotionality as root of all rationality.[2]

If everyday actors have decided in favor of a certain rationally illuminated variant of a possible future, then – provided the favored target state is transformed into a suitable text Z – they can evaluate at any time, with a suitable ‘evaluation procedure (EVAL)’, what percentage (%) of the properties of the target state Z has been achieved so far.
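Such an evaluation procedure can be sketched minimally: if the target text Z and the current situation are both reduced to sets of properties, the degree of fulfillment is simply the share of Z's properties that already hold. This is an illustrative toy under that assumption, not the author's actual procedure.

```python
def eval_progress(current: frozenset, target: frozenset) -> float:
    """Minimal sketch of EVAL: the percentage of the properties of the
    target state Z (here 'target') that already hold in the current
    situation (here 'current'). Properties are modeled as plain
    statements, e.g. strings."""
    if not target:
        return 100.0  # an empty target state is trivially fulfilled
    return 100.0 * len(current & target) / len(target)
```

For example, if two of four target properties hold in the current situation, EVAL would report 50%.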

In other words, the moment we have transformed everyday scenarios into a rationally tangible state via suitable texts, things take on a certain clarity and thereby become — in a sense — simple. That we make such transformations and on which aspects of a real or possible state we then focus is, however, antecedent to text-based rationality as an emotional dimension.[2]


After these preliminary considerations, the final question is whether and how the main question of this conference – “How do AI text generators change scientific discourse?” – can be answered at all.

My previous remarks have attempted to show what it means for humans to collectively generate texts that meet the criteria of scientific discourse and, beyond that, the requirements of empirical or even sustainable empirical theories.

In doing so, it becomes apparent that both the generation of a collective scientific text and its application in everyday life involve a close interrelation with the shared experiential world as well as with the dynamic knowledge and meaning components in each actor.

The aspect of ‘validity’ is part of a dynamic world reference whose assessment as ‘true’ is constantly in flux; while one actor may tend to say “Yes, can be true”, another actor may just tend to the opposite. While some may tend to favor possible future option X, others may prefer future option Y. Rational arguments are absent; emotions speak. While one group has just decided to ‘believe’ and ‘implement’ plan Z, the others turn away, reject plan Z, and do something completely different.

This unsteady, uncertain character of interpreting and acting toward the future has accompanied the Homo sapiens population from the very beginning. The still poorly understood emotional complex constantly accompanies everyday life like a shadow.

Where and how can ‘text-enabled machines’ make a constructive contribution in this situation?

Assuming that there is a source text A, a change text V and an instruction F, today’s algorithms could calculate all possible simulations faster than humans could.

Assuming that there is also a target text Z, today’s algorithms could also compute an evaluation of the relationship between a current situation as A and the target text Z.

In other words: if an empirical or a sustainable-empirical theory would be formulated with its necessary texts, then a present algorithm could automatically compute all possible simulations and the degree of target fulfillment faster than any human alone.

But what about the (i) elaboration of a theory or (ii) the pre-rational decision for a certain empirical or even sustainable-empirical theory ?

A clear answer to both questions seems hardly possible to me at the present time, since we humans still understand too little how we ourselves collectively form, select, check, compare and also reject theories in everyday life.

My working hypothesis on the subject is that we will very much need machines capable of learning in order to fulfill the task of developing useful sustainable empirical theories for our common everyday life in the future. But when this will happen in reality, and to what extent, seems largely unclear to me at this point in time.[2]



[2] Talking about ‘emotions’ in the sense of ‘factors in us’ that move us from the state ‘before the text’ to the state ‘written text’ hints at very many aspects. In a small exploratory text “State Change from Non-Writing to Writing. Working with chatGPT4 in parallel” ( ) the author has tried to address some of these aspects. While writing, it becomes clear that very many ‘individually subjective’ aspects play a role here, which of course do not appear ‘isolated’ but always flash up a reference to the concrete contexts linked to the topic. Nevertheless, it is not the ‘objective context’ that forms the core statement, but the ‘individually subjective’ component that appears in the process of ‘putting into words’. This individually subjective component is tentatively used here as a criterion for ‘authentic texts’ in comparison to ‘automated texts’ such as those that can be generated by all kinds of bots. In order to make this difference more tangible, the author decided to create an ‘automated text’ on the same topic at the same time as the quoted authentic text. For this purpose he used chatGPT4 from OpenAI. This is the beginning of a philosophical-literary experiment, perhaps to make the possible difference more visible in this way. For purely theoretical reasons, it is clear that a text generated by chatGPT4 can never produce ‘authentic texts’ in origin, unless it uses an authentic text as a template that it can modify. But then this is a clear ‘fake’. To prevent such an abuse, the author writes the authentic text first and then asks chatGPT4 to write something about the given topic without chatGPT4 knowing the authentic text, because it has not yet found its way into the database of chatGPT4 via the Internet.


(July 12, 2023 – August 24, 2023)

(The following text was created with the support of the software deepL from a German text)

–!! To be continued !!–

–!! See new comment at the end of the text (Aug 24, 2023)!!–


We live in a time in which – if one takes different perspectives – very many different partial world views can be perceived, world views which are not easily ‘compatible’ with each other. Thus, so far, the ‘physical’ and the ‘biological’ worldviews do not necessarily seem to be ‘aligned’ with each other. In addition, there are different directions within each of these worldviews. Where do the social sciences stand here: Not physics, not biology, but then what? The economic sciences also seem to be ‘surfing across’ everything else … this list could easily be extended. Within these assemblies of worldviews, a new worldview emerges quite freshly, that of so-called ‘artificial intelligence’; it is also almost completely unmediated with everything else, but makes heavy use of terminology borrowed from psychology and biology, without adopting the usual conceptual contexts of these terms.

This diversity can be seen as positive if it stimulates thinking and thus perhaps enables new, exciting ‘syntheses’. However, such syntheses are nowhere to be seen. The terms ‘interdisciplinary’, ‘multidisciplinary’ or even ‘transdisciplinary’ appear ever more often in texts, as a reminder, as a call to integrate diversity in a fruitful way, but little of this has materialized so far. The average university teacher at an average university still tends to be ‘punished’ for venturing out of his disciplinary niche. The ‘curricular norms’ that determine what time may be spent on what content with how many students do not normally provide for multidisciplinary or even transdisciplinary teaching. And when the single-discipline-trained researcher throws himself into a multidisciplinary research project (if he does so at all), then in the end usually only single-discipline work comes out again …

Against this panorama of many worldviews, the following text will attempt to interpret how the concept of ‘intelligence’ could be grasped and classified today – taking into account the whole range of worldviews. Thereby a special accent will be put on the phenomenon ‘Homo sapiens’: Although Homo sapiens is only a very small sub-population within the whole of the biological, in the course of evolution it takes nevertheless up to now a special position in multiple senses, which shall be considered in this attempt of interpretation.

‘INTELLIGENCE’ – An interpretive hypothesis

This text intends a conceptual clarification of the concept ‘intelligence’ in the larger context of ‘biological systems’. ‘Machine systems’ with specific ‘behavioral properties’ that show ‘similarities’ to properties ‘usually’ called ‘intelligent’ in biological systems are then called ‘machine forms of intelligence’ in this text. However, ‘similarities’ in ‘behavior’ cannot be used to infer ‘similarities in enabling structures’: a ‘behavior X’ can be produced by a variety of ‘enabling structures’ which may differ among themselves. Statements about the ‘intelligence of a system’ therefore refer specifically to ‘behavioral properties’ of that system that can be observed within a particular ‘action environment’ within a particular ‘time interval’. Explicit talk about ‘intelligence’ further presupposes a ‘virtual concept intelligence’, represented in the form of a text, which can classify the many individual ‘empirical observations’ into ‘virtual contexts/relationships’ in two ways: (i) when certain behaviors ‘occur’, the ‘virtual concept intelligence’ allows one to assign (classify) the occurring phenomena to the area of ‘intelligent behavior’; and (ii) starting from a ‘given situation’, the ‘virtual concept intelligence’ allows one to make conditional ‘predictions’ about ‘possible behaviors’ of the target system that can be ‘expected’ and ‘empirically verified’.

New Comment

While I was preparing a public lecture for a conference at the Technical University of Darmstadt (Germany) ( ) I decided to abandon the concept of ‘intelligence’ as well as ‘artificial intelligence’ for the near future. The meaning of these concepts is meanwhile completely ‘blurred’ / ‘fuzzy’; to use these concepts or not doesn’t change anything.

COLLECTIVE (man-machine) INTELLIGENCE and SUSTAINABILITY. An investigation

(June 21, 2023 – June 22, 2023)

–!! Not yet finished !!–


The steady progress of science has defeated many familiar ideas from the past, and this change of concepts continues. This applies to concepts like ‘intelligence’, ‘collective intelligence’, ‘man’, ‘machine’, ‘artificial intelligence’, ‘life’, ‘matter’ and many more.

Such conceptual changes are always difficult to describe. Ideally one would be an ‘external observer’ with a ‘full view’ of everything that is going on, additionally possessing ‘full knowledge’ of all the features and dynamics of the field of phenomena.

But we are not. We are part of the process ourselves. Our understanding is interspersed with familiar images and at the same time with new questions and new partial views. Under these conditions, a ‘consistent new view’ of the whole process can only be worked out stepwise, accompanied by experiments to check the viability of each new aspect of the new view.

And one should not forget: the ‘reader’ of such a text lives under the same conditions – a mixture of everything is possible. Understanding can therefore fail not because a certain text is ‘wrong’ or ‘bad’, but because at the moment of reading the ‘models in the heads’ of reader and writer do not ‘overlap enough’. Then there is no chance of understanding, because we depend completely on the ‘models in our heads’.

Accepting this, the following text is an undertaking to describe a special view of life in this universe by laying out some possible principles according to which this new view could be constructed.


Because at the beginning of this writing the final outcome is open and the ‘way to reach the result’ is as such difficult, the author decided to make the research process directly the content of a process article.

The following parts of the process article seem to be important:

  1. Describe a ‘working hypothesis’ at the start.
  2. Look for ‘arguments pro or contra’.
  3. Look for ‘other texts’ related to these arguments (always pro & contra).
  4. Make decisions after every step, whether an argument (and possibly different texts) supports or criticizes or modifies the working hypothesis.
  5. Give a new version of the working hypothesis, if necessary.

Moreover it has to be ‘monitored’ (meta-level) whether this procedure works satisfactorily.


To begin, a first version of the working hypothesis has to be formulated. What is ‘given’ as an ‘assumption’ are the concepts ‘COLLECTIVE INTELLIGENCE’ with the special focus on the role of the intelligence of ‘man’ and ‘machines’ as part of a — possibly larger — concept of ‘INTELLIGENCE’. Furthermore it is assumed, that these concepts shall be investigated in the context of the question of a possible ‘SUSTAINABILITY’ of the hybrid ‘man-machine’ cooperation as part of the ‘whole life (the ‘biosphere’)’ on this planet, even extended to the whole known universe.

To elaborate these concepts more concretely, and as a ‘hypothesis’ which can later be ‘tested’ as to whether it ‘works’ or not, one needs a ‘minimal vision’ of what shall be assumed as a ‘wishful future’ for a biosphere with a man-machine pair as part of it.

A ‘wishful future’ which can be ‘tested’ requires (i) a ‘description of a state’ located some time ahead; (ii) a ‘way into this future’ that can be described with a clear ‘starting point’ – e.g. the year 2023; (iii) sufficient knowledge about all possible changes which can ‘transform’ the actual situation stepwise, so that it is highly probable that we will finally reach the ‘envisioned future state’ – especially important here are those changes which can be triggered by our own actions as humankind; and (iv) clear instructions on how to apply the changes in order to be successful.

To (i): Wishful State

What would a citizen somewhere on this planet answer if asked: “What do you think is a ‘wishful state’ in the future?”

It does not take much imagination to see that we would get nearly as many different answers as there are citizens living on this planet.

To (ii): The ‘way into this future’

To (iii): ‘Knowledge about all possible changes’

To (iv): ‘Clear instructions how to apply’


wkp-en :=

[2023] Raymond Noble (University College London), Denis Noble (University of Oxford), Understanding Living Systems, Cambridge University Press. (Expected online publication June 23). Words by the publisher: “Life is definitively purposive and creative. Organisms use genes in controlling their destiny. This book presents a paradigm shift in understanding living systems. The genome is not a code, blueprint or set of instructions. It is a tool orchestrated by the system. This book shows that gene-centrism misrepresents what genes are and how they are used by living systems. It demonstrates how organisms make choices, influencing their behaviour, their development and evolution, and act as agents of natural selection. It presents a novel approach to fundamental philosophical and cultural issues, such as free-will. Reading this book will make you see life in a new light, as a marvellous phenomenon, and in some sense a triumph of evolution. We are not in our genes, our genes are in us.”

[2023] Benedict Rattigan, Denis Noble, Afiq Hatta (Eds.), The Language of Symmetry, CRC Press

[2022] Raymond Noble and Denis Noble, Physiology restores purpose to evolutionary biology, Biological Journal of the Linnean Society, 2022, XX, 1–13. With 3 figures. Abstract: “Life is purposefully creative in a continuous process of maintaining integrity; it adapts to counteract change. This is an ongoing, iterative process. Its actions are essentially directed to this purpose. Life exists to exist. Physiology is the study of purposeful living function. Function necessarily implies purpose. This was accepted all the way from William Harvey in the 17th century, who identified the purpose of the heart to pump blood and so feed the organs and tissues of the body, through many 19th and early 20th century examples. But late 20th century physiology was obliged to hide these ideas in shame. Teleology became the ‘lady who no physiologist could do without, but who could not be acknowledged in public.’ This emasculation of the discipline accelerated once the Central Dogma of molecular biology was formulated, and once physiology had become sidelined as concerned only with the disposable vehicle of evolution. This development has to be reversed. Even on the practical criterion of relevance to health care, gene-centrism has been a disaster, since prediction from elements to the whole system only rarely succeeds, whereas identifying whole system functions invariably makes testable predictions at an elemental level.”

[2017] Manuel Vogel, review of From Matter to Life: Information and Causality, edited by S. I. Walker, P. C. W. Davies and G. F. R. Ellis (edited book; level: general readership), in Contemporary Physics, June 2017

[2017] S. I. Walker, P. C. W. Davies and G. F. R. Ellis (Eds.), From Matter to Life. Information and Causality, Cambridge University Press

[2017] Denis Noble, Dance to the Tune of Life. Biological Relativity, Cambridge University Press

[2007] Denis Noble, Video Lecture, 2007, “Principle of Systems Biology illustrated using the Virtual Heart”, URL:

[2006] Denis Noble, The Music of Life. Biology beyond the genome, Oxford University Press Inc., New York

[] Denis Noble in wkp-en:

THINKING: everyday – philosophical – empirical theoretical (sketch)

(First: June 9, 2023 – Last change: June 10, 2023)

Comment: This post is a translation from a German text in my blog ‘’ with the aid of the deepL software


The current phase of my thinking continues to revolve around the question of how the various states of knowledge relate to each other: the many individual scientific disciplines drift along side by side; philosophy continues to claim supremacy but cannot really locate itself convincingly; and everyday thinking continues to run its course unperturbed, convinced that ‘everything is clear’, that you just have to look at things ‘as they are’. Then the different ‘religious views’ come around the corner, with very high demands and a simultaneous prohibition against looking too closely. … and much more.


In the following text three fundamental ways of looking at our present world are outlined and at the same time they are put in relation to each other. Some hitherto unanswered questions can possibly be answered better, but many new questions arise as well. When ‘old patterns of thinking’ are suspended, many (most? all?) of the hitherto familiar patterns of thinking have to be readjusted. All of a sudden they are simply ‘wrong’ or strongly ‘in need of repair’.

Unfortunately it is only a ‘sketch’.[1]


FIG. 1: In everyday thinking, every human being (a ‘homo sapiens’ (HS)) assumes that what he knows of a ‘real world’ is what he ‘perceives’. He is – more or less – ‘aware’ that this real world with its properties exists; there is no need to discuss it specially. What ‘is, is’.

… much could be said …


FIG. 2: Philosophical thinking starts where one notices that the ‘real world’ is not perceived by all people in ‘the same way’, and even less ‘imagined’ in the same way. Some people have ‘their ideas’ about the real world that are strikingly ‘different’ from other people’s ideas, and yet they insist that the world is exactly as they imagine it. From this everyday observation, many new questions can arise. The answers to these questions are as manifold as the people who devoted or still devote themselves to these philosophical questions.

… famous examples: Plato’s allegory of the cave suggests that the contents of our consciousness are perhaps not ‘the things themselves’ but only the ‘shadows’ of what is ultimately ‘true’ … Descartes’ famous ‘cogito ergo sum’ brings into play the aspect that the contents of consciousness also say something about the one who ‘consciously perceives’ such contents: the ‘existence of the contents’ presupposes his ‘existence as thinker’, without which the existence of the contents would not be possible at all … what does this tell us? … Kant’s famous ‘thing in itself’ (‘Ding an sich’) can be referred to the insight that concrete, fleeting perceptions can never directly show the ‘world as such’ in its ‘generality’. This lies ‘somewhere behind’, hard to grasp – perhaps not graspable at all? …

… many things could be said …


FIG. 3: The concept of an ‘empirical theory’ developed very late in the documented history of man on this planet. Philosophically inspired on the one hand, independent of the widespread forms of philosophy on the other, but very strongly influenced by logical and mathematical thinking, the new ‘empirical theoretical’ thinking settled exactly at the breaking point between ‘everyday thinking’ and ‘theological’ as well as ‘strongly metaphysical philosophical thinking’. The fact that people could make statements about the world with the tone of full conviction, although no ‘common experiences of the real world’ could be shown which ‘corresponded’ to the expressed statements, inspired individual people to investigate the ‘experiential (empirical) world’ in such a way that everyone else could have the ‘same experiences’ with ‘the same procedure’. These ‘transparent procedures’ were ‘repeatable’, and such procedures became what was later called an ‘empirical experiment’ or then, one step further, a ‘measurement’. In ‘measuring’ one compares the ‘result’ of a certain experimental procedure with a ‘previously defined standard object’ (‘kilogram’, ‘meter’, …).

This procedure led to the fact that – at least the experimenters – ‘learned’ that our knowledge about the ‘real world’ breaks down into two components: there is the ‘general knowledge’ that our language can articulate, with terms that do not automatically have to have something to do with the ‘experiential world’, and there are terms that can be associated with experimental experiences, in such a way that other people, if they engage in the experimental procedure, can also repeat and thereby confirm these experiences. A rough distinction between these two kinds of linguistic expressions might be: ‘fictive’ expressions with unclarified claims to experience, and ‘empirical’ expressions with confirmed claims to experience.

Since the beginning of the new empirical-theoretical way of thinking in the 17th century, it took at least 300 years until the concept of an ’empirical theory’ was consolidated to such an extent that it became a defining paradigm in many areas of science. However, many methodological questions remained controversial or even ‘unsolved’.


For many centuries, the ‘misuse of everyday language’ for making ‘empirically unverifiable statements’ was directly chalked up to this everyday language itself, and the whole of everyday language was discredited as a ‘source of untruths’. Liberation from this ‘monster of everyday language’ was increasingly sought in formal artificial languages, and then in modern axiomatized mathematics, which had entered into a close alliance with modern formal logic (from the end of the 19th century). The expression systems of modern formal logic and of modern formal mathematics had as such (almost) no ‘intrinsic meaning’; meanings had to be introduced explicitly on a case-by-case basis. A ‘formal mathematical theory’ could be formulated in such a way that it allowed ‘logical inferences’ even without ‘explicit assignment’ of an ‘external meaning’, which allowed certain formal expressions to be called ‘formally true’ or ‘formally false’.

This seemed very ‘reassuring’ at first sight: mathematics as such is not a place of ‘false’ or ‘foisted’ truths.

The intensive use of formal theories in connection with experience-based experiments, however, gradually made clear that a single measured value as such does not actually have any ‘meaning’ either: what is it supposed to ‘mean’ that at a certain ‘time’ at a certain ‘place’ one establishes an ‘experienceable state’ with certain ‘properties’, ideally comparable to a previously agreed ‘standard object’? ‘Extensions’ of bodies can change, ‘weight’ and ‘temperature’ as well. Everything in the world of experience can change, fast or slow … so what can a single isolated measured value say?

It dawned on some – not only the experience-based researchers, but also some philosophers – that single measured values only get a ‘meaning’, a possible ‘sense’, if one can at least establish ‘relations’ between single measured values: relations ‘in time’ (before – after), relations at/in place (higher – lower, next to each other, …), ‘interrelated quantities’ (objects – areas, …); and, furthermore, that the different ‘relations’ themselves in turn need a ‘conceptual context’ (individual – set, interactions, causal – non-causal, …).

Finally, it became clear that single measured values needed ‘class terms’, so that they could be classified somehow: abstract terms like ‘tree’, ‘plant’, ‘cloud’, ‘river’, ‘fish’ etc. became ‘collection points’, where one could deliver ‘single observations’. With this, hundreds and hundreds of single values could then be used, for example, to characterize the abstract term ‘tree’ or ‘plant’ etc.

This distinction into ‘single, concrete’ and ‘abstract, general’ turns out to be fundamental. It also made clear that the classification of the world by means of such abstract terms is ultimately ‘arbitrary’: both ‘which terms’ one chooses is arbitrary, and the assignment of individual experiential data to abstract terms is not unambiguously settled in advance. The process of assigning individual experiential data to particular terms within a ‘process in time’ is itself strongly ‘hypothetical’ and in turn part of other ‘relations’ which can provide additional ‘criteria’ as to whether datum X more likely belongs to term A or to term B (biology is full of such classification problems).
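The hypothetical character of such assignments can be pictured with a small sketch. All class terms, features, and prototype values below are invented purely for illustration; the point is only that a single measured datum gains ‘meaning’ relative to agreed ‘collection points’, and that the assignment is a ranking, not a settled fact:

```python
# Illustrative sketch: assigning individual observations to abstract class terms.
# The class terms, features, and prototype values are invented for this example.

# Each abstract class term acts as a 'collection point', here described by a
# prototype of typical measured values (height and stem diameter in meters).
prototypes = {
    "tree":  {"height": 10.0, "stem_diameter": 0.4},
    "shrub": {"height": 1.5,  "stem_diameter": 0.05},
}

def classify(observation):
    """Assign a single observation to the 'closest' class term.

    The assignment stays hypothetical: it only ranks the class terms by
    distance; further relations could revise it."""
    def distance(proto):
        return sum((observation[k] - proto[k]) ** 2 for k in proto)
    return min(prototypes, key=lambda term: distance(prototypes[term]))

# A single measured datum, meaningless in isolation, is placed under a term:
datum = {"height": 8.0, "stem_diameter": 0.3}
print(classify(datum))  # -> tree
```

The choice of prototypes is exactly the ‘arbitrariness’ described above: different agreed class terms would sort the very same data differently.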

Furthermore, it became apparent that mathematics, which comes across as so ‘innocent’, can by no means be regarded as ‘innocent’ on closer examination. The broad discussion of philosophy of science in the 20th century brought up many ‘artifacts’ which can at least easily ‘corrupt’ the description of a dynamic world of experience.

Thus it belongs to formal mathematical theories that they can operate with so-called ‘universal or particular statements’ (‘all’/‘some’). Mathematically it is important that one can talk about ‘all’ elements of a domain/set; otherwise talking becomes meaningless. If one now chooses a formal mathematical system as the conceptual framework for a theory which describes ‘empirical facts’ in such a way that inferences become possible which are ‘true’ in the sense of the theory – and thus become ‘predictions’ asserting that a certain fact will occur either ‘absolutely’ or with a certain probability X greater than 50% – then two different worlds unite: the fragmentary individual statements about the world of experience become embedded in ‘all-statements’ which in principle say more than empirical data can provide.

At this point it becomes visible that mathematics, which appears to be so ‘neutral’, does exactly the same job as ‘everyday language’ with its ‘abstract concepts’: the abstract concepts of everyday language always go beyond the individual case (otherwise we could not say anything at all in the end), but just by this they allow considerations and planning, as we appreciate them so much in mathematical theories.

Empirical theories in the format of formal mathematical theories have the further problem that they as such have (almost) no meanings of their own. If one wants to relate the formal expressions to the world of experience, then one has to explicitly ‘construct a meaning’ (with the help of everyday language!) for each abstract concept of the formal theory (or also for each formal relation or also for each formal operator) by establishing a ‘mapping’/an ‘assignment’ between the abstract constructs and certain provable facts of experience. What may sound so simple here at first sight has turned out to be an almost unsolvable problem in the course of the last 100 years. Now it does not follow that one should not do it at all; but it does draw attention to the fact that the choice of a formal mathematical theory need not automatically be a good solution.

… many things could still be said …


A formal mathematical theory can derive certain statements as formally ‘true’ or ‘false’ from certain ‘assumptions’. This is possible because of two basic assumptions: (i) every formal expression has an ‘abstract truth value’, being either ‘abstractly true’ or ‘abstractly not true’; (ii) there is a so-called ‘formal notion of inference’ which determines whether and how one can ‘infer’ other formal expressions from a given ‘set of formal expressions’ with agreed abstract truth values and a well-defined ‘form’. This ‘derivation’ consists of ‘operations over the signs of the formal expressions’. The formal expressions are here ‘objects’ of the notion of inference, which is located one ‘level higher’, on a ‘meta-level 1’. The notion of inference is thus a ‘formal theory’ of its own, which speaks about certain ‘objects of a deeper level’ in the same way as the abstract terms of a theory (or of everyday language) speak about concrete facts of experience. The interaction of the notion of inference (at meta-level 1) and the formal expressions as objects presupposes its own ‘interpretive relation’ (ultimately a kind of ‘mapping’), which in turn is located at yet another level – meta-level 2. This interpretive relation uses both the formal expressions (with their truth values!) and the notion of inference as ‘objects’ to install an interpretive relation between them. Normally, this meta-level 2 is handled by everyday language, and the implicit interpretive relation is located ‘in the minds of mathematicians (actually, in the minds of logicians)’, who assume that their ‘practice of inference’ provides enough experiential data to ‘understand’ the ‘content of the meaning relation’.
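The idea that ‘derivation’ is nothing but operations over signs, with the expressions as mere objects of a meta-level rule, can be sketched minimally. The sketch below uses only one inference rule (modus ponens) and an invented toy syntax; it is not a full logic, just an illustration of the level distinction:

```python
# Minimal sketch: formal inference as pure operations over signs.
# Expressions are plain strings; the notion of inference (here: modus ponens)
# lives one level 'above' them and treats them only as sign objects,
# never as meanings.

def derive(expressions):
    """Close a set of string expressions under modus ponens:
    from 'A' and 'A->B', add 'B' by sign manipulation alone."""
    derived = set(expressions)
    changed = True
    while changed:
        changed = False
        for expr in list(derived):
            if "->" in expr:
                antecedent, consequent = expr.split("->", 1)
                if antecedent in derived and consequent not in derived:
                    derived.add(consequent)
                    changed = True
    return derived

# 'Formally true' here simply means: derivable from the assumed expressions.
axioms = {"p", "p->q", "q->r"}
print(sorted(derive(axioms)))  # 'q' and 'r' are added to the axioms
```

Nothing in the function ‘knows’ what ‘p’ or ‘q’ mean; any ‘meaning relation’ for these signs would have to be installed separately, exactly as the text describes for meta-level 2.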

It was Kurt Gödel [2] who in 1930/31 tried to formalize the ‘intuitive procedure’ of meta-proofs itself (by means of the famous Gödelization) and thus made meta-level 3 in turn a new ‘object’ which can be discussed explicitly. Following Gödel’s proof, there were further attempts to formulate this meta-level 3 in different ways or even to formalize a meta-level 4, but these approaches have so far remained without a clear philosophical result.

It seems to be clear only that the ability of the human brain to open again and again new meta-levels, in order to analyze and discuss with it previously formulated facts, is in principle unlimited (only limited by the finiteness of the brain, its energy supply, the time, and similar material factors).

An interesting special question is whether the formal inference concept of formal mathematics, applied to facts of experience of a dynamic empirical world, is appropriate to the specific ‘world dynamics’ at all. For the area of the ‘apparently material structures’ of the universe, modern physics has located multiple phenomena which simply elude classical concepts. A ‘matter’ which is at the same time ‘energy’ tends to be no longer classically describable, and quantum physics is – despite all ‘modernity’ – in the end still a ‘classical thinking’ within the framework of a formal mathematics which by its very approach lacks many properties that belong to the experienceable world.

This limitation of formal-mathematical physical thinking shows up especially blatantly in the example of those phenomena which we call ‘life’. The experience-based phenomena that we associate with ‘living (= biological) systems’ are, at first sight, completely material structures; however, they have dynamic properties that say more about the ‘energy’ that gives rise to them than about the materiality by means of which they are realized. In this respect, implicit energy is the real ‘information content’ of living systems, which are ‘radically free’ systems in their basic structure, since energy appears as ‘unbounded’. The unmistakable tendency of living systems ‘out of themselves’ to ‘enable ever more complexity’ and to integrate it contradicts all known physical principles. ‘Entropy’ is often used as an argument to relativize this form of ‘biological self-dynamics’ with reference to a simple ‘upper bound’ as ‘limitation’, but this reference does not completely nullify the original phenomenon of the ‘living’.

It becomes especially exciting if one dares to ask the question of ‘truth’ at this point. Suppose one locates the meaning of the term ‘truth’ first of all in the situation in which a biological system (here: the human being) can establish a certain ‘correspondence’ between its abstract concepts and those concrete knowledge structures within its thinking which can be related to properties of an experiential world through a process of interaction – not only as a single individual but together with other individuals. Then any abstract system of expression (called ‘language’) has a ‘true relation to reality’ only to the extent that there are biological systems that can establish such relations. These references further depend on the structures of perception and thought of these systems; these in turn depend on the nature of bodies as the context of brains; and bodies in turn depend both on the material structure and dynamics of the environment and on the everyday social processes that largely determine what a member of a society can experience, learn, work, plan, and do. Whatever an individual can or could do, society either amplifies or ‘freezes’ the individual’s potential. ‘Truth’ exists under these conditions as a ‘free-moving parameter’ that is significantly affected by the particular process environment. Talk of ‘cultural diversity’ can be a dangerous ‘trivialization’ of the massive suppression of ‘alternative processes of learning and action’ that are ‘withdrawn’ from a society because it ‘locks itself in’. Ignorance tends not to be a good advisor. However, knowledge as such does not guarantee ‘right’ action either. The ‘process of freedom’ on planet Earth is a ‘galactic experiment’, the seriousness and extent of which has hardly been seen so far.


[1] References are omitted here. Many hundreds of texts would have to be mentioned. No sketch can do that.

[2] See for the ‘incompleteness theorems’ of Kurt Gödel (1930, published 1931):


ISSN 2567-6458, March 23, 2023 – April 4, 2023
Author: Gerd Doeben-Henisch


This text starts the topic of the Collective Man-Machine Intelligence Paradigm within Sustainable Development.


For most readers the diverse content of this blog is hard to understand as belonging to one coherent picture. But indeed, there exists one coherent picture. This is the first publication of this one coherent picture.

FIGURE : This figure outlines for the first time the intended view of the new ‘Collective Man-Machine Intelligence’ paradigm within a certain view of ‘Sustainable Development’. The different kinds of algorithms mentioned are arbitrary; only the ‘oksimo.R Software’ has a general meaning, pointing to a new type of software which is at the same time editor and simulator of a real (sustainable) empirical theory, and which can also be used for gaming.

Looking deeper into this figure you can perhaps get a rough idea of which kinds of questions had to be answered before this unified view could be formulated. Every subset of this view is backed up by complete formal specifications and even formal theories. Telling the story ‘afterwards’ is often ‘simple’, but finding all the different parts of the ‘overall picture’ one after the other is rather tedious. In the end I needed about 50 years of research …

In the next weeks I will write some more comments. As always there are many ‘threads’ working in parallel and I have to complete some others before.

The Everyday Application Scenario

(The following text is an English translation from an originally German text partially generated with the (free version))

Having a meta-theoretical concept of a ‘sustainable empirical theory (SET)’, accompanied by the meta-theoretical concept of ‘collective intelligence (CI)’, it isn’t straightforward to see how these components work together in an everyday scenario. The following figure gives a rough outline of the framework which – probably – has to be assumed.

FIGURE : Outline of the everyday scenario applying a sustainable empirical theory (SET) together with ‘collective intelligence (CI)’. For more explanations see the text.


Abstract (meta-theoretical) concepts alone are not sufficient to change the real world. It always needs some ‘translation’ of abstract meanings into concrete, real processes which ‘work in everyday real environments’. Thus, every ‘concept’ needs a bundle of associated ‘processes’ capable of bringing the abstract meaning ‘to life’.

Theory Concept

A structural concept describes, e.g. on a meta-level, what a ‘sustainable empirical theory’ is and compares this concept with the concepts ‘game’ and ‘theater play’. Since it can quickly become very time-consuming to write down complete theories by hand, it can be very helpful to have software (there is one under the name ‘oksimo.R’) that supports citizens in writing down the ‘text of a theory’ together with other citizens in ‘normal language’ and also in ‘simulating’ it as needed; furthermore, it would be good to be able to ‘play’ a theory interactively (and ultimately even much more).

Having the text of a theory, trying it out and developing it further is one thing. But the way to a theory can be tedious and long. It requires a great deal of ‘experience’, ‘knowledge’ and multiple forms of what is usually very vaguely called ‘intelligence’.

Concept Collective Intelligence

Intelligence typically occurs in the context of ‘biological systems’, in ‘humans’ and ‘non-humans’. More recently, there are also examples of a vague kind of intelligence being realized by ‘machines’. In the end, all these different phenomena, roughly summarized under the term ‘intelligence’, form a pattern which, under a certain perspective, could be considered ‘collective intelligence’. There are many prominent examples of this in the field of ‘non-human biological systems’, and especially in ‘human biological systems’ with their ‘coordinated behavior’ in connection with their ‘symbolic languages’.

The great challenge of the future is to bring together these different ‘types of individual and collective intelligence’ into a real constructive-collective intelligence.

Concept Empirical Data

The most general form of a language is the so-called ‘normal language’ or ‘everyday language’. It contains in one concept everything we know today about languages.

An interesting aspect is the fact that everyday language forms, for each special kind of language (logic, mathematics, …), the ‘meta-language’ on whose basis the special language is ‘introduced’.

The possible ‘elements of meaning and structures of meaning’, out of which the everyday language structures have been formed, originate from the space of everyday life and its world of events.

While the normal perceptual processes, coordinated among the different speaker-listeners, can already provide many valuable descriptions of everyday properties and processes, specialized observation processes in the form of ‘standardized measurement processes’ can considerably increase the accuracy of descriptions. The central point is that all participating speaker-listeners interested in a ‘certain topic’ (physics, chemistry, spatial relations, game moves, …) agree on ‘description procedures’ for all ‘important properties’, which everyone performs in the same transparent and reproducible way.
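Such an agreed, reproducible description procedure can be pictured in a small sketch. The ‘standard object’, the readings, and the tolerance below are freely invented for illustration; the point is only the structure: comparison with an agreed standard, plus a check that different observers obtain (almost) the same value:

```python
# Illustrative sketch: a 'standardized measurement' as comparison with a
# previously agreed standard object. Standard, readings, and tolerance
# are invented for this example.

STANDARD_METER = 1.0  # the agreed 'standard object' (unit length)

def measure_length(raw_length, standard=STANDARD_METER):
    """Express a raw length as a multiple of the agreed standard."""
    return raw_length / standard

def reproducible(readings, tolerance=0.01):
    """A procedure counts as reproducible if all participating
    speaker-listeners obtain (almost) the same value."""
    return max(readings) - min(readings) <= tolerance

# Three observers repeat the same transparent procedure:
readings = [measure_length(2.003), measure_length(1.998), measure_length(2.001)]
print(reproducible(readings))  # -> True
```

The tolerance parameter makes explicit that ‘the same value’ is itself a matter of prior agreement among the participants.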

Processes in Everyday Life

As pointed out above, whatever conceptual structures may have been agreed upon, they can only ‘come into effect’ (‘come to life’) if there are enough people willing to live all those ‘processes’ concretely within the framework of everyday life. This requires space, time, the necessary resources, and a sufficiently strong and persistent ‘motivation’ to live these processes anew every day.

Thus, in addition to humans, animals and plants and their needs, there is now a huge amount of artificial structures (houses, roads, machines,…), each of which also makes certain demands on its environment. Knowing these requirements and ‘coordinating/managing’ them in such a way that they enable positive ‘synergies’ is a huge challenge, which – according to the impression in 2023 – often overtaxes mankind.


ISSN 2567-6458, February 27, 2023 – February 27, 2023, 01:45 CET
Author: Gerd Doeben-Henisch

Parts of this text have been translated with (free version), afterwards only minimally edited.


(This text is a direct continuation of the text “The ‘inside’ of the ‘outside’. Basic Building Blocks” within the project ‘oksimo.R Editor and Simulator for Theories’.)

‘Transient’ events and language

We have worked our way forward in the biological cell galaxy ‘man’ far enough to determine its ‘structuredness’ (without really understanding its origin and exact functioning). We then find ourselves, according to appearance, as a ‘concrete body’ which can ‘communicate’ with the ‘environment of its own body’ (often also called the ‘outside world’) in two ways: we can ‘perceive’ in different ways, and we can produce ‘effects’ in the outside world in different ways.

For the ‘coordination’ with other human bodies, especially between the ‘brains’ in these bodies, the ability to ‘speak-listen’ or then also to ‘write-read’ seems to be of highest importance. Already as children we find ourselves in environments where language occurs, and we ‘learn’ very quickly that ‘linguistic expressions’ can refer not only to ‘objects’ and their ‘properties’, but also to fleeting ‘actions’ (‘Peter gets up from the table’) and also other ‘fleeting’ events (‘the sun rises’; ‘the traffic light just turned red’). There are also linguistic expressions that refer only partially to something perceptible, such as ‘Hans’ father’ (who is not in the room at all), ‘yesterday’s food’ (which is not there), ‘I hate you’ (‘hate’ is not an object), ‘the sum of 3+5’ (without there being anything that looks like ‘3’ or ‘5’), and many more.


If one tries to understand these ‘phenomena of our everyday life’ more deeply, one can come across many exciting facts, which possibly generate more questions than they provide answers. All phenomena which can provoke ‘questions’ actually serve the ‘liberation of our thinking’ from currently wrong images. Nevertheless, questions are not very popular; they disturb, they stress …

How can one get closer to these manifold phenomena?

Let’s just look at some expressions of ‘normal language’ that we use in our ‘everyday life’.[1] In everyday life there are many different situations in which we sit down (breakfast, office, restaurant, school, university, reception hall, bus, subway, …). In some of these situations we speak, for example, of ‘chairs’, in others of ‘armchairs’, in yet other situations of ‘benches’, or simply of ‘seats’. Before an event, someone might ask “Are there enough chairs?” or “Do we have enough armchairs?” or … In the respective concrete situation, quite different objects would pass, for example, as a ‘chair’ or an ‘armchair’ or … This indicates that the ‘expressions of language’ (the ‘sounds’, the ‘written/printed signs’) can be linked to quite different things. There is no 1-to-1 mapping here. It is no different with other objects like ‘cups’, ‘glasses’, ‘tables’, ‘bottles’, ‘plates’, etc.

These examples suggest that there seems to be a ‘structure’ here that ‘manifests’ itself in the concrete examples, but is itself located ‘beyond the events’.[2]

If one tries to ‘mentally sort’ this out, then at least two, rather three ‘dimensions’ suggest themselves here, which play into each other:

  1. There are concrete linguistic expressions – those we call ‘words’ – that a ‘speaker-hearer’ uses.
  2. There is, independently of the linguistic expressions, ‘some phenomenon’ in everyday life to which the ‘speaker-hearer’ refers with his linguistic expression (these can be ‘objects’ or ‘properties’ of objects, …)[3].
  3. The respective ‘speaker’ or ‘listener’ has ‘learned’ to establish a ‘relation’ between the ‘linguistic expression’ and the ‘other to the linguistic expression’.

Since we know that the same objects and events in everyday life can be ‘named’ quite differently in the ‘different languages’, this suggests that the relations assumed in each case by ‘speaker-hearer’ are not ‘innate’, but appear rather ‘arbitrary’ in each ‘language community’.[4] This suggests that the ‘relations’ found in everyday life between linguistic expressions and everyday facts have to be ‘learned’ by each speaker-hearer individually, and this through direct contact with speaker-hearers of the respective language community.

Body-External Conditions

FIGURE: Outline of some of the important structures inside the brain (and the body), which have to be assumed if one wants to explain the empirical observations of the human behavior.

The previous considerations allow the formation of a ‘working hypothesis’ for the phenomenon that a speaker-hearer can encounter, ‘outside his body’, single objects (e.g. an object ‘cup’ and a word ‘cup’) which as such have no direct relation to each other. Inside the speaker-hearer, however, ‘abstract concepts’ can be formed, triggered by the perceived concrete events, which ‘abstract a common core’ from the varying occurrences; this core then represents the actual ‘abstract concept’.

Under the condition of such abstract concepts, ‘meaning relations’ can then form in the speaker-listener: a speaker can ‘learn’ to ‘mentally link’ the two individual objects ‘cup’ (as an object) and ‘cup’ (as a heard/written word) in such a way that in the future the word ‘cup’ evokes an association with the object ‘cup’ and vice versa. This relationship of meaning (object ‘cup’, word ‘cup’) is based on ‘neural processes’ of perception and memory. They can form, but do not have to. If such neural processes are available, then the speaker-hearer can actualize the cognitive element ‘object cup’ even if no outside object is available; in this case there is also no ‘perceptual element’ available which ‘corresponds’ to the ‘memory element’ object cup.

Given these assumptions, one can formulate two more assumptions:

  1. Abstraction from abstract concepts: the mechanism of ‘abstract concept formation’ works not only under the condition of concrete perceptual events, but also under the condition of already existing abstract concepts. If I already have abstract concepts like ‘table’, ‘chair’, ‘couch’, then I can, for example, form an abstract concept ‘furniture’ as an ‘umbrella concept’ covering the three previously mentioned concepts. If one calls abstract concepts that directly refer to virtual-concrete concepts ‘level 1 concepts’, then one could call abstract concepts that presuppose at least one concept of level n ‘level n+1 concepts’. How many levels are of ‘use’ in the domain of abstract concepts is open. In general: the ‘higher the level’, the more difficult it is to tie back to level-0 concepts.

  2. Abstraction forming meaning relations: the ‘mechanism of forming meaning relations’ also works with reference to arbitrary abstract concepts.

If Hans says to Anna, “Our furniture seems kind of worn out by now,” then the internal relation Furniture := { ‘table’, ‘chair’, ‘couch’ } would lead from the concept Furniture to the other subordinate concepts, and Anna would know (given the same language understanding) that Hans is actually saying, “Our furniture in the form of ‘table’, ‘chair’, ‘couch’ seems kind of worn out by now.”
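The two assumptions – umbrella concepts of level n+1 built over level n concepts, and meaning relations over abstract concepts – can be sketched with the furniture example from the text. The data structures below are invented for illustration only; they model what a hearer like Anna could reconstruct from the word ‘furniture’:

```python
# Sketch of the furniture example: an abstract 'umbrella concept' (level n+1)
# refers to subordinate concepts (level n); the learned meaning relation
# links words to concepts. All data structures are invented for illustration.

# level-1 umbrella concept over level-0 concepts
concepts = {
    "furniture": {"table", "chair", "couch"},
}

# the learned meaning relation: word -> abstract concept
meaning = {"furniture": "furniture", "table": "table"}

def expand(word):
    """Unfold what a hearer could reconstruct from a word: the subordinate
    concepts behind an umbrella concept (or the concept itself)."""
    concept = meaning.get(word, word)
    return concepts.get(concept, {concept})

# 'Our furniture seems worn out' unfolds, for the hearer, into:
print(sorted(expand("furniture")))  # -> ['chair', 'couch', 'table']
print(sorted(expand("table")))      # -> ['table']
```

A level-2 concept (say, an umbrella over ‘furniture’ and other level-1 concepts) would just add another entry to `concepts`, illustrating why the number of useful levels remains open.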

Body internal Conditions

From the view of the brain, ‘body-internal processes’ (different body organs, manifold ‘sensors’, and more) are also ‘external’ (see figure)! The brain knows about these body-internal conditions only insofar as corresponding ‘signals’ are transmitted to it. These can be assigned to different ‘abstract concepts’ by the memory on the basis of their ‘individual property profile’, and thus they too become ‘candidates for a semantic relation’ – however, only if these abstractions are based on body-internal signal events that are represented in ‘current memory’ in such a way that ‘we’ become ‘aware’ of them. [5],[6]

The ‘body-internal event space’ that becomes ‘noticeable’ in the current memory is composed of very many different events. Besides ‘organ-specific’ signals, which sometimes can even be ‘localized’ to some extent inside the body (‘my left molar hurts’, ‘my throat itches’, ‘I am hungry’, etc.), there are very many ‘moods’/‘feelings’/‘emotions’ which are difficult or impossible to localize, but which are nevertheless ‘conscious’, and to which one can assign different ‘intensities’ (‘I am very sad’, ‘This makes me angry’, ‘The situation is hopeless’, ‘I love you very much’, ‘I don’t believe you’, …).

If one ‘assigns words’ to such ‘body-internal’ properties, then a ‘meaning relation’ also arises; however, it is then difficult – in some cases almost impossible – for two human actors to clarify, each ‘for himself’, what ‘the other’ probably ‘means’ when he uses a certain linguistic expression. In the case of ‘localizable’ linguistic expressions, one may be able to understand what is meant because of a similar physical structure (‘my left molar hurts’, ‘my throat itches’, ‘I am hungry’). With other, non-localizable linguistic expressions (‘I am very sad’, ‘This makes me angry’, ‘The situation is hopeless’, ‘I love you very much’, ‘I don’t believe you’, …) it becomes difficult. Often one can only ‘guess’; wrong interpretations are very likely.

It becomes exciting when speaker-hearers combine in their linguistic expressions concepts that derive not only from body-external perceptual events but also from body-internal ones. For example, when someone says “That red car over there, I don’t have a good feeling about it” or “Those people there with their caps scare me” or “When I see that fish roll, it really gives me an appetite” or “Oh, that great air,” etc. We make statements like these all the time. They manifest a continuous ‘duality of our world experience’: with our body we are ‘in’ an external body world, which we can specifically perceive, and at the same time we fragmentarily experience the ‘inside of our body’, how it reacts in the current situation. We can also think of it this way: our body talks to us by means of the ‘body-internal signals’ about how it experiences/feels/senses a current ‘external situation’.

Spatial Structures

In the figure above the perceptions and the current memories are represented ‘individually’. But in fact the brain processes all signals of the ‘same time slice’ [7] as if they were ‘elements of a three-dimensional space’. As a consequence, there are ‘spatial relations’ between the elements without the elements themselves being able to generate such relations. In the case of body-external percepts, there is a clear ‘beside’, ‘in front of’, ‘under’, etc. In the case of body-internal perceptions, the body forms a reference point, but how ‘concrete’ this reference point is varies (‘My left toe…’, ‘I am tired’, ‘My stomach growls’, …).

If, in the case of body-external circumstances, the speaker-hearers use ‘measuring operations’ in addition to their ‘normal’ innate perception, then one can assign different measured values to the ‘circumstances in space’ (lengths, volumes, position in a coordinate system, etc.).

In the case of ‘body-internal’ conditions one can ‘measure’ the body itself, including its process properties – as experimental psychologists and brain researchers often do – but the connection with the body-internal perceptions can, depending on the kind of ‘body-internal perception’, be established either only ‘to some extent’ (‘My left tooth hurts’) or ‘rather not at all’ (‘I feel so weak today’, ‘Just now this thought popped into my head’).

Time: Now, Before, ‘Possible’

From everyday life we know the phenomenon that we can perceive ‘changes’: ‘The traffic light turns red’, ‘The engine starts’, ‘The sun rises’, … This is so natural to us that we hardly think about it.

This concept of ‘change’ presupposes a ‘now’ and a ‘before’ and the ability to ‘recognize differences’ between the ‘now’ and the ‘before’.

As a working hypothesis [9] for this property of recognizing ‘change’, the following assumptions are made here:

  1. Events as part of spatial arrangements are deposited as ‘situations’ in ‘potential memory’ in such a way that ‘current perceptions’ that differ from ‘deposited (before)’ situations are ‘noticed’ by unconscious comparison operations: we notice, without wanting to, that the traffic light changes from orange to green. We can describe such ‘changes’ by juxtaposing the ‘before’ and ‘now’ states.
  2. In a ‘comparison’ in the context of ‘changes’ we use ‘abstract remembered’ concepts in conjunction with ‘abstract perceived’ concepts, e.g. the state of the traffic light ‘before’ and ‘now’.
  3. ‘Current’ perceptions quickly pass into ‘remembered’ perceptions (The transition of the traffic light from orange to green happened ‘just’).
  4. We can ‘arrange’ the abstract concepts of remembered percepts ‘in a sequence/row’ such that an element in the row can be seen as ‘temporally prior’ to a subsequent element, or as ‘temporally posterior’. By mapping these into ‘linguistic expressions’ one can make these facts ‘more explicit’.
  5. By the availability of ‘temporal relations’ (‘x is temporally before y’, ‘y is temporally after x’, ‘x is temporally simultaneous with y’, …) one gains a starting point for considering ‘frequencies’ in these relations, e.g. is y temporally ‘always’ after x, or only ‘sometimes’? Is this temporal pattern ‘random’ or somehow ‘significant’?
  6. If the observed ‘patterns of temporal occurrence’ are ‘not purely random’ but imply significant probabilities, then on this basis one can formulate ‘hypotheses for such situations’ which ‘are not past and not present’, but in the light of the probabilities appear as ‘possible in the future’.
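The step from remembered temporal sequences to ‘frequencies’ (points 5 and 6 above) can be illustrated with a minimal sketch. This is only an illustration of the idea, not a claim about how memory actually works; the function name and the event labels are invented here:

```python
from collections import Counter

# Illustrative sketch: remembered percepts as a temporally ordered
# sequence. We count how often one event directly follows another and
# turn the counts into relative frequencies -- the raw material for
# 'hypotheses about the possible future' (point 6 above).

def successor_frequencies(sequence):
    pairs = Counter(zip(sequence, sequence[1:]))   # (before, after) pairs
    totals = Counter(sequence[:-1])                # how often each event occurs as a 'before'
    return {(a, b): n / totals[a] for (a, b), n in pairs.items()}

events = ["orange", "green", "red", "orange", "green", "red", "orange", "red"]
freqs = successor_frequencies(events)
# freqs[("orange", "green")] == 2/3: 'green' followed 'orange' in 2 of 3 cases
```

A pattern with frequency 1.0 would be a candidate for an ‘always’ relation; values in between suggest a merely ‘probable’ continuation.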

Time: factual and analytical

The preceding considerations about time assume that the ‘recognition of changes’ is based on an ‘automatic perception’: that something ‘changes’ in our perceptual space rests on ‘unconscious neuronal processes’ which ‘automatically detect’ this change and ‘automatically bring it to our attention’ without us having to do this ‘consciously’. In all languages there are linguistic expressions reflecting this: ‘drive’, ‘change’, ‘grow’, ‘fly’, ‘melt’, ‘heat’, ‘age’, … We can take notice of changes with a certain ‘ease’, but nothing more. It is the ‘pure fact’ of change that makes itself noticeable to us; hence the phrase ‘factual time’.

If we want to ‘understand’ what exactly happens during a change – why, under which conditions, how often, in which period of time, etc. – then we have to make the effort to ‘analyze’ such changes in more detail. This means we have to look at the ‘whole process of change’ and try to identify as many ‘individual moments’ in it as possible, so that we can then – eventually – find clues as to what exactly happened, how, and why.

Such an analysis can only succeed if we can answer the following questions:

  1. How to describe the situation ‘before’ the change?
  2. How can one describe the situation ‘after’ the change?
  3. What exactly are the ‘differences’?
  4. How can one formulate an if-then rule that states at which ‘condition’ which ‘change’ should be applied in such a way that the desired ‘new state’ results with all ‘changes’?

Example: A passer-by observes that a traffic light changes from orange to green. A (simple) analysis could work as follows:

Change rule (simple format)
  1. Before: The traffic light is orange.
  2. After: The traffic light is green.
  3. Difference: The ‘orange’ property has been replaced by the ‘green’ property.
Rule as a ‘text’:

Change rule: If: ‘A traffic light is orange’. Then: (i) Remove ‘A traffic light is orange’, (ii) Add: ‘A traffic light is green’.
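The remove/add structure of such a change rule can be sketched in a few lines of Python. This is only a minimal illustration of the rule format described above; the function name and the representation of a ‘situation’ as a set of fact strings are choices made here, not part of the text:

```python
# Minimal sketch of the change rule above: a 'situation' is a set of
# facts (strings); a rule fires when its 'If' part is contained in the
# situation, then removes its 'Remove' facts and adds its 'Add' facts.

def apply_rule(situation, if_part, remove, add):
    """Return the changed situation, or the unchanged one if the
    condition is not fulfilled."""
    if if_part <= situation:               # do all conditions hold?
        return (situation - remove) | add  # remove first, then add
    return situation

before = {"A traffic light is orange"}
after = apply_rule(
    before,
    if_part={"A traffic light is orange"},
    remove={"A traffic light is orange"},
    add={"A traffic light is green"},
)
# after == {"A traffic light is green"}
```

Representing the situation as a set already raises question (1) from the list that follows in the text: which facts belong to the ‘situation’ at all, and which to the wider environment?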

If one wants to deepen this thought, one quickly encounters many questions concerning a single rule of change:

  1. What is important about a ‘situation before’? Is it necessary to write down ‘everything’ or only ‘partial aspects’? How does a group of human actors determine the ‘boundary’ from the situation to the wider environment? If only a partial description: how does one determine what is important?
  2. Corresponding questions also arise for the description of the ‘situation after’.
  3. It is also exciting to ask about the ‘if-part’ of the change rule: how many of the facts of the situation before are important? Are all of them important or only some? For example, if I can distinguish three facts: do they all have to be fulfilled ‘simultaneously’ or only ‘alternatively’?
  4. Interesting is also the ‘relation’ between the situation before and after: Is this observable change (i) ‘completely random’ or (ii) does this relation have a ‘certain frequency’ (a certain ‘probability value’), or (iii) does this relation ‘always’ occur?

If one looks at concrete examples of normal language usage concerning ‘factual time’ with these questions in mind, one can easily see how ‘minimalistically’ change is described linguistically in everyday life:

  1. Peter goes upstairs.
  2. Are you coming?
  3. He finished the glass.
  4. She opened the door.
  5. We ate in silence.

All of these expressions (1) – (5) only briefly indicate the nature of the change, merely hint at the persons and objects involved, and leave the space in which the change occurs unmentioned. The exact duration is also not explicitly stated. The speaker-listeners in these situations obviously presuppose that everyone can ‘infer the corresponding meaning for himself’ from the linguistic utterances, on the one hand through ‘general linguistic knowledge’, on the other hand through being ‘concretely involved’ in the respective concrete situation.

A completely different aspect is provided in the case of an ‘analytic time’ by the question of the ‘description itself’, the ‘rule text’:

Change rule: If: ‘A traffic light is orange’. Then: (i) Remove ‘A traffic light is orange’, (ii) Add: ‘A traffic light is green’.

This text contains the linguistic expressions ‘A traffic light is orange’ and ‘A traffic light is green’. In normal language these expressions usually have a certain ‘linguistic meaning’, which in this case refers to ‘memories’ that were formed on the basis of ‘perceptions’. It is about the abstract object ‘traffic light’, to which the abstract properties ‘orange’ or ‘green’ are attributed or denied. Normally, speaker-hearers of English have learned, on the occasion of ‘concrete perceptions’, to relate these abstract meanings to those concrete realities (real traffic lights) which they have learned to treat as ‘belonging’ to them in the course of their language learning. Without a current concrete perception, only abstract meanings carried by abstract memories are involved, whose ‘reference to reality’ is merely ‘potential’. Only with the occurrence of a concrete perception with the ‘suitable properties’ does the ‘potential’ meaning become a ‘real given’ (empirical) meaning.

The text of a change rule thus abstractly describes a possible transition from one abstractly described situation to another. Whether this abstract possibility ever becomes a concrete real meaning is open. The condensation of ‘repeated events’ of the same kind in the past (stored as memory) into the concept of a ‘frequency’, or then of a ‘probability’, can indeed influence the ‘expectation of an actor’: he may ‘take into account’ in his behavior that the change can occur whenever he ‘recreates’ the ‘triggering situation’. Complete certainty, however, would exist only if the described change were based on a completely deterministic context.

What does not appear in this simple consideration is the temporal aspect: whether a change takes place in the millisecond range or over hours, days, months, or years makes an enormous difference.

Likewise the reference to a space: Where does it take place? How?

Working hypothesis CONTEXT

Linguistic descriptions of change happen as ‘abstract formulations’ and usually assume the following:

  1. A shared linguistic knowledge of meaning in the minds of those involved.
  2. A knowledge of the spatial situation in which the change takes place.
  3. A knowledge of the people and objects involved.
  4. A knowledge of the temporal dimension.
  5. Optional: a knowledge of experiential probability.

Descriptions of change, which are written abstractly, must – depending on the case and requirement – make the context aspects (1) – (5) explicit, in order to be ‘understandable’.

The demand for ‘comprehensibility’ is, however, in principle ‘vague’, since the respective contexts can be arbitrarily complex and arbitrarily different.


[1] Instead of ‘normal language’ in ‘everyday life’ I also simply speak of ‘everyday language’ here.

[2] A thinker who has dealt with this phenomenon of the ‘everyday concrete’ and at the same time also ‘everyday – somehow – abstract’ is Ludwig Wittgenstein (see [2b,c]). He introduced the concept of ‘language-game’ for this purpose, without introducing an actual ‘(empirical) theory’ in the proper sense to comprise all these considerations.

[2b] Wittgenstein, L.; Tractatus Logico-Philosophicus, 1921/1922 /* Written during World War I, the work was completed in 1918. It first appeared with the support of Bertrand Russell in Wilhelm Ostwald’s Annalen der Naturphilosophie in 1921. This version, which was not proofread by Wittgenstein, contained gross errors. A corrected, bilingual edition (German/English) was published by Kegan Paul, Trench, Trubner and Co. in London in 1922 and is considered the official version. The English translation was by C. K. Ogden and Frank Ramsey. See introductory Wikipedia-EN: .

[2c] Wittgenstein, L.; Philosophical Investigations (Original title: Philosophische Untersuchungen), 1936–1946, published 1953. Remark: The ‘Philosophical Investigations’ is Ludwig Wittgenstein’s late, second major work. It exerted an extraordinary influence on the philosophy of the second half of the 20th century; the speech act theory of Austin and Searle as well as the Erlangen constructivism (Paul Lorenzen, Kuno Lorenz) are to be mentioned. The book is directed against the ideal of a logic-oriented language which Russell, Carnap, and Wittgenstein himself (in his first major work) had advocated. The book was written in the years 1936–1946, but was not published until 1953, after the author’s death. See introductory Wikipedia-EN: .

[3] In the borderline case, these ‘other’ phenomena of everyday life are themselves linguistic expressions (when one talks ‘about’ a text or about linguistic utterances).

[4] See: Language Family in wkp-en: Note: Due to ‘spatial proximity’ or temporal context (or both), there may be varying degrees of similarity between different languages.

[5] On the subject of ‘perception’ and ‘memory’ there is a huge literature in various empirical disciplines. The most important ones may well be ‘biology’, ‘experimental psychology’ and ‘brain science’; these supplemented by philosophical ‘phenomenology’, and then combinations of these such as ‘neuro-psychology’ or ‘neuro-phenomenology’, etc. In addition, there are countless other special disciplines such as ‘linguistics’ and ‘neuro-linguistics’.

[6] A question that remains open is how the concept of ‘consciousness’, which is common in everyday life, is to be placed in this context. Like the concept of ‘being’, the concept of ‘consciousness’ has been and still is very prominent in recent European philosophy, but it has also received strong attention in many empirical disciplines; especially in the field of tension between philosophical phenomenology, psychology and brain research, there is a long and intense debate about what is to be understood by ‘consciousness’. Currently (2023) there is no clear, universally accepted outcome of these discussions. Of the many available working hypotheses, the author of this text considers the connection to the empirical models of ‘current memory’ in close connection with the models of ‘perception’ to be the most comprehensible so far. In this context also the concept of the ‘unconscious’ would be easy to explain. For an overview see the entry ‘consciousness’ in wkp-en:

[7] The findings about ‘time slices’ in the processing of body-external circumstances can be found in many works of experimental psychology and brain research. A particularly striking example of how this factor plays out in human behavior is provided by the book by Card, Moran, and Newell (1983), see [8].

[8] Stuart K.Card, Thomas P.Moran, Allen Newell, (1983),The Psychology of Human-Computer Interaction, CRC-Press (Taylor & Francis Group), Boca Raton – London – New York. Note: From the point of view of the author of this text, this book was a milestone in the development of the discipline of human-machine interaction.

[9] On the question of memory, especially on the question of the mechanisms responsible for the storage of contents and their further processing (e.g. also ‘comparisons’), there is much literature, but no final clarity yet. Here again the way of a ‘hypothetical structure formation’ is chosen: explicit assumption of a structure that ‘somewhat explains’ the available phenomena with openness for further modifications.


ISSN 2567-6458, 23.February 2023 – 23.February 2023, 13:23h
Author: Gerd Doeben-Henisch

This text is a translation from a German source, aided by the automatic translation program ‘’ (free version).


This text is part of the Philosophy of Science theme within this blog.


The following text is a confluence of ideas that have been driving me for many months. Parts of it can be found as texts in all three blogs (Citizen Science 2.0 for Sustainable Development, Integrated Engineering and the Human Factor (this blog), Philosophy Now. In Search for a new Human Paradigm). The choice of the word ‘grammar’ [1] for the following text is rather unusual, but seems to me to reflect the character of the reflections well.

Sustainability for populations

The concept of sustainable development is considered here in the context of ‘biological populations’. Such populations are dynamic entities with many ‘complex properties’. For the analysis of the ‘sustainability’ of such populations, there is one aspect that seems ‘fundamental’ for a proper understanding. It is the aspect whether and how the members of a population – the actors – are interconnected or not.

An ‘unconnected’ set

If I have ‘actors’ of a ‘population’, which are in no direct ‘interaction’ with each other, then also the ‘acting’ of these actors is isolated from each other. In a wide area they probably do not ‘get in each other’s way’; in a narrow area they could easily hinder each other or even fight each other, up to mutual destruction.

It should be noted that even such disconnected actors must have minimal ‘knowledge’ about themselves and the environment, also minimal ’emotions’, in order to live at all.

Without direct interaction, an unconnected population will nevertheless die out relatively quickly as a population.

A ‘connected’ set

A ‘connected set’ exists if the actors of a population have a sufficient number of direct interactions through which they could ‘coordinate’ their knowledge about themselves and the world, as well as their emotions, to such an extent that they are capable of ‘coordinated action’. Thereby the single, individual actions become related to their possible effect to a ‘common (= social) action’ which can effect more than each of them would have been able to do individually.

The ‘emotions’ involved must be such that they do not so much ‘delimit/exclude’ but rather ‘include/recognize’.

The ‘knowledge’ involved must be such that it is not ‘static’ and not ‘unrealistic’, but rather ‘open’, ‘learning’, and ‘realistic’.

The ‘survival’ of a connected population is basically possible if the most important ‘factors’ of a survival are sufficiently fulfilled.

Transitions from – to

The ‘transition’ from an ‘unconnected’ to a ‘connected’ state of a population is not inevitable. The primary motive may simply be the ‘will to survive’ (an emotion), and the growing ‘insight’ (= knowledge) that this is only possible with ‘minimal cooperation’. An individual, however, can live in a state of ‘loner’ for the duration of his life, because he does not have to experience his individual death as a sufficient reason to ally with others. A population as such, however, can only survive if a sufficient number of individuals survive, interacting minimally with each other. The history of life on planet Earth suggests the working hypothesis that for 3.5 billion years there have always been sufficient members of a population in biological populations (including the human population) to counter the ‘self-destructive tendencies’ of individuals with a ‘constructive tendency’.

The emergence and the maintenance of a ‘connected population’ needs a minimum of ‘suitable knowledge’ and ‘suitable emotions’ to succeed.

It is a permanent challenge for all biological populations to shape their own emotions in such a way that they tend not to exclude, to despise, but rather to include and to recognize. Similarly, knowledge must be suitable for acquiring a realistic picture of oneself, others, and the environment so that the behavior in question is ‘factually appropriate’ and tends to be more likely to lead to ‘success’.

As the history of the human population shows, both the ‘shaping of emotions’ and the ‘shaping of powerful knowledge’ are usually largely underestimated and poorly organized, if at all. The necessary ‘effort’ is shied away from, and the necessary ‘duration’ of such processes is underestimated. Within knowledge there is additionally the general problem that the ‘short time spans’ of an individual life are an obstacle to recognizing and shaping those processes that require larger time spans (this concerns almost all ‘important’ processes).

We must also note that ‘connected states’ of populations can collapse again at any time, if those behaviors that make them possible are weakened or disappear altogether. Connections in the realm of biological populations are largely ‘undetermined’! They are based on complex processes within and between the individual actors. Whole societies can ‘topple overnight’ if an event destroys ‘trust in the interconnection’. Without trust no interconnection is possible. The emergence and passing away of trust should be part of the basic concern of every society in a state of interconnectedness.

Political rules of the game

‘Politics’ encompasses the totality of arrangements that members of a human population agree on to organize jointly binding decision-making processes.[2] On a rough scale, one could place two extremes: (i) on the one hand a population with a ‘democratic system’ [3], (ii) on the other a population with a maximally undemocratic system.[4]

As already noted for ‘connected systems’ in general: the success of democratic systems is in no way guaranteed. Enabling and sustaining them requires the full commitment of all participants ‘by their own conviction’.

Basic reality ‘corporeality’

Biological populations are fundamentally characterized by a ‘corporeality’ which is determined through and through by ‘regularities’ of the known material structures. In their ‘complex formations’ biological systems also manifest ‘complex properties’ which cannot be derived simply from their ‘individual parts’; yet the identifiable ‘material components’ of their ‘body’, together with many ‘functional connections’, are fundamentally subject to a multiplicity of ‘laws’ which are ‘given’. To ‘change’ these is – if at all – possible only under certain limited conditions.

All biological actors consist of ‘biological cells’ which are the same for all. In this, human actors are part of the total development of (biological) life on planet Earth. The totality of (biological) life is also called ‘biome’ and the total habitat of a biome is also called ‘biosphere’. [5] The population of homo sapiens is only a vanishingly small part of the biome, but with the homo sapiens typical way of life it claims ever larger parts of the biosphere for itself at the expense of all other life forms.

(Biological) life has been taking place on planet Earth for about 3.5 billion years.[6] Earth, as part of the solar system [7], has had a very eventful history and shows strong dynamics until today, which can and does have a direct impact on the living conditions of biological life (continental plate displacement, earthquakes, volcanic eruptions, magnetic field displacement, ocean currents, climate, …).

Biological systems generally require a continuous intake of material substances (with energy potentials) to enable their own metabolic processes. They also excrete substances. Human populations need certain amounts of ‘food’, ‘water’, ‘dwellings’, ‘storage facilities’, ‘means of transport’, ‘energy’, … ‘raw materials’, … ‘production processes’, ‘exchange processes’ … As the sheer size of a population grows, the material quantities required (and also wastes) multiply to orders of magnitude that can destroy the functioning of the biosphere.

Predictive knowledge

If a coherent population does not want to leave possible future states to pure chance, then it needs a ‘knowledge’ which is suitable to construct ‘predictions’ (‘prognoses’) for a possible future (or even many ‘variants of future’) from the knowledge about the present and about the past.

In the history of homo sapiens so far, there is only one form of knowledge that has demonstrably proven suitable for resilient, sustainable forecasts: the knowledge form of the empirical sciences. [8] This form of knowledge is not perfect, but a better alternative is not known. At its core, ‘empirical knowledge’ comprises the following elements: (i) a description of a baseline situation that is assumed to be ‘empirically true’; (ii) a set of ‘descriptions of change processes’ that one has been able to formulate over time, and of which one knows that it is ‘highly probable’ that the described changes will occur again and again under known conditions; (iii) an ‘inference concept’ that describes how to apply the known descriptions of change processes to the description of a ‘given current situation’ in such a way that one can modify it into a ‘modified description’ describing a new situation that can be considered a ‘highly probable continuation’ of the current situation in the future. [9]
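The interplay of the three elements – baseline description, change rules with empirical probabilities, and an inference step – can be sketched as a tiny simulation. This is a sketch under the assumption that situations are sets of fact strings and each rule carries a probability; all names and the clock example (echoing footnote [9]) are illustrative only:

```python
import random

# (i) a baseline situation, (ii) change rules with empirical
# probabilities, (iii) inference: apply every applicable rule to
# derive a 'highly probable' follow-up description.

def infer(situation, rules, rng=random.Random(0)):
    new = set(situation)
    for if_part, remove, add, prob in rules:
        # a rule applies if its condition holds in the situation;
        # it fires with its empirically estimated probability
        if if_part <= situation and rng.random() < prob:
            new = (new - remove) | add
    return new

start = {"clock shows 11:04"}
rules = [({"clock shows 11:04"}, {"clock shows 11:04"},
          {"clock shows 11:05"}, 0.99)]   # the minute hand advances
forecast = infer(start, rules)
```

Before the event occurs, the resulting description has exactly the status the text assigns to it: a ‘forecast’, not a certainty, unless the probability were 1.0 (a fully deterministic context).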

The just sketched ‘basic idea’ of an empirical theory with predictive ability can be realized concretely in many ways. To investigate and describe this is the task of ‘philosophy of science’. However, the vagueness found in dealing with the notion of an ’empirical theory’ is also found in the understanding of what is meant by ‘philosophy of science.'[9]

In the present text, the view is taken that the ‘basic concept’ of an empirical theory can be fully realized already in normal everyday action using everyday language. This concept of a ‘General Empirical Theory’ can be extended by any special languages, methods, and sub-theories as needed. In this way, the hitherto unsolved problem of integrating the many different individual empirical disciplines could be solved almost by itself.[10]

Sustainable knowledge

In the normal case, an empirical theory can at best generate forecasts that have a certain empirically based probability. In ‘complex situations’ such a prognosis can comprise many ‘variants’: A, B, …, Z. Which of these variants is ‘better’ or ‘worse’ in the light of some ‘assumed criterion’ cannot be determined by the empirical theory itself. Here the ‘producers’ and the ‘users’ of the theory are asked: do they have any ‘preferences’ why e.g. variant ‘B’ should be preferred to variant ‘C’: “Bicycle, subway, car or plane?”, “Genetic engineering or not?”, “Pesticides or not?”, “Nuclear energy or not?”, “Uncontrolled fishing or not?” …

The ‘evaluation criteria’ to be applied themselves require ‘explicit knowledge’ for estimating a possible ‘benefit’; at the same time, the concept of ‘benefit’ is anchored in the feeling and wanting of human actors: Why exactly do I want something? Why does something ‘feel good’? …

Current discussions worldwide show that the arsenal of ‘evaluation criteria’ and their implementation offer anything but a clear picture.


[1] For the typical use of the term ‘grammar’ see the English Wikipedia: In the text here in the blog I transfer this concept of ‘language’ to that ‘complex process’ in which the population of the life form ‘homo sapiens’ tries to achieve an ‘overall state’ on planet earth that allows a ‘maximally good future’ for as much ‘life’ as possible (with humans as a sub-population). A ‘grammar of sustainability’ presupposes a certain set of basic conditions, factors, which ‘interact’ with each other in a dynamic process, in order to realize as many states as possible in a ‘sequence of states’, which enable as good a life as possible for as many as possible.

[2] For the typical usage of the term politics, see the English Wikipedia: . This meaning is also assumed in the present text here.

[3] A very insightful project on empirical research on the state and development of ‘democracies’ on planet Earth is the V-Dem Institute:

[4] Of course, one could also choose completely different basic concepts for a scale. However, the concept of a ‘democratic system’ (with all its weaknesses) seems to me to be the ‘most suitable’ system in the light of the requirements for sustainable development; at the same time, however, it makes the highest demands of all systems on all those involved. That it came to the formation of ‘democracy-like’ systems at all in the course of history, actually borders almost on a miracle. The further development of such democracy-like systems fluctuates constantly between preservation and decay. Positively, one could say that the constant struggle for preservation is a kind of ‘training’ to enable sustainable development.

[5]  For typical uses of the terms ‘biome’ and ‘biosphere’, see the corresponding entries in the English Wikipedia: ‘biome’:, ‘biosphere’:

[6] Some basic data for planet Earth:

[7] Some basic data for the solar system:

[8] If you search for the term ‘Empirical Science’ you will be disappointed, because the English Wikipedia (as well as the German version) does not provide such an entry. You have to accept either the term ‘Science’ ( ) or the term ‘Empiricism’ ( ), but both do not cover the general properties of an empirical theory.

[9] If you have a clock with hour and minute hands, which currently shows 11:04h, and you know from everyday experience that the minute hand advances by one stroke every minute, then you can conclude with a fairly high probability that the minute hand will advance by one stroke ‘very soon’. The initial description ‘The clock shows 11:04h’ would then be changed to that of the new description ‘The clock shows 11:05h’. Before the ’11:05h event’ the statement ‘The clock shows 11:05h’ would have the status of a ‘forecast’.

[10] A single discipline (physics, chemistry, biology, psychology, …) cannot conceptually grasp ‘the whole’ ‘out of itself’; it does not have to. The various attempts to ‘reduce’ any single discipline to another (physics is especially popular here) have all failed so far. Without a suitable ‘meta-theory’ no single discipline can free itself from its specialization. The concept of a ‘General Empirical Theory’ is such a meta-theory. Such a meta-theory fits into the concept of a modern philosophical thinking.

chatGBT. Different Findings

ISSN 2567-6458, 15.January 2023 – 17. March 2023
Author: Gerd Doeben-Henisch


This text is a collection of links to different experiments with chatGPT and some reflections about it.

chatGPT – How drunk do you have to be …

ISSN 2567-6458, 14.February 2023 – 17.April 2023
Author: Gerd Doeben-Henisch


This is a text in the context of ‘Different Findings about chatGPT’ (

Since the release of the chatbot ‘chatGPT’ to the larger public, a kind of ‘earthquake’ has been going through the media, worldwide, in many areas, from individuals to institutions, companies, government agencies … Everyone is looking for the ‘chatGPT experience’. These reactions are amazing and frightening at the same time.

Remark: The text of this post represents a later ‘stage’ of my thinking about the usefulness of the chatGPT algorithm, which started with my first reflections in the text entitled “chatGBT about Rationality: Emotions, Mystik, Unconscious, Conscious, …” from 15./16.January 2023. The main text to this version is an English translation from an originally German text partially generated with the (free version).


The following lines form only a short note, since it is hardly worthwhile discussing a ‘surface phenomenon’ so intensively when it is the ‘deep structures’ that should be explained. Somehow the ‘structures behind chatGPT’ seem to interest hardly anybody (I do not mean technical details of the algorithms used).

chatGPT as an object

The chatbot named ‘chatGPT’ is a piece of software, an algorithm that (i) was invented and programmed by humans. When (ii) people ask it questions, then (iii) it searches the database of documents known to it, which in turn have been created by humans, (iv) for text patterns that have a relation to the question according to certain formal criteria (partly given by the programmers). These ‘text finds’ are (v) also ‘arranged’ according to certain formal criteria (partly given by the programmers) into a new text, which (vi) should come close to those text patterns, which a human reader is ‘used’ to accept as ‘meaningful’.

Text surface – text meaning – truthfulness

A normal human being can distinguish – at least ‘intuitively’ – between (i) the ‘strings’ used as ‘expressions of a language’ and (ii) those ‘knowledge elements’ (in the mind of the hearer-speaker) which are as such ‘independent’ of the language elements, but which (iii) can be ‘freely associated’ by speakers-hearers of a language, so that the correlated ‘knowledge elements’ become what is usually called the ‘meaning’ of the language elements. [1] Of these knowledge elements (iv), every language participant already ‘knows’ ‘pre-linguistically’, as a learning child [2], that some of them are ‘correlatable’ with circumstances of the everyday world under certain conditions. And the normal language user also ‘intuitively’ (automatically, unconsciously) has the ability to assess such a correlation – in the light of the available knowledge – as (v) ‘possible’, (vi) as rather ‘improbable’, or (vii) as ‘mere fancifulness’.[3]

The basic ability of a human being to establish a ‘correlation’ of meanings with (intersubjective) environmental facts is called – at least by some philosophers – ‘truth ability’, and in the exercise of this truth ability one can then also speak of ‘true’ linguistic utterances or of ‘true statements’.[5]

Distinctions like ‘true’, ‘possibly true’, ‘rather not true’ or ‘in no case true’ indicate that the reality reference of human knowledge elements is very diverse and ‘dynamic’. Something that was true a moment ago may not be true the next moment. Something that has long been dismissed as ‘mere fantasy’ may suddenly appear as ‘possible’ or ‘suddenly true’. To move in this ‘dynamically correlated space of meaning’ in such a way that a certain ‘inner and outer consistency’ is preserved, is a complex challenge, which has not yet been fully understood by philosophy and the sciences, let alone even approximately ‘explained’.

The fact is: we humans can do this to a certain extent. Of course, the more complex the knowledge space is, the more diverse the linguistic interactions with other people become, the more difficult it becomes to completely understand all aspects of a linguistic statement in a situation.

‘Hot-air act’ chatGPT

Comparing the chatbot chatGPT with these ‘basic characteristics’ of humans, one can see that chatGPT can do none of these things. (i) It cannot meaningfully ask questions on its own, since there is no reason why it should ask (unless someone induces it to ask). (ii) Text documents (of people) are for it merely sets of expressions to which it has no independent assignment of meaning. So it could never independently ask or answer the ‘truth question’ – with all its dynamic shades. It takes everything at ‘face value’, or one says right away that it is ‘only dreaming’.

If chatGPT, because of its large text database, has a subset of expressions that are somehow classified as ‘true’, then the algorithm can ‘in principle’ indirectly determine ‘probabilities’ that other sets of expressions, not themselves classified as ‘true’, nevertheless appear ‘true’ with ‘some probability’. Whether the current chatGPT algorithm uses such ‘probable truths’ explicitly is unclear. In principle, it translates texts into ‘vector spaces’ that are ‘mapped into each other’ in various ways, and parts of these vector spaces are then output again in the form of a ‘text’. The concept of ‘truth’ does not appear in these mathematical operations – to my current knowledge. If it appeared at all, it would only be the formal-logical concept of truth [4]; but this lies ‘above’ the vector spaces and forms, with respect to them, a ‘meta-concept’. If one actually wanted to apply this to the vector spaces and the operations on them, one would have to completely rewrite the code of chatGPT. If one did this – but nobody will be able to – then the code of chatGPT would have the status of a formal theory (as in mathematics) (see remark [5]). Even then, chatGPT would still be miles away from an empirical truth capability.
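The point that ‘truth’ does not occur in the mathematical operations can be made concrete with a minimal sketch. The bag-of-words vectors and cosine similarity below are assumptions for illustration only (real systems use learned neural embeddings), but the moral carries over: geometric closeness of texts is indifferent to empirical truth:

```python
import math
from collections import Counter

# Texts as vectors (word-count 'bags') plus a purely formal measure of
# 'closeness'. Nothing in this code refers to truth -- only to geometry.
def to_vector(text):
    return Counter(text.lower().split())

def cosine(a, b):
    """Standard cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

true_claim  = to_vector("water boils at one hundred degrees")
false_claim = to_vector("water boils at ten degrees")      # empirically false
unrelated   = to_vector("the stock market rose sharply today")

print(cosine(true_claim, false_claim))  # geometrically close
print(cosine(true_claim, unrelated))    # geometrically distant
```

The empirically false sentence scores as far ‘closer’ to the true one than the unrelated sentence does: similarity in a vector space measures shared form, not shared truth.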

Hybrid illusory truths

In the use case where the algorithm named ‘chatGPT’ works with expression sets similar to the texts that humans produce and read, chatGPT navigates purely formally and with probabilities through the space of formal expression elements. However, a human who ‘reads’ the expression sets produced by chatGPT automatically (= unconsciously!) activates his or her ‘linguistic knowledge of meaning’ and projects it into the abstract expression sets of chatGPT. As one can observe (and hears and reads from others), the abstract expression sets produced by chatGPT are – purely formally – so similar to the usual text input of humans that a human can seemingly effortlessly correlate his meaning knowledge with these texts. This has the consequence that the receiving (reading, listening) human has the ‘feeling’ that chatGPT produces ‘meaningful texts’. In the ‘projection’ of the reading/listening human YES, but in the production of chatGPT NO. chatGPT has only formal expression sets (coded as vector spaces), with which it calculates ‘blindly’. It does not have ‘meanings’ in the human sense even rudimentarily.

Back to the Human?

(Last change: 27.February 2023)

How easily people are impressed by a ‘fake machine’, to the point of apparently forgetting themselves in the face of the machine by feeling ‘stupid’ and ‘inefficient’, although the machine only makes ‘correlations’ between human questions and human knowledge documents in a purely formal way, is actually frightening [6a,b], [7], at least in a double sense: (i) Instead of better recognizing (and using) one’s own potentials, one stares spellbound like the famous ‘rabbit at the snake’, although the machine is still a ‘product of the human mind’. (ii) This ‘cognitive deception’ makes us miss the chance to better understand the actually immense potential of ‘collective human intelligence’, which could then be advanced by at least one evolutionary level by incorporating modern technologies. The challenge of the hour is ‘Collective Human-Machine Intelligence’ in the context of sustainable development, with priority given to human collective intelligence. The current so-called ‘artificial (= machine) intelligence’ is realized only by rather primitive algorithms. Integrated into a developed ‘collective human intelligence’, quite different forms of ‘intelligence’ could be realized – ones we currently can only dream of at most.

Commenting on other articles from other authors about chatGPT

(Last change: 14.April 2023)

[7], [8], [9], [11], [12], [13], [14]


(Last change: 3.April 2023)


[1] In the many thousands of ‘natural languages’ of this world one can observe how ‘experiential environmental facts’ can become ‘knowledge elements’ via ‘perception’, which are then correlated with different expressions in each language. Linguists (and semioticians) therefore speak here of ‘conventions’, ‘freely agreed assignments’.

[2] Due to physical interaction with the environment, which enables ‘perceptual events’ that are distinguishable from the ‘remembered and known knowledge elements’.

[3] The classification of ‘knowledge elements’ as ‘imaginations/fantasies’ can be wrong, as many examples show; vice versa, the classification as ‘probably correlatable’ can be wrong too!

[4] Not the ‘classical (Aristotelian) logic’, since Aristotelian logic did not yet realize a strict separation of ‘form’ (elements of expression) and ‘content’ (meaning).

[5] There are also contexts in which one speaks of ‘true statements’ although there is no relation to a concrete world experience. For example in the field of mathematics, where one likes to say that a statement is ‘true’. But this is a completely ‘different truth’. Here it is about the fact that in the context of a ‘mathematical theory’ certain ‘basic assumptions’ were made (which need have nothing to do with a concrete reality), and one then ‘derives’ other statements starting from these basic assumptions with the help of a formal concept of inference (formal logic). A ‘derived statement’ (usually called a ‘theorem’) also has no relation to a concrete reality. It is ‘logically true’ or ‘formally true’. If one were to ‘relate’ the basic assumptions of a mathematical theory to concrete reality by – certainly not very simple – ‘interpretations’ (as e.g. in ‘applied physics’), then it may be, under special conditions, that the formally derived statements of such an ’empirically interpreted abstract theory’ gain an ’empirical meaning’, which may be ‘correlatable’ under certain conditions; then such statements would not only be called ‘logically true’, but also ’empirically true’. As the history of science and philosophy of science shows, however, the ‘transition’ from empirically interpreted abstract theories to empirically interpretable inferences with truth claims is not trivial. The reason lies in the ‘logical inference concept’ used. In modern formal logic almost ‘arbitrarily many’ different formal inference concepts are possible. Whether such a formal inference concept really ‘adequately represents’ the structure of empirical facts via abstract structures with formal inferences is not at all certain! This problem has not really been clarified in the philosophy of science so far!

[6a] Weizenbaum’s 1966 chatbot ‘Eliza’, despite its simplicity, was able to make human users believe that the program ‘understood’ them even when they were told that it was just a simple algorithm. See the keyword ‘Eliza’ in wkp-en:

[6b] Joseph Weizenbaum, 1966, „ELIZA. A Computer Program For the Study of Natural Language. Communication Between Man And Machine“, Communications of the ACM, Vol.9, No.1, January 1966, URL: . Note: Although the program ‘Eliza’ by Weizenbaum was very simple, all users were fascinated by the program because they had the feeling “It understands me”, while the program only mirrored the questions and statements of the users. In other words, the users were ‘fascinated by themselves’ with the program as a kind of ‘mirror’.

[7] Ted Chiang, 2023, “ChatGPT Is a Blurry JPEG of the Web. OpenAI’s chatbot offers paraphrases, whereas Google offers quotes. Which do we prefer?”, The NEW YORKER, February 9, 2023. URL: . Note: Chiang looks at the chatGPT program using the paradigm of a ‘compression algorithm’: the abundance of information is ‘condensed/abstracted’ so that a slightly blurred image of the text volumes is created, not a 1-to-1 copy. This gives the user the impression of understanding at the expense of access to detail and accuracy. The texts of chatGPT are not ‘true’; they merely ‘seem’ true.
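Chiang’s ‘blurry JPEG’ analogy can be sketched in a few lines: lossy compression keeps a coarse outline of the data and reinvents the detail on decompression. The signal and step size below are arbitrary illustrative choices, not anything from Chiang’s article or from chatGPT itself:

```python
# Lossy 'compression' of a signal: keep every 4th value, then
# 'decompress' by linear interpolation. The reconstruction looks
# roughly right, but the exact values are gone -- like a paraphrase
# that approximates a source text without reproducing it.
def compress(signal, step=4):
    return signal[::step]

def decompress(kept, step=4, length=None):
    out = []
    for i, v in enumerate(kept):
        nxt = kept[i + 1] if i + 1 < len(kept) else v
        for k in range(step):
            out.append(v + (nxt - v) * k / step)  # interpolated detail
    return out[:length] if length else out

signal = [0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121]
blurry = decompress(compress(signal), length=len(signal))
# 'blurry' follows the shape of 'signal' but differs in the details.
```

The decompressed values are plausible but wrong in detail, which is exactly the impression Chiang describes: understanding at the expense of accuracy.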

[8] Dietmar Hansch, 2023, “The more honest name would be ‘Simulated Intelligence’. Which deficits bots like chatGPT suffer from and what that must mean for our dealings with them.”, FAZ Frankfurter Allgemeine Zeitung, March 1, 2023, p.N1. Note: While Chiang (see [7]) approaches the phenomenon chatGPT with the concept of a ‘compression algorithm’, Hansch prefers the terms ‘statistical-incremental learning’ and ‘insight learning’. For Hansch, insight learning is tied to ‘mind’ and ‘consciousness’, for which he postulates ‘equivalent structures’ in the brain. Regarding insight learning, Hansch further comments: “insight learning is not only faster, but also indispensable for a deep, holistic understanding of the world, which grasps far-reaching connections as well as conveys criteria for truth and truthfulness.” It is not surprising, then, when Hansch writes: “Insight learning is the highest form of learning…”. With reference to this frame of reference established by Hansch, he classifies chatGPT as being capable only of ‘statistical-incremental learning’. Further, Hansch postulates for humans: “Human learning is never purely objective, we always structure the world in relation to our needs, feelings, and conscious purposes…”. He calls this the ‘human reference’ in human cognition, and it is precisely this that he also denies for chatGPT. For the common designation ‘AI’ as ‘Artificial Intelligence’ he postulates that the term ‘intelligence’ in this word combination has nothing to do with the meaning we associate with ‘intelligence’ in the case of humans; in no case does the term have anything to do with ‘insight learning’, as he stated before. To give more expression to this mismatch he would rather use the term ‘simulated intelligence’ (see also [9]).
This conceptual strategy seems strange, since the term simulation [10] normally presupposes that there is a clearly given original system, for which one defines a simplified ‘model’ by means of which the behavior of the original system can then be viewed and examined – in simplified form – in important respects. In the present case, however, it is not quite clear what the original system should be that is supposedly being simulated in the case of AI. There is so far no unified definition of ‘intelligence’ in the context of ‘AI’! As far as Hansch’s own terminology is concerned, the terms ‘statistical-incremental learning’ and ‘insight learning’ are not clearly defined either; their relation to observable human behavior, let alone to the postulated ‘equivalent brain structures’, remains thoroughly unclear (which is not improved by the reference to terms like ‘consciousness’ and ‘mind’, which are themselves not yet defined).

[9] Severin Tatarczyk, Feb 19, 2023, on ‘Simulated Intelligence’:

[10] See the term ‘simulation’ in wkp-en:

[11] Doris Brelowski pointed me to the following article: James Bridle, 16.March 2023, „The stupidity of AI. Artificial intelligence in its current form is based on the wholesale appropriation of existing culture, and the notion that it is actually intelligent could be actively dangerous“, URL: . Comment: An article that knowledgeably and very sophisticatedly describes the interplay between forms of AI that are being ‘unleashed’ on the entire Internet by large corporations, and what this is doing to human culture and then, of course, to humans themselves. Two quotes from this very readable article: Quote 1: „The entirety of this kind of publicly available AI, whether it works with images or words, as well as the many data-driven applications like it, is based on this wholesale appropriation of existing culture, the scope of which we can barely comprehend. Public or private, legal or otherwise, most of the text and images scraped up by these systems exist in the nebulous domain of “fair use” (permitted in the US, but questionable if not outright illegal in the EU). Like most of what goes on inside advanced neural networks, it’s really impossible to understand how they work from the outside, rare encounters such as Lapine’s aside. But we can be certain of this: far from being the magical, novel creations of brilliant machines, the outputs of this kind of AI is entirely dependent on the uncredited and unremunerated work of generations of human artists.“ Quote 2: „Now, this didn’t happen because ChatGPT is inherently rightwing. It’s because it’s inherently stupid. It has read most of the internet, and it knows what human language is supposed to sound like, but it has no relation to reality whatsoever. It is dreaming sentences that sound about right, and listening to it talk is frankly about as interesting as listening to someone’s dreams. 
It is very good at producing what sounds like sense, and best of all at producing cliche and banality, which has composed the majority of its diet, but it remains incapable of relating meaningfully to the world as it actually is. Distrust anyone who pretends that this is an echo, even an approximation, of consciousness. (As this piece was going to publication, OpenAI released a new version of the system that powers ChatGPT, and said it was “less likely to make up facts”.)“

[12] David Krakauer in an interview with Brian Gallagher in Nautilus, March 27, 2023, Does GPT-4 Really Understand What We’re Saying?, URL: David Krakauer, an evolutionary theorist and president of the Santa Fe Institute for complexity science, analyzes the role of chat-GPT-4 models compared to the human language model and offers a more differentiated understanding of what ‘understanding’ and ‘intelligence’ could mean. His main points of criticism are in close agreement with the position in the text above. He points out (i) that one has to distinguish clearly between the ‘information concept’ of Shannon and the concept of ‘meaning’. Something can represent a high information load but can nevertheless be empty of any meaning. He then points out (ii) that there are several possible variants of the meaning of ‘understanding’. Coordinating with human understanding can work, but understanding in a constructive sense: no. Then Krakauer (iii) relates GPT-4 to the standard model of science, which he characterizes as ‘parsimony’; chat-GPT-4 is clearly the opposite. Another point (iv) is the fact that human experience has an ‘emotional’ and a ‘physical’ aspect based on somato-sensory perceptions within the body. This is missing in GPT-4. This is related (v) to the fact that the human brain with its ‘algorithms’ is the product of millions of years of evolution in a complex environment. The GPT-4 algorithms have nothing comparable; they only have to ‘convince’ humans. Finally, (vi) humans can generate ‘physical models’ inspired by their experience and can quickly argue by using such models. Thus Krakauer concludes: “So the narrative that says we’ve rediscovered human reasoning is so misguided in so many ways. Just demonstrably false. That can’t be the way to go.”
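Krakauer’s first distinction – Shannon information versus meaning – can be illustrated directly. The two strings below are made-up examples: character-level entropy, a standard Shannon measure, comes out higher for a meaningless random string than for an ordinary sentence of the same length:

```python
import math
from collections import Counter

# Shannon entropy per character: a measure of statistical surprise
# in a symbol stream. It says nothing about what the symbols mean.
def entropy_per_char(text):
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

meaningful = "the cat sat on the mat"
gibberish  = "xq7vz!kp9w3mrt2bduh5yl"  # same length, no meaning at all

print(entropy_per_char(meaningful))  # lower: repeated letters, structure
print(entropy_per_char(gibberish))   # higher: nearly uniform symbols
```

High ‘information’ in Shannon’s sense thus coexists with zero meaning; the measure quantifies statistical surprise, not reference to the world.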

[13] By Marie-José Kolly (text) and Merlin Flügel (illustration), 11.04.2023, “Chatbots like GPT can form wonderful sentences. That’s exactly what makes them a problem.” Artificial intelligence fools us into believing something that is not. A plea against the general enthusiasm. Online newspaper ‘Republik’ from Schweiz, URL: Here are some comments:

The text by Marie-José Kolly stands out because the algorithm named chatGPT(4) is characterized here both in its input-output behavior and additionally a comparison to humans is made at least to some extent.

The basic problem of the algorithm chatGPT(4) is (as also pointed out in my text above) that it has as input data exclusively text sets (also those of the users), which are analyzed according to purely statistical procedures in their formal properties. On the basis of the analyzed regularities, arbitrary text collages can then be generated, which are very similar in form to human texts, so much so that many people take them for ‘human-generated texts’. In fact, however, the algorithm lacks what we humans call ‘world knowledge’, it lacks real ‘thinking’, it lacks ‘own’ value positions, and the algorithm ‘does not understand’ its own text.

Due to this lack of its own reference to the world, the algorithm can be manipulated very easily via the available text volumes. A ‘mass production’ of ‘junk texts’, of ‘disinformation’ is thus very easily possible.

If one considers that modern democracies can only function if the majority of citizens have a common basis of facts that can be assumed to be ‘true’, a common body of knowledge, and reliable media, then the chatGPT(4) algorithm can massively destroy precisely these requirements for a democracy.

The interesting question then is whether chatGPT(4) can actually support a human society, especially a democratic society, in a positive-constructive way?

In any case, it is known that humans learn the use of their language from childhood on in direct contact with a real world, largely playfully, in interaction with other children/people. For humans ‘words’ are never isolated quantities, but they are always dynamically integrated into equally dynamic contexts. Language is never only ‘form’ but always at the same time ‘content’, and this in many different ways. This is only possible because humans have complex cognitive abilities, which include corresponding memory abilities as well as abilities for generalization.

The cultural-historical development from spoken language via writing, books, and libraries up to enormous digital data memories has indeed achieved tremendous things concerning the ‘forms’ of language and the knowledge – possibly – encoded therein, but one gets the impression that the ‘automation’ of the forms drives them into ‘isolation’, so that the forms lose more and more of their contact with reality, with meaning, with truth. Language, as a central means of enabling more complex knowledge and more complex action, is thus increasingly becoming a ‘parasite’ that claims more and more space and in the process destroys more and more meaning and truth.

[14] Gary Marcus, April 2023, Hoping for the Best as AI Evolves, Gary Marcus on the systems that “pose a real and imminent threat to the fabric of society.” Communications of the ACM, Volume 66, Issue 4, April 2023, pp 6–7, , Comment: Gary Marcus writes, on the occasion of the effects of systems like chatGPT (OpenAI), Dall-E 2, and Lensa, about the seriously increasing negative effects these tools can have within a society, to an extent that poses a serious threat to every society! These tools are inherently flawed in the areas of reasoning, facts, and hallucinations. At near-zero cost, they can be used to create and execute large-scale disinformation campaigns very quickly. Looking at the globally important website ‘Stack Overflow’ for programmers as an example, one could (and can) see how the inflationary use of chatGPT, due to its many inherent flaws, pushed Stack Overflow’s management team to urge its users to stop using chatGPT completely in order to prevent the site’s collapse after 14 years. In the case of big players who specifically target disinformation, such a measure is ineffective. These players aim to create a data world in which no one will be able to trust anyone. With this in mind, Gary Marcus sets out 4 postulates that every society should implement: (1) Automatically generated, uncertified content should be completely banned; (2) legally effective measures must be adopted that can prevent ‘misinformation’; (3) user accounts must be made tamper-proof; (4) a new generation of AI tools is needed that can verify facts. (Translated with partial support from (free version))