Category Archives: truth

There exists only one big Problem for the Future of Humankind: The Belief in false Narratives

Author: Gerd Doeben-Henisch

Time: Jan 5, 2024 – Jan 8, 2024 (09:45 a.m. CET)

Email: gerd@doeben-henisch.de

TRANSLATION: The following text is a translation from a German version into English. For the translation I am using the software deepL.com as well as chatGPT 4. The English version is a slightly revised version of the German text.

This blog entry will be completed today. However, it has laid the foundations for considerations that will be pursued further in a new blog entry.

CONTEXT

This text belongs to the topic Philosophy (of Science).

Introduction

Prompted by several reasons, I started to investigate the phenomenon of ‘propaganda’ in order to sharpen my understanding. My strategy was first to characterize the phenomenon of ‘general communication’ in order to find some ‘harder criteria’ that would allow the concept of ‘propaganda’ to stand out against this general background in a somewhat comprehensible way.

The realization of this goal then actually led to an ever more fundamental examination of our normal (human) communication, so that forms of propaganda become recognizable as ‘special cases’ of our communication. The worrying thing about this is that even so-called ‘normal communication’ contains numerous elements that can make it very difficult to recognize and pass on ‘truth’ (*). ‘Massive cases of propaganda’ therefore have their ‘home’ where we communicate with each other every day. So if we want to prevent propaganda, we have to start in everyday life.

(*) The concept of ‘truth’ is examined and explained in great detail in the following long text below. Unfortunately, I have not yet found a ‘short formula’ for it. In essence, it is about establishing a connection to ‘real’ events and processes in the world – including one’s own body – in such a way that they can, in principle, be understood and verified by others.

DICTATORIAL CONTEXT

However, it becomes difficult when there is enough political power to set the social framework conditions in such a way that, for the individual in everyday life – the citizen! – general communication is more or less prescribed, ‘dictated’. Then ‘truth’ becomes increasingly scarce or even non-existent. A society is then ‘programmed’ for its own downfall through the suppression of truth. ([3], [6])

EVERYDAY LIFE AS A DICTATOR ?
The hour of narratives

But – and this is the far more dangerous form of ‘propaganda’ ! – even if there is not a nationwide apparatus of power that prescribes certain forms of ‘truth’, a mutilation or gross distortion of truth can still take place on a grand scale. Worldwide today, in the age of mass media, especially in the age of the internet, we can see that individuals, small groups, special organizations, political groups, entire religious communities, in fact all people and their social manifestations, follow a certain ‘narrative’ [*11] when they act.

It is typical of acting according to a narrative that those who do so believe individually that it is ‘their own decision’ and that their narrative is ‘true’, and that they are therefore ‘in the right’ when they act accordingly. This ‘feeling of being right’ can go as far as claiming the right to kill others because they ‘act wrongly’ in the light of one’s own ‘narrative’. We should therefore speak here of a ‘narrative truth’: within the framework of the narrative, a picture of the world is drawn that ‘as a whole’ enables a perspective that ‘as such’ is ‘found to be good’ by the followers of the narrative, as ‘making sense’. Normally, the effect of a narrative experienced as ‘meaningful’ is so great that its ‘truth content’ is no longer examined in detail.

RELIGIOUS NARRATIVES

This has existed at all times in the history of mankind. Narratives that appeared as ‘religious beliefs’ were particularly effective. It is therefore no coincidence that almost all governments of the last millennia have adopted religious beliefs as state doctrines; an essential component of religious beliefs is that they are ‘unprovable’, i.e. ‘incapable of truth’. This makes a religious narrative a wonderful tool in the hands of the powerful to motivate people to behave in certain ways without the threat of violence.

POPULAR NARRATIVES

In recent decades, however, we have experienced new, ‘modern forms’ of narratives that do not come across as religious narratives, but which nevertheless have a very similar effect: People perceive these narratives as ‘giving meaning’ in a world that is becoming increasingly confusing and therefore threatening for everyone today. Individual people, the citizens, also feel ‘politically helpless’, so that – even in a ‘democracy’ – they have the feeling that they cannot directly influence anything: the ‘people up there’ do what they want. In such a situation, ‘simplistic narratives’ are a blessing for the maltreated soul; you hear them and have the feeling: yes, that’s how it is; that’s exactly how I ‘feel’!

Such ‘popular narratives’, which enable ‘good feelings’, are gaining ever greater power. What they have in common with religious narratives is that the ‘followers’ of popular narratives no longer ask the ‘question of truth’; most of them are also not sufficiently ‘trained’ to be able to clarify the truth of a narrative at all. It is typical of supporters of narratives that they are generally hardly able to explain their own narrative to others. They typically send each other links to texts/videos that they find ‘good’ because these texts/videos somehow seem to support the popular narrative, and they tend not to check the authors and sources, because in the eyes of the followers these are such ‘decent people’ who always say exactly the ‘same thing’ as the ‘popular narrative’ dictates.

NARRATIVES ARE SEXY FOR POWER

If you now take into account that the ‘world of narratives’ is an extremely tempting offer for all those who have power over people or would like to gain power over people, then it should come as no surprise that many governments in this world, and many other power groups, are doing just that today: they do not try to coerce people ‘directly’, but they ‘produce’ popular narratives or ‘monitor’ already existing popular narratives in order to gain power over the hearts and minds of more and more people via the detour of these narratives. Some speak here of ‘hybrid warfare’, others of ‘modern propaganda’, but ultimately, I guess, these terms miss the core of the problem.

THE NARRATIVE AS A BASIC CULTURAL PATTERN
The ‘irrational’ defends itself against the ‘rational’

The core of the problem is the way in which human communities have always organized their collective action, namely through narratives; we humans have no other option. However, such narratives – as the considerations further down in the text will show – are extremely susceptible to ‘falsity’, to a ‘distortion of the picture of the world’. In the context of the development of legal systems, approaches have been developed over at least the last 7000 years to ‘curb’ the abuse of power in a society by supporting truth-preserving mechanisms. Gradually, this has certainly helped, with all the deficits that still exist today. Additionally, about 500 years ago, a real revolution took place: with the concept of a ‘verifiable narrative (empirical theory)’, humanity managed to find a format that optimized the ‘preservation of truth’ and minimized the slide into untruth. This new concept of ‘verifiable truth’ has enabled great insights that were previously beyond imagination.

The ‘aura of the scientific’ has meanwhile permeated almost all of human culture – almost! But we have to realize that although scientific thinking has comprehensively shaped the practical world through modern technologies, the scientific way of thinking has not overridden all other narratives. On the contrary, the ‘non-truth narratives’ have become so strong again that they are pushing back the ‘scientific’ in more and more areas of our world, patronizing it, forbidding it, eradicating it. The ‘irrationality’ of religious and popular narratives is stronger than ever before. ‘Irrational narratives’ are so appealing to many because they spare the individual from having to ‘think for themselves’. Real thinking is exhausting, unpopular, annoying, and it hinders the dream of a simple solution.

THE CENTRAL PROBLEM OF HUMANITY

Against this backdrop, the widespread inability of people to recognize and overcome ‘irrational narratives’ appears to be the central problem facing humanity in mastering the current global challenges. Before we need more technology (we certainly do), we need more people who are able and willing to think more and better, and who are also able to solve ‘real problems’ together with others. Real problems can be recognized by the fact that they are largely ‘new’, that there are no ‘simple off-the-shelf’ solutions for them, that you really have to ‘struggle’ together for possible insights; in principle, the ‘old’ is not enough to recognize and implement the ‘true new’, and the future is precisely the space with the greatest amount of ‘unknown’, with lots of ‘genuinely new’ things.

The following text examines this view in detail.

MAIN TEXT FOR EXPLANATION

MODERN PROPAGANDA ?

As mentioned in the introduction, the trigger for me to write this text was the confrontation with a popular book that appeared to me as a piece of ‘propaganda’. When I tried to describe my opinion in my own words, I noticed that I had some difficulties: what is the difference between ‘propaganda’ and ‘everyday communication’? This forced me to think a little more about the ingredients of ‘everyday communication’ and where and why a ‘communication’ is ‘different’ from our ‘everyday communication’. As usual at the beginning of such a discussion, I took a first look at the various entries in Wikipedia (German and English). The entry in the English Wikipedia on ‘Propaganda’ [1b] attempts a very similar strategy: it looks at ‘normal communication’ and, against this background, at the phenomenon of ‘propaganda’, albeit with not quite sharp contours. However, it provides a broad overview of various forms of communication, including those forms that are ‘special’ (‘biased’), i.e. that do not reflect the content to be communicated in the way one would reproduce it according to ‘objective, verifiable criteria’.[*0] However, the variety of examples suggests that it is not easy to distinguish between ‘special’ and ‘normal’ communication: What then are these ‘objective, verifiable criteria’? Who defines them?

Assuming for a moment that it is clear what these ‘objectively verifiable criteria’ are, one can tentatively attempt a working definition for the general (normal?) case of communication as a starting point:

Working Definition:

The general case of communication could be tentatively described as a simple attempt by one person – let’s call them the ‘author’ – to ‘bring something to the attention’ of another person – let’s call them the ‘interlocutor’. We tentatively call what is to be brought to their attention ‘the message’. We know from everyday life that an author can have numerous ‘characteristics’ that can affect the content of his message.

Here is a short list of properties that characterize the author’s situation in a communication, followed by corresponding properties for the interlocutor.

The Author:

  1. The available knowledge of the author — both conscious and unconscious — determines the kind of message the author can create.
  2. His ability to discern truth determines whether and to what extent he can differentiate what in his message is verifiable in the real world — present or past — as ‘accurate’ or ‘true’.
  3. His linguistic ability determines whether and how much of his available knowledge can be communicated linguistically.
  4. The world of emotions decides whether he wants to communicate anything at all, for example, when, how, to whom, how intensely, how conspicuously, etc.
  5. The social context can affect whether he holds a certain social role, which dictates when he can and should communicate what, how, and with whom.
  6. The real conditions of communication determine whether a suitable ‘medium of communication’ is available (spoken sound, writing, sound, film, etc.) and whether and how it is accessible to potential interlocutors.
  7. The author’s physical constitution decides how far and to what extent he can communicate at all.

The Interlocutor:

  1. In general, the characteristics that apply to the author also apply to the interlocutor. However, some points can be particularly emphasized for the role of the interlocutor:
  2. The available knowledge of the interlocutor determines which aspects of the author’s message can be understood at all.
  3. The ability of the interlocutor to discern truth determines whether and to what extent he can also differentiate what in the conveyed message is verifiable as ‘accurate’ or ‘true’.
  4. The linguistic ability of the interlocutor affects whether and how much of the message he can absorb purely linguistically.
  5. Emotions decide whether the interlocutor wants to take in anything at all, for example, when, how, how much, with what inner attitude, etc.
  6. The social context can also affect whether the interlocutor holds a certain social role, which dictates when he can and should communicate what, how, and with whom.
  7. Furthermore, it can be important whether the communication medium is so familiar to the interlocutor that he can use it sufficiently well.
  8. The physical constitution of the interlocutor can also determine how far and to what extent the interlocutor can communicate at all.

Even this small selection of factors shows how diverse the situations can be in which ‘normal communication’ can take on a ‘special character’ due to the ‘effect of different circumstances’. For example, an actually ‘harmless greeting’ can, in certain roles, lead to a social problem with many different consequences. A seemingly ‘normal report’ can become a problem because the interlocutor misunderstands the message purely linguistically. A ‘factual report’ can have an emotional impact on the interlocutor due to the way it is presented, which can lead to them enthusiastically accepting the message or – on the contrary – vehemently rejecting it. Or, if the author has a tangible interest in persuading the interlocutor to behave in a certain way, this can lead to a certain situation not being presented in a ‘purely factual’ way, but rather to many aspects being communicated that seem suitable to the author to persuade the interlocutor to perceive the situation in a certain way and to adopt it accordingly. These ‘additional’ aspects can refer to many real circumstances of the communication situation beyond the pure message.

Types of communication …

Given this potential ‘diversity’, the question arises as to whether it will even be possible to define something like ‘normal communication’.

In order to be able to answer this question meaningfully, one would need a kind of ‘overview’ of all possible combinations of the properties of the author (1-7) and the interlocutor (1-8), and one would also have to be able to evaluate each of these possible combinations with a view to ‘normality’.

It should be noted that the two lists of properties author (1-7) and interlocutor (1-8) have a certain ‘arbitrariness’ attached to them: you can build the lists as they have been constructed here, but you don’t have to.

This is related to the general way in which we humans think: on the one hand, we have ‘individual events that happen’ – or that we can ‘remember’ – and on the other hand, we can ‘set’ ‘arbitrary relationships’ between ‘any individual events’ in our thinking. In science, this is called ‘hypothesis formation’. Whether such hypothesis formation is undertaken, and which hypotheses are formed, is not standardized anywhere. Events as such do not enforce any particular hypothesis formation. Whether hypotheses are ‘sensible’ or not is determined solely in the later course of their ‘practical use’. One could even say that such hypothesis formation is a rudimentary form of ‘ethics’: the moment one adopts a hypothesis regarding a certain relationship between events, one minimally considers it ‘important’; otherwise, one would not undertake this hypothesis formation.

In this respect, it can be said that ‘everyday life’ is the primary place for possible working hypotheses and possible ‘minimum values’.

The following diagram demonstrates a possible arrangement of the characteristics of the author and the interlocutor:

FIGURE : Overview of the possible overlaps of knowledge between the author and the interlocutor, if each of them can have any knowledge at their disposal.

What is easy to recognize is the fact that an author can naturally have a constellation of knowledge that draws on an almost ‘infinite number of possibilities’. The same applies to the interlocutor. In purely abstract terms, the number of possible combinations is ‘virtually infinite’ due to the assumptions about the properties Author 1 and Interlocutor 2, which ultimately makes the question of ‘normality’ at the abstract level undecidable.
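A rough, deliberately simplified illustration of this quasi-infinity (the numbers are invented solely for the sake of the argument): if each of the seven author properties could take only 10 distinguishable values, there would already be 10^7 = 10,000,000 possible author constellations; combined with the corresponding 10^8 interlocutor constellations, this yields about 10^15 possible author–interlocutor pairings – and real knowledge states are far more fine-grained than ten values per property.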


However, since both authors and interlocutors are not spherical beings from some abstract space of possibilities, but are usually ‘concrete people’ with a ‘concrete history’ in a ‘concrete life-world’ at a ‘specific historical time’, the quasi-infinite abstract space of possibilities is narrowed down to a finite, manageable set of concrete cases. Yet even these can still be considerably large when related to two specific individuals. Which person, with their life experience from which area, should now be taken as the ‘norm’ for ‘normal communication’?


It seems more likely that individual people are somehow ‘typified’, for example, by age and learning history, although a ‘learning history’ may not provide a clear picture either. Graduates from the same school can — as we know — possess very different knowledge afterwards, even though commonalities may be ‘minimally typical’.

Overall, the approach based on the characteristics of the author and the interlocutor does not seem to provide really clear criteria for a norm, even though a specification such as ‘the humanistic high school in Hadamar (a small German town) 1960 – 1968’ would suggest rudimentary commonalities.


One could now try to include the further characteristics of Author 2-7 and Interlocutor 3-8 in the considerations, but the ‘construction of normal communication’ seems to lead more and more into an unclear space of possibilities based on the assumptions of Author 1 and Interlocutor 2.

What does this mean for the typification of communication as ‘propaganda’? Isn’t ultimately every communication also a form of propaganda, or is there a possibility to sufficiently accurately characterize the form of ‘propaganda’, although it does not seem possible to find a standard for ‘normal communication’? … or will a better characterization of ‘propaganda’ indirectly provide clues for ‘non-propaganda’?

TRUTH and MEANING: Language as Key

The spontaneous attempt to clarify the meaning of the term ‘propaganda’ to the extent that one gets a few constructive criteria for being able to characterize certain forms of communication as ‘propaganda’ or not, gets into ever ‘deeper waters’. Are there now ‘objective verifiable criteria’ that one can work with, or not? And: Who determines them?

Let us temporarily stick to working hypothesis 1, that we are dealing with an author who articulates a message for an interlocutor, and let us expand this working hypothesis by the following addition 1: such communication always takes place in a social context. This means that the perception and knowledge of the individual actors (author, interlocutor) can continuously interact with this social context or ‘automatically interacts’ with it. The latter is because we humans are built in such a way that our body with its brain just does this, without ‘us’ having to make ‘conscious decisions’ for it.[*1]

For this section, I would like to extend the previous working hypothesis 1 together with supplement 1 by a further working hypothesis 2 (localization of language) [*4]:

  1. Every medium (language, sound, image, etc.) can contain a ‘potential meaning’.
  2. When creating the media event, the ‘author’ may attempt to ‘connect’ possible ‘contents’ that are to be ‘conveyed’ by him with the medium (‘putting into words/sound/image’, ‘encoding’, etc.). This ‘assignment’ of meaning occurs both ‘unconsciously/automatically’ and ‘(partially) consciously’.
  3. In perceiving the media event, the ‘interlocutor’ may try to assign a ‘possible meaning’ to this perceived event. This ‘assignment’ of meaning also happens both ‘unconsciously/automatically’ and ‘(partially) consciously’.
  4. The assignment of meaning requires both the author and the interlocutor to have undergone ‘learning processes’ (usually years, many years) that have made it possible to link certain ‘events of the external world’ as well as ‘internal states’ with certain media events.
  5. The ‘learning of meaning relationships’ always takes place in social contexts, since a media structure meant to ‘convey meaning’ between people must be shared by everyone involved in the communication process.
  6. Those medial elements that are actually used for the ‘exchange of meanings’ all together form what is called a ‘language’: the ‘medial elements themselves’ form the ‘surface structure’ of the language, its ‘sign dimension’, and the ‘inner states’ in each ‘actor’ involved, form the ‘individual-subjective space of possible meanings’. This inner subjective space comprises two components: (i) the internally available elements as potential meaning content and (ii) a dynamic ‘meaning relationship’ that ‘links’ perceived elements of the surface structure and the potential meaning content.
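A minimal code sketch may make point 6 more tangible, assuming a very crude representation of ‘meaning elements’ as strings and invented mini-dictionaries as the two individual meaning relations (all names are illustrative only):

```python
from typing import Optional

# Hypothetical sketch of working hypothesis 2: each actor carries an
# individual-subjective meaning relation; the author encodes an intended
# content into a surface element, the interlocutor decodes it with his own
# relation. The dictionaries are invented stand-ins for learned relations.

AUTHOR_MEANING = {"rain falling outside": "rain"}       # content -> word
INTERLOCUTOR_MEANING = {"rain": "water from the sky"}   # word -> content

def encode(content: str) -> Optional[str]:
    """Author: put an intended content 'into words' (surface structure)."""
    return AUTHOR_MEANING.get(content)

def decode(surface: Optional[str]) -> Optional[str]:
    """Interlocutor: assign a possible meaning to the perceived word."""
    if surface is None:
        return None
    return INTERLOCUTOR_MEANING.get(surface)

word = encode("rain falling outside")
print(word, "->", decode(word))
# The 'exchange of meanings' succeeds only to the degree that the two
# individual meaning relations overlap; otherwise the message stays empty
# or distorted.
```

The sketch only illustrates the structural point: the surface elements belong to the shared medium, while each actor’s meaning relation remains individual and may or may not overlap with that of the other.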


To answer the guiding question of whether one can “characterize certain forms of communication as ‘propaganda’ or not,” one needs ‘objective, verifiable criteria’ on the basis of which a statement can be formulated. This question prompts the counter-question of whether there are ‘objective criteria’ in ‘normal everyday dialogue’ that we can use in everyday life to collectively decide whether a ‘claimed fact’ is ‘true’ or not – it is in this context that the word ‘true’ is used. Can this be defined a bit more precisely?

For this I propose an additional working hypothesis 3:

  1. At least two actors can agree that a certain meaning, associated with the media construct, exists as a sensibly perceivable fact in such a way that they can agree that the ‘claimed fact’ is indeed present. Such a specific occurrence should be called ‘true 1’ or ‘Truth 1.’ A ‘specific occurrence’ can change at any time and quickly due to the dynamics of the real world (including the actors themselves), for example: the rain stops, the coffee cup is empty, the car from before is gone, the empty sidewalk is occupied by a group of people, etc.
  2. At least two actors can agree that a certain meaning, associated with the media construct, is currently not present as a real fact. Referring to the current situation of ‘non-occurrence,’ one would say that the statement is ‘false 1’; the claimed fact does not actually exist contrary to the claim.
  3. At least two actors can agree that a certain meaning, associated with the media construct, is currently not present, but based on previous experience, it is ‘quite likely’ to occur in a ‘possible future situation.’ This aspect shall be called ‘potentially true’ or ‘true 2’ or ‘Truth 2.’ Should the fact then ‘actually occur’ at some point in the future, Truth 2 would transform into Truth 1.
  4. At least two actors can agree that a certain meaning associated with the media construct does not currently exist and that, based on previous experience, it is ‘fairly certain that it is unclear’ whether the intended fact could actually occur in a ‘possible future situation’. This aspect should be called ‘speculative true’ or ‘true 3’ or ‘truth 3’. Should the situation then ‘actually occur’ at some point, truth 3 would change into truth 1.
  5. At least two actors can agree that a certain meaning associated with the medial construct does not currently exist, and on the basis of previous experience ‘it is fairly certain’ that the intended fact could never occur in a ‘possible future situation’. This aspect should be called ‘speculative false’ or ‘false 2’.

A closer look at these 5 assumptions of working hypothesis 3 reveals that there are two ‘poles’ in all these distinctions, which stand in certain relationships to each other: on the one hand, there are real facts as poles, which are ‘currently perceived or not perceived by all participants’ and, on the other hand, there is a ‘known meaning’ in the minds of the participants, which can or cannot be related to a current fact. This results in the following distribution of values:

REAL FACT   | No. | Relationship to meaning
Given       |  1  | Fits (true 1)
Given       |  2  | Does not fit (false 1)
Not given   |  3  | Assumed that it will fit in the future (true 2)
Not given   |  4  | Unclear whether it would fit in the future (true 3)
Not given   |  5  | Assumed that it would not fit in the future (false 2)

In this — still somewhat rough — scheme, ‘the meaning of thoughts’ can be qualified in relation to something currently present as ‘fitting’ or ‘not fitting’, or in the absence of something real as ‘might fit’ or ‘unclear whether it can fit’ or ‘certain that it cannot fit’.
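The five cases of the scheme can also be written down as a small classification function; the following sketch is only an illustration of the table above, and the labels for the ‘relationship to meaning’ are invented for this purpose:

```python
# Hypothetical sketch: the five truth qualifications of working hypothesis 3,
# encoded as a small classification function. The returned labels mirror the
# table above; the function name and the relation labels are illustrative.

def qualify(fact_currently_given: bool, relation: str) -> str:
    """relation is one of: 'fits', 'does_not_fit', 'expected_to_fit',
    'unclear', 'expected_never_to_fit' (as assessed by at least two actors)."""
    if fact_currently_given:
        return "true 1" if relation == "fits" else "false 1"
    if relation == "expected_to_fit":
        return "true 2"    # potentially true
    if relation == "unclear":
        return "true 3"    # speculatively true
    if relation == "expected_never_to_fit":
        return "false 2"   # speculatively false
    return "undefined"     # outside the five cases of the scheme

print(qualify(True, "fits"))              # -> true 1 (e.g. the rain is seen)
print(qualify(False, "expected_to_fit"))  # -> true 2 (rain expected later)
```

Whatever such a function returns remains, as the next paragraph stresses, an assessment made by the actors themselves and is therefore error-prone.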

However, it is important to note that these qualifications are ‘assessments’ made by the actors based on their ‘own knowledge’. As we know, such an assessment is always prone to error! In addition to errors in perception [*5], there can be errors in one’s own knowledge [*6]. So contrary to the belief of an actor, ‘true 1’ might actually be ‘false 1’ or vice versa, ‘true 2’ could be ‘false 2’ and vice versa.

From all this, it follows that a ‘clear qualification’ of truth and falsehood is ultimately always error-prone. For a community of people who think ‘positively’, this is not a problem: they are aware of this situation and they strive to keep their ‘natural susceptibility to error’ as small as possible through conscious methodical procedures [*7]. People who – for various reasons – tend to think negatively feel motivated in this situation to see only errors or even malice everywhere. They find it difficult to deal with their ‘natural error-proneness’ in a positive and constructive manner.

TRUTH and MEANING : Process of Processes

In the previous section, the various terms (‘true1,2’, ‘false 1,2’, ‘true 3’) are still rather disconnected and are not yet really located in a tangible context. This will be attempted here with the help of working hypothesis 4 (sketch of a process space).

FIGURE 1 Process : The process space in the real world and in thinking, including possible interactions

The basic elements of working hypothesis 4 can be characterized as follows:

  1. There is the real world with its continuous changes, and within an actor there is a virtual space for processes, with elements such as perceptions, memories, and imagined concepts.
  2. The link between real space and virtual space occurs through perceptual achievements that represent specific properties of the real world for the virtual space, in such a way that ‘perceived contents’ and ‘imagined contents’ are distinguishable. In this way, a ‘mental comparison’ of perceived and imagined is possible.
  3. Changes in the real world do not show up explicitly but are manifested only indirectly through the perceivable changes they cause.
  4. It is the task of ‘cognitive reconstruction’ to ‘identify’ changes and to describe them linguistically in such a way that it is comprehensible, based on which properties of a given state, a possible subsequent state can arise.
  5. In addition to distinguishing between ‘states’ and ‘changes’ between states, it must also be clarified how a given description of change is ‘applied’ to a given state in such a way that a ‘subsequent state’ arises. This is called here the ‘successor generation rule’ (symbolically: ⊢). An expression like Z ⊢V Z’ would then mean that, using the successor generation rule ⊢ and employing the change rule V, one can generate the subsequent state Z’ from the state Z. More than one change rule can also be used, for example ⊢{V1, V2, …, Vn} with the change rules V1, …, Vn. (A minimal code sketch of this idea follows after this list.)
  6. When formulating change rules, errors can always occur. If certain change rules have proven successful in the past in derivations, one would tend to assume for the ‘thought subsequent state’ that it will probably also occur in reality. In this case, we would be dealing with the situation ‘true 2’. If a change rule is new and there are no experiences with it yet, we would be dealing with the ‘true 3’ case for the thought subsequent state. If a certain change rule has failed repeatedly in the past, then the case ‘false 2’ might apply.
  7. The outlined process model also shows that the previous cases (1-5 in the table) only ever describe partial aspects. Suppose a group of actors manages to formulate a rudimentary process theory with many states and many change rules, including a successor generation instruction. In that case, it is naturally of interest how the ‘theory as a whole’ ‘proves itself’. This means that every ‘mental construction’ of a sequence of possible states according to the applied change rules under the assumption of the process theory must ‘prove itself’ in all cases of application for the theory to be said to be ‘generically true’. For example, while the case ‘true 1’ refers to only a single state, the case ‘generically true’ refers to ‘very many’ states, as many until an ‘end state’ is reached, which is supposed to count as a ‘target state’. The case ‘generically contradicted’ is supposed to occur when there is at least one sequence of generated states that keeps generating an end state that is false 1. As long as a process theory has not yet been confirmed as true 1 for an end state in all possible cases, there remains a ‘remainder of cases’ that are unclear. Then a process theory would be called ‘generically unclear’, although it may be considered ‘generically true’ for the set of cases successfully tested so far.
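As announced in point 5, here is a minimal code sketch of the process space, assuming states are represented as simple property sets and change rules as functions; the names and the tiny ‘rain stops’ rule are illustrative only and do not constitute a worked-out process theory:

```python
# Hypothetical sketch of working hypothesis 4: states as property sets,
# change rules as functions, and the successor generation rule (⊢) as the
# application of change rules to a state. All names are illustrative.

from typing import Callable, Dict, List

State = Dict[str, int]
ChangeRule = Callable[[State], State]

def successor(state: State, rules: List[ChangeRule]) -> List[State]:
    """Successor generation rule: apply each change rule V to state Z and
    collect the possible subsequent states Z'."""
    return [rule(state) for rule in rules]

def rain_stops(z: State) -> State:
    """Example change rule: 'the rain stops'."""
    z2 = dict(z)
    z2["raining"] = 0
    return z2

def generically_tested(start: State, rules: List[ChangeRule],
                       observed_end: State, steps: int) -> bool:
    """Very rough check: does at least one generated sequence of the given
    length end in a state that 'fits' the observed end state (true 1)?"""
    frontier = [start]
    for _ in range(steps):
        frontier = [z2 for z in frontier for z2 in successor(z, rules)]
    return any(z == observed_end for z in frontier)

z = {"raining": 1, "cup_full": 1}
print(successor(z, [rain_stops]))                                  # one possible Z'
print(generically_tested(z, [rain_stops],
                         {"raining": 0, "cup_full": 1}, steps=1))  # True
```

The last function only hints at the ‘generic’ testing described in point 7: a process theory counts as ‘generically true’ only relative to the set of cases actually tried so far, and remains ‘generically unclear’ for the rest.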

FIGURE 2 Process : The individual extended process space with an indication of the dimension ‘META-THINKING’ and ‘EVALUATION’.

If someone finds the first figure of the process space already quite ‘challenging’, they will certainly ‘break into a sweat’ with this second figure of the ‘expanded process space’.

Everyone can check for himself that we humans have the ability — regardless of what we are thinking — to turn our thinking at any time back onto our own thinking shortly before, a kind of ‘thinking about thinking’. This opens up an ‘additional level of thinking’ – here called the ‘meta-level’ – on which we thinkers ‘thematize’ everything that is noticeable and important to us in the preceding thinking. [*8] In addition to ‘thinking about thinking’, we also have the ability to ‘evaluate’ what we perceive and think. These ‘evaluations’ are fueled by our ’emotions’ [*9] and ‘learned preferences’. This enables us to ‘learn’ with the help of our emotions and learned preferences: If we perform certain actions and suffer ‘pain’, we will likely avoid these actions next time. If we go to restaurant X to eat because someone ‘recommended’ it to us, and the food and/or service were really bad, then we will likely not consider this suggestion in the future. Therefore, our thinking (and our knowledge) can ‘make possibilities visible’, but it is the emotions that comment on what happens to be ‘good’ or ‘bad’ when implementing knowledge. But beware, emotions can also be mistaken, and massively so.[*10]

TRUTH AND MEANING – As a collective achievement

The previous considerations on the topic of ‘truth and meaning’ in the context of individual processes have outlined that, and how, ‘language’ plays a central role in enabling meaning and, based on this, truth. Furthermore, it was also outlined that, and how, truth and meaning must be placed in a dynamic context, in a ‘process model’, as it takes place in an individual in close interaction with the environment. This process model includes the dimension of ‘thinking’ (also ‘knowledge’) as well as the dimension of ‘evaluations’ (emotions, preferences); within thinking there are potentially many ‘levels of consideration’ that can relate to each other (of course, they can also run ‘in parallel’ without direct contact with each other, although this unconnected parallelism is the less interesting case).

As fascinating as the dynamic emotional-cognitive structure within an individual actor can be, the ‘true power’ of explicit thinking only becomes apparent when different people begin to coordinate their actions by means of communication. When individual action is transformed into collective action in this way, a dimension of ‘society’ becomes visible which in a way lets the ‘individual actors’ be ‘forgotten’, because the ‘overall performance’ of the ‘collectively connected individuals’ can be orders of magnitude more complex and sustainable than anything one individual could ever realize. While a single person can at most make a contribution within their individual lifetime, collectively connected people can accomplish achievements that span many generations.

On the other hand, we know from history that collective achievements do not automatically have to bring about ‘only good’; the well-known history of oppression, bloody wars and destruction is extensive and can be found in all periods of human history.

This points to the fact that the question of ‘truth’ and ‘being good’ is not only a question for the individual process, but also a question for the collective process, and here, in the collective case, this question is even more important, since in the event of an error not only individuals have to suffer negative effects, but rather very many; in the worst case, all of them.

To be continued …

COMMENTS

[*0] The meaning of the terms ‘objective, verifiable’ will be explained in more detail below.

[*1] In a system-theoretical view of the ‘human body’ system, one can formulate the working hypothesis that far more than 99% of the events in a human body are not conscious. You can find this frightening or reassuring. I tend towards the latter, towards ‘reassurance’. Because when you see what a human body as a ‘system’ is capable of doing on its own, every second, for many years, even decades, then this seems extremely reassuring in view of the many mistakes, even gross ones, that we can make with our small ‘consciousness’. In cooperation with other people, we can indeed dramatically improve our conscious human performance, but this is only ever possible if the system performance of a human body is maintained. After all, it contains 3.5 billion years of development work of the BIOM on this planet; the building blocks of this BIOM, the cells, function like a gigantic parallel computer, compared to which today’s technical supercomputers (including the much-vaunted ‘quantum computers’) look so small and weak that it is practically impossible to express this relationship.

[*2] An ‘everyday language’ always presupposes ‘the many’ who want to communicate with each other. One person alone cannot have a language that others should be able to understand.

[*3] A meaning relation actually does what is mathematically called a ‘mapping’: Elements of one kind (elements of the surface structure of the language) are mapped to elements of another kind (the potential meaning elements). While a mathematical mapping is normally fixed, the ‘real meaning relation’ can constantly change; it is ‘flexible’, part of a higher-level ‘learning process’ that constantly ‘readjusts’ the meaning relation depending on perception and internal states.
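A minimal code sketch of this flexible mapping, assuming a crude representation of meaning elements as strings (the class and method names are illustrative):

```python
# Hypothetical sketch of comment [*3]: the meaning relation as a mapping from
# surface elements (words) to potential meaning elements, which, unlike a
# fixed mathematical mapping, can be readjusted by a learning process.

class MeaningRelation:
    def __init__(self) -> None:
        # word -> set of internal meaning elements currently linked to it
        self.links: dict = {}

    def interpret(self, word: str) -> set:
        """Current (possibly empty) potential meaning of a surface element."""
        return self.links.get(word, set())

    def learn(self, word: str, experienced_meaning: str) -> None:
        """Readjust the relation: link a perceived word with an experienced
        internal state or external situation."""
        self.links.setdefault(word, set()).add(experienced_meaning)

mr = MeaningRelation()
print(mr.interpret("rain"))                      # set(): nothing learned yet
mr.learn("rain", "water falling from the sky")
print(mr.interpret("rain"))                      # the relation has changed
```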

[*4] The contents of working hypothesis 2 originate from the findings of modern cognitive sciences (neuroscience, psychology, biology, linguistics, semiotics, …) and philosophy; they refer to many thousands of articles and books. Working hypothesis 2 therefore represents a highly condensed summary of all this. Direct citation is not possible in purely practical terms.

[*5] As is known from research on witness statements and from general perception research, in addition to all kinds of direct perception errors, there are many errors in the ‘interpretation of perception’ that are largely unconscious/automated. The actors are normally powerless against such errors; they simply do not notice them. Only methodically conscious controls of perception can partially draw attention to these errors.

[*6] Human knowledge is ‘notoriously prone to error’. There are many reasons for this. One lies in the way the brain itself works. ‘Correct’ knowledge is only possible if the current knowledge processes are repeatedly ‘compared’ and ‘checked’ so that they can be corrected. Anyone who does not regularly check the correctness will inevitably confirm incomplete and often incorrect knowledge. As we know, this does not prevent people from believing that everything they carry around in their heads is ‘true’. If there is a big problem in this world, then this is one of them: ignorance about one’s own ignorance.

[*7] In the cultural history of mankind to date, it was only very late (about 500 years ago?) that a format of knowledge was discovered that enables any number of people to build up fact-based knowledge that, compared to all other known knowledge formats, enables the ‘best results’ (which of course does not completely rule out errors, but extremely minimizes them). This still revolutionary knowledge format has the name ’empirical theory’, which I have since expanded to ‘sustainable empirical theory’. On the one hand, we humans are the main source of ‘true knowledge’, but at the same time we ourselves are also the main source of ‘false knowledge’. At first glance, this seems like a ‘paradox’, but it has a ‘simple’ explanation, which at its root is ‘very profound’ (comparable to the cosmic background radiation, which is currently simple, but originates from the beginnings of the universe).

[*8] In terms of its architecture, our brain can open up any number of such meta-levels, but due to its concrete finiteness, it only offers a limited number of neurons for different tasks. For example, it is known (and has been experimentally proven several times) that our ‘working memory’ (also called ‘short-term memory’) is only limited to approx. 6-9 ‘units’ (whereby the term ‘unit’ must be defined depending on the context). So if we want to solve extensive tasks through our thinking, we need ‘external aids’ (sheet of paper and pen or a computer, …) to record the many aspects and write them down accordingly. Although today’s computers are not even remotely capable of replacing the complex thought processes of humans, they can be an almost irreplaceable tool for carrying out complex thought processes to a limited extent. But only if WE actually KNOW what we are doing!

[*9] The word ’emotion’ is a ‘collective term’ for many different phenomena and circumstances. Despite extensive research for over a hundred years, the various disciplines of psychology are still unable to offer a uniform picture, let alone a uniform ‘theory’ on the subject. This is not surprising, as much of the assumed emotions takes place largely ‘unconsciously’ or is only directly available as an ‘internal event’ in the individual. The only thing that seems to be clear is that we as humans are never ’emotion-free’ (this also applies to so-called ‘cool’ types, because the apparent ‘suppression’ or ‘repression’ of emotions is itself part of our innate emotionality).

[*10] Of course, emotions can also lead us seriously astray or even to our downfall (being wrong about other people, being wrong about ourselves, …). It is therefore not only important to ‘sort out’ the factual things in the world in a useful way through ‘learning’, but we must also actually ‘keep an eye on our own emotions’ and check when and how they occur and whether they actually help us. Primary emotions (such as hunger, sex drive, anger, addiction, ‘crushes’, …) are selective, situational, can develop great ‘psychological power’ and thus obscure our view of the possible or very probable ‘consequences’, which can be considerably damaging for us.

[*11] The term ‘narrative’ is increasingly used today to describe the fact that a group of people use a certain ‘image’, a certain ‘narrative’ in their thinking for their perception of the world in order to be able to coordinate their joint actions. Ultimately, this applies to all collective action, even for engineers who want to develop a technical solution. In this respect, the description in the German Wikipedia is a bit ‘narrow’: https://de.wikipedia.org/wiki/Narrativ_(Sozialwissenschaften)

REFERENCES

The following sources are just a tiny selection from the many hundreds, if not thousands, of articles, books, audio documents and films on the subject. Nevertheless, they may be helpful for an initial introduction. The list will be expanded from time to time.

[1a] Propaganda, in the German Wikipedia https://de.wikipedia.org/wiki/Propaganda

[1b] Propaganda in the English Wikipedia : https://en.wikipedia.org/wiki/Propaganda /*The English version appears more systematic, covers larger periods of time and more different areas of application */

[3] Propaganda der Russischen Föderation, hier: https://de.wikipedia.org/wiki/Propaganda_der_Russischen_F%C3%B6deration (German source)

[6] Mischa Gabowitsch, Mai 2022, Von »Faschisten« und »Nazis«, https://www.blaetter.de/ausgabe/2022/mai/von-faschisten-und-nazis#_ftn4 (German source)

Pain does not replace the truth …

Time: Oct 18, 2023 – Oct 24, 2023
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This post is part of the uffmm science blog. It is a translation from the German source: https://www.cognitiveagent.org/2023/10/18/schmerz-ersetzt-nicht-die-wahrheit/. For the translation I have used chatGPT4 and deepl.com. Because the word ‘hamas’ occurs in the text, chatGPT didn’t translate a long paragraph containing this word. Thus the algorithm is somehow ‘biased’ by a certain kind of training. This is really bad, because the following text offers some reflections about a situation where someone ‘hates’ others. This is one of our biggest ‘diseases’ today.

Preface

The Hamas terrorist attack on Israeli citizens on October 7, 2023, has shaken the world. For years, terrorist acts have been shaking our world. In front of our eyes, since 2022 (actually since 2014), an attempt has been made to brutally eradicate the entire Ukrainian population. Similar events have been and are taking place in many other regions of the world…

… Pain does not replace the truth [0]…

Truth is not automatic. Making truth available requires significantly more effort than remaining in a state of partial truth.

The probability that a person knows the truth or seeks the truth is smaller than the probability of remaining in a state of partial truth or outright falsehood.

Whether falsehood or truth predominates in a democracy depends on how that democracy shapes the process of truth-finding and the communication of truth. There is no automatic path to truth.

In a dictatorship, the likelihood of truth being available is extremely dependent on those who exercise centralized power. Absolute power, however, has already fundamentally broken with the truth (which does not exclude the possibility that this power can have significant effects).

The course of human history on planet Earth thus far has shown that there is evidently no simple, quick path that uniformly leads all people to a state of happiness. This must have to do with humans themselves—with us.

The interest in seeking truth, in cultivating truth, in a collective process of truth, has never been strong enough to overcome the everyday exclusions, falsehoods, hostilities, atrocities…

One’s own pain is terrible, but it does not help us to move forward…

Who even wants a future for all of us?????

[0] There is an overview article by the author from 2018, in which he presents 15 major texts from the blog “Philosophie Jetzt” ( “Philosophy Now”) ( “INFORMAL COSMOLOGY. Part 3a. Evolution – Truth – Society. Synopsis of previous contributions to truth in this blog” ( https://www.cognitiveagent.org/2018/03/20/informelle-kosmologie-teil-3a-evolution-wahrheit-gesellschaft-synopse-der-bisherigen-beitraege-zur-wahrheit-in-diesem-blog/ )), in which the matter of truth is considered from many points of view. In the 5 years since, society’s treatment of truth has continued to deteriorate dramatically.

Hate cancels the truth


Truth is related to knowledge. However, in humans, knowledge most often is subservient to emotions. Whatever we may know or wish to know, when our emotions are against it, we tend to suppress that knowledge.

One form of emotion is hatred. The destructive impact of hatred has accompanied human history like a shadow, leaving a trail of devastation everywhere it goes: in the hater themselves and in their surroundings.

The event of the inhumane attack on October 7, 2023 in Israel, claimed by Hamas, is unthinkable without hatred.

If one traces the history of Hamas since its founding in 1987 [1,2], one can see that hatred was already laid down as an essential element at its founding. This hatred is joined by the element of a religious interpretation which calls itself Islamic, but which represents a special, very radicalized and at the same time fundamentalist form of Islam.

The history of the state of Israel is complex, and the history of Judaism is no less so. The fact that today’s Judaism also contains strong components that are clearly fundamentalist and to which hatred is not alien leads, among many other factors, at the core to a constellation of fundamentalist antagonisms on both sides that do not in themselves reveal any approaches to a solution. The many other people in Israel and Palestine ‘around them’ are part of these ‘fundamentalist force fields’, which simply evaporate humanity and truth in their vicinity. The trail of blood makes this reality visible.

Both Judaism and Islam have produced wonderful things, but what does all this mean in the face of a burning hatred that pushes everything aside, that sees only itself?

[1] Jeffrey Herf, Sie machen den Hass zum Weltbild, FAZ 20.Okt. 23, S.11 (Abriss der Geschichte der Hamas und ihr Weltbild, als Teil der größeren Geschichte) (Translation:They make hatred their worldview, FAZ Oct. 20, 23, p.11 (outlining the history of Hamas and its worldview, as part of the larger story)).

[2] Joachim Krause, Die Quellen des Arabischen Antisemitismus, FAZ, 23.10.2023,p.8 (This text “The Sources of Arab Anti-Semitism” complements the account by Jeffrey Herf. According to Krause, Arab anti-Semitism has been widely disseminated in the Arab world since the 1920s/ 30s via the Muslim Brotherhood, founded in 1928).

A society in decline

When truth diminishes and hatred grows (and, indirectly, trust evaporates), a society is in free fall. There is no remedy for this; the use of force cannot heal it, only worsen it.

The mere fact that we believe that lack of truth, dwindling trust, and above all, manifest hatred can only be eradicated through violence, shows how seriously we regard these phenomena and at the same time, how helpless we feel in the face of these attitudes.

In a world whose survival is linked to the availability of truth and trust, it is a piercing alarm signal to observe how difficult it is for us as humans to deal with the absence of truth and face hatred.

Is Hatred Incurable?

When we observe how tenaciously hatred persists in humanity, how unimaginably cruel actions driven by hatred can be, and how helpless we humans seem in the face of hatred, one might wonder if hatred is ultimately not a kind of disease—one that threatens the hater themselves and, particularly, those who are hated with severe harm, ultimately death.

With typical diseases, we have learned to search for remedies that can free us from the illness. But what about a disease like hatred? What helps here? Does anything help? Must we, like in earlier times with people afflicted by deadly diseases (like the plague), isolate, lock away, or send away those who are consumed by hatred to some no man’s land? … but everyone knows that this isn’t feasible… What is feasible? What can combat hatred?

After approximately 300,000 years of Homo sapiens on this planet, we seem strangely helpless in the face of the disease of hatred.

What’s even worse is that there are other people who see in every hater a potential tool to redirect that hatred toward goals they want to damage or destroy, using suitable manipulation. Thus, hatred does not disappear; on the contrary, it feels justified, and new injustices fuel the emergence of new hatred… the disease continues to spread.

One of the greatest events in the entire known universe—the emergence of mysterious life on this planet Earth—has a vulnerable point where this life appears strangely weak and helpless. Throughout history, humans have demonstrated their capability for actions that endure for many generations, that enable more people to live fulfilling lives, but in the face of hatred, they appear oddly helpless… and the one consumed by hatred is left incapacitated, incapable of anything else… plummeting into their dark inner abyss…


Instead of hatred, we need (minimally and in outline):

  1. Water: To sustain human life, along with the infrastructure to provide it, and individuals to maintain that infrastructure. These individuals also require everything they need for their own lives to fulfill this task.
  2. Food: To sustain human life, along with the infrastructure for its production, storage, processing, transportation, distribution, and provision. Individuals are needed to oversee this infrastructure, and they, too, require everything they need for their own lives to fulfill this task.
  3. Shelter: To provide a living environment, including the infrastructure for its creation, provisioning, maintenance, and distribution. Individuals are needed to manage this provision, and they, too, require everything they need for their own lives to fulfill this task.
  4. Energy: For heating, cooling, daily activities, and life itself, along with the infrastructure for its generation, provisioning, maintenance, and distribution. Individuals are needed to oversee this infrastructure, and they, too, require everything they need for their own lives to fulfill this task.
  5. Authorization and Participation: To access water, food, shelter, and energy. This requires an infrastructure of agreements, and individuals to manage these agreements. These individuals also require everything they need for their own lives to fulfill this task.
  6. Education: To be capable of undertaking and successfully completing tasks in real life. This necessitates individuals with enough experience and knowledge to offer and conduct such education. These individuals also require everything they need for their own lives to fulfill this task.
  7. Medical Care: To help with injuries, accidents, and illnesses. This requires individuals with sufficient experience and knowledge to offer and provide medical care, as well as the necessary facilities and equipment. These individuals also require everything they need for their own lives to fulfill this task.
  8. Communication Facilities: So that everyone can receive helpful information needed to navigate their world effectively. This requires suitable infrastructure and individuals with enough experience and knowledge to provide such information. These individuals also require everything they need for their own lives to fulfill this task.
  9. Transportation Facilities: So that people and goods can reach the places they need to go. This necessitates suitable infrastructure and individuals with enough experience and knowledge to offer such infrastructure. These individuals also require everything they need for their own lives to fulfill this task.
  10. Decision Structures: To mediate the diverse needs and necessary services in a way that ensures most people have access to what they need for their daily lives. This requires suitable infrastructure and individuals with enough experience and knowledge to offer such infrastructure. These individuals also require everything they need for their own lives to fulfill this task.
  11. Law Enforcement: To ensure disruptions and damage to the infrastructure necessary for daily life are resolved without creating new disruptions. This requires suitable infrastructure and individuals with enough experience and knowledge to offer such services. These individuals also require everything they need for their own lives to fulfill this task.
  12. Sufficient Land: To provide enough space for all these requirements, along with suitable soil (for water, food, shelter, transportation, storage, production, etc.).
  13. Suitable Climate
  14. A functioning ecosystem.
  15. A capable scientific community to explore and understand the world.
  16. Suitable technology to accomplish everyday tasks and support scientific endeavors.
  17. Knowledge in the minds of people to understand daily events and make responsible decisions.
  18. Goal orientations (preferences, values, etc.) in the minds of people to make informed decisions.
  19. Ample time and peace to allow these processes to occur and produce results.
  20. Strong and lasting relationships with other population groups pursuing the same goals.
  21. Sufficient commonality among all population groups on Earth to address their shared needs where they are affected.
  22. A sustained positive and constructive competition for those goal orientations that make life possible and viable for as many people on this planet (in this solar system, in this galaxy, etc.) as possible.
  23. The freedom present within the experiential world, included within every living being, especially within humans, should be given as much room as possible, as it is this freedom that can overcome false ideas from the past in the face of a constantly changing world, enabling us to potentially thrive in the world of the future.

THINKING: everyday – philosophical – empirical theoretical (sketch)

(First: June 9, 2023 – Last change: June 10, 2023)

Comment: This post is a translation from a German text in my blog ‘cognitiveagent.org’ with the aid of the deepL software

CONTEXT

The current phase of my thinking continues to revolve around the question of how the various states of knowledge relate to each other: the many individual scientific disciplines drift along side by side; philosophy continues to claim supremacy, but cannot really locate itself convincingly; and everyday thinking continues to run its course unperturbed, with the conviction that ‘everything is clear’, that you just have to look at things ‘as they are’. Then the different ‘religious views’ come around the corner with very high claims and a simultaneous prohibition against looking too closely. … and much more.

INTENTION

In the following text three fundamental ways of looking at our present world are outlined and at the same time they are put in relation to each other. Some hitherto unanswered questions can possibly be answered better, but many new questions arise as well. When ‘old patterns of thinking’ are suspended, many (most? all?) of the hitherto familiar patterns of thinking have to be readjusted. All of a sudden they are simply ‘wrong’ or strongly ‘in need of repair’.

Unfortunately it is only a ‘sketch’.[1]

THOUGHTS IN EVERYDAY

FIG. 1: In everyday thinking, every human being (a ‘homo sapiens’ (HS)) assumes that what he knows of a ‘real world’ is what he ‘perceives’. That this real world with its properties exists is something he is – more or less – ‘aware’ of; there is no need to discuss it specially. That which ‘is, is’.

… much could be said …

PHILOSOPHICAL THINKING

FIG. 2: Philosophical thinking starts where one notices that the ‘real world’ is not perceived by all people in ‘the same way’ and even less ‘imagined’ in the same way. Some people have ‘their ideas’ about the real world that are strikingly ‘different’ from other people’s ideas, and yet they insist that the world is exactly as they imagine it. From this observation in everyday life, many new questions can arise. The answers to these questions are as manifold as there were, and are, people who have devoted or still devote themselves to these philosophical questions.

… famous examples: Plato’s allegory of the cave suggests that the contents of our consciousness are perhaps not ‘the things themselves’ but only the ‘shadows’ of what is ultimately ‘true’ … Descartes‘ famous ‘cogito ergo sum’ brings into play the aspect that the contents of consciousness also say something about the one who ‘consciously perceives’ such contents … the ‘existence of the contents’ presupposes his ‘existence as thinker’, without which the existence of the contents would not be possible at all … what does this tell us? … Kant’s famous ‘thing in itself’ (‘Ding an sich’) can be related to the insight that the concrete, fleeting perceptions can never directly show the ‘world as such’ in its ‘generality’. This lies ‘somewhere behind’, hard to grasp, actually not graspable at all? …

… many things could be said …

EMPIRICAL-THEORETICAL THINKING

FIG. 3: The concept of an ’empirical theory’ developed very late in the documented history of man on this planet. On the one hand philosophically inspired, on the other hand independent of the widespread forms of philosophy, but very strongly influenced by logical and mathematical thinking, the new ’empirical theoretical’ thinking settled exactly at this breaking point between ‘everyday thinking’ and ‘theological’ as well as ‘strongly metaphysical philosophical thinking’. The fact that people could make statements about the world in a tone of utter conviction, although it was not possible to show ‘common experiences of the real world’ which ‘corresponded’ to the expressed statements, inspired individual people to investigate the ‘experiential (empirical) world’ in such a way that everyone else could have the ‘same experiences’ with ‘the same procedure’. These ‘transparent procedures’ were ‘repeatable’, and such procedures became what was later called an ’empirical experiment’ or then, one step further, a ‘measurement’. In ‘measuring’ one compares the ‘result’ of a certain experimental procedure with a ‘previously defined standard object’ (‘kilogram’, ‘meter’, …).

This procedure led to the fact that – at least the experimenters – ‘learned’ that our knowledge about the ‘real world’ breaks down into two components: there is the ‘general knowledge’ that our language can articulate, with terms that do not automatically have to have something to do with the ‘experiential world’, and such terms that can be associated with experimental experiences, and in such a way that other people, if they engage in the experimental procedure, can also repeat and thereby confirm these experiences. A rough distinction between these two kinds of linguistic expressions might be ‘fictive’ expressions with unexplained claims to experience, and ’empirical’ expressions with confirmed claims to experience.

Since the beginning of the new empirical-theoretical way of thinking in the 17th century, it took at least 300 years until the concept of an ’empirical theory’ was consolidated to such an extent that it became a defining paradigm in many areas of science. However, many methodological questions remained controversial or even ‘unsolved’.

DATA and THEORY

For many centuries, the ‘misuse of everyday language’ for enabling ’empirically unverifiable statements’ was directly chalked up to this everyday language, and the whole of everyday language was discredited as a ‘source of untruths’. A liberation from this ‘monster of everyday language’ was increasingly sought in formal artificial languages and later in modern axiomatized mathematics, which had entered into a close alliance with modern formal logic (from the end of the 19th century). The expression systems of modern formal logic and of modern formal mathematics had as such (almost) no ‘intrinsic meaning’. Meanings had to be introduced explicitly on a case-by-case basis. A ‘formal mathematical theory’ could be formulated in such a way that it allowed ‘logical inferences’ even without ‘explicit assignment’ of an ‘external meaning’, which allowed certain formal expressions to be called ‘formally true’ or ‘formally false’.

This seemed very ‘reassuring’ at first sight: mathematics as such is not a place of ‘false’ or ‘foisted’ truths.

The intensive use of formal theories in connection with experience-based experiments, however, then gradually made clear that a single measured value as such does not actually have any ‘meaning’ either: what is it supposed to ‘mean’ that at a certain ‘time’ at a certain ‘place’ one establishes an ‘experienceable state’ with certain ‘properties’, ideally comparable to a previously agreed ‘standard object’? ‘Expansions’ of bodies can change, ‘weight’ and ‘temperature’ as well. Everything can change in the world of experience, fast, slow, … so what can a single isolated measured value say?

It dawned on some – not only on the experience-based researchers, but also on some philosophers – that single measured values only get a ‘meaning’, a possible ‘sense’, if one can at least establish ‘relations’ between single measured values: relations ‘in time’ (before – after), relations at/in place (higher – lower, next to each other, …), ‘interrelated quantities’ (objects – areas, …), and that furthermore the different ‘relations’ themselves again need a ‘conceptual context’ (single – quantity, interactions, causal – non-causal, …).

Finally, it became clear that single measured values needed ‘class terms’ so that they could be classified somehow: abstract terms like ‘tree’, ‘plant’, ‘cloud’, ‘river’, ‘fish’ etc. became ‘collection points’ to which one could deliver ‘single observations’. With this, hundreds and hundreds of single values could then be used, for example, to characterize the abstract term ‘tree’ or ‘plant’ etc.
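A minimal sketch of this idea (class names and observation values are invented for illustration): abstract class terms act as ‘collection points’ into which single observations are delivered.

```python
# Sketch: abstract class terms as 'collection points' for single observations.
# The class names and the observation values are invented for illustration only.
from collections import defaultdict

observations = [
    ("tree",  {"height_m": 12.4, "place": "park",   "time": "2023-06-01"}),
    ("tree",  {"height_m": 7.9,  "place": "garden", "time": "2023-06-02"}),
    ("river", {"width_m": 31.0,  "place": "bridge", "time": "2023-06-01"}),
]

collected: dict[str, list[dict]] = defaultdict(list)
for class_term, single_value in observations:
    collected[class_term].append(single_value)   # the single datum is 'delivered' to the abstract term

# Hundreds of such single values could then characterize the abstract term 'tree':
print(len(collected["tree"]), "observations collected under the class term 'tree'")
```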

This distinction into ‘single, concrete’ and ‘abstract, general’ turns out to be fundamental. It also made clear that the classification of the world by means of such abstract terms is ultimately ‘arbitrary’: which terms one chooses is arbitrary, and the assignment of individual experiential data to abstract terms is not unambiguously settled in advance. The process of assigning individual experiential data to particular terms within a ‘process in time’ is itself strongly ‘hypothetical’ and in turn part of other ‘relations’ which can provide additional ‘criteria’ as to whether datum X is more likely to belong to term A or more likely to term B (biology is full of such classification problems).

Furthermore, it became apparent that mathematics, which comes across as so ‘innocent’, can by no means be regarded as ‘innocent’ on closer examination. The broad discussion of philosophy of science in the 20th century brought up many ‘artifacts’ which can at least easily ‘corrupt’ the description of a dynamic world of experience.

Thus it is characteristic of formal mathematical theories that they can operate with so-called ‘all-statements’ and ‘particular statements’. Mathematically it is important that I can talk about ‘all’ elements of a domain/set; otherwise talking becomes meaningless. If I now choose a formal mathematical system as the conceptual framework for a theory which describes ’empirical facts’ in such a way that inferences become possible which are ‘true’ in the sense of the theory and thus become ‘predictions’ asserting that a certain fact will occur either ‘absolutely’ or with a certain probability X greater than 50%, then two different worlds unite: the fragmentary individual statements about the world of experience become embedded in ‘all-statements’ which in principle say more than empirical data can provide.
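One can make this gap explicit in standard first-order notation (a sketch added for illustration; the predicate symbols P and D and the constants a1, …, an are placeholders, not part of the original text):

```latex
% n confirmed single observations only license a finite conjunction:
P(a_1) \wedge P(a_2) \wedge \dots \wedge P(a_n)
% the theory embeds them in an all-statement that also covers unobserved cases:
\forall x \, \big( D(x) \rightarrow P(x) \big)
% and no finite conjunction logically entails the all-statement:
P(a_1) \wedge \dots \wedge P(a_n) \;\not\vdash\; \forall x \, \big( D(x) \rightarrow P(x) \big)
```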

At this point it becomes visible that mathematics, which appears to be so ‘neutral’, does exactly the same job as ‘everyday language’ with its ‘abstract concepts’: the abstract concepts of everyday language always go beyond the individual case (otherwise we could not say anything at all in the end), but precisely because of this they allow the kinds of considerations and planning that we appreciate so much in mathematical theories.

Empirical theories in the format of formal mathematical theories have the further problem that as such they have (almost) no meanings of their own. If one wants to relate the formal expressions to the world of experience, then one has to explicitly ‘construct a meaning’ (with the help of everyday language!) for each abstract concept of the formal theory (and likewise for each formal relation and each formal operator) by establishing a ‘mapping’/an ‘assignment’ between the abstract constructs and certain provable facts of experience. What may sound so simple at first sight has turned out to be an almost unsolvable problem in the course of the last 100 years. From this it does not follow that one should not do it at all; but it does draw attention to the fact that the choice of a formal mathematical theory need not automatically be a good solution.

… many things could still be said …

INFERENCE and TRUTH

A formal mathematical theory can derive certain statements as formally ‘true’ or ‘false’ from certain ‘assumptions’. This is possible because of two basic assumptions: (i) all formal expressions have an ‘abstract truth value’, being either ‘abstractly true’ or ‘abstractly not true’; (ii) there is a so-called ‘formal notion of inference’ which determines whether and how one can ‘infer’ other formal expressions from a given ‘set of formal expressions’ with agreed abstract truth values and a well-defined ‘form’. This ‘derivation’ consists of ‘operations over the signs of the formal expressions’. The formal expressions are here ‘objects’ of the notion of inference, which is located one ‘level higher’, on a ‘meta-level 1’. The notion of inference is in this respect a ‘formal theory’ of its own, which speaks about certain ‘objects of a deeper level’ in the same way as the abstract terms of a theory (or of everyday language) speak about concrete facts of experience. The interaction of the notion of inference (at meta-level 1) and the formal expressions as objects presupposes its own ‘interpretive relation’ (ultimately a kind of ‘mapping’), which in turn is located at yet another level – meta-level 2. This interpretive relation uses both the formal expressions (with their truth values!) and the notion of inference as ‘objects’ in order to install an interpretive relation between them. Normally this meta-level 2 is handled by everyday language, and the implicit interpretive relation is located ‘in the minds of mathematicians (actually, in the minds of logicians)’, who assume that their ‘practice of inference’ provides enough experiential data to ‘understand’ the ‘content of the meaning relation’.
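To make the idea of ‘operations over the signs of the formal expressions’ concrete, here is a minimal sketch (all names and the single rule are illustrative, not taken from the text): a toy derivation engine that treats expressions as uninterpreted strings and applies modus ponens; the function itself plays the role of the meta-level 1 notion of inference.

```python
# Minimal sketch: derivation as pure sign manipulation (no meaning involved).
# Expressions are plain strings; implications are encoded as pairs (antecedent, consequent).
# The function 'derive' is the meta-level: it talks *about* the object-level expressions.

def derive(assumed_true: set[str], implications: set[tuple[str, str]]) -> set[str]:
    """Closure of the assumed-true expressions under modus ponens."""
    derived = set(assumed_true)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in implications:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)   # purely formal step: no 'meaning' is consulted
                changed = True
    return derived

if __name__ == "__main__":
    axioms = {"A"}                           # expressions agreed to be 'abstractly true'
    rules = {("A", "B"), ("B", "C")}         # 'A implies B', 'B implies C' as sign patterns
    print(derive(axioms, rules))             # {'A', 'B', 'C'} -- formally true, meaning left open
```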

It was Kurt Gödel [2] who in 1930/31 tried to formalize the ‘intuitive procedure’ of meta-proofs itself (by means of the famous Gödelization) and thereby turned meta-level 3 into a new ‘object’ that can be discussed explicitly. Following Gödel’s proof there were further attempts to formulate this meta-level 3 in different ways or even to formalize a meta-level 4. But so far these approaches have remained without a clear philosophical result.

What seems clear is only that the ability of the human brain to open up new meta-levels again and again, in order to analyze and discuss previously formulated facts, is in principle unlimited (limited only by the finiteness of the brain, its energy supply, the available time, and similar material factors).

An interesting special question is whether the formal inference concept of formal mathematics, applied to facts of experience in a dynamic empirical world, is appropriate to the specific ‘world dynamics’ at all. For the area of the ‘apparently material structures’ of the universe, modern physics has located multiple phenomena which simply elude classical concepts. A ‘matter’ which is at the same time ‘energy’ tends to be no longer classically describable, and quantum physics is – despite all ‘modernity’ – in the end still a ‘classical way of thinking’ within the framework of a formal mathematics which, by its very approach, lacks many properties that do belong to the experienceable world.

This limitation of formal-mathematical physical thinking shows up especially blatantly in the example of those phenomena which we call ‘life’. The experience-based phenomena that we associate with ‘living (= biological) systems’ are, at first sight, completely material structures; however, they have dynamic properties that say more about the ‘energy’ that gives rise to them than about the materiality by means of which they are realized. In this respect, implicit energy is the real ‘information content’ of living systems, which are ‘radically free’ systems in their basic structure, since energy appears as ‘unbounded’. The unmistakable tendency of living systems to ‘enable ever more complexity’ out of themselves and to integrate it contradicts all known physical principles. ‘Entropy’ is often used as an argument to relativize this form of ‘biological self-dynamics’ by pointing to a simple ‘upper bound’ as a ‘limitation’, but this reference does not completely nullify the original phenomenon of the ‘living’.

It becomes especially exciting if one dares to ask the question of ‘truth’ at this point. If one locates the meaning of the term ‘truth’ first of all in the situation in which a biological system (here: the human being) can establish a certain ‘correspondence’ between its abstract concepts and those concrete knowledge structures within its thinking which can be related, through a process of interaction, to properties of an experiential world – not only as a single individual but together with other individuals – then any abstract system of expression (called ‘language’) has a ‘true relation to reality’ only to the extent that there are biological systems that can establish such relations. These references further depend on the structure of perception and the structure of thought of these systems; these in turn depend on the nature of bodies as the context of brains; and bodies in turn depend both on the material structure and dynamics of the environment and on the everyday social processes that largely determine what a member of a society can experience, learn, work, plan, and do. Whatever an individual can or could do, society either amplifies or ‘freezes’ the individual’s potential. ‘Truth’ exists under these conditions as a ‘free-moving parameter’ that is significantly affected by the particular process environment. Talk of ‘cultural diversity’ can be a dangerous ‘trivialization’ of the massive suppression of ‘alternative processes of learning and action’ that are ‘withdrawn’ from a society because it ‘locks itself in’. Ignorance tends not to be a good advisor. However, knowledge as such does not guarantee ‘right’ action either. The ‘process of freedom’ on planet Earth is a ‘galactic experiment’ whose seriousness and extent have hardly been seen so far.

COMMENTS

[1] References are omitted here. Many hundreds of texts would have to be mentioned. No sketch can do that.

[2] See for the ‘incompleteness theorems’ of Kurt Gödel (1930, published 1931): https://en.wikipedia.org/wiki/Kurt_G%C3%B6del#Incompleteness_theorems

Pierre Lévy: Collective Intelligence – Chapter 1 – Introduction

eJournal: uffmm.org, ISSN 2567-6458, 17.March 2022 – 22.March 2022, 8:40
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

SCOPE

In the uffmm review section the different papers and books are discussed from the point of view of the oksimo paradigm. [1] In the following text the author discusses some aspects of the book “Collective Intelligence. Mankind’s Emerging World in Cyberspace” by Pierre Lévy (translated by Robert Bononno), 1997 (French: 1994).[2]

PREVIEW

Before starting a more complete review, here is a notice in advance.

Only recently did I start reading this book by Pierre Lévy, after working for more than 4 years intensively on the problem of an open knowledge space for everybody as a genuine part of the cyberspace. I have approached the problem from several disciplines, culminating in a new theory concept which additionally has a direct manifestation in a new kind of software. While I am now testing version 2 of this software and have in parallel worked through several papers of the early, the middle, and the late Karl Popper [3], I discovered this book by Lévy [*] and was completely impressed by its preface. His view of mankind and cyberspace is intellectually deep and a real piece of art. I had the feeling that this text could, without compromise, be a direct preview of our software paradigm, although I did not know about him before.

Looking to learn more about him I found some further interesting books, but especially also his blog intlekt – metadata [4], where he develops his vision of a new language for a new ‘collective intelligence’ practiced in the cyberspace. While his ideas about ‘collective intelligence’ associated with the ‘cyberspace’ are fascinating, it appears to me that his ideas about a new language are strongly embedded in ‘classical’ concepts of language, semiotics, and computing – concepts which, in my view, are not sufficient for a new language enabling collective intelligence.

Thus it can become an exciting reading, with continuous reflections about the conditions of ‘collective intelligence’ and the ‘role of language’ within it.

Chapter 1: Introduction

Position of Lévy

The following description of Lévy’s position in his 1st chapter is clearly an ‘interpretation’ from the ‘viewpoint’ of this writer at this time. This is more or less ‘inevitable’. [5]

A good starting point for the project of ‘understanding the book’ seems to be the historical outline which Lévy gives on pages 5-10. Starting with the appearance of the homo sapiens he characterizes different periods of time with different cultural patterns triggered by the homo sapiens. In the last period, which is still lasting, knowledge takes radically new ‘forms’; one central feature is the appearance of the ‘cyberspace’.

Primarily the cyberspace is ‘machine-based’: some material structure enhanced with a certain type of dynamics enabled by algorithms working in the machine. But as part of the cultural life of the homo sapiens the cyberspace is also a cultural reality, increasingly interacting directly with individuals, groups, institutions, companies, industry, nature, and more. And in this space enabled by interactions the homo sapiens does not encounter technical entities alone, but also effects/events/artifacts produced by fellow homo sapiens.

Lévy calls this a “re-creation of the social bond based on reciprocal apprenticeship, shared skills, imagination, and collective intelligence.” (p.10) And he adds as a supplement that “collective intelligence is not a purely cognitive object.” (p.10)

Looking into the future Lévy assumes two main axes: “The renewal of the social bond through our relation to knowledge and collective intelligence itself.” (p.11)

Important seems to be that ‘knowledge’ is not confined to ‘facts alone’ but ‘lives’ in the reciprocal interactions of human actors; thereby knowledge is a dynamic process. (cf. p.11) Humans as part of such knowledge processes receive their ‘identities’ from this flow. (cf. p.12) One consequence of this is “… the other remains enigmatic, becomes a desirable being in every respect.”(p.12) With some further comment: “No one knows everything, everyone knows something, all knowledge resides in humanity. There is no transcendent store of knowledge and knowledge is simply the sum of what we know.”(p.13f)

‘Collective intelligence’ dwells close to dynamic knowledge: “The basis and goal of collective intelligence is the mutual recognition and enrichment of individuals rather than the cult of fetishized or hypostatized communities.”(p.13) Thus Lévy can state that collective intelligence “is born with culture and grows with it.”(p.16) And making it more concrete with a direct embedding in a community: “In an intelligent community the specific objective is to permanently negotiate the order of things, language, the role of the individual, the identification and definition of objects, the reinterpretation of memory. Nothing is fixed.”(p.17)

These different aspects are accumulating in the vision of “a new humanism that incorporates and enlarges the scope of self knowledge into a form of group knowledge and collective thought. … [the] process of collective intelligence [is] leading to the creation of a distinct sense of community.”(p.17)

One side effect of such a new humanism could be “new forms of democracy, better suited to the complexity of contemporary problems…”.(p.18)

First COMMENTS

At this point I will give only a few comments, postponing more general and final thoughts until the end of the reading of the whole text.

Shortened Timeline – Wrong Picture

The timeline which Lévy uses is helpful, but it is ‘incomplete’. What is missing is the whole time ‘before’ the advent of the homo sapiens within biological evolution. And this ‘absence’ obscures one of the most important concepts of all life – if not ‘the’ most important – including the homo sapiens and its cultural process.

This central concept is today called ‘sustainable development’. It points to a ‘dynamical structure’ which is capable of ‘adapting to an ever changing environment’. Life on planet Earth has only been possible from the very beginning on account of this fundamental capability, starting with the first cells and being kept strongly alive through all the 3.5 billion (10^9) years of the following fascinating developments.

This capability to ‘adapt to an ever changing environment’ implies the ability to change the ‘working structure, the body’ in such a way that the structure can respond in new ways if the environment changes. Such a change has two sides: (i) the real ‘production’ of the working structures of a living system, and (ii) the ‘knowledge’ which is necessary to ‘inform’ the processes of formation and to keep an organism ‘in action’. And these basic mechanisms additionally have (iii) to be ‘distributed in a whole population’, whose sheer number gives enough redundancy to compensate for ‘wrong proposals’.

Knowing this, the appearance of the homo sapiens life form manifests a qualitative shift in the structure of adaptation so far: surely prepared over several millions of years, the body of the homo sapiens with its unusual brain enabled new forms of ‘understanding the world’ in close connection with new forms of ‘communication’ and ‘cooperation’. With the homo sapiens, brains became capable of talking — mediated by their body and the surrounding body world — with other brains hidden in other bodies, in a way which enabled the sharing of ‘meaning’ rooted in the body world as well as in one’s own body. This capability created, by communication, a ‘network of distributed knowledge’ encoded in the shared meaning of individual meaning functions. As long as communication with a certain meaning function and its shared meanings ‘works’, this distributed knowledge exists. If the shared meaning weakens or breaks down, this distributed knowledge is ‘gone’.

Thus, a homo sapiens population does not have to wait for another generation until new varieties of its body structures show up and compete with the changing environment. A homo sapiens population has the capability to perceive the environment — and itself — in a way that additionally allows new forms of ‘transformations of the perceptions’, so that ‘cognitive varieties of perceived environments’ can be ‘internally produced’, ‘communicated’, and used for ‘sequences of coordinated actions’ which can change the environment and the homo sapiens themselves.

The cultural history then shows — as Lévy has outlined briefly on his pages 5-10 — that the homo sapiens population (distributed in many competing smaller sub-populations) ‘invented’ more and more ‘behavior patterns’, ‘social rules’, and a rich ‘diversity of tools’ to improve communication and to improve the representation and processing of knowledge, which in turn enabled even more complex ‘sequences of coordinated actions’.

Sustainability & Collective Intelligence

Although until today there are no commonly accepted definitions of ‘intelligence’ and of ‘knowledge’ available [6], it makes some sense to locate ‘knowledge’ and ‘intelligence’ in this ‘communication based space of mutually coordinated actions’. And this embedding implies thinking about knowledge and intelligence as a property of a population which ‘collectively’ is learning, understanding, planning, and modifying its environment as well as itself.

And having this distributed capability a population has all the basics to enable a ‘sustainable development’.

Therefore the capability for sustainable development is an emergent capability, based on processes made possible by distributed knowledge and collective intelligence.

Having sketched this out, all the wonderful statements of Lévy seem to be ‘true’ in that they describe a dynamic reality which is provided by biological life as such.

A truly Open Space with Real Boundaries

Looking from the outside onto this biological mystery of sustainable processes based on collective intelligence using distributed knowledge one can identify incredible spaces of possible continuations. In principle these spaces are ‘open spaces’.

Looking at the details of this machinery — because we are ‘part of it’ — we know from historical and everyday experience that these processes can fail every minute, even every second.

To ‘improve’ a given situation one needs (i) a criterion which enables the judgment that something (e.g. the given situation) is ‘not good’; (ii) some ‘minimal vision’ of a ‘different situation’ which can be classified by a criterion as being ‘better’; and finally (iii) a minimal ‘knowledge’ of possible ‘actions’ which can change the given situation in successive steps and transform it into the envisioned ‘new better situation’ functioning as a ‘goal’.
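A compressed sketch of these three ingredients (situations and actions encoded as plain sets of facts; all names and values are hypothetical, not the oksimo software):

```python
# Sketch of the three ingredients for 'improvement' (hypothetical names and facts):
# (i) a criterion that judges a situation, (ii) a vision/goal, (iii) actions that change the situation.

def is_better(situation: set[str], vision: set[str]) -> bool:
    """Criterion: the situation counts as 'better' once all facts of the vision are realized."""
    return vision <= situation

def apply_action(situation: set[str], action: tuple[set[str], set[str]]) -> set[str]:
    """An action removes some facts and adds others."""
    removed, added = action
    return (situation - removed) | added

situation = {"water polluted", "no filter installed"}
vision    = {"water clean"}
actions   = [({"no filter installed"}, {"filter installed"}),
             ({"water polluted", "filter installed"}, {"water clean", "filter installed"})]

for action in actions:                       # successive steps toward the envisioned situation
    situation = apply_action(situation, action)
print(situation, "| goal reached:", is_better(situation, vision))
```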

Looking around, looking back, everybody surely has experiences from everyday life showing that these three tasks are far from trivial. To judge something as ‘not good’ or ‘not good enough’ presupposes a minimum of ‘knowledge’ which should be sufficiently evenly ‘distributed’ in the ‘brains of all participants’. Without sufficient agreement no common judgment will be possible. At the time of this writing it seems that there is plenty of knowledge around, but it is not working as a coherent knowledge space accepted by all participants. Knowledge battles against knowledge. The same holds for tasks (ii) and (iii).

There are many reasons why it is not working. While especially the ‘big challenges’ are of a ‘global nature’ and follow a certain time schedule, there is not much time available to ‘synchronize’ the necessary knowledge between all. Mankind has until now predominantly supported the sheer amount of knowledge and ‘individual specialized solutions’, but has missed the challenge to develop at the same time new and better ‘common processes’ of ‘shared knowledge’. The invention of the computer, of networks of computers, and then of the multi-faceted cyberspace is a great and important invention, but it is not really helpful as long as the cyberspace has not become a ‘genuinely human-like’ tool for ‘distributed human knowledge’ and ‘distributed collective human-machine intelligence’.

Truth

One of the most important challenges for all kinds of knowledge is the ability to enable a ‘knowledge inspired view’ of the environment — including the actor — which is ‘in agreement with the reality of the environment’; otherwise the actions will not be able to support life in the long run. [7] Such an ‘agreement’ is a challenge, especially if the ‘real processes’ are ‘complex’, ‘distributed’, and happen in ‘large time frames’. As all human societies today demonstrate, this fundamental ability to use ’empirically valid knowledge’ is partially well developed, but in many other cases it seems to be nearly nonexistent. There is a strong — inborn! — tendency of human persons to think that the ‘pictures in their heads’ ‘automatically’ represent knowledge that is in agreement with the real world. It isn’t so. Thus ‘dreams’ rule the everyday world of societies. And the proportion of brains with such ‘dreams’ seems to grow. In a certain sense this is a kind of ‘illness’: invisible, but strongly effective and highly infectious. Science alone seems not to be a sufficient remedy, but it is a substantial condition for a remedy.

COMMENTS

[*] The decisive hint for this book came from Athene Sorokowsky, who is member of my research group.

[1] Gerd Doeben-Henisch, The general idea of the oksimo paradigm: https://www.uffmm.org/2022/01/24/newsletter/, January 2022

[2] Pierre Lévy in wkp-en: https://en.wikipedia.org/wiki/Pierre_L%C3%A9vy

[3] Karl Popper in wkp-en: https://en.wikipedia.org/wiki/Karl_Popper. One of the papers I have written commenting on Popper can be found HERE.

[4] Pierre Lévy, intlekt – metadata, see: https://intlekt.io/blog/

[5] Whoever wants to know what Lévy has ‘really’ written has to go back to Lévy’s text directly. … then the reader will read Lévy’s text with ‘his own point of view’ … indeed, even then the reader will not know with certainty whether he has really understood Lévy ‘correctly’. … reading a text is always a ‘dialogue’ …

[6] Not in Philosophy, not in the so-called ‘Humanities’, not in the Social Sciences, not in the Empirical Sciences, and not in Computer Science!

[7] The ‘long run’ can be very short if you misjudge a situation in traffic, or a medical doctor makes a mistake, or a nuclear reactor has the wrong sensors, or …

Continuation

See HERE.

POPPER – Objective Knowledge (1971). Summary, Comments, how to develop it further


eJournal: uffmm.org
ISSN 2567-6458, 07.March 2022 – 12.March 2022, 10:55h
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

BLOG-CONTEXT

This post is part of the Philosophy of Science theme which is part of the uffmm blog.

PREFACE

In this post a short summary of Popper’s view of an empirical theory is outlined as he describes it in his article “Conjectural Knowledge: My Solution of the Problem of Induction” from 1971.[1] Popper’s view will be commented on, and the relationship to the author’s oksimo paradigm will be outlined.

Empirical Theory according to Popper in a Nutshell

Figure: Popper’s concept from 1971 of an empirical theory, compressed in a nutshell. Graphic by Gerd Doeben-Henisch, based on the article and using Popper’s summarizing ideas on pages 29-31.

POPPER’S POSITION 1971

In this article from 1971 Popper discusses several positions. Finally he offers the following ‘demarcation’ between only two cases: ‘Pseudo Science’ and ‘Empirical Science’. (See p.29) This triggers the question how it is possible to declare something an ‘objective empirical theory’ without claiming to possess some ‘absolute truth’.

Although Popper denies having any kind of absolute truth, he will “not give up the search for truth”, which finally leads to a “true explanatory theory”.(cf. p.29) “Truth” plays the “role of a regulative idea”.(cf. p.30) Thus according to Popper one can “guess for truth” and some of the hypotheses “may well be true”.(cf. p.30)

In Popper’s view, ‘observation’ finally shows up as that behaviour which enables the production of ‘statements’ as the ’empirical basis’ for all arguments.(cf. p.30) Empirical statements are a ‘function of the used language’.(cf. p.31)

This dimension of language leads Popper to the concept of ‘deductive logic’, which describes formal mechanisms to derive from a set of statements — which are assumed to be true — those statements which are ‘true’ by logical deduction only. If statements are ‘logically false’ then this can be used to classify the set of assumed statements as ‘logically not consistent’. (cf. p.31)

Comments on Popper’s 1971-position 50 years later

The preceding outline of Popper’s position reveals a minimalist account of the ingredients of an ‘objective empirical theory’. But we as the readers of these ideas are living 50 years later. Our minds are shaped differently. The author of this text thinks that Popper is basically ‘right’, although there are some points in Popper’s argument which deserve some comments.

Subjective – Absolute

Popper is moving between two boundaries: one boundary is the so-called ‘subjective belief’, which can support any idea and can thereby include pure nonsense; the other boundary is ‘absolute truth’, which is required to hold at all times in all places although the ‘known world’ is evidently showing steady change.

Empirical Basis

In searching for a possible position between these boundaries which would allow a minimum of ‘rationality’, he is looking for an ’empirical basis’ as a point of reference for a ‘rational theory’. He locates such an empirical basis in ‘observation statements’ which can be used for ‘testing a theory’.

In his view a ‘rational empirical theory’ has to have a ‘set of statements’ (often called the ‘assumptions’ of the theory or its ‘axioms’) which are assumed to ‘describe the observable world’ in such a way that these statements can be ‘confirmed’ or ‘falsified’.

Confirmation – Falsification

A ‘confirmation’ does not imply that the confirmed statement is ‘absolutely true’ (his basic conviction); but one can experience that a confirmed statement can function as a ‘hypothesis/conjecture’ which ‘works in the actual observation’. This does not exclude that it will perhaps not work in a future test. The pragmatic difference between ‘interesting conjectures’ and those of less interest is that a ‘repeated confirmation’ increases the ‘probability’ that such a confirmation can happen again. An ‘increasing probability’ can induce an ‘increased expectation’. Nevertheless, increased probabilities and associated increased expectations are no substitutes for ‘truth’.

A test which shows ‘no confirmation’ for a logically derived statement from the theory is difficult to interpret:

Case (i): A theory claims that a statement S refers to a proposition A which is ‘true in a certain experiment’, but in the real experiment the observation reveals a proposition B which translates to non-A and can be interpreted as ‘the opposite of A is the case’ (= is ‘true’). This outcome will be interpreted in the way that the proposition B, interpreted as ‘non-A’, contradicts ‘A’, and this will be interpreted further in the way that the statement S of the theory represents a partial contradiction to the observable world.

Case (ii): A theory claims that a statement S refers to a proposition A which is ‘true in a certain experiment’, but in the real experiment the observation reveals a proposition B ‘being the case’ (= being ‘true’) which is a different proposition, and this outcome cannot be related to the proposition ‘A’ forecast by the theory. If the statement ‘cannot be interpreted sufficiently well’ then the situation is neither ‘true’ nor ‘false’; it is ‘undefined’.

Discussion: Case (ii) reveals that there exists an observable (empirical) fact which is not related to a certain ‘logically derived’ statement with proposition A. There can be many circumstances why the observation did not generate the ‘expected proposition A’. If one assumes that the observation is related to an ‘agreed process of generating an outcome M’ which can be ‘repeated at will’ by ‘everybody’, then the observed fact of a ‘proposition B distinguished from proposition A’ could be interpreted in the way that the expectation of the theory cannot be reproduced with the agreed procedure M. This leaves open the question whether there could exist another procedure M’ producing an outcome ‘A’. For the actors running the procedure M this case is, with regard to the logically derived statement S about proposition A, ‘unclear’, ‘not defined’ – a ‘non-confirmation’; at the same time it is no confirmation either.

Discussion: Case (i) seems — at first glance — to be clearer in its interpretation. Assume here too that the observation is associated with an agreed procedure M producing the proposition B which can be interpreted as non-A (B = non-A). If everybody accepts this ‘classification’ of B as ‘non-A’, then for ‘purely logical reasons’ (depending on the assumed concept of logic!) ‘non-A’ contradicts ‘A’. But in the ‘real world’ with ‘real observations’ things are usually not as ‘clear-cut’ as a theory may assume. The observable outcome B of an agreed procedure M can show a broad spectrum of ‘similarities’ with proposition A, varying between 100% and less. Even if one repeats the agreed procedure M several times it can show a ‘sequence of propositions <B1, B2, …, Bn>’ which are all not exactly 100% similar to proposition A. In such a case (the normal case!) it is difficult if not impossible to speak of a logical contradiction. The idea of Popper-1971 of a possible ‘falsification’ of a theory would then become difficult to interpret. A possible remedy for this situation could be to modify a theory in such a way that it forecasts only statements with a proposition A which is represented as a ‘field of possible instances A = <a1, a2, …, am>’, where every ‘ai‘ represents some kind of variation. In that modified case it would be ‘more probable’ to judge a non-confirmation between A as <a1, a2, …, am> and B as <B1, B2, …, Bn> if one takes into account the ‘variability’ of a proposition.[3]
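The following sketch illustrates this modified picture (thresholds, numbers, and function names are invented for illustration): the theory forecasts a ‘field of possible instances’ A, the agreed procedure M delivers a sequence of observations B1, …, Bn, and the verdict becomes a graded similarity judgment instead of a strict identity check.

```python
# Sketch: graded confirmation instead of strict identity (all numbers/thresholds invented).
# The theory forecasts a 'field' of admissible instances A; the agreed procedure M
# delivers a sequence of observed values B1..Bn; similarity decides the verdict.

def similarity(forecast: float, observed: float, tolerance: float) -> float:
    """1.0 = identical; falls off linearly with the distance, floor at 0."""
    return max(0.0, 1.0 - abs(forecast - observed) / tolerance)

def verdict(field_A: list[float], observations_B: list[float],
            tolerance: float = 0.5, threshold: float = 0.8) -> str:
    scores = [max(similarity(a, b, tolerance) for a in field_A) for b in observations_B]
    mean_score = sum(scores) / len(scores)
    if mean_score >= threshold:
        return "confirmation"
    if mean_score == 0.0:
        return "contradiction (no instance of A is even approximately observed)"
    return "non-confirmation (undefined, neither clearly A nor clearly non-A)"

field_A = [9.8, 9.9, 10.0, 10.1]            # forecast variability of the proposition A
observations_B = [9.85, 10.05, 9.95]        # measured values from repeated runs of M
print(verdict(field_A, observations_B))     # here: 'confirmation'
```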

Having discussed the case of ‘non-confirmation’ in this modified way leads back again to the case of ‘confirmation’: the ‘fuzziness’ of observable facts, even in the context of agreed procedures M of observation which are repeatable by everyone (usually called measurement), calls for a broader concept of ‘similarity’ between ‘derived propositions’ and ‘observed propositions’. This has long been a hotly debated point in the philosophy of science (see e.g. [4]). Until now no generally accepted solution for this problem exists.

Thus the clear idea of Popper to give a theory candidate a minimum of rationality by relating the theory in an agreed way to empirical observations becomes, in the ‘dust of reality’, a difficult case. It is interesting that the ‘late Popper’ (1988-1991) modified his view on this subject somewhat further in the direction of the interpretation of observable events. (cf. [5])

Logic as an Organon

In the discussion of the possible confirmation or falsification of a theory Popper uses two different perspectives: (i) in a broader sense he talks about the ‘process of justification’ of the theoretical statements with regard to an empirical basis, relying on the ‘regulative idea of truth’, and (ii) in a more specialized sense he talks about ‘deductive logic as an organon of criticism’. These two perspectives demand further clarification.

While the meaning of the concept ‘theory’ is rather vague (statements, which have to be confirmed or falsified with respect to observational statements), the concept ‘deductive logic as an organon’ isn’t really clearer.

Until today we have two big paradigms of logic: (i) ‘classical logic’ inspired by Aristotle (with many variants) and (ii) ‘modern formal logic’ (cf. [6]) in combination with modern mathematics (cf. [7],[8]). Both paradigms represent a whole universe of different variants, whose combination into concrete formal empirical theories shows more than one paradigm. (cf. [4], [8], [10])

As outlined in the figure above, the principal idea of logic follows this schema: one has a set of expressions of some language L for which one assumes at least that these expressions are classified as ‘true expressions’. According to an agreed procedure of ‘derivation’ one can derive (deduce, infer, …) other expressions of the language which are assumed to be classified as ‘true’ if the assumptions hold.[11]

The important point here is that the modern concept of logic does not explain what ‘true’ means, nor is there an explanation of how exactly a procedure looks which enables the classification of an expression as ‘being true’. Logic works with the minimalist assumption that the ‘user of logic’ is using statements which he assumes to be ‘true’, independently of how this classification came into being. This frees the user of logic from the cumbersome process of clarifying the meaning and the existence of something which makes a statement ‘true’; but on the other side the user of modern logic has no real control over whether his ‘concept of derivation’ makes any sense in a real world from which observation statements are generated claiming to be ’empirically true’, and whether the relationships between these observational statements are appropriately ‘represented’ by the formal derivation concept. Until today there exists no ‘meta-theory’ which explains the relationship between the derivation concept of formal logic (there are many such concepts!) and the ‘dynamics of real events’.

Thus, if Popper mentions formal logic as a tool for handling the assumed true statements of a theory, it is not really clear whether such a formal logical derivation is really appropriate to explain the ‘relationships between assumed true statements’ without knowing which kind of reality is ‘designated’/‘referred to’ by such statements and by their relationships to each other.

(Formalized) Theory and Logic

In his paper Popper does not explain much about what he concretely means by a (formalized) theory. Today there exist many different proposals of formalized theories for use as ’empirical theories’, but there is no commonly agreed final ‘template’ of a ‘formal empirical theory’.

Nevertheless we need some minimal conception to be able to discuss some of the properties of a theory more concretely. I will address this problem in another post, accompanied by concrete applications.

COMMENTS

[1] Karl R.Popper, Conjectural Knowledge: My Solution of the Problem of Induction, in: [2], pp.1-31

[2] Karl R.Popper, Objective Knowledge. An Evolutionary Approach, Oxford University Press, London, 1972 (reprint with corrections 1973)

[3] In our everyday use of our ‘normal’ language it is the ‘normal’ case that a statement S like ‘There is a cup on the table’ can be interpreted in many different ways, depending on which concrete thing (= proposition B of the above examples) called a ‘cup’ or a ‘table’ can be observed.

[4] F. Suppe, Ed., The Structure of Scientific Theories, University of Illinois Press, Urbana, 2nd edition, 1979.

[5] Gerd Doeben-Henisch, 2022,(SPÄTER) POPPER – WISSENSCHAFT – PHILOSOPHIE – OKSIMO-DISKURSRAUM, in: eJournal: Philosophie Jetzt – Menschenbild, ISSN 2365-5062, 22.-23.Februar 2022,
URL: https://www.cognitiveagent.org/2022/02/22/popper-wissenschaft-philosophie-oksimo-paradigma/

[6] William Kneale and Martha Kneale, The development of logic, Oxford University Press, Oxford, 1962 with several corrections and reprints 1986.

[7] Jean Dieudonnè, Geschichte der Mathematik 1700-1900, Friedrich Viehweg & Sohn, Braunschweig – Wiesbaden, 1985 (From the French edition “Abrégé d’histoire des mathématique 1700-1900, Hermann, Paris, 1978)

[8] Philip J.Davis & Reuben Hersh, The Mathematical Experience, Houghton Mifflin Company, Boston, 1981

[9] Nicolas Bourbaki, Elements of Mathematics. Theory of Sets, Springer-Verlag, Berlin, 1968

[10] Wolfgang Balzer, C.Ulises Moulines, Joseph D.Sneed, An Architectonic for Science. The Structuralist Program,D.Reidel Publ. Company, Dordrecht -Boston – Lancaster – Tokyo, 1987

[11] The usage of the terms ‘expression’, ‘proposition’, and ‘statement’ in this text is as follows: An ‘expression‘ is a string of signs from some alphabet A which is accepted as a ‘well-formed expression’ of some language L. A ‘statement‘ is an utterance of some actor using expressions of the language L to talk ‘about’ some ‘experience’ — from the world of bodies or from his consciousness — which is understood as the ‘meaning‘ of the statement. The relationship between the expressions of the statement and the meaning is located ‘in the actor’ and has been ‘learned’ by interactions with the world and with himself. This hypothetical relationship is here called the ‘meaning function φ’. A ‘proposition‘ is (i) the inner construct of the meaning of a statement (here called the ‘intended proposition’) and (ii) that part of the experience which is correlated with the inner construct of the stated meaning (here called the ‘occurring proposition’). The special relationship between the intended proposition and the occurring proposition is often expressed as ‘referring to’ or ‘designating’. A statement is said to ‘hold’, to be ‘true’, or to ‘be the case’ if there exists an occurring proposition which is ‘similar enough’ to the intended proposition of the statement. If such an occurring proposition is lacking, then the designation of the statement is ‘undefined’ or ‘non-confirming’ with respect to the expectation.

Follow-up Post

For a follow-up post see here.

OKSIMO MEETS POPPER. Popper’s Position

eJournal: uffmm.org
ISSN 2567-6458, 31.March – 31.March  2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text is part of a philosophy of science analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive posts dedicated to the HMI-Analysis for this software.

POPPERs POSITION IN THE CHAPTERS 1-17

In my reading of the chapters 1-17 of Popper’s The Logic of Scientific Discovery [1] I see the following three main concepts which are interrelated: (i) the concept of a scientific theory, (ii) the point of view of a meta-theory about scientific theories, and (iii) possible empirical interpretations of scientific theories.

Scientific Theory

A scientific theory is according to Popper a collection of universal statements AX, accompanied by a concept of logical inference ⊢, which allows the deduction of a certain theorem t if one makes some additional concrete assumptions H.

Example: Theory T1 = <AX1, ⊢>

AX1 = {Birds can fly}

H1 = {Peter is a bird}

{AX1, H1} ⊢ Peter can fly

Because there exists a concrete object which is classified as a bird, and this concrete bird with the name ‘Peter’ can fly, one can infer that the universal statement is verified by this concrete bird. But the question remains open whether all observable concrete objects classifiable as birds can fly.

One could continue with observations of several hundreds of concrete birds, but according to Popper this would not prove the theory T1 completely true. Such a procedure can only support a numerical universality understood as a conjunction of finitely many observations about concrete birds like ‘Peter can fly’ & ‘Mary can fly’ & … & ‘AH2 can fly’. (cf. p.62)

The only procedure which is applicable to a universal theory according to Popper is to falsify a theory by only one observation like ‘Doxy is a bird’ and ‘Doxy cannot fly’. Then one could construct the following inference:

AX1 = {Birds can fly}

H2 = {Doxy is a bird, Doxy cannot fly}

{AX1, H2} ⊢ ‘Doxy can fly’ & ~’Doxy can fly’

If a statement A can be inferred and simultaneously its negation ~A, then this is called a logical contradiction:

{AX1, H2} ⊢ ‘Doxy can fly’ & ~’Doxy can fly’

In this case the set {AX1, H2} is called inconsistent.

If a set of statements is classified as inconsistent, then you can derive everything from this set. In this case you can no longer distinguish between true and false statements.
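A small sketch of this point (the encoding of negation by a leading ‘~’ and all statement names are illustrative): once a statement and its negation are both contained in a set, the classical principle ‘ex falso quodlibet’ makes every statement derivable, so the set loses all discriminating power.

```python
# Sketch: an inconsistent set can no longer separate true from false statements.
# Negation is encoded by a leading '~'; all statement names are illustrative.

def is_inconsistent(statements: set[str]) -> bool:
    """True if some statement and its negation are both contained in the set."""
    return any(("~" + s) in statements for s in statements if not s.startswith("~"))

def derivable(statement: str, statements: set[str]) -> bool:
    """Classical 'ex falso quodlibet': from an inconsistent set everything is derivable."""
    return statement in statements or is_inconsistent(statements)

theory_plus_observation = {"Doxy can fly", "~Doxy can fly", "Doxy is a bird"}
print(is_inconsistent(theory_plus_observation))          # True
print(derivable("The moon is made of cheese",            # True as well -- nothing is excluded
                theory_plus_observation))
```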

Thus, while an increase in the number of confirmed observations can only increase the trust in the axioms of a scientific theory T without enabling an absolute proof, a falsification of a theory T can destroy the ability of this theory to distinguish between true and false statements.

Another idea associated with this structure of a scientific theory is that the universal statements using universal concepts are, strictly speaking, speculative ideas which require some faith that they will prove themselves every time one tries. (cf. p.33, 63)

Meta Theory, Logic of Scientific Discovery, Philosophy of Science

Talking about scientific theories has at least two aspects: scientific theories as objects and those who talk about these objects.

Those who talk about them are usually Philosophers of Science, who are only a special kind of philosophers, e.g. a person like Popper.

Reading the text of Popper one can identify the following elements which seem to be important for describing scientific theories in a broader framework:

A scientific theory from the point of view of Philosophy of Science represents a structure like the following (minimal version):

MT = <S, A[μ], E, L, AX, ⊢, ET, E+, E-, true, false, contradiction, inconsistent>

In a shared empirical situation S there are some human actors A as experts producing expressions E of some language L. Based on their built-in adaptive meaning function μ the human actors A can relate properties of the situation S to expressions E of L. Those expressions E which are considered observable and classified as true are called true expressions E+, others are called false expressions E-. Both sets of expressions are proper subsets of E: E+ ⊂ E and E- ⊂ E. Additionally the experts can define some special set of expressions called axioms AX, which are universal statements allowing the logical derivation (⊢) of expressions, called the theorems ET of the theory, which are called logically true. If one combines the set of axioms AX with some set of empirically true expressions E+ as {AX, E+}, then one can logically derive either only expressions which are logically true as well as empirically true, or one can derive logically true expressions which are empirically true and empirically false at the same time; see the example from the preceding paragraph:

{AX1, H2} ⊢ ‘Doxy can fly’ & ~’Doxy can fly’

Such a case of a logically derived contradiction A and ~A tells us about the set of axioms AX, unified with the empirically true expressions, that this unified set has become inconsistent when confronted with the known true empirical expressions: the axioms AX unified with true empirical expressions can no longer distinguish between true and false expressions.

Popper gives some general requirements for the axioms of a theory (cf. p.71):

  1. Axioms must be free from contradiction.
  2. The axioms must be independent, i.e. they must not contain any axiom deducible from the remaining axioms.
  3. The axioms should be sufficient for the deduction of all statements belonging to the theory which is to be axiomatized.

While requirements (1) and (2) are purely logical and can be proved directly, requirement (3) is different: to know whether the theory covers all statements which the experts intend as the subject area presupposes that all aspects of the empirical environment are already known. In the case of truly empirical theories this does not seem plausible. Rather we have to assume an open process which generates some hypothetical universal expressions which ideally will not be falsified; but if they are, then the theory has to be adapted to the new insights.

Empirical Interpretation(s)

Popper assumes that the universal statements of scientific theories are linguistic representations, and this means they are systems of signs or symbols. (cf. p.60) Expressions as such have no meaning. Meaning comes into play only if the human actors use their built-in meaning function and set up a coordinated meaning function which allows all participating experts to map properties of the empirical situation S into the used expressions as E+ (expressions classified as being actually true), or E- (expressions classified as being actually false), or AX (expressions having an abstract meaning space which can become true or false depending on the activated meaning function).

Examples:

  1. Two human actors in a situation S agree about the fact that there is ‘something’ which they classify as a ‘bird’. Thus someone could say ‘There is something which is a bird’ or ‘There is some bird’ or ‘There is a bird’. If there are two somethings which are ‘understood’ as being birds, then they could say ‘There are two birds’, or ‘There is a blue bird’ (if the one has the color ‘blue’) and ‘There is a red bird’, or ‘There are two birds; the one is blue and the other is red’. This shows that human actors can relate their ‘concrete perceptions’ to more abstract concepts and can map these concepts into expressions. According to Popper, in this ‘bottom-up’ way only numerical universal concepts can be constructed. But logically there are only two cases: concrete (one) or abstract (more than one). To say that there is a ‘something’ or to say there is a ‘bird’ establishes a general concept which is independent of the number of its possible instances.
  2. These concrete somethings, each classified as a ‘bird’, can ‘move’ from one position to another by ‘walking’ or by ‘flying’. While ‘walking’ they change their position connected to the ‘ground’, while during ‘flying’ they ‘go up in the air’. If a human actor throws a stone up in the air, the stone will come back to the ground. A bird which goes up in the air can stay there and move around in the air for a long while. Thus ‘flying’ is different from ‘throwing something’ up in the air.
  3. The expression ‘A bird can fly’, understood as an expression which can be connected to the daily experience of bird-objects moving around in the air, can be empirically interpreted, but only if there exists such a mapping called a meaning function. Without a meaning function the expression ‘A bird can fly’ has no meaning as such.
  4. Other expressions like ‘X can fly’ or ‘A bird can Y’ or ‘Y(X)’ share the same fate: without a meaning function they have no meaning, but associated with a meaning function they can be interpreted. For instance, saying that the form of the expression ‘Y(X)’ shall be interpreted as ‘Predicate(Object)’, and that a possible ‘instance’ for a predicate could be ‘Can Fly’ and for an object ‘a Bird’, then we could get ‘Can Fly(a Bird)’, translated as “The object ‘a Bird’ has the property ‘can fly’” or shortly ‘A Bird can fly’. This would usually be used as a possible candidate for the daily meaning function which relates this expression to those somethings which can move up in the air (see the sketch below).
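A minimal sketch of such a ‘daily meaning function’ (all names and the encoding of observations as key-value pairs are invented for illustration): the abstract form ‘Y(X)’ only acquires an empirical meaning once a mapping connects the predicate and object signs to checks on observable somethings.

```python
# Sketch: the abstract form 'Y(X)' -- 'Predicate(Object)' -- gets an empirical meaning only
# through an explicit meaning function. All names below are invented for illustration.

def is_a_bird(thing: dict) -> bool:
    return thing.get("classified as") == "bird"

def can_fly(thing: dict) -> bool:
    return thing.get("moves up in the air", False)

# The meaning function relates the signs of the language to observable checks:
meaning = {"a Bird": is_a_bird, "Can Fly": can_fly}

def holds(predicate: str, obj: str, observed_thing: dict) -> bool:
    """Interpret 'Predicate(Object)', e.g. 'Can Fly(a Bird)', against one observation."""
    return meaning[obj](observed_thing) and meaning[predicate](observed_thing)

observed_something = {"classified as": "bird", "moves up in the air": True}
print(holds("Can Fly", "a Bird", observed_something))   # True -> 'A bird can fly' is confirmed here
```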
Axioms and Empirical Interpretations

The basic idea of a system of axioms AX is — according to Popper — that the axioms as universal expressions represent a system of equations in which the general terms can be substituted by certain values. The set of admissible values is different from the set of inadmissible values. The relation between those values which can be substituted for the terms is called satisfaction: the values satisfy the terms with regard to the relations! And Popper introduces the term ‘model‘ for that set of admissible values which satisfies the equations. (cf. p.72f)

But Popper has difficulties with an axiomatic system interpreted as a system of equations, since it cannot be refuted by the falsification of its consequences; for these too must be analytic. (cf. p.73) His main problem with axioms is that “the concepts which are to be used in the axiomatic system should be universal names, which cannot be defined by empirical indications, pointing, etc. They can be defined if at all only explicitly, with the help of other universal names; otherwise they can only be left undefined. That some universal names should remain undefined is therefore quite unavoidable; and herein lies the difficulty…” (p.74)

On the other hand Popper knows that “…it is usually possible for the primitive concepts of an axiomatic system such as geometry to be correlated with, or interpreted by, the concepts of another system, e.g. physics…. In such cases it may be possible to define the fundamental concepts of the new system with the help of concepts which were originally used in some of the old systems.” (p.75)

But the translation of the expressions of one system (geometry) into the expressions of another system (physics) does not necessarily solve his problem of the non-empirical character of universal terms. Physics especially also uses universal or abstract terms which as such have no meaning. To verify or falsify physical theories one has to show how the abstract terms of physics can be related to observable matters which can be decided to be true or not.

Thus the argument goes back to Popper’s primary problem that universal names cannot be directly interpreted in an empirically decidable way.

As the preceding examples (1) – (4) show, for human actors it is no principal problem to relate any kind of abstract expression to some concrete real matter. The solution lies in the fact that expressions E of a language L are never used in isolation. The usage of expressions is always bound to human actors who use them as part of a language L which comprises, besides the set of possible expressions E, also the built-in meaning function μ that maps expressions into internal structures IS, which in turn are related to perceptions of the surrounding empirical situation S. Although these internal structures are processed internally in highly complex ways and are — as we know today — no 1-to-1 mappings of the surrounding empirical situation S, they are related to S, and therefore every kind of expression — even those with so-called abstract or universal concepts — can be mapped onto something real if the human actors agree about such mappings.
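To make this more concrete, here is a minimal Python sketch, purely illustrative and not part of any real system; the names situation, mu and interpret are invented stand-ins. It only shows that an expression becomes interpretable when an agreed meaning function relates it to a perceived situation S, and that without such a mapping it has no meaning at all.

# All names (situation, mu, interpret) are hypothetical illustrations.
situation = {"objects": {"bird-1": {"kind": "bird", "in_air": True},
                         "stone-1": {"kind": "stone", "in_air": False}}}

# The meaning function mu: expression -> predicate over the perceived situation S.
mu = {
    "A bird can fly": lambda s: any(o["kind"] == "bird" and o["in_air"]
                                    for o in s["objects"].values()),
}

def interpret(expression, s):
    """Return True/False if the expression has an agreed meaning, otherwise None."""
    if expression not in mu:          # no meaning function -> no meaning as such
        return None
    return mu[expression](s)

print(interpret("A bird can fly", situation))   # True in this perceived situation
print(interpret("Y(X)", situation))             # None: no meaning function agreed yet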

Example:

Let us look at another example.

Take the system of axioms AX as the following schema: AX = {a+b=c}. This schema as such has no clear meaning. But if experts interpret it as an operation ‘+’ with some arguments, as part of a mathematical theory, then one can construct a simple (partial) model m as follows: m = {<1,2,3>, <2,3,5>}. The values are again given as a set of symbols which as such need not have a meaning, but in common usage they will be interpreted as numbers which can satisfy the general form of the equation. In this secondary interpretation m becomes a logically true (partial) model for the axiom AX, whose empirical meaning is still unclear.
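A minimal Python sketch of this secondary interpretation (the helper satisfies and the set m are illustrative assumptions): a value tuple belongs to a (partial) model of the axiom schema exactly if the equation holds for it.

def satisfies(triple):
    """A tuple <a, b, c> satisfies the axiom schema a + b = c iff the equation holds."""
    a, b, c = triple
    return a + b == c

m = {(1, 2, 3), (2, 3, 5)}                 # the partial model from the text
print(all(satisfies(t) for t in m))        # True: m satisfies the axiom schema
print(satisfies((2, 2, 5)))                # False: such a tuple would be inadmissible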

It is conceivable that one uses this formalism to describe empirical facts, for instance a group of humans collecting some objects. Different people bring objects; the individual contributions are noted on a sheet of paper, and at the same time the objects are put into some box. From time to time someone looks into the box and counts the objects. If it has been noted that A brought 1 egg and B brought 2 eggs, then according to the theory there should be 3 eggs in the box. But perhaps only 2 can be found. Then there is a difference between the logically derived forecast of the theory, 1+2 = 3, and the empirically measured value, 1+2 = 2. If one defined every measurement a+b = c’ as a contradiction whenever a+b = c is theoretically given and c’ ≠ c, then we would have with ‘1+2 = 3′ & ~’1+2 = 3’ a logically derived contradiction which leads to the inconsistency of the assumed system. But in reality the usual reaction of the counting person would not be to declare the system inconsistent, but rather to suggest that some unknown actor has, against the agreed rules, taken one egg from the box. To prove this suggestion he would have to find this unknown actor and show that he has taken the egg … perhaps not a simple task … And what will the next authority do: will it believe the suggestion of the counting person, or will it blame the counter for possibly having taken the missing egg himself? But would this make sense? Why should the counter write down how many eggs have been delivered and thereby make the difference visible? …
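The reaction described above can be sketched as follows, with a hypothetical helper check_delivery that is not part of any real system: the theoretical forecast c = a + b is compared with the counted value, and a deviation is treated as a reason to examine the empirical setup rather than as a logical inconsistency of arithmetic.

def check_delivery(noted_contributions, counted_in_box):
    """Compare the theoretical forecast with the empirical count (illustrative only)."""
    forecast = sum(noted_contributions)            # e.g. 1 + 2 = 3
    if counted_in_box == forecast:
        return "forecast confirmed"
    return (f"deviation: forecast {forecast}, counted {counted_in_box} "
            "-> examine the empirical setup (e.g. a missing egg), not the arithmetic")

print(check_delivery([1, 2], 3))   # forecast confirmed
print(check_delivery([1, 2], 2))   # deviation: forecast 3, counted 2 ...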

Thus interpreting some abstract expression with regard to some observable reality is not a principal problem, but it may remain unsolvable for purely practical reasons, leaving questions of empirical soundness open.

SOURCES

[1] Karl Popper, The Logic of Scientific Discovery, First published 1935 in German as Logik der Forschung, then 1959 in English by  Basic Books, New York (more editions have been published  later; I am using the eBook version of Routledge (2002))


THE OKSIMO CASE as SUBJECT FOR PHILOSOPHY OF SCIENCE. Part 1

eJournal: uffmm.org
ISSN 2567-6458, 22.March – 23.March 2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text is part of a philosophy of science  analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive  posts dedicated to the HMI-Analysis for  this software.

THE OKSIMO EVENT SPACE

The characterization of the oksimo software paradigm starts with an informal characterization  of the oksimo software event space.

EVENT SPACE

An event space is a space which can be filled up by observable events fitting to the species-specific internally processed environment representations [1], [2], here called internal environments [ENVint]. Thus the same external environment [ENV] can be represented, in the presence of 10 different species, in 10 different internal formats. The expression ‘environment’ [ENV] is thus an abstract concept assuming an objective reality which is common to all living species, but which is in fact processed by every species in a species-specific way.

In a human culture the usual point of view [ENVhum] exists alongside all the other points of view [ENVa] of all the other species a.

In the ideal case it would be possible to translate all species-specific views ENVa into a symbolic representation which in turn could then be translated into the human point of view ENVhum. Then — in the ideal case — we could define the term environment [ENV] as the sum of all the different species-specific views translated into a human-specific language: ∑ENVa = ENV.

But because such a generalized view of the environment is, for practical reasons, not really possible today, we will use here, for the beginning, only expressions related to the human-specific point of view [ENVhum], using as language an ordinary language [L], here the English language [LEN]. Every scientific language — e.g. the language of physics — is understood here as a sublanguage of the ordinary language.

EVENTS

An event [EV] within an event space [ENVa] is a change [X] which can be observed at least by the members of that species [SP] a which is part of the environment ENV that enables the species-specific event space [ENVa]. Possibly there are other actors around in the environment ENV, from different species, with their own specific event spaces [ENVa], and the contents of the different event spaces can possibly overlap with regard to certain events.

A behavior is some observable movement of the body of some actor.

Changes X can be associated with certain behavior of certain actors or with non-actor conditions.

Thus when there are human or non-human actors in an environment which are moving, they show a behavior which can possibly be associated with some observable changes.

CHANGE

Besides being associated with observable events in the (species-specific) environment, the expression change is understood here as referring to a kind of inner state of an actor which can compare past (stored) states Spast with an actual state Snow. If the past and the actual state differ in some observable aspect, Diff(Spast, Snow) ≠ 0, then there exists some change X, or Diff(Spast, Snow) = X. Usually the actor perceiving a change X will assume that this internal structure represents something external to the brain, but this need not be the case. It helps if there are other human actors who confirm such a change perception, although even this does not guarantee that a change is really occurring. In the real world it is possible that a whole group of human actors shares a wrong interpretation.
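A minimal Python sketch of this comparison, assuming for illustration that states are represented as sets of observed facts (the fact strings and the helper diff are invented for this example):

def diff(s_past, s_now):
    """Return the change X as (facts that disappeared, facts that appeared)."""
    return s_past - s_now, s_now - s_past

s_past = {"door D1 is closed", "light is off"}
s_now = {"door D1 is open", "light is off"}

disappeared, appeared = diff(s_past, s_now)
change_detected = bool(disappeared or appeared)    # corresponds to Diff(Spast, Snow) != 0
print(change_detected, disappeared, appeared)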

SYMBOLIC COMMUNICATION AND MEANING

It is a specialty of human actors — to some degree shared by other non-human biological actors — that they not only can build up internal representations ENVint of the reality external to the brain (the body itself or the world beyond the body), which are mostly unconscious and only partially conscious, but that they can also build up structures of expressions of an internal language Lint, which can be mimicked to a high degree by expressions in the body-external environment ENV, called expressions of an ordinary language L.

For this to work one  has  to assume that there exists an internal mapping from internal representations ENVint into the expressions of the internal language   Lint as

meaning : ENVint <—> Lint.

and

speaking: Lint —> L

hearing: Lint <— L

Thus human actors can use their ordinary language L to activate internal encodings/ decodings with regard to the internal representations ENVint  gained so far. This is called here symbolic communication.
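The chain of mappings can be illustrated by a deliberately trivial Python sketch; all dictionaries (meaning, speaking, hearing, decode) are hypothetical stand-ins for the highly complex internal processes and are not meant as a model of the brain.

# ENVint <-> Lint: the learned meaning relation (toy stand-in)
meaning = {"REP_WHITE_WOODEN_TABLE": "white_wooden_table_Lint"}
# Lint -> L: 'speaking' externalizes an internal expression
speaking = {"white_wooden_table_Lint": "There is a white wooden table."}
# L -> Lint: 'hearing' maps a spoken expression back to an internal expression
hearing = {spoken: internal for internal, spoken in speaking.items()}
# Lint -> ENVint: decoding back to the internal representation
decode = {internal: rep for rep, internal in meaning.items()}

# Speaker: internal representation -> internal expression -> spoken expression of L
utterance = speaking[meaning["REP_WHITE_WOODEN_TABLE"]]
# Hearer: spoken expression -> internal expression -> activated internal representation
activated = decode[hearing[utterance]]
print(utterance, "->", activated)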

NO SPEECH ACTS

To classify the occurrences of symbolic expressions during symbolic communication is a nearly infinite undertaking. A first impression of the unsolvability of such a classification task can be gained by reading the Philosophical Investigations of Ludwig Wittgenstein [5]. Later attempts by different philosophers and scientists — e.g. under the heading of speech acts [3] — have not been fully convincing until today.

Instead of assuming here a complete scientific framework to classify occurrences of symbolic expressions of an ordinary language L, we will only look at some examples and discuss them.

KINDS OF EXPRESSIONS

In what follows we will look at some selected examples of symbolic expressions and discuss these.

(Decidable) Concrete Expressions [(D)CE]

It is assumed here that two human actors A and B speaking the same ordinary language L are capable, in a concrete situation S, of describing objects OBJ and properties PROP of this situation in such a way that the hearer of a concrete expression E can decide whether the encoded meaning of that expression, produced by the speaker, is part of the observable situation S or not.

Thus, if A and B are together in a room with a wooden white table and there is enough light for an observation, then B can understand what A is saying if he states ‘There is a white wooden table.’

To understand means here that both human actors are able to perceive the wooden white table as an object with properties. Their brains transform these external signals into internal neural signals forming an inner — not 1-to-1 — representation ENVint, which can further be mapped by the learned meaning function into expressions of the inner language Lint and, by the speaker, further into the external expressions of the learned ordinary language L. If the hearer can hear these spoken expressions, he can translate the external expressions back into internal expressions, which can be mapped onto the learned internal representations ENVint. In everyday situations there is a high probability that the hearer can then respond with a spoken ‘Yes, that’s true’.

If a human actor utters a symbolic expression with regard to some observable property of the external environment and the other human actor responds with a confirmation, then such an utterance is called here a decidable symbolic expression of the ordinary language L. In this case one can classify such an expression as being true. Otherwise the expression is classified as being not true.

The case of being not true is not a simple case. Being not true can mean: (i) it is actually simply not given; (ii) it is conceivable that the meaning could become true if the external situation were different; (iii) it is — in the light of the accessible knowledge — not conceivable that the meaning could become true in any situation; (iv) the meaning is too fuzzy to decide which of the cases (i) – (iii) fits.

Cognitive Abstraction Processes

Before we talk about (Undecidable) Universal Expressions [(U)UE] it has to be clarified that the internal mappings in a human actor are not only non-1-to-1 mappings; they are, in addition, automatic transformation processes in which concrete perceptions of concrete environmental matters are automatically transformed by the brain into various abstracted states, using the concrete incoming signals as a trigger either to start a new abstracted state or to modify an existing one. Given such abstracted states, there exists a multitude of further neural processes which process these abstracted states, embedded in numerous different relationships.

Thus the assumed internal language Lint does not map the neural processes which process the concrete events as such, but the processed abstracted states. Language expressions as such can therefore never be related directly to concrete material, because this concrete material has no direct neural counterpart. What works — completely unconsciously — is that the brain can detect that an actual neural pattern nn has some similarity with a given abstracted structure NN, and this concrete pattern nn is then internally classified as an instance of NN. That means we can recognize that a perceived concrete matter nn is, ‘in the light of’ our available (unconscious) knowledge, an NN, but we cannot argue explicitly why. The decision has been processed automatically (unconsciously), but we can become aware of its result.

Universal (Undecidable) Expressions [U(U)E]

Let us repeat the expression ‘There is a white wooden table‘ which has been used before as an example of a concrete decidable expression.

If one looks at the different parts of this expression, the partial expressions ‘white’, ‘wooden’, ‘table’ can each be mapped by a learned meaning function φ into abstracted structures which are the result of internal processing. This means there can be countably infinitely many concrete instances in the external environment ENV which can be understood as being white; the same holds for the expressions ‘wooden’ and ‘table’. Thus the expressions ‘white’, ‘wooden’, ‘table’ are all related to abstracted structures and therefore have to be classified as universal expressions which as such are — strictly speaking — not decidable, because they can be true in many concrete situations with different concrete matters. Or, put the other way round: an expression whose meaning function φ points to an abstracted structure is asymmetric: one expression can be related to many different perceivable concrete matters, while certain members of a set of different perceived concrete matters can be related to one and the same abstracted structure on account of similarities based on properties embedded in the perceived concrete matter and being part of the abstracted structure.

From a cognitive point of view one can describe these matters as follows: an expression — like ‘table’ — which points to a cognitively abstracted structure T includes a set of properties Π, and every concrete perceived structure ‘t’ (caused e.g. by some concrete matter in our environment which we would classify as a ‘table’) must have a ‘certain amount’ of properties Π* such that one can say that the properties Π* are entailed in the set of properties Π of the abstracted structure T, thus Π* ⊆ Π. Under which circumstances some speaker-hearer will say that something perceived concrete ‘is’ a table or ‘is not’ a table will depend on the learning history of this speaker-hearer. A child at the beginning of learning a language L may call something a ‘chair’, and the parents will correct the child and will perhaps say ‘no, this is a table’.
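A minimal Python sketch of this property test (the property set Pi_table and the threshold are invented; the threshold merely stands in for the individual learning history of the speaker-hearer):

Pi_table = {"flat top", "legs", "can carry objects"}

def is_instance(perceived_properties, Pi, required_share=0.66):
    """Accept a perceived structure t as an instance of T if Pi* covers enough of Pi."""
    Pi_star = perceived_properties & Pi            # Pi* is by construction a subset of Pi
    return len(Pi_star) / len(Pi) >= required_share

t1 = {"flat top", "legs", "can carry objects", "wooden", "white"}
t2 = {"legs", "seat", "backrest"}                  # rather a chair
print(is_instance(t1, Pi_table))   # True
print(is_instance(t2, Pi_table))   # False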

Thus the expression ‘There is a white wooden table‘ as such is neither true nor false, because it is not clear which set of concrete perceptions shall be derived from the possible internal meaning mappings. But if a concrete situation S is given, with a concrete object with concrete properties, then a speaker can ‘translate’ his/her concrete perceptions with the learned meaning function φ into a composed expression using universal expressions. In such a situation, where the speaker is part of the real situation S, he/she can recognize that the given situation is an instance of the abstracted structures encoded in the used expression. Recognizing this instance relation interprets the universal expression in a way that makes it fit the real given situation, and thereby the universal expression is transformed, by interpretation with φ, into a concrete decidable expression.

SUMMING UP

Thus the decisive moment in turning undecidable universal expressions U(U)E into decidable concrete expressions (D)CE is a human actor A behaving as a speaker-hearer of the used language L. Without a speaker-hearer every universal expression is undefined and neither true nor false.

makedecidable :  S x Ahum x E —> E x {true, false}

This reads as follows: if you want to know whether an expression E is concrete and, as being concrete, is ‘true’ or ‘false’, then ask a human actor Ahum who is part of a concrete situation S; the human actor shall answer whether the expression E can be interpreted such that E can be classified as being either ‘true’ or ‘false’.

The function ‘makedecidable()’ is therefore the description (like a ‘recipe’) of a real process in the real world with real actors. The important factors in this description are the meaning functions inside the participating human actors. Although it is not possible to describe these meaning functions directly, one can check their behavior and define an abstract model which describes the observable behavior of speaker-hearers of the language L. This is an empirical model and represents the typical case of behavioral models used in psychology, biology, sociology, etc.
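A toy version of makedecidable() may illustrate the idea; the ‘human actor’ is replaced here by a small dictionary of learned meanings, which is of course a gross simplification and purely an assumption of this sketch.

def makedecidable(situation, actor_meaning, expression):
    """Return (expression, True/False) if the actor can decide it, otherwise (expression, None)."""
    check = actor_meaning.get(expression)
    if check is None:
        return expression, None                    # undefined: neither true nor false
    return expression, check(situation)

situation = {"facts": {"white wooden table in the room"}}
actor_meaning = {
    "There is a white wooden table.":
        lambda s: "white wooden table in the room" in s["facts"],
}

print(makedecidable(situation, actor_meaning, "There is a white wooden table."))
print(makedecidable(situation, actor_meaning, "A bird can fly"))   # (expression, None)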

SOURCES

[1] Jakob Johann Freiherr von Uexküll (German: [ˈʏkskʏl])(1864 – 1944) https://en.wikipedia.org/wiki/Jakob_Johann_von_Uexk%C3%BCll

[2] Jakob von Uexküll, 1909, Umwelt und Innenwelt der Tiere. Berlin: J. Springer. (Download: https://ia802708.us.archive.org/13/items/umweltundinnenwe00uexk/umweltundinnenwe00uexk.pdf )

[3] Wikipedia EN, Speech acts: https://en.wikipedia.org/wiki/Speech_act

[4] Ludwig Josef Johann Wittgenstein ( 1889 – 1951): https://en.wikipedia.org/wiki/Ludwig_Wittgenstein

[5] Ludwig Wittgenstein, 1953: Philosophische Untersuchungen [PU], 1953: Philosophical Investigations [PI], translated by G. E. M. Anscombe /* For more details see: https://en.wikipedia.org/wiki/Philosophical_Investigations */

Extended Concept for Meaning Based Inferences. Version 1

ISSN 2567-6458, 30.August 2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

As described in the uffmm eJournal the wider context of this software project is a generative theory of cultural anthropology [GCA] which is an extension of the engineering theory called Distributed Actor-Actor Interaction [DAAI]. In the section Case Studies of the uffmm eJournal there is also a section about Python co-learning – mainly dealing with python programming – and a section about a web-server with Dragon. This document will be part of the Case Studies section.

PDF DOCUMENT

TruthTheoryExtended-v1

The Simulator as a Learning Artificial Actor [LAA]. Version 1

ISSN 2567-6458, 23.August 2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

As described in the uffmm eJournal the wider context of this software project is a generative theory of cultural anthropology [GCA] which is an extension of the engineering theory called Distributed Actor-Actor Interaction [DAAI]. In the section Case Studies of the uffmm eJournal there is also a section about Python co-learning – mainly dealing with python programming – and a section about a web-server with Dragon. This document will be part of the Case Studies section.

Abstract

The analysis of the main application scenario revealed that classical logical inference concepts are insufficient for the assistance of human actors during shared planning. It turned out that the simulator has to be understood as a real learning artificial actor which has to gain the required knowledge during the process.

PDF DOCUMENT

LearningArtificialActor-v1 (last change: Aug 23, 2020)

THE BIG PICTURE: HCI – HMI – AAI in History – Engineering – Society – Philosophy

eJournal: uffmm.org,
ISSN 2567-6458, 20.April 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

A first draft version …

CONTEXT

The context for this text is the whole block dedicated to the AAI (Actor-Actor Interaction)  paradigm. The aim of this text is to give the big picture of all dimensions and components of this subject as it shows up during April 2019.

The first dimension introduced is the historical dimension, because this allows a first orientation in the course of events which led to the actual situation. It starts with the early days of real computers in the thirties and forties of the 20th century.

The second dimension is the engineering dimension, which describes the special view within which we look at the overall topic of interactions between human persons and computers (or machines, or technology, or society). We are interested in how to transform a given problem into a valuable solution in a methodologically sound way, called engineering.

The third dimension is the whole of society, because engineering always happens as some process within a society. Society provides the resources which can be used and sets the preferences (values) determining what is understood as ‘valuable’, as ‘good’.

The fourth dimension is Philosophy, understood as that kind of thinking which takes into account everything that can be thought; within thinking, Philosophy clarifies the conditions of thinking and the possible tools of thinking, and it has to clarify when some symbolic expression becomes true.

HISTORY

In history we look back at the course of events. This looking back is, in a first step, guided by the concepts of HCI (Human-Computer Interface) and HMI (Human-Machine Interaction).

It is an interesting phenomenon how the original focus on the interface between human persons and the early computers shifted to the more general picture of interaction, because the computer as a machine developed rapidly on account of the rapid development of the enabling hardware (HW) and the enabling software (SW).

Within the general framework of hardware and software, the so-called artificial intelligence (AI) first developed as a sub-topic of its own. During the last 10 – 20 years it has become productive in such a way that it now seems to become a normal part of every kind of software; software and smart software seem to be interchangeable. Thus the new wording of augmented or collective intelligence is emerging, intending to bridge the possible gap between humans with their human intelligence and machine intelligence. There is some motivation from the side of society not to allow the impression that the smart (intelligent) machines will some day replace the humans. Instead one propagates the vision of a new collective shape of intelligence in which human and machine intelligence form a symbiosis where each side gives its best and receives a maximum in a win-win situation.

What is revealing about the actual situation is the fact that the mainstream is always talking about intelligence but not seriously about learning! Intelligence is by its roots a static concept representing some capabilities at a certain point in time, while learning is the more general dynamic concept that a system can change its behavior depending on actual external stimuli as well as internal states. And such a change includes real changes of some of its internal states. Intelligence does not convey this dynamics! The most demanding aspect of learning is the need for preferences: without preferences learning is impossible. Today machine learning is a very weak example of learning, because the question of preferences is not a real topic there. One assumes that some reward is available, but one does not really investigate this topic. The rare research trying to do this job states that there is not the faintest idea around how a general continuous learning could happen. Human society is of no help for this problem, since human societies show a clash of many, often opposite, values and have no commonly accepted view of how to improve this situation.

ENGINEERING

Engineering is the art and the science of transforming a given problem into a valuable and working solution. What is valuable is decided by the surrounding enabling society, and this judgment can change during the course of time. Whether some solution is judged to be working can change during the course of time too, but the criteria used for this judgment are more stable because of their adherence to concrete capabilities of technical solutions.

While engineering was and is always a kind of art and needs aspects like creativity, innovation, intuition etc., it is also, as far as possible, a procedure driven by defined methods for how to do things, and these methods are as far as possible backed up by scientific theories. The real engineer therefore synthesizes art, technology and science in a unique way which cannot be completely learned in schools.

In the past as well as in the present, engineering has to happen in teams of many, often many thousands or even more, people who coordinate their brains by communication. This communication enables in the individual brains some kind of understanding, of emerging world pictures, which in turn guide the perception, the decisions, and the concrete behavior of everybody. These cognitive processes are embedded — in every individual team member — in mixtures of desires, emotions, and motivations, which can support the cognitive processes or obstruct them. Therefore an optimal result can only be reached if the communication serves all necessary cognitive processes and the interactions between the team members enable the necessary constructive desires, emotions, and motivations.

If an engineering process is done by a small group of dedicated experts — usually triggered by the given problem of an individual stakeholder — this can work well for many situations. It has the flavor of a so-called top-down approach. If the engineering deals with states of affairs where different kinds of people, citizens of some town etc., are affected by the results of such a process, the restriction to a small group of experts can become highly counterproductive. In such cases of widespread interest it seems promising to include representatives of all the involved persons into the executing team, in order to recognize their experiences and their kinds of preferences. This has to be done in a way which is understandable and appreciative, showing esteem for the others. This manner of extending the team of usual experts by situative experts can be termed a bottom-up approach. In this usage of the term, bottom-up is not the opposite of top-down but reflects the extent to which members of a society are included insofar as they are affected by the results of a process.

SOCIETY

Societies in the past and the present occur in a great variety of value systems, organizational structures, systems of power etc. Engineering processes within a society depend completely on the available resources of that society and on its value systems.

The population dynamics, the needs and wishes of the people, the real territories, the climate, housing, traffic, and many other things constantly produce demands which have to be met if life shall be possible and continue during the course of time.

The self-understanding and the self-management of societies is crucial for their ability to use engineering to improve life. This requires communication and education to a sufficient extent, and appropriate public rules of management; otherwise the necessary understanding and the freedom to act are lacking, and engineering cannot be used in the right way.

PHILOSOPHY

Without communication no common constructive process can happen. Communication happens according to many implicit rules, compressed in the formula: who can speak, when, how, about what, with whom, etc. Communication enables cognitive processes such as understanding, explanations, lines of argument. Especially important for survival is the ability to make true descriptions and the ability to decide whether a statement is true or not. Without this basic ability communication will break down, coordination will break down, life will break down.

The basic discipline which clarifies the rules and conditions of true communication, and of cognition in general, is called Philosophy. All the more modern empirical disciplines are specializations of the general scope of Philosophy, and it is Philosophy which integrates all the special disciplines into one coherent framework (this is the ideal; actually we are far from this ideal).

Thus describing the process of engineering, driven by different kinds of actors which coordinate themselves by communication, is primarily the task of Philosophy with all its sub-disciplines.

Thus some of the topics of Philosophy are language, text, theory, verification of a  theory, functions within theories as algorithms, computation in general, inferences of true statements from given theories, and the like.

In this text I apply Philosophy as far as necessary. In particular, I introduce a new process model extending the classical systems engineering approach by including the driving actors explicitly in the formal representation of the process. Learning machines are included as standard tools to improve human thinking and communication. One can name this Augmented Social Learning Systems (ASLS). Compared to the wording Augmented Intelligence (AI) (as used for instance by the IBM marketing), the ASLS concept stresses that the primary point of reference are the biological systems which created and create machine intelligence as a new tool to enhance biological intelligence as part of biological learning systems. Compared to the wording Collective Intelligence (CI) (as propagated by the MIT, especially by Thomas W. Malone and colleagues), the spirit of the CI concept seems similar, though perhaps only weakly so.

AAI-THEORY V2 – BLUEPRINT: Bottom-up

eJournal: uffmm.org,
ISSN 2567-6458, 27.February 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: 28.February 2019 (Several corrections)

CONTEXT

An overview of the enhanced AAI theory version 2 can be found here. In this post we talk about the special topic of how to proceed in a bottom-up approach.

BOTTOM-UP: THE GENERAL BLUEPRINT

Figure 1: Outline of the process how to generate an AS with a bottom-up approach

As the introductory figure shows, it is assumed here that there is a collection of citizens and experts who offer their individual knowledge, experiences, and skills to ‘put them on the table’, challenged by a given problem P.

This knowledge is not structured in the beginning. The first step in the direction of an actor story (AS) is to analyze the different contributions in a way which shows distinguishable elements with properties and relations. Such a set of first ‘objects’ and ‘relations’ characterizes a set of facts which define a ‘situation’ or a ‘state’. Such a situation/state can also be understood as a first simple ‘model‘ in response to the given problem. A model is as such ‘static‘; it describes what ‘is’ at a certain point of ‘time’.

In a next step the group has to identify possible ‘changes‘ which can be associated with at least one fact. There can be many possible changes, which may need different durations to come into effect. These effects can occur as ‘exclusive alternatives’ or in ‘parallel’. Applying the possible changes to a situation generates ‘successors’ of the actual situation. A sequence of situations generated by applied changes is usually called a ‘simulation‘.
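The following Python sketch shows this idea in miniature (the facts and the two rules are invented for illustration): a state is a set of facts, a change rule consists of a condition, a delete set and a create set, and applying all applicable rules to a state generates its successors; following one successor after another yields a simple simulation run.

def successors(state, rules):
    """Apply every applicable rule to the state and return the successor states."""
    succ = []
    for condition, delete, create in rules:
        if condition <= state:                     # the rule is applicable in this state
            succ.append((state - delete) | create)
    return succ

state0 = {"water tank full", "pump off"}
rules = [
    ({"pump off"}, {"pump off"}, {"pump on"}),                                    # switch pump on
    ({"pump on", "water tank full"}, {"water tank full"}, {"water tank empty"}),  # pump empties tank
]

run = [state0]                                     # a simple simulation run
while True:
    nxt = successors(run[-1], rules)
    if not nxt:
        break
    run.append(nxt[0])                             # follow the first alternative only
print(run)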

If one allows the interaction of real actors with a simulation, by associating a real actor with one of the actors ‘inside the simulation’, one turns the simulation into an ‘interactive simulation‘, which basically represents a ‘computer game‘ (short: ‘egame‘).

One can use interactive simulations e.g. to (i) learn about the dynamics of a model, to (ii) test the assumptions of a model, to (iii) test the knowledge and skills of the real actors.

Making new experiences with a  simulation allows a continuous improvement of the model and its change rules.

Additionally one can include more citizens and experts into this process and one can use available knowledge from databases and libraries.

EPISTEMOLOGY OF CONCEPTS

Fig.2: Epistemology of concepts used in an AAI Analysis process

As outlined in the preceding section about the blueprint of a bottom-up process, there will be a heavy usage of concepts to describe states of affairs.

The literature about this topic, in philosophy as well as in many scientific disciplines, is overwhelming, and therefore this small text can only be a ‘pointer’ into a complex topic. Nevertheless I will use exactly this pointer to explore the topic further.

While the literature mainly deals with more or less specific partial models, I am trying here to point out a very general framework which fits a more general philosophical — especially epistemological — view as well as respects many results of scientific disciplines.

The main dimensions here are (i) the outside external empirical world, which connects via sensors to (ii) the internal body, especially the brain, which works largely ‘unconsciously‘, and then (iii) the ‘conscious‘ part of the brain.

The most important relationship between the ‘conscious’ and the ‘unconscious’ part of the brain is the ability of the unconscious brain to transform incoming concrete sense-experiences automatically into more ‘abstract’ structures, which have at least three sub-dimensions: (i) different concrete material, (ii) a sub-set of extracted common properties, (iii) different sets of occurring contexts associated with the different subsets. This enables the brain to use only a ‘few’ abstract structures (= abstract concepts) to deal with ‘many’ concrete events. Thus the abstract concept ‘chair’ can cover many different concrete chairs which have only a few properties in common. Additionally, the chairs can occur in different ‘contexts’, associating them with different ‘relations’ which can specify possible different ‘usages’ of the concept ‘chair’.
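A crude Python analogue of this abstraction step, with invented property sets: keeping only the properties shared by all concrete perceptions yields a small abstract structure which can cover many concrete cases.

concrete_chairs = [
    {"legs", "seat", "backrest", "wooden", "brown"},
    {"legs", "seat", "backrest", "metal", "black"},
    {"legs", "seat", "backrest", "plastic", "red"},
]

# The 'abstract concept' keeps only what all concrete instances have in common.
abstract_chair = set.intersection(*concrete_chairs)
print(abstract_chair)          # {'legs', 'seat', 'backrest'}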

Thus, if the actor perceives something which ‘matches’ some ‘known’ concept, then the actor is not only conscious of the empirical concrete phenomenon but also, simultaneously, of the abstract concept which is automatically activated. ‘Immediately’ the actor ‘knows’ that this empirical something is e.g. a ‘chair’. More concretely: this concrete something matches an abstract concept ‘chair’ which as such can also cover many other concrete things, each of which can, as a concrete something, be partially different from any other concrete something.

From this follows an interesting side effect: while an actor can easily decide whether a concrete something is there (“it is the case that” = “it is true”) or not (“it is not the case that” = “it is not true” = “it is false”), an actor cannot directly decide whether an abstract concept like ‘chair’ as such is ‘true’ in the sense that the concept ‘as a whole’ corresponds to concrete empirical occurrences. This is due to the fact that an abstract concept like ‘chair’ can match a nearly infinite set of possible concrete somethings, which are called ‘possible instances’ of the abstract concept, while a human actor can directly ‘check’ only a ‘few’ concrete somethings. Therefore the usage of abstract concepts like ‘chair’, ‘house’, ‘bottle’ etc. inherently implies an ‘open set’ of ‘possible’ concrete exemplars, and the usage of such concepts is therefore necessarily a ‘hypothetical’ usage. Because we can ‘in principle’ check the real extensions of these abstract concepts in everyday life only as long as there is the ‘freedom’ to do such checks, we lose the ‘truth’ of our concepts, and thereby the basis for a realistic cooperation, if this ‘freedom of checking’ is not possible.

If some incoming perception is ‘not yet known’, because nothing given in the unconsciousness does ‘match’, it is in a basic sense ‘new’, and the brain will automatically generate a ‘new concept’.

THE DIMENSION OF MEANING

In Figure 2 one can find two further components: ‘language expressions’ and the ‘meaning relation’, which maps concepts onto language expressions.

Language expressions inside the brain correspond to a diversity of visual, auditory, tactile or other empirical event sequences, which are in use for communicative acts.

These language expressions are usually not ‘isolated structures’ but are embedded in relations which map the expression structures to conceptual structures, including the different substantiations of the abstract concepts and the associated contexts. By these relations the expressions are attached to the conceptual structures, which are called the ‘meaning‘ of the expressions; vice versa, the expressions are called the ‘language articulation’ of the meaning structures.

As far as conceptual structures are related via meaning relations to language expressions, a perception can automatically cause the ‘activation’ of the associated language expressions, which in turn can be uttered in some way. But conceptual structures can also exist (especially with children) without an available meaning relation.

When language expressions are used within a communicative act, their usage can activate in all participants of the communication the ‘learned’ concepts as their intended meanings. Having the meaning activated in someone’s ‘consciousness’ is a real phenomenon for that actor. But from the occurrence of concepts alone it does not automatically follow that a concept is ‘backed up’ by some ‘real matter’ in the external world. Someone can utter that it is raining; in the hearer of this utterance the intended concepts can become activated, but in the outside external world no rain is happening. In this case one has to state that the utterance of the language expression “Look, it’s raining” has no counterpart in the real world, and therefore we call the utterance in this case ‘false‘ or ‘not true‘.
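A minimal Python sketch of this truth check (the expression, the concept name RAIN_EVENT and the world representation are invented): an utterance activates a learned concept, but it counts as ‘true’ only if the concept is backed up by the external world.

meaning_relation = {"Look, it's raining": "RAIN_EVENT"}

def evaluate(utterance, world_facts):
    """Return True/False for a backed-up/non-backed-up concept, None if no meaning is learned."""
    concept = meaning_relation.get(utterance)      # the concept activated in the hearer
    if concept is None:
        return None
    return concept in world_facts                  # is the concept backed up by the world?

print(evaluate("Look, it's raining", {"SUNSHINE"}))     # False: 'not true'
print(evaluate("Look, it's raining", {"RAIN_EVENT"}))   # True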

THE DIMENSION OF TIME

Fig.3: The dimension of time based on past experience and combinatoric thinking

The preceding Figure 2 of the conceptual space is not yet complete. There is another important dimension based on the ability of the unconscious brain to ‘store’ certain structures in a ‘timely order’, which enables an actor — under certain conditions! — to decide whether a certain structure X occurred in consciousness ‘before’, ‘after’, or ‘at the same time’ as another structure Y.

Evidently the unconscious brain is able to do exactly this: (i) it can arrange the different structures, under certain conditions, in a ‘timely order’; (ii) it can detect ‘differences‘ between timely succeeding structures; (iii) it can conceptualize these changes as ‘change concepts‘ (‘rules of change’); and it can classify different kinds of change as ‘deterministic’, ‘non-deterministic’ with different kinds of probabilities, or ‘arbitrary’ as in the case of ‘free learning systems‘. Free learning systems are able to behave in a deterministic-like manner, but they can also change their patterns, on account of internal learning and decision processes, in nearly any direction.

Based on memories of conceptual structures and derived change concepts (rules of change), the unconscious brain is able to generate different kinds of ‘possible configurations’, whose quality depends on the degree of dependencies within the ‘generating criteria’: (i) no special restrictions; (ii) empirical restrictions; (iii) empirical restrictions for ‘upcoming states’ (if all drinkable water were consumed, then one could not plan any further with drinkable water).


AAI THEORY V2 – Actor Story (AS)

eJournal: uffmm.org,
ISSN 2567-6458, 28.Januar 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

— Outdated —

CONTEXT

An overview of the enhanced AAI theory version 2 can be found here. In this post we talk about the generation of the actor story (AS).

ACTOR STORY

Getting from the problem P to an improved configuration S, measured by some expectation E, requires a process characterized by a set of necessary states Q which are connected by necessary changes X. Such a process can be described with the aid of an actor story AS.

  1. The target of an actor story (AS) is a full specification of all identified necessary tasks T which lead from a start state q* to a goal state q+, including all possible and necessary changes X between the different states Q.
  2. A state is here considered as a finite set of facts (F) which are structured as expressions from some language L distinguishing names of objects (like ‘D1’, ‘Un1’, …) as well as properties of objects (like ‘being open’, ‘being green’, …) or relations between objects (like ‘the user stands before the door’). There can also be a ‘negation’ like ‘the door is not open’. Thus a collection of facts like ‘There is a door D1’ and ‘The door D1 is open’ can represent a state (see the minimal sketch after this list).
  3. Changes from one state q to a successor state q’ are described by stating which previous facts are deleted and which new facts are created by the action of some actor or object.
  4. In this approach at least three different modes of an actor story will be distinguished:
    1. A textual mode generating a Textual Actor Story (TAS): In a textual mode a text in some everyday language (e.g. in English) describes the states and changes in plain English. Because in the case of a written text the meaning of the symbols is hidden in the heads of the writers it can be of help to parallelize the written text with the pictorial mode.
    2. A pictorial mode generating a Pictorial Actor Story (PAS). In a pictorial mode the drawings represent the main objects with their properties and relations in an explicit visual way (like a Comic Strip). The drawings can be enhanced by fragments of texts.
    3. A mathematical mode generating a Mathematical Actor Story (MAS): this can be done either (i) by  a pictorial graph with nodes and edges as arrows associated with formal expressions or (ii)  by a complete formal structure without any pictorial elements.
    4. For every mode it has to be shown how an AAI expert can generate an actor story out of the virtual cognitive world of his brain and how it is possible to decide the empirical soundness of the actor story.
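To make this concrete, here is a minimal Python sketch of a (mathematical) actor story using the door facts from item 2; the state sets, the single change and the goal condition are illustrative assumptions only.

q_start = {"There is a door D1", "The door D1 is not open",
           "The user U1 stands before the door D1"}
q_goal_condition = {"The door D1 is open"}

# Change executed by the actor U1: 'U1 opens the door D1'
change_open = {"delete": {"The door D1 is not open"},
               "create": {"The door D1 is open"}}

def apply_change(state, change):
    """Produce the successor state by deleting and creating facts."""
    return (state - change["delete"]) | change["create"]

q1 = apply_change(q_start, change_open)
actor_story = [q_start, q1]                       # the sequence of necessary states
print(q_goal_condition <= q1)                     # True: the goal state is reached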