COMMON SCIENCE as Sustainable Applied Empirical Theory, besides ENGINEERING, in a SOCIETY

ISSN 2567-6458, 19 June 2022 – 13 August 2022
Author: Gerd Doeben-Henisch

— Not yet finished !!! —

— allusions to other known texts will be added after finishing the main text !!! —


In a rather foundational paper about an idea of how one can generalize ‘systems engineering’ [*1] into the art of ‘theory engineering’ [1], a new conceptual framework has been outlined for a ‘sustainable applied empirical theory (SAET)’. Part of this new framework is the idea that the classical recourse to groups of special experts (mostly ‘engineers’ in engineering) is too restrictive in light of the new requirement of being sustainable: sustainability is primarily based on ‘diversity’ combined with the ‘ability to predict’, from this diversity, probable future states which keep life alive. The aspect of diversity induces the challenge to see every citizen as a ‘natural expert’, because nobody can know in advance, from some non-existing absolute point of truth, which knowledge is really important. History shows that the ‘mainstream’ is usually biased to a large degree [*1b].

With this assumption that every citizen is a ‘natural expert’, science turns into a ‘general science’ in which all citizens are ‘natural members’ of science. I will call this more general concept of science ‘sustainable citizen science (SCS)’ or ‘Citizen Science 2.0 (CS2)’. The important point here is that a sustainable citizen science is not necessarily an ‘arbitrary’ process. While the requirement of ‘diversity’ relates to possible contents, possible ideas, possible experiments, and the like, it follows from the other requirement of ‘predictability’, of being able to make some useful ‘forecasts’, that the given knowledge has to be in a format which allows, in a transparent way, the construction of consequences which ‘derive’ from the ‘given’ knowledge and enable some ‘new’ knowledge. This ability of forecasting is typically the business of ‘logic’, which provides an ‘inference concept’ given by ‘rules of deduction’ and a ‘practical pattern (on the meta level)’ defining how these rules have to be applied to fulfill the inference concept. Insofar as sustainable citizen science agrees to a ‘common logic’, it is structurally not different from ‘usual science’.


  1. Outline of an assumed framework (figure)
  2. Language
  3. Concrete Abstract Statements
  4. True False Undefined
  5. Beyond Now
  6. Playing with the Future
  7. Forecasting
  8. THE LOGIC OF EVERYDAY THINKING. Let's try an Example
  9. Boolean Logic
  10. Everyday Language: German
  11. The Cognitive Setting
  12. Natural Logic
  13. Everyday Language: English
  14. Predicate Logic
  15. True Statements
  16. Formal Logic Inference: Preserving Truth
  17. Ordinary Language Inference: Preserving and Creating Truth
  18. Hidden Ontologies: Cognitively Real and Empirically Real
  21. Side Trip to Wikipedia
    1. Knowledge in a population (9 Aug 22, 08:30h)
    2. Sustainable empirical theory concept II (10 Aug 2022, 10:00h)


Figure 1: A first view identifying the key concept of ‘common science’ as a ‘sustainable applied empirical theory’.


The words ‘science’, ‘theory’, and ‘scientific theory’ are well-known passengers travelling through the times with different meanings, depending on the circumstances and on the minds of different people.[2]-[4] In modern times we have learned a lot about the nature of ‘signs’ and ‘sign-based’ communication as it happens when we are using a ‘language’. And, becoming more sensitive to the dynamics of sign-based communication, we can detect that it is exactly our human use of language which provides the key to a deeper understanding of how our brains are working, located in our bodies, where the brains play the role of ‘spin doctors’ of the pictures in our heads, which ‘show’ our mind a ‘virtual world’ of an assumed ‘real world’ somewhere ‘out there’.[14]

Until today we have no final explanation of how exactly this ability of human actors has developed through the times, stretching back millions of years. And until today there exists no complete description of a living language with all the involved structures, meanings, and dynamics. One reason for this ‘fundamental inability’ to describe a language by means of exactly this language is rooted in the fact that language is not a ‘single fixed object’ in front of your eyes, but a dynamic reality happening between many, many different human actors simultaneously; every brain has only some fragments of this assumed ‘whole thing’ called ‘language’, and every communicative act between humans embraces, besides ‘rather stable parts’, always a lot of ‘incidental’, ‘casual’ moments of a complex dynamic situation, which constitutes — mostly unconsciously — the working of language communication, possible meanings and connotations of meaning. Thus all the known scientific endeavors which until today have tried to describe this phenomenon of language communication are more reminiscent of ‘stuttering’ than of a final ‘ordered’ theory.

One lesson we can learn from this tells us that the so-called ‘everyday language’, the ‘ordinary language’, the ‘natural language’ is the ‘basic’ pattern of language communication. But, as mentioned just before, on account of the fundamentally distributed and dynamical character of everyday language, a natural language has no clear-cut ‘boundaries’. You can never tell with certainty where a language ends and where this language, just in that moment, ‘evolves’, ‘expands’, is ‘changing’.

For people who are looking for ‘clear statements’, for ‘finite views’, for a ‘stable truth’, this situation is terrifying. It can cause ‘anxious feelings’. People who like to ‘control’ life don’t like such a ‘living dynamics’ which cannot be owned by a single person alone, not even by ‘many’…

One basic property of ordinary language is its ‘expandability’: at every time someone can introduce new expressions embedded within new contexts following new patterns of usage. If other human actors start to follow this usage, this ‘new’ behavior ‘spreads’ through the ‘population of language users’, and by this new growing practice the ordinary language expands and thereby changes.

One ‘part’ of ordinary language is called ‘logic’ [6],[7], with various different realizations through history. Another part of ordinary language is ‘mathematics’, especially what is today assumed to be the ‘kernel’ of mathematics, the ‘Theory of Sets’ (cf. [8], [9]). Because ordinary language can always be used to speak ‘about ordinary language’, it is possible to extend an ordinary language with arbitrarily many new ‘artificial languages’ like a ‘logic language’ or a ‘mathematical language’.[10] After introducing a special language like a ‘mathematical language’ by using ordinary language, one can apply this special language ‘as if it is the only language’; but if you start to ‘look consciously’ at your real practice of speaking, you can easily detect that this impression of ‘the only language’ is a fake! Cutting away the ordinary language, you will be lost with your special language. The ordinary language is the ‘meta language’ of every special language. This can be used as a ‘hint’ to something really great: the mystery of the ‘self-creating’ power of ordinary language, which for most people is unknown although it happens every moment.

Concrete – Abstract Statements

From everyday language we know that we can talk ‘about the world’, and even more, we can ‘act’ with the language. [11] – [13] On hearing “Give me the butter, please”, a ‘normal’ [*2] speaker would ‘hear’ the ‘sound of the statement’ and can ‘translate the sound’ into some internal meaning constructs related to the sounds of the language, which in turn will — usually — be ‘matched against’ meaning constructs ‘actually provided’ by ‘perception’. If there happens to be a ‘sufficiently good match’, then the hearer can identify ‘something concrete’ located on the table which he can associate with the ‘activated language-related meaning’, and he then ‘knows’ that this concrete something on the table seems to be an ‘instance’ of those things which are called ‘butter’. But there can exist many different ‘concrete things’ which we agree to accept as ‘instances’ of the meaning construct ‘butter’. Thus, already in very usual everyday situations we encounter the fact that our perceptions can create signals from ‘something concrete in our perceptions’, and our ‘language-mediated understanding’ can create ‘meaning structures’ which can ‘match’ nearly uncountably many different concrete things. [*3] Those meaning constructs — activated by the language, but different from the language — which can match more than one concrete perception will here be called ‘abstract meaning’ or ‘abstract concept’. And ‘words’ (= expressions) of a language which can activate such abstract meanings are understood as ‘abstract words’, ‘general words’, ‘category words’, or the like. [*4]

Knowing this, you will probably detect that nearly all words of a language are ‘abstract words’ activating ‘abstract meanings’. This is in one sense ‘wonderful’, because the real empirical world consists of uncountably many concrete perceivable properties, and to relate every concrete property to an individually matching word would turn the project of language into an infeasible task. Thus, with only a few abstract words, language users can talk about ‘nearly everything’. This makes language communication possible. The ‘dark side’ of this wonderful ability is the necessity to provide real situations if you want to demonstrate which of all these concrete properties of a real situation you want to be understood as ‘related’ to the one word (= language expression) used with an abstract meaning. If you cannot provide such ‘concrete situations’, the intended meaning of your abstract words will stay ‘unclear’: they can mean ‘nothing or all’, depending on the decoding of the hearer.


Talking about ‘butter’ on ‘tables’ during a ‘breakfast’ will usually stimulate lots of ‘imaginations’ in the head of the hearer of such utterances. Because an abstract word can trigger many different ‘concrete things’, these individual imaginations can vary a lot. If different hearers started to ‘paint’ those imaginations on paper, it could happen that nearly no two paintings would ‘match’ in all details. The ‘space of possible meanings’ of an abstract word (‘butter’, ‘table’, ‘breakfast’, ‘kitchen’, …) is in principle ‘infinite’. And the manifested ‘diversity’ of the details reveals a kind of ‘fuzziness’ which at first glance seems to be ‘infeasible’ in the practice of language communication.

This apparent diversity and fuzziness in the examples points to some ‘internal mechanism’ in our brains which works in complete ‘silence’, always ‘automatically’, completely ‘unconsciously’. It ‘arranges’ the many different perceptions in a way which selects some finite set of properties out of the many perceived properties and turns such a ‘selection’ into a kind of ‘signature’ or ‘address’, which then plays the ‘role’ of an individual representation for all those future sets of perceived properties which are ‘sufficiently similar’ to those ‘signature properties’. The ‘boundaries’ are not sharp; the boundaries can vary; there can grow large ‘clusters of different property sets’ which intersect with this ‘signature set’ but differ otherwise. Thus, there exists a growing meaning structure in our brains which creates a ‘meaning space’, whose elements can be associated in arbitrarily many ways.

If my friend Bill starts talking with me by asking whether there already is some butter on the table, then his utterance — a question — will trigger in me a subset of the possible meanings of butter which are available in my memory. Then, when I look at the table in the kitchen, I will ‘scan’ the table for something concrete which will ‘match’ these activated internal meanings. Either there happens a direct match, or there is something which looks like something, which feeds back through my perception and urges my memory to ‘look for something alike’. If this happens, then there will be a match too. Thus, if such an internal match between ‘perceived properties’ and ‘remembered properties’ happens, I would shout to Bill “Yes, there is already some butter on the table”. If no such match happened, I would shout back “No, there is not yet butter on the table”. In the first case we are used to classifying a statement as ‘true’: the abstract meaning matches a concrete perception sufficiently well; otherwise not. If Mary, standing near the table, had said before “No, there is no butter on the table” while Jeremy has stated that there is some butter, then these two statements would ‘contradict’ each other. If Jeremy and Mary can come to a common opinion, by observable evidence, whether there is some butter on the table or not, they would be able to ‘agree’ to the positive, affirmative statement that there is some butter on the table, or otherwise not. To classify a statement as ‘false’ presupposes that the contradicting format of this statement is classified as ‘true’. If the human actors cannot come to a sufficient agreement whether the statement “Yes, there is already some butter on the table” is true or the statement “No, there is no butter on the table” is true, then both statements are ‘undecidable’ by the human actors with regard to observable evidence. In that case these statements are ‘undefined’ with regard to being ‘true’ or ‘false’.[*5]
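The three-valued classification just described can be sketched in code. This is only an illustrative toy model under strong assumptions, not the author's mechanism: statement meaning and perception are reduced to sets of property labels, and the function name, overlap measure, and threshold are all inventions of the sketch.

```python
# Toy model: a statement is classified 'true' if the activated abstract
# meaning matches the perceived properties sufficiently well, 'false'
# otherwise, and 'undefined' if the actors cannot reach a decision by
# observable evidence at all.

def classify(meaning_props, perceived_props, threshold=0.5, decidable=True):
    """Classify a statement as 'true', 'false', or 'undefined'."""
    if not decidable:
        # No sufficient agreement on observable evidence is possible.
        return "undefined"
    # Fraction of the language-induced meaning properties found in perception.
    overlap = len(meaning_props & perceived_props) / len(meaning_props)
    return "true" if overlap >= threshold else "false"

# 'Butter on the table': the perceived thing matches the meaning well enough.
classify({"yellow", "spreadable", "on_table"},
         {"yellow", "spreadable", "on_table", "soft"})
```

With `decidable=False` the function returns ‘undefined’, mirroring the case where neither the affirmative nor the negated statement can be agreed upon.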

This everyday situation offers some more variants. If, for instance, Bill is asking Jeremy whether there is some butter on the table, it could happen that Jeremy says ‘no’ because his ‘understanding’ of the word ‘butter’ consists of kinds of meaning which do not match that concrete thing on the table which Bill would understand as ‘butter’. Such a ‘misunderstanding’ can happen easily if people from different cultures come together. Thus, having some observable evidence does not guarantee the right classification within a certain language if the language users have learned ‘different meanings in their memory’. In the other case, if Mary has bad visual perception on account of some ‘visual handicap’ but has in principle the same meaning space as Bill, then it can also happen that she would deny that there is some butter on the table, because her visual perceptions are ‘disturbed by her visual handicap’ in a way that the perceptional key to her memory is not in the format which would match her remembered, language-induced meaning.

Thus, in this simple example of a ‘true’ statement, there are already several ‘factors’ needed to make a ‘true statement’: (i) a perception which works ‘normally’; (ii) a language meaning which is ‘sufficiently common’; (iii) a ‘successful match’ between an actual observation and the triggered memory-based meaning. No factor (i) – (iii) is simple; each can vary a lot. And there exist even more factors which can influence the final classification of being ‘true’ or not; in cases of ‘contradicting statements’ all these different factors can be involved.

In our times of ‘growing fake news’ we can experience that the agreement between different human persons about the ‘truth’ of a statement can in practice be very difficult or even appear impossible. This points to one more factor which is finally decisive: whatever we perceive and remember, these processes are ‘embedded’ in some larger ‘conceptual frameworks’, which are further embedded in a ‘system of preferences’ which can be ‘decisive’ for the ‘handling’ of our opinions. Human persons having certain ‘convictions’ related to political or religious or ethical opinions can be ‘driven’ by these convictions in a way which ‘overrides’ empirical evidence, because their ‘conceptual frameworks’ ‘interpret’ these perceptions in a different way. Modern scientific observations are meanwhile often in a format which only experts can interpret adequately relative to a ‘theoretical conceptual framework’. If a non-expert does ‘not trust’ this scientific interpretation, he can ‘switch’ to a different conceptual framework in which he trusts more, although this other conceptual framework contradicts the scientific one, and thus he can assume ‘facts’ which contradict those ‘facts’ classified as scientific. Scientists can classify these other facts as ‘fake news’, but this will have no effect on the believer of the fake news. The fake-news believer thinks he is ‘right’ because it matches his individual framework, shared by others in social groups.

From this follows that the classification of a statement as ‘true’ is a complex matter depending on many factors which have to be ‘synchronized’ to come to an agreement. Especially, it reveals that ‘empirical (observational) evidence’ is not necessarily an automatism: it presupposes appropriate meaning spaces embedded in sets of preferences which are ‘observation friendly’.


Every (biological) system which has some sensory input possesses certain states which represent, for the system, the NOW: that which actually ‘happens’, that which is ‘present’ in a mixture of properties and events.

But a NOW provides as such no ‘knowledge’. It is only a NOW.

To ‘overcome’ the NOW, a system must be able to map parts of the NOW into other system states, into such states which can be ‘recalled’, and which as ‘recalled states’ can be ‘compared’ with the actual NOW. Such a ‘comparison’ can yield ‘similarities’ and ‘differences’. Out of differences distributed over different recalled states, ‘sequences of states’ can be constructed, and such sequences can reveal, by differences, ‘changes’ of properties between consecutive states. With the aid of such sequences revealing possible changes, the NOW is overcome by turning it into a ‘moment’ embedded in a ‘process’, which becomes the more important reality. The NOW is something, but the PROCESS is more.


In this enlarged reality of a process, the ability to generate ‘signatures’ representing ‘some properties’ out of a set of properties is the other ‘magic’ tool to compose ‘abstract structures’ which can be expanded if necessary and which can be related to nearly everything; an abstract structure can become associated with other structures, can be embedded in ‘hierarchic’ structures, and even more. Abstract structures are the other ‘tools’ to overcome the NOW: ‘reality’ is not only ‘what is now’, but at the same time also that which can be added, extended, combined with the given structure. Abstract structures are, as part of an embracing process, ‘potentials’, ‘possible alternatives’, something which can become ‘true’ in some following state, that is, in some ‘possible future’.

If someone has introduced the word ‘cup’ for something concrete which can hold some fluid and which can be used to ‘drink’ out of this concrete something, then the word ‘cup’ — an expression of some language — is not a fixed, static object but — as part of a possible process — can be used to ‘touch’ more and more different concrete objects, allowing them to become ‘part of the internal meaning structure’ of a speaker-hearer. Thus, while the ‘word’ as a language expression stays ‘the same’, the associated meaning space can change, can grow, can shrink, can become associated with other meaning spaces.

In this sense ‘language’ seems to be the master tool for every brain to mediate its dynamic meaning structures with symbolic fixed points (= words, expressions) which as such do not change, while the meaning is ‘free to change’ in any direction. And this ‘built-in dynamics’ represents an ‘internal potential’ for uncountably many possible states which could perhaps become ‘true’ in some ‘future state’. Thus ‘future’ can begin in these potentials, and thinking is the ‘playground’ for possible futures. (But see [18].)


We know from everyday life and partially from science that this ability of abstract potentials as part of possible processes can, under certain conditions, be used for ‘forecasts’ with important practical consequences: for the Egyptian people it was of high importance to know in advance when the floods of the Nile river would arise again. Generally, it was important to understand the different periods of the year, the process of time, the connections between food and effects on our bodies, or the ‘art of agriculture’ to provide enough food for all people, and much more.

With the reality of being part of a process with a NOW, with the ability to overcome the NOW by generating abstractions, sequences of states, and recognizing changes, with the ability to derive ‘possible follow-up states’ out of the known sequences of states, it is generally possible to produce forecasts.
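The derivation of ‘possible follow-up states’ from remembered sequences can be sketched as follows. This is a minimal sketch under the assumption that states are plain symbols; the function names and the set-based representation are inventions of the illustration, not part of the text.

```python
from collections import defaultdict

def learn_transitions(remembered_sequences):
    """Collect, for each state, the states observed to follow it."""
    followers = defaultdict(set)
    for seq in remembered_sequences:
        for now, nxt in zip(seq, seq[1:]):
            followers[now].add(nxt)
    return followers

def forecast(followers, now):
    """Derive the possible follow-up states of the current NOW."""
    return sorted(followers.get(now, set()))

# Two remembered sequences of states are enough to forecast from 'red'.
memory = learn_transitions([["red", "orange", "green"],
                            ["green", "red", "orange"]])
forecast(memory, "red")
```

A state never seen before yields an empty forecast, mirroring the fact that a NOW without recalled predecessors offers no knowledge beyond itself.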

But not any forecast is ‘helpful’.

If the experts say that in two weeks the floods of the river will come, but this does not happen, it will not be appreciated; if people recommend certain food for your health and you become ill, it will not be appreciated either. Thus forecasts should possess the property that the state which is ‘announced to become true’ indeed becomes ‘true’. ‘True’ means here that the ‘announced state’ will at some ‘point in the future’ be ‘instantiated’ by some real facts which can be observed.

This leads to the interesting question of how it is possible to ‘derive’ from some ‘given states’ in the memory ‘possible states’ in the memory which have the potential to become ‘instantiated’ at some time in a way which makes them ‘real’ and thereby ‘observable’.

In modern formal logic, expressions are well-defined expressions of some language but ‘without any concrete meaning’. The only assumed property of logical statements is the property of being called ‘true’ or ‘false’, without relating these abstract properties to some real meaning. Thus you can play with these ‘logical expressions’ in a purely formal way by defining some rules for how one can change an expression and under which conditions the transformation of a set of given expressions into another set of expressions is called a ‘logical derivation’ which preserves the ‘abstract truth’ of the assumed primary set of expressions. These are nice games allowing numerous different kinds of definitions of ‘logical derivation’ without any real relation to everyday language and meaning. All the known examples of how to apply formal logic to everyday meaning are, until today, not really convincing. The numerous articles and even books dealing with such examples can only work if we forget nearly everything which we know about our everyday world. This seems to be a strange deal.

If one instead looks at the way human actors make forecasts in the everyday world without using formal logic, one can detect that this is not only possible; it seems to be the only powerful way to do it.


Let’s try an Example

In the following examples four languages are used simultaneously: (i) Boolean logic, (ii) the German language, (iii) the English language, (iv) Predicate logic. The idea is to make ‘visible’ that formal logic provides only a very limited profit, while normal language can offer everything formal logic can offer, and much more. If one keeps in mind that ‘normal’ language is in principle the meta-language for every kind of ‘special’ language, then this should be no surprise.

Figure 2: Outline of boolean logic from the perspective of language usage by human actors.

In the language of boolean logic — also called ‘propositional logic’ (but see [17]) — we have only expressions for ‘names’ of statements — like ‘A’, ‘B’, ‘CD’, … — , which can be classified (on a meta-level!) as being ‘true‘ or ‘false‘.

Whether one of the used names for statements is ‘true’ or ‘false’ has to be explained ‘separately’ — on a meta-level! — often written in a list or table called ‘truth table’ like:

  1. A, true
  2. B, false
  3. C, false
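Such a truth table can be written as a plain mapping, which makes explicit that the individual truth values live on the meta-level, outside the statement names themselves. This is only a minimal illustration added here, not part of boolean logic proper.

```python
# The truth table above as a meta-level mapping from statement names
# to individual truth values.
truth_table = {"A": True, "B": False, "C": False}

# Looking up a value is a meta-level act, not a logical derivation.
truth_table["A"]
```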

Further we have some expressions naming ‘logical operators’ which we write here as ‘not‘ and ‘and‘. Strictly speaking these are on a ‘meta-level’ compared to the expressions representing statements which can be true or false.

Thus we could write the compound statement ‘A and B and C’

claiming that the whole expression has the meta-property of being ‘true’, independent of which truth values the individual statement expressions ‘B’ and ‘C’ are assumed to have.

This simultaneous occurrence of two different meta-levels in the description of boolean logic expressions raises the question of how these meta-levels are ‘interacting’. The discussion of this question will be postponed here until we have discussed what is called a ‘logical derivation’.

A ‘logical derivation rule’ tells us that if we have an expression like ‘A and B’ which is ‘assumed’ to be ‘true’, then we can ‘derive’ from this expression that the expression ‘A’ or ‘B’ alone is also ‘true’. Thus, with our introductory example, where the expression ‘A and B and C’ is assumed to be true, we could ‘logically derive’ that the expressions ‘A’, or ‘B’, or ‘C’ taken ‘alone’ are also true. In the logical meta-language we could describe this derivation relation as

A and B and C ⊢X A

A and B and C ⊢X B

A and B and C ⊢X C

where the sign ⊢X denotes the logical derivation operator (meta-level!) with the arguments ‘A and B and C’ (left side) and ‘A’, ‘B’, or ‘C’ (right side). The index sign ‘X’ represents the set of derivation rules. In this case we have only one rule, therefore X = {if we have an expression like ‘A and B’ ‘assumed’ to be ‘true’, then we can ‘derive’ from this expression that the expression ‘A’ or ‘B’ alone is also ‘true’}.
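The single rule in X, conjunction elimination, can be sketched as a purely symbolic operation. This is a toy illustration under the assumption that compound expressions are strings joined by ‘ and ’; the function name is invented for the sketch.

```python
def derive(compound):
    """Apply the rule X: from a compound expression assumed 'true',
    derive each conjunct as 'true' on its own."""
    return [part.strip() for part in compound.split(" and ")]

# The rule operates on expressions only; individual truth values play no role.
derive("A and B and C")
```

Note that the same symbolic operation works unchanged for compounds containing ‘not’, e.g. `derive("A and not B and not C")`, which is relevant for the example below.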

Coming back to the question of the interplay between the meta-levels: the expression ‘C’ is assumed on the meta-level to be ‘false’, but it can be derived from the compound statement ‘A and B and C’ as being ‘true’. This reveals that the property of an individual statement being ‘false’ and the property of an individual statement ‘C’ being ‘true’ within a logical derivation are apparently two different properties.

A possible solution of this meta-problem can be to introduce the convention that the ‘individual true-false qualification’ is expressed by ‘C’ as encoding ‘C is true individually’ and by ‘not C’ as encoding ‘C is false individually’. But this ‘convention’ will only work if it is ‘done’ ‘before’ a logical derivation (again a meta-level matter). Thus, if one assumes the individual true-false qualifications of the above-mentioned truth table as ‘given’, then we have to write the compound statement as

A and not B and not C

which could yield the following derivations

A and not B and not C ⊢X A

A and not B and not C ⊢X not B

A and not B and not C ⊢X not C

Thus, depending on the presupposed ‘individual truth values’, one can logically derive either ‘B’ or ‘not B’ as ‘logically true’.

This discussion of ‘individual truth values’ compared to ‘logically derived truth values’ raises confusion. Indeed, boolean logic as such takes only names of expressions like ‘A’ or ‘B’ as arguments for its logical operators — ‘not’, ‘and’, … — which are fed into a logical derivation relation without taking individual truth values into account. This part is ‘delegated’ to the user of boolean logic; the possible ‘interpretation’ of boolean logic expressions with ‘real truth’ is ‘outside of boolean logic’!

The leading idea is therefore that the usage of a symbolic language has to be understood as an interaction of several ‘levels of meaning’ simultaneously. One single language expression can be seen from the perspective of ‘meaning’ (the adaptively built function in every human actor) as having several ‘levels of meaning’. In the case of boolean logic these are at least four levels.

More aspects of the case of boolean logic will be discussed in the following sections.

The Cognitive Setting
Figure 3: Simple outline of basic interactions between an empirical object with properties embedded in a situation and (human) actors with perception, meaning space, abstract structures functioning as cognitive models of possible real world somethings. An abstract structure usually includes more than one possible empirical situation thereby ‘transcending’ a perceptional ‘NOW’ into different possible (cognitive) states (encoding possible ‘future’ states). The internal meaning space with its manifold abstract structures allows lots of ‘logical derivations’ which are impossible looking only to utterances or to actual empirical settings.

In the following example we have a human actor who is part of a traffic situation and who gives some fragments of a language description of what he is experiencing (in a later section this example will be given in English).

In a first situation the human actor would say:

“Die Ampel zeigt rot.”

Some seconds (or minutes) later he would state:

“Die Ampel zeigt orange.”

Again, after some seconds (or minutes) he would utter:

“Die Ampel zeigt grün.”

Then he would start to move away.

We could ‘name’ these expressions by abbreviation in the following way:

A := “Die Ampel zeigt rot.”

B := “Die Ampel zeigt orange.”

C := “Die Ampel zeigt grün.”

In the everyday situation where these statements are uttered by a human actor, this human actor would classify each statement as ‘being true’, because the ‘known meaning’ associated with these expressions is, in the moment of being uttered, in ‘sufficient accordance’ with the perceived situation. Thus one could classify the individual statements as ‘true’ while being ‘uttered’.

Using the abbreviations ‘A’, ‘B’, and ‘C’ we could apply the inference machinery of the boolean logic with

(1) A and B and C ⊢X A or B or C

In the everyday situation where these statements have been uttered, this logical inference would be wrong if we did it as in (1).

The reason for this insufficiency is grounded in the fact that each of the statements ‘A’, ‘B’, and ‘C’ describes a property of a traffic light (being red, orange, or green), and only one of these statements can be true at a certain point in time. Thus the ‘truth’ of these statements is ‘time dependent’! Furthermore, the traffic light works in an ‘action pattern’ which makes one ‘color’ ‘true’ and at the same time all other colors ‘false’. Thus a traffic light is a collection of statements like this:

(2) traffic light := {‘A and non B and non C’ or ‘non A and B and non C’ or ‘non A and non B and C’} (with ‘or’ as another boolean operator).

From this the following boolean derivations would be possible:

  • One of these statements can become true.
  • If, e.g., ‘A and non B and non C’ becomes true, then one could derive that ‘A’ is true, or ‘non B’, or ‘non C’. This would describe the case where, in the everyday world, the red sign of the traffic light is shining.
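Definition (2) amounts to an ‘exactly one of A, B, C is true’ constraint. Enumerating all boolean assignments and keeping those which satisfy the constraint yields the three admissible traffic-light states (a hedged sketch; the representation of statements as a tuple of truth values is an assumption of the illustration):

```python
from itertools import product

def traffic_light_states():
    """All assignments to (A, B, C) with exactly one statement true,
    i.e. the three disjuncts of definition (2)."""
    return [s for s in product([True, False], repeat=3) if sum(s) == 1]

traffic_light_states()
```

Boolean logic can enumerate these variants, but, as the text notes, it cannot decide which variant is actually the case in a given moment.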

From the boolean derivation as such it would not be possible to decide which of the possible variants is the case in a certain moment. This is because boolean logic in general has to assume a human actor (or any kind of actor with sufficient properties) who is able to associate the expressions with his internal meaning space, combined with the intention to classify which of the ‘logically possible variants’ matches an ‘actual situation’ offering those ‘meaning properties’ which are needed to ‘make the expression an instance’ of this meaning case.

Naturally, it is a human actor who has to ‘invent’ the definition of a ‘traffic light’ in the format of (2), provided he knows concrete examples of traffic lights in everyday situations. Because a human actor has an internal knowledge space with an internal meaning function μ, he ‘knows’ which kinds of properties are ‘related’ to that which is called a ‘traffic light’. And from this follows with ‘normal logic’ that

  1. If a traffic light shows a certain color, this is only valid in a certain time span (t,t’) and all the other colors of this traffic light are not active simultaneously.
  2. Thus uttering the statement ‘Die Ampel zeigt rot’ (‘The traffic light shows red’) implies that this statement is true in that moment.
  3. By ‘normal logic’ every human actor — with the same meaning space — ‘knows implicitly’ that the other lights do not show their colors in that moment. The additional statements ‘Die Ampel ist nicht gelb’ (‘The traffic light is not yellow’) and ‘Die Ampel ist nicht grün’ (‘The traffic light is not green’) are not necessary, because every human actor would ‘derive’ these consequences ‘internally, purely automatically’ (our brains work in this fashion without explicitly asking whether they are allowed to do this).

Natural Logic

The foregoing comparison of derivations in a ‘boolean logic setting’ and in an ‘everyday language setting’ shows a remarkable difference: while the inventors of boolean logic focused on formal expressions only, by cutting off all ‘natural meaning’, the ‘inventor’ of natural language — the whole biosphere! — created ‘normal language’ as a ‘medium’ to encode the ‘internal states’ of the sending brain into expressions which can be transmitted to other brains, which as ‘receivers’ should be able to ‘decode’ these expressions into internal states of the receiving brains. Thus ‘expressions as such’ are of nearly no help for the survival of brains. Survival needs cooperation between different brains, and the only chance to enable such cooperation is the communication of internal states by ‘encoded expressions’.

From this it follows that ‘natural logic’ has to follow completely different patterns than ‘boolean logic’. Let us look at the example again.

We continue using the abbreviations introduced before, A := ‘Die Ampel zeigt rot’ (‘The traffic light shows red’) etc.

In figure 3 a simple ‘sequence of states’ has been outlined where the usual sequence of showing ‘red’, ‘orange’, and ‘green’ is assumed. In certain types of cultures this is a typical everyday situation.

Thus we can assume a ‘state S1’ where the traffic light is showing ‘red’. This can be represented by the expression:

S1 = {A}

All participants know that this expression describes a real situation where the ‘learned meaning of this expression’ is in accordance with the actual ‘perceptions’, which are assumed to be in ‘accordance’ with some ‘real situation outside the brain and outside the body’. In that case the ‘speaker’ and the ‘hearer’ of expression ‘A’ agree — under normal circumstances — that the ‘meaning’ associated with the expression ‘A’ is ‘true’. If the perception would provide ‘another concrete construct’, triggered by a traffic light showing ‘orange’ instead of ‘red’, then the learned meaning of expression ‘A’ would not match. In that mismatch situation speaker and hearer would agree — under normal circumstances — that the ‘meaning’ associated with the expression ‘A’ is ‘not true’, and this is a case of being classified as ‘false’.

Now, what could in such a situation be a ‘derivation’ in the context of a ‘natural logic’?

As has been mentioned before the ‘abstract structures’ of the meaning space are ‘dynamic constructions’ allocating many different properties of the perceived real world into ‘internal (neural = cognitive) clusters’ representing these properties within these structures. Thus, the abstract structure ‘traffic light’ is a structure possessing the typical collection of three different lights with their typical pattern of sequential activations.[19] A brain which has built up such abstract structures can use these to produce ‘forecasts’ by ‘reading its learned structures’!

Assume that the perceived situation is that state called S1 which can be described with the expression ‘A’:

S1 = {A}

From the learned abstract structure ‘traffic light’ the brain could ‘derive the rule’


(R1) IF there is a situation S which has a property described by an expression ‘A’, THEN it can happen in a follow-up state S’ that the expression ‘A’ does no longer match, but the expression ‘B’ does.

If we make the set of derivation rules X equal to the set comprising rule R1, with X = {R1}, then we can build the following natural logic derivation:

S ⊢X S’

with S = {A} and S’ = {B}.
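
A minimal Python sketch of this natural-logic derivation, assuming the encoding of states as sets of expression names and of rule R1 as a state-transition function; both encodings are illustrative assumptions of this sketch.

```python
def rule_r1(state):
    """R1: if 'A' (red) matches in state S, then in a follow-up state S'
    the expression 'A' may no longer match, but 'B' (orange) may."""
    if "A" in state:
        return (state - {"A"}) | {"B"}
    return state

X = [rule_r1]          # the set of derivation rules X = {R1}
S = {"A"}              # state S1: the traffic light shows red
S_next = rule_r1(S)    # the derivation S |-X S'
print(S_next)          # {'B'}
```

Unlike a boolean derivation, the derived state {‘B’} is not ‘already true’; it is a forecast of what ‘could become true’ in a follow-up situation.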

This kind of derivation is radically different from a boolean logic derivation:

(i) While boolean logic can only derive something which is ‘already true’, natural logic can derive something which ‘could become true in the future’ by assuming that the ‘learned meaning’ is ‘true’.

(ii) While boolean logic can only use derivation rules based on operations with expressions, natural logic can exploit the vast amount of ‘learned meaning structures’ owned by an individual brain, which is partially ‘shared’ with other brains.

Based on (ii) a brain is always capable of ‘constructing its own derivation rules’ simply by ‘exploiting’ its learned abstract (dynamical) structures. Thus every brain can ‘invent’ new types of logic merely by using its ‘learned experience’.

From this it follows directly that human actors who want to ‘think explicitly about some possible future’ should abandon boolean logic and instead practice exploiting their learned knowledge.

In a historical perspective it is very strange that the most advanced complex system of the whole known universe — the biosphere, and as part of it the homo sapiens population — decided to use as its logic a system which abandons all these fantastic inventions of more than 3.5 billion (10^9) years, selecting a system of sign operations which is maximally ‘pure’ but of little help for survival on its own. The construction of programmable machines (usually called computers) by using boolean logic has provided an interesting tool, but only if we use it as ‘part of biological intelligence’.


In this section we repeat the everyday example from before, now with expressions from the English language. The situation is a human actor in front of a traffic light showing a ‘red’ light.

In this situation the human actor could say:

“The traffic light shows red.”

Some seconds (or minutes) later he would state:

“The traffic light shows orange.”

Again, after some seconds (or minutes) he would utter:

“The traffic light shows green.”

Then he would start to move away from the traffic light.

We could ‘name’ these expressions by abbreviation in the following way:

A := “The traffic light shows red.”

B := “The traffic light shows orange.”

C := “The traffic light shows green.”

After the introduction of these abbreviations this example looks exactly like the example with the German expressions. And, indeed, it works in completely the same way. The reason for this is located in the ‘body system’ of a human actor with its special ‘brain’.

Figure 3b (For comments see figure 3)

Independently of the expressions used, the ‘internal structures’ of two different human actors can work more or less the same as long as (i) the ‘physiological structures’ show no serious ‘handicaps’ and (ii) the ‘learning history’ for the everyday world experience is sufficiently similar (though with different languages!). An English speaker as well as a German speaker both have the same perceptional input in a traffic-light situation, and their brains will ‘process’ these perceptions in interaction with their different cognitive structures (mainly the ‘memory system’) more or less the same way. Thus even if the English speaker did not understand German and, vice versa, the German speaker were not able to speak English, they could nevertheless quickly come to an agreement about the meaning of their expressions, simply by associating the situation and its dynamic properties with the expressions used in their languages. Their inner states, apart from the expressions, are — more or less — the same with regard to this situation.

This synchrony of the inner states of human actors with regard to the observable ‘outer (real) world’ of the bodies is the ground for any kind of shared meaning, extended by the shared body structures which ‘surround’ the brains. ‘Pain’ in the teeth or in a leg works in a highly similar way in every body; thus having such kinds of pain — or having other needs like being ‘hungry’ or ‘thirsty’ — is something which ‘every’ human actor experiences, and therefore the chance to ‘associate’ a certain expression with such a ‘body-commonly triggered experience’ is high.

Thus the often invoked ‘limits of language’ are not the expressions as such (!) but only the way we use the expressions of a language in an uttering situation: whether there are actual ‘experiences’ available in the speaker which can serve as an ‘actual common trigger’ for some potential meaning, or not. If you hear an expression and you do not already understand possible meanings of this expression, you need some ‘key’ in your observational space which could ‘point out’ some part of this observational space as a possible ‘candidate’ for the meaning of the expression.

Figure 4: Outline of the kind of expressions which are used for the ‘usual’ ‘Predicate Logic’. As one can see in history, many different variants are possible.

So-called ‘predicate logic’ can be found since classical Greek philosophy (cf. [7], chapter 2), but in the ‘old times’ not in the format which we know and have been using since Frege, Russell & Whitehead and others in the 20th century.

What one can observe in the talk about predicate logic is a constant reduction of the properties of predicate logic as well as of the circumstances of its usage. While in the collection of texts associated with Aristotle called the ‘Organon’ we can find different dimensions beyond the pure expressions — though in a not completely systematic way — modern texts restrict themselves more or less to expressions only … in theory, not in practice.

To discuss the topic of predicate logic in an everyday setting we will start with predicate logic from the point of view of expressions only, and then we will take a look at the ‘conditions of usage’.

In the outline presented in figure 4 we take as a common assumption that human actors are the main actors producing and using predicate logic. Of these human actors we know that they are ‘complex dynamic systems’ living in a complex dynamic environment (with the human actors as part of this environment, making it even more complex than it would be without them). Furthermore it is a historical fact that the homo sapiens population has demonstrated since its beginning (about 300,000 years ago, somewhere in Africa) the special ability that their brains — embedded in their bodies — can organize a ‘communication by symbolic means’ in a way which enables these individually distributed brains to ‘coordinate’ the ‘behavior’ of their bodies in an increasingly complex manner. History shows how the ‘technology of communication’ has changed constantly, beginning with written symbols, then texts, libraries, databases, and connected databases within computer networks called ‘cyberspace’.

Besides many thousands of ‘ordinary (= normal) languages’ the brains of the homo sapiens population have invented many ‘specialized languages’ extending the normal languages in many directions. Such a ‘specialized language’ depends completely on the given normal language. Without the underlying natural language a specialized language cannot exist; a specialized language as such is ‘nothing’; with a normal language as starting point a specialized language can allow quite complex ‘artificial symbolic structures’ which — used in an ‘adequate manner’ — can help the acting brains to ‘describe possible meanings’ which can eventually help to understand some parts of the ‘perceivable world outside the brain’ (and thereby some behavior of the brain itself!).

True Statements

From the section about Boolean Logic we know that there can be expressions called ‘statements’ which can be classified as being ‘true’ or ‘false’ without describing what ‘true’ or ‘false’ means. An ‘interpretation’ of a ‘possible meaning’ of the expressions ‘true’/’false’ is a property of the human actor dealing with these statements. We as human speakers ‘know’ by ‘experience’ that the classification of an expression as being ‘true’ or not depends on our ‘interpretation’ of the expression, whereby the interpretation activates a ‘known meaning’ which can be related to some ‘assumed world of references’. Thus the usage of Boolean logic is a way of ‘short, condensed notation’ for a possibly highly complex ‘knowledge’ of the human actor using this notation. Without this assumed knowledge of the human user the notation makes no sense.

In the case of predicate logic the situation is similar, but also different. Predicate logic also offers a notation for expressions called statements which can possibly be classified as being ‘true’ or ‘false’, but in predicate logic these notations are not only ‘names’ of some expressions: they show a minimal ‘expression-inherent structure’.

Figure 4 shows that the ‘minimal format’ of a predicate logic expression called statement includes at least one ‘predicate’ and at least one ‘term’, where the term is minimally represented by a ‘name’ of an ‘object-like something’, and this name is a ‘constant’. An expression is called a ‘constant’ when it is related to a ‘known reference’, which can be related to something concrete, which gives a human actor the possibility to ‘decide’ that there ‘exists’ an ‘observable something’ which can be understood as an ‘instance’ of the ‘known reference’. Thus one can see that in the case of predicate logic too one has to assume a sufficient ‘knowledge’ inside the human actor which enables a sufficient ‘interpretation’ along with the possibility to ‘decide’ whether this ‘name’ is a constant or not.

(Example 1) IS-RED(traffic-light-number-111)

Example 1 shows an example of a simple statement in a predicate logic format with the term ‘traffic-light-number-111’ as a name used as a constant pointing to some assumed decidable object-like something located somewhere in the city related to the predicate expression ‘IS-RED’ with the possible meaning of ‘showing the color red’.

Such an expression, with an interpretable predicate expression as well as an interpretable name as term, can be classified as being ‘true’ if the ‘known meaning’ of this expression, which is assumed within an interpretation, can be related to some ‘observable object-like something’ which ‘matches’ the properties of the known meaning. In this sense the expression of Example 1 can be understood as a ‘notation’ which can be associated with a known meaning by interpretation, which in turn can be ‘verified’ or ‘falsified’, or neither. In the last, ‘undecidable’ case either no ‘observable instance’ is available or no ‘clear knowledge’ is available.

The expressions used here like ‘known meaning’ or ‘object-like something’ or ‘interpretation’ (and others) are not part of predicate logic itself but belong to the ‘meta theory of logic’ — short: meta-logic — which is rooted in the ‘general everyday knowledge’, which has to be assumed as ‘general condition’ for any special thinking. Either it is there and ‘works’ or not. If not, the human actors have no chance to discuss these topics in some way. This kind of ‘primary knowledge’ can be compared to the case of the ‘body’ and therein the ‘brain’ as a ‘something given’, which enables certain real processes which you can ‘use’ by ‘living these’, but without brain or body you are simply ‘not there’. Take it or leave it. If you ‘take it’ then you can do something, e.g. you can use a language associated with some ‘known meaning’ which enables you to ‘relate’ language expressions to ‘something else’ functioning as ‘reference’.

Another more complex format of a predicate logic statement is one where more than one simple predicate occurs:

(Example 2) IS-RED(traffic-light-number-111) AND NOT(IS-ORANGE(traffic-light-number-111)) AND NOT(IS-GREEN(traffic-light-number-111))

In this simple example three simple predicates occur, ‘IS-RED’, ‘IS-ORANGE’, and ‘IS-GREEN’, all related to the object name ‘traffic-light-number-111’, together with the logical expressions ‘NOT’ and ‘AND’. The logical expression ‘NOT’ turns the meaning of an expression into its opposite: thus the expression ‘NOT(IS-ORANGE(traffic-light-number-111))’ generates the meaning that the object ‘traffic-light-number-111’ does not show the color ‘orange’ (leaving it undefined what it could mean not to be ‘orange’! The space of possible other meanings is inherently ‘fuzzy’ and can be ‘large’). The logical expression ‘AND’ generates a ‘compound meaning’ like ‘IS-RED(traffic-light-number-111) AND NOT(IS-ORANGE(traffic-light-number-111))’. This compound statement generates the known meaning that the object ‘traffic-light-number-111’ shows the red light and at the same time ‘not’ the orange light. If this is the observable case, then this compound statement would be classified as ‘decidably true’, otherwise not.
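
A minimal Python sketch of how such statements could be ‘decided’ against an assumed ‘world of references’. The world model and its encoding are illustrative assumptions of this sketch, not part of predicate logic itself.

```python
# Illustrative 'world of references': one observed object-like something
# and the color it currently shows.
world = {"traffic-light-number-111": {"color": "red"}}

def IS_RED(name):    return world[name]["color"] == "red"
def IS_ORANGE(name): return world[name]["color"] == "orange"
def IS_GREEN(name):  return world[name]["color"] == "green"

t = "traffic-light-number-111"
# Example 1: IS-RED(traffic-light-number-111)
print(IS_RED(t))                                           # True
# Example 2: IS-RED(t) AND NOT(IS-ORANGE(t)) AND NOT(IS-GREEN(t))
print(IS_RED(t) and not IS_ORANGE(t) and not IS_GREEN(t))  # True
```

Note that the classification as ‘true’ works here only because the assumed world model takes over the role of the ‘observable object-like something’; without such an assumed reference the expressions alone decide nothing.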

If one uses within predicate logic expressions not ‘constants’ (like ‘names’) but ‘variables’, then the situation changes.


A ‘variable’ as such has no known ‘meaning’ and therefore can never be associated with a decidable observable something. Thus, to turn a predicate logic expression with variables into a real candidate for being classified as ‘true’ or ‘false’ (or undefined), one has to offer a procedure for how to replace the variables by expressions which can become ‘truth candidates’. A common format for such a procedure is the ‘replacement’ (often called ‘substitution’) of the expression called ‘variable’ by an expression called ‘constant’; e.g. ‘x’ will be replaced by ‘traffic-light-number-111’, written: (x/traffic-light-number-111).
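
The replacement procedure can be sketched as a small recursive function. The encoding of predicate logic expressions as nested tuples is an illustrative assumption, not a standard notation.

```python
def substitute(expr, var, const):
    """Replace every occurrence of the variable `var` by the constant `const`
    in an expression encoded as nested tuples."""
    if expr == var:
        return const
    if isinstance(expr, tuple):
        return tuple(substitute(e, var, const) for e in expr)
    return expr

# The substitution (x/traffic-light-number-111) applied to IS-RED(x):
expr = ("IS-RED", "x")
print(substitute(expr, "x", "traffic-light-number-111"))
# ('IS-RED', 'traffic-light-number-111')
```

After the substitution the expression contains only a constant and thus becomes a ‘truth candidate’ in the sense described above.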

In the case of predicate logic there exists one more ‘formal element’ to modify the possible meaning: quantifiers! To say ‘ALL(x)’ or ‘ONE(x)’ or ‘SOME(x)’ or ‘EXACT n (x)’ and the like gives some ‘clue’ to the assumed ‘number’ of object-like somethings which have to be shown to ‘be there’ in a ‘decidable manner’.

Thus it makes a difference whether one writes ‘ALL(x)’ in the case of ‘IS-RED(x) AND NOT(IS-ORANGE(y))’ or ‘ALL(x,y)’. If in the first case ‘ALL(x)’ one applies the replacement (x/traffic-light-number-111), then one derives the expression ‘IS-RED(traffic-light-number-111) AND NOT(IS-ORANGE(y))’, where the variable ‘y’ is still undefined. In the second case with ‘ALL(x,y)’ one derives by (x/traffic-light-number-111) and (y/traffic-light-number-113) the expression ‘IS-RED(traffic-light-number-111) AND NOT(IS-ORANGE(traffic-light-number-113))’; all variables have been replaced.

In the case of Example 3 — ‘IS-RED(x) AND NOT(IS-ORANGE(y)) AND NOT(IS-GREEN(z))’ — where the used variables {x,y,z} are, as expressions, ‘different’, one can speak potentially about three different traffic lights, using the replacements (x/traffic-light-number-111), (y/traffic-light-number-112), (z/traffic-light-number-113):

(Example 3.1) IS-RED(traffic-light-number-111) AND NOT(IS-ORANGE(traffic-light-number-112)) AND NOT(IS-GREEN(traffic-light-number-113))

If these different traffic lights were distributed at different places in the city, then it could become more and more difficult, if not even infeasible, to observe these objects in a decidable way simultaneously. Using technological means to solve the problem can work ‘in principle’, but then the technological means have to be ‘proven’ to work ‘correctly’ (they have to be ‘certified’). Who can and will do this?

This example demonstrates that the formal status of an expression — having constants instead of variables — enables ‘in principle’ a decision procedure between the actors, but under ‘practical conditions’ this ‘formal possibility’ can often not be resolved in the domain of ‘real usage’.

Such a case of ‘theoretically decidable’ but ‘practically undecidable’ is also given if one uses the quantifier ‘ALL(x, …)’ where the number of ‘possible real candidates’ is for practical reasons not really decidable, e.g. ‘All human persons are at 10:00 a.m. on the upcoming Monday not hungry’, written as ‘ALL(x) HUMAN-PERSON(x) AND NOT(IS-HUNGRY(x)) AND DATE(next(Monday)) AND CLOCK(10, a.m.)’. In this example ‘Monday’ is related to a ‘calendar’ and ‘next()’ is a function mapping the actual day in the calendar to the next available Monday. The possible real instances of the variable ‘x’ are assumed to be ‘all human persons living on the planet earth on 17 July 2022’. Actually we have no measurement procedure to decide all these statements.

If one uses the quantifier ‘ONE()’, then one introduces a ‘restriction’ on the number of possible instances: the whole number of possible real candidates shall be ‘one’:

(Example 3.2) ONE(x) IS-RED(x) AND NOT(IS-ORANGE(x)) AND NOT(IS-GREEN(x))

The ‘meaning’ of the quantifier ‘ONE()’ is assumed here as: ‘There must exist one object-like something which is a candidate for the known meaning’.

If we assume the replacement (x/traffic-light-number-111) then we get the expression

(Example 3.2.1) ONE(x/traffic-light-number-111) (IS-RED(traffic-light-number-111) AND NOT(IS-ORANGE(traffic-light-number-111)) AND NOT(IS-GREEN(traffic-light-number-111)))

This expression can be classified as ‘true’ by observing the traffic light in place.

The final aspect of predicate logic expressions — which we have already used in the ‘next Monday’ example — are the ‘terms’ of predicate logic. A term is in the simple case (i) only one variable, or a constant replacing the variable, or (ii) a ‘function’ — often called ‘operator’ — with some ‘arguments’, like ‘add(1,3)’ or ‘multiply(4,7)’ or ‘father-of(John)’ or ‘phone-number-of(Bill)’ or the like. A function is a ‘biased relation’ mapping some object-like somethings to other object-like somethings. Because a certain customer of a phone company usually has exactly one phone number, one can resolve ‘phone-number-of(Bill)’ by looking at the list of phone numbers of this company (or you know the number already). The expression ‘father-of()’ works similarly. ‘Multiplying’ two numbers is described in a part of mathematics giving strict rules for how to multiply two numbers; thus following these rules you will get ‘one number’ as the ‘result’ of this operation, like ‘multiply(4,7)=28’.

Because a function applied to object-like somethings produces again an object-like something, a function term stays in the realm of object-like somethings. Thus in an expression like ‘ONE(x) FATHER(father-of(x))’ one uses the function ‘father-of()’ to denote that one object-like something which is the father of x, in order to make the statement that this ‘father-of(x)’ has the property of being a ‘father’, written as ‘FATHER()’. While the function ‘father-of()’ determines exactly one biological object-like something, the predicate ‘FATHER()’ can be applied to many different object-like somethings; e.g. in ‘ONE(x) ONE(y) FATHER(father-of(x)) AND FATHER(father-of(y))’, replacing (x/Bill) and (y/Susan) yields ‘FATHER(father-of(Bill)) AND FATHER(father-of(Susan))’: the function father-of() generates two different object-like somethings, but the predicate FATHER() can be applied to both of them.
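
The difference between a function term and a predicate can be sketched as follows. The family table and the names are illustrative assumptions of this sketch.

```python
# Assumed toy family relation: each person has exactly one father.
fathers = {"Bill": "John", "Susan": "Tom"}

def father_of(x):
    """Function (term): maps a person to exactly one object-like something."""
    return fathers[x]

def FATHER(y):
    """Predicate: true of every object-like something that is someone's father."""
    return y in fathers.values()

# FATHER(father-of(Bill)) AND FATHER(father-of(Susan))
print(FATHER(father_of("Bill")) and FATHER(father_of("Susan")))  # True
print(father_of("Bill") == father_of("Susan"))                   # False
```

The function returns an object (which can again serve as a term), while the predicate returns a classification ‘true’/’false’; this mirrors the distinction drawn in the paragraph above.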

Formal Logic Inference: Preserving Truth

From the examples of boolean logic and predicate logic we can see that formal logic operates with a set of expressions assumed to be true and then offers some rules for how one can derive from this set of assumed true expressions other concrete expressions. The whole inference mechanism works strictly ‘conservatively’ in the sense that it is not possible to ‘create something new’ by logical inference. Formal logical inference preserves the truth.

Ordinary Language Inference: Preserving and Creating Truth

Because ordinary language is by construction the indispensable meta-language of every kind of formal logical system, it is clear that every kind of formal logic inference can be reproduced in ordinary language.

Thus we can easily reproduce a preserving inference in ordinary language like the following:

Example 4: It is known that all members of the country club vote for the political party A. Then you hear that Bill is a member of this country club. Then you — spontaneously — can infer that Bill votes for the political party A too.

In predicate logic you could write this as follows:


Replacing: (x/Bill)
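
The inference of Example 4 can be sketched as one forward-chaining step. The predicate names ‘MEMBER-OF-COUNTRY-CLUB’ and ‘VOTES-FOR-PARTY-A’ and the encoding of facts as tuples are illustrative assumptions of this sketch.

```python
# Known facts: Bill is a member of the country club.
facts = {("MEMBER-OF-COUNTRY-CLUB", "Bill")}

def apply_rule(facts):
    """Rule: ALL(x) MEMBER-OF-COUNTRY-CLUB(x) -> VOTES-FOR-PARTY-A(x);
    derive the conclusion for every constant matching the premise."""
    derived = {("VOTES-FOR-PARTY-A", c) for (p, c) in facts if p == "MEMBER-OF-COUNTRY-CLUB"}
    return facts | derived

print(sorted(apply_rule(facts)))
# [('MEMBER-OF-COUNTRY-CLUB', 'Bill'), ('VOTES-FOR-PARTY-A', 'Bill')]
```

The derived fact adds nothing beyond what the rule and the given facts already contain: the inference is truth-preserving in exactly the sense described above.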




Besides this truth-preserving inference we can observe different cases in ordinary language usage.

Example 5: Susan plans to go to Chicago (this is a ‘goal’). Actually she is in New York (this is a ‘given situation’). Now she thinks about how to ‘realize her plan’. Either she ‘remembers’ by her memory that there are options like going by car, by train, or by airplane, or she asks her colleagues in the office what they can propose. Whatever she does after ‘researching possibilities’ there will be some ‘outcome’: either no option or some options. Let us assume she ‘learned’ that there exists the option ‘going by train’ and for ‘personal reasons’ she decided to take this option ‘going by train’. Then we have the following kind of ordinary language inference:

Goal: I want to go to Chicago

Given: I am in New York

Learned Knowledge: You can go by train (and possibly other options)

Applying preferences: She has opted for the possibility ‘going by train’

Outcome: She goes by train from New York to Chicago.
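
The pattern of goal, given situation, activated knowledge, and preferences can be sketched in a few lines. The list of options and the preference are illustrative assumptions.

```python
goal, given = "Chicago", "New York"     # her 'goal' and 'given situation'
options = ["car", "train", "airplane"]  # 'learned knowledge', determined on demand

def preference(option):
    """Her 'personal reasons' (assumed here): she prefers the train."""
    return option == "train"

# Applying preferences to the activated knowledge yields the outcome.
chosen = next(o for o in options if preference(o))
print(f"She goes by {chosen} from {given} to {goal}.")
# She goes by train from New York to Chicago.
```

The crucial difference to a formal inference is outside the code: the list ‘options’ does not exist in advance but has to be ‘created’ by remembering, communicating, or investigating.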

This simple example shows interesting differences compared to the formal inferences. Usually you will not start with given knowledge but with some ‘need to act’ (which can function as a ‘goal’). This goal will be related to the ‘given situation’, which determines conditions that have to be considered as ‘constraints’ for a possible solution. Having this — a goal and given constraints — you have to ‘explore’ what you know: either something which you can ‘remember’, or something you can get by ‘communication with others’, or you have to start an ‘investigation’ of given databases, or you have to make your own ‘experiments’. Thus the ‘given knowledge’ is a dynamic structure which has to be ‘determined on demand’.

In the case with the country club you had some knowledge ‘at hand’ and this induced some inference.

In the case with the ‘idea to go somewhere’ you had to clarify your constraints and to ‘activate’ ‘helpful knowledge’.

This ‘activation process’ can be ‘conservative’ by taking what you know already, or it is to some degree ‘dynamic’ and ‘creative’ because you have to start a ‘process’ of ‘finding appropriate knowledge’. ‘Appropriate knowledge’ is knowledge which could be a ‘practical solution’ and which is ‘in agreement with your personal preferences’. Thus such a ‘knowledge creating process’ is a ‘process in everyday life’ which cannot simply be ‘encoded’ in a formal inference alone. In the full case we have a process with several participating actors (often thousands or even more), who are communicating and thereby cooperating in complex patterns, whose ‘result’ can be some ordinary language expressions; but the ‘meaning’ of these language expressions is ‘encoded by the creating process as a whole’! Therefore the resulting expressions — as ordinary language expressions or as formal logic expressions — are rather secondary: they ‘live’ from the fact that the acting actors are dealing dynamically with meaning structures which ‘receive their life’ from this whole process.

Hidden Ontologies: Cognitively Real and Empirically Real

Instead of talking about the ‘meaning of expressions’, the expression ‘ontologies’ is often used. This language game traces back to Greek philosophy and has remained alive since then through all centuries in the world of philosophers (and is today heavily borrowed by computer scientists, without giving any foundation for what these ‘formal ontologies’ should be).

While the concept of ‘meaning’ alludes to (i) those abstract (cognitive, mental, …) entities in our thinking which can be related to expressions, and (ii), mediated by this, to those perceptions of object-like somethings assumed to exist in the ‘outer world’ of the brain, an ‘ontology’ assumes a ‘realm of objects’ without an existential qualification: the assumed realm of objects can be only an abstract realm (only in thinking) or an assumed ’empirical realm’ of ‘real objects’.

The philosophical discipline called ‘epistemology’ — today backed up by different empirical disciplines, e.g. experimental psychology associated with the brain sciences — has clarified that the so-called ’empirical reality’ is for a brain only given as an ‘abstract model triggered by different perceptional events’, where the abstract model is associated with ‘perceptional triggers’. A human actor can talk about ‘ontologies’ only in this ‘abstract mode’, which eventually has some ‘clues for perceptional triggers’ which ‘signal’ an ‘object-like something’ as a ‘possible instance’ of the abstract structure used in thinking. In this sense the language game of an ‘ontology’ appears to be an unnecessary doubling of the language game of ‘meaning’.


After all these preceding considerations we can conclude that a logical inference is primarily a formal mechanism transforming some expressions (of a formal language) into other expressions. This mechanism is completely independent of any kind of meaning. The so-called property of being a ‘true’ or ‘false’ expression is a purely technical device with no meaning either. You can translate this ‘abstract property of being true or false’ as saying: “It is unclear what you mean with ‘true’ or ‘false’, but if you classify an expression in the light of your knowledge as being ‘true’ (or ‘false’), then this abstract property of ‘being true or false’ will be preserved by this inference procedure, independently of the assumed concrete meaning.”

In the light of the traffic-light example we can further conclude that the classification of an expression as being ‘true’ or ‘false’ in an everyday context with human actors works differently. Saying “The traffic light is red” (abbreviated ‘A’) would allow the purely formal inference

(5) A ⊢X A

If ‘A’ is ‘abstract true’, then it follows that ‘A’ is abstract true. But in the ‘real world’ of everyday life this property of being true is ‘time dependent’: in some time interval it can be true, in another not. This results from the fact that the expression ‘The traffic light is red’ is in a human mind associated with a ‘known meaning’, and this known meaning can either actually be associated with concrete perceptions which can be interpreted as a ‘real’ traffic light showing red, or not. Because traffic lights are embedded in an observable ‘behavior’ which produces changes between ‘red’, ‘orange’, and ‘green’, the expression ‘The traffic light is red’ is ’empirically’ only true if the traffic-light behavior in that moment produces ‘red’, otherwise not.

Thus the difference between a ‘formal inference’ and an ’empirical forecast’ is rooted in the fact that human actors, who are using language expressions to communicate about possible ‘states of the world’, always associate expressions with ‘learned meanings’ as part of their ‘inner knowledge’, and in some cases they can associate this ‘inner knowledge’ with ‘actual perceptions’ which ‘match’ some of this ‘inner knowledge’, which as such is always ‘abstract’. Thus if a human actor is able to use his inner abstract knowledge to ‘construct’ (= thinking) some abstract structures as possible ‘follow-ups’ of assumed ‘given structures’, and this human actor translates this knowledge by meaning functions into ‘language expressions’, then this human actor can say to someone else “After showing orange the traffic light will show green”, which would represent a possible inference like

(6) ‘The traffic light is orange at time t’ ⊢X ‘The traffic light is green at time t+c’

The sign ‘c’ represents here the time constant which gives the amount of time needed until the change from orange to green happens. The sign ‘X’ represents the hidden knowledge of the speaking actor, which is assumed to be shared with the hearer, because they live in the same environment, have both learned this environment, and have learned the same meanings in the language they use. Formally this is still a ‘logical inference’, but on account of the involved ‘meaning’ it can happen that the derived expression ‘The traffic light is green at time t+c’ either ‘becomes true’ (if the traffic light really changes) or ‘becomes false’ if it does not change to green. In this sense the pattern (6) is not only a ‘logical inference’ but additionally a ‘communicative forecast’.
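
The communicative forecast can be sketched by ‘reading’ the shared learned structure of the traffic light. The cycle table and the value of the time constant c are illustrative assumptions of this sketch.

```python
# Shared learned structure (illustrative): the cycle of colors
# and an assumed time constant c between two changes.
NEXT = {"red": "orange", "orange": "green", "green": "red"}
c = 30  # assumed seconds until the next change

def forecast(color, t):
    """Communicative forecast: from 'color at time t' derive the expected
    follow-up state 'NEXT[color] at time t+c' from the learned structure."""
    return NEXT[color], t + c

print(forecast("orange", 0))   # ('green', 30)
```

Whether the forecast ‘becomes true’ is not decided by the function but by the real traffic light at time t+c; this is exactly the difference between the formal derivation and the empirical forecast.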

These two attributes 'logical' and 'communicative' are the keys to understanding that we have here two different paradigms: the 'logic paradigm' is restricted to formal expressions only, combined with some abstract 'properties' of and 'operations' with these expressions. In the 'communicative paradigm' the 'logic paradigm' is embedded in a bigger framework of human actors in a real environment, where the human actors have a minimal 'inner structure' of 'perceiving' the environment, a 'self-learning' inner structure for 'knowledge' and for 'language', where the language can be 'mapped' onto the knowledge, and a 'behavior' part, which can translate 'some inner states' into observable properties of the actor's body surface, which in turn can act in some sense onto the environment.

From this follows that a 'common science' which shall be 'rooted in the empirical world' and which will include 'all aspects' of the empirical world (we cannot judge in advance what the 'world' is!) has to be formatted according to this perspective of 'communicative forecasting'.


Using the concepts 'logical inference' and 'communicative forecast' in the sense described before, one has to clarify a bit more how such a 'communicative forecast' has to be understood as a real process, enabling the participants (human actors) to 'create' 'forecasts' which can become 'true' or not. In doing this one has to take a 'point of view' looking 'from above' on these processes. The question is 'which view' and 'how far from above'?

Because the individual scientific disciplines each represent a kind of 'scientific niche' with their own language, their own methods, their own kinds of models and 'theories', these individual disciplines are not well equipped to talk about other disciplines or even about the whole of science. How can we free ourselves from this 'being trapped in a special view'? [29]

As has been explained in the preceding sections of this text, the only meta-language for all languages — even for itself — is the everyday language. And the only 'common experts' are rooted in every human, every citizen as such. A 'specialist' is always defined 'relative to something else', here to the 'normal human person'. Thus digging into the specialties of our common world does not eliminate this common world; it only induces a look at some point with a special view into some possible 'hole' of reality. But 'many individual holes' do not give a 'complete picture of the common reality': neither physics, nor biology, nor chemistry, nor whatever individual view one is taking.

Thus the need for a 'common view' in which all specialties are 'embedded' is always alive and will never end, because the whole is something with no clear boundaries. And every human actor — with the whole biosphere — is part of the whole and, in the light of its 'built-in freedom', a potential 'open horizon'. The other 'open horizon' seems to be the 'whole universe' as far as we can understand it today.

Thus the only chance humanity has to get a grip on some common view is the everyday language spreading through all nations. The only known project today attempting such a demanding task is the Wikipedia project. (cf. [21]) One can criticize it from many points of view, but there is nothing better at the moment. We should try to improve it every day. No individual science as such is an alternative. The solution can only be a radical 'cooperation' between the individual scientific disciplines and the common science of the everyday world. In a 'war' between 'common science' and the individual scientific disciplines we would all be losers. The 'noise' caused by 'information flooding' will overtake everything. Time is running out.

Keeping this in mind I will take a short ‘side trip’ to Wikipedia to inspect in a loose way some articles around the concept ’empirical theory’.

Side Trip to Wikipedia

If one takes a side trip to different articles in the English Wikipedia, one will not find an article about 'empirical theory' directly. Looking for the word 'theory' alone, you can find an article talking about 'rational thinking' about 'phenomena', where the thinking can produce 'assertions', which can include 'explanations' of how nature works. (cf. [3]) Further you are told that there are also 'scientific theories' which are based on 'scientific methods' fulfilling the 'criteria of modern science'. 'Scientific methods' are using 'scientific tests'. 'Scientific theories' are a form of 'scientific knowledge'. 'Testable empirical conjectures' or 'scientific laws' are not yet 'scientific theories'. (cf. [2]) Following the hint that 'scientific methods' are important for 'scientific theories', you can read some statements about a 'scientific method': it is an 'empirical method' of 'acquiring knowledge'. An 'empirical method' uses 'careful observations' and creates — based on these observations — 'hypotheses' via 'induction'. From these hypotheses one can draw 'deductions'. Based on 'experimental findings' one can 'refine' or 'eliminate' hypotheses. These statements are called 'principles of the scientific method'. (cf. [4c]) Additionally it is explained that the goal of an 'experiment' is to determine whether 'observations' 'agree' with or 'conflict' with the 'expectations' 'deduced' from a 'hypothesis'. Though the scientific method is often presented as a fixed sequence of steps, it represents rather a set of general principles. (cf. [4c]) These remarks point to the further concept of 'science', which is characterized as a 'systematic enterprise' that 'builds' and 'organizes' 'knowledge' in the form of 'testable explanations' and 'testable predictions' about the 'universe'. (cf. [2]) Contemporary scientific research is further characterized as highly 'collaborative', usually done by 'teams'.
The practical impact of scientific work has led to the emergence of 'science policies' that seek to influence the scientific enterprise. (cf. [2])

What can we do with these ‘fragments’ of a large discourse around ‘science’, ‘theory’, and ‘knowledge’?

In the following figure I have arranged the detected words in some 'order' which seems to me to make 'some sense'. But clearly, because at no moment do we have anywhere an 'overall ordering' accepted by 'all', every individual ordering will lack a final justification. We can 'try' to find some 'hidden structures' in the realm of 'phenomena', but whether these 'suggested orderings' are really helpful only the ongoing process of life itself, framed by our different views, can show.

Figure: Hypothetical graphical interpretation of some Wikipedia articles associated with the concepts ‘science’ and ‘theory’

Here are some aspects of my findings from looking into the cited Wikipedia articles.

  1. Following some links starting with the question for ’empirical theory’ I could find several articles associated with ‘theory’ in some sense.
  2. Within each article it was not quite clear what really is the reigning perspective of the article. If one assumes — as I do — that a Wikipedia article is not reproducing an individual scientific discipline as such but some 'common view', and thereby is representing a 'meta-level view', it is difficult to define the 'method of a meta-level view'. Where should it come from? Today we have nowhere an official discipline for 'meta-level views' (historically this would be philosophy, but philosophy today is far away from doing this job sufficiently well).
  3. On the other side: there exists a common view 'by fact', because all human actors together represent 'implicitly' a common view, before and above every special knowledge, by their pure existence. Taking everyday language communication seriously, there exists every day an ongoing 'common talk' of 'common experts' about everything which happens as 'everyday experience'.
  4. But not every talk represents knowledge which can be shared and which thereby (i) enables cooperation and (ii) enables decidable forecasts. This is the minimum we need and it is the maximum we can get.
  5. Wikipedia today is moving somewhat in this 'direction' by enabling a little bit of cooperation, but it clearly does not yet enable forecasts.
  6. Focusing on the subject of an 'empirical theory' I arranged the different citations of the Wikipedia articles around the main concept of 'science'. This concept is associated with many additional concepts which — if arranged — point slightly to different 'views', which I loosely classified as 'history', 'society', and 'philosophy/philosophy of science'. If one set 'philosophy' as the main view, then 'sociology' and 'history' would contribute special views as part of the 'meta-level view'. One can ask whether there are other views available which also have some importance.
  7. Within that what I identified as the ‘philosophical view’ one is associating ‘science’ with special kinds of ‘methods’, which allow ‘observations’ which in one direction can enable ‘hypotheses’ about the ‘phenomena’ in ‘observations’, and in another direction allow ‘deductions’ from these hypotheses, which then can be ‘tested’ with the aid of ‘experiments’. The ‘findings’ of an experiment can ‘confirm’ or ‘weaken’ a hypothesis. Because the hypotheses have the format of ‘language expressions’, which can be used as ‘assertions’, they can be understood by the experimenters as ‘explanations’ which can further be understood as ‘knowledge in a scientific format’.
  8. In the whole of these citations it is not really clear what a 'theory' in fact is. The word 'theory' is used in these articles, but there exists nevertheless no real definition. To speak about 'scientific theories' instead of the word 'theory' alone points to the explanations about 'scientific methods', which are explained by the 'empirical method'. But it stays open what a 'theory' then is.
  9. From the point of view of philosophy (and by inspecting some more references) there are two approaches for a characterization of a ‘theory’:
  10. THEORY CONCEPT I: Looking primarily to the used language expressions only then a theory needs (i) those expressions which represent the hypotheses; (ii) a logical inference concept enabling inferences (deductions); (iii) the inferred inferences as candidates for forecasts; (iv) an experimental procedure to test whether one can find measurements which ‘confirm’ or ‘weaken’ a forecast.
  11. THEORY CONCEPT II: Looking in a wider context at the 'theory producers', their 'environment', and then at the 'procedure' by which the theory producers really 'build' a 'theory concept I'.
  12. Usually 'theory concept I' is applied, not 'theory concept II'. But the moment one starts to analyze science and theories from a real meta-level point of view, one needs 'theory concept II'. All known problems about theories and theory production can be discussed within 'theory concept II', but not within 'theory concept I'.
  13. Although the expression 'empirical theory' is not found — yet — in Wikipedia articles, it makes sense to use this concept, because we have 'theories' also in logic and mathematics, but without a relationship to some empirical reality. Thus the expression 'empirical theory' states from the first outlook that it is a theory with a relationship to empirical reality.
  14. As one can see by reading this text as a whole, I am using one more attribute named 'sustainable'. Thus a 'sustainable empirical theory' (SET) is an empirical theory which fulfills even more requirements, which then leads directly to the concept of a 'common theory'.
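The four components of 'theory concept I' listed in point 10 can be sketched as a minimal data structure. This is a hypothetical illustration only; all names are invented for this sketch: hypotheses as expressions, an inference step applying rules, derived forecasts, and a test comparing a forecast with measurements.

```python
# Minimal sketch of 'theory concept I' (all names hypothetical):
# (i) hypotheses, (ii) an inference concept given by rules,
# (iii) inferred forecasts, (iv) a test procedure which lets a
# measurement 'confirm' or 'weaken' a forecast.

from dataclasses import dataclass, field

@dataclass
class TheoryI:
    hypotheses: set = field(default_factory=set)   # (i) given expressions
    rules: list = field(default_factory=list)      # (ii) inference rules

    def infer(self) -> set:
        """(iii) Apply every rule to every hypothesis once and
        collect the derived expressions as forecast candidates."""
        forecasts = set()
        for rule in self.rules:
            for h in self.hypotheses:
                derived = rule(h)
                if derived is not None:
                    forecasts.add(derived)
        return forecasts

    @staticmethod
    def test(forecast: str, measurements: set) -> str:
        """(iv) A forecast is 'confirmed' or 'weakened' by measurements."""
        return "confirmed" if forecast in measurements else "weakened"

# Toy rule in a toy notation 'color@t': orange at t entails green at t+1.
def orange_to_green(expr: str):
    if expr.startswith("orange@"):
        t = int(expr.split("@")[1])
        return f"green@{t + 1}"
    return None

theory = TheoryI(hypotheses={"orange@0"}, rules=[orange_to_green])
fc = theory.infer()                              # {'green@1'}
print(TheoryI.test("green@1", {"green@1"}))      # confirmed
```

'Theory concept II' would wrap such a structure in the producers, their environment, and the building procedure, which the sketch deliberately leaves out.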



While a 'theory as such' has no relationship to the concepts 'sustainability' [24] or 'sustainable development' [25], there exists an increasing understanding of the central role of knowledge in mediating all kinds of actions. (cf. [26]) This importance of knowledge is to some degree supported by the sustainable development goal 4 (SDG4). [27],[26]

Although there exist — as ever — multiple definitions of 'sustainability' or 'sustainable development', one can identify some core ideas which seem to be important. (cf. [23]) To get a 'unified view' of all the different aspects one needs to establish a 'meta view' which does this job. And this requirement points to the dimension of SDG4, additionally expanded by the necessity of a knowledge sphere which can handle a maximum of diversity with a maximum of life-sustaining forecasts.

To match these far-reaching requirements it will not be sufficient to refer solely to the concept of an 'empirical theory I' — as described above — but one has to extend the concept to an 'empirical theory II'. This is motivated by the fact that the 'knowledge sphere' as such is not interacting with the real world external to the bodies. For such interactions the knowledge needs a body, and not only one body but as many bodies as possible, which physically interact with the body-external empirical world. Besides knowledge, the bodies will by these interactions receive that supply of materials which has to be 'consumed' because it is necessary for the life of bodies. The 'increase in the number of bodies' over time has increased the material effects of human bodies onto the biosphere as well as onto the external world beyond the biosphere.

Guided by the knowledge sphere these bodies can 'behave' in a way which keeps the body-external world 'functioning' as a 'life-bubble', which can be understood as a substantial part of the whole universe. Analogous to the phenomenon of language, human-based knowledge too is always more than a self-sustained entity; it is a phenomenon resulting from being distributed across many brains which partially interact with their bodies and thereby with each other. As history shows, it was not language alone which enabled an increasingly powerful communication between human brains. It was the appearance of cultural technologies like writing, preserving documents, books, libraries, printing technologies, the computer, and lately computer networks as cyberspace [28], which enabled an increasing exchange.

But communication technology as such can process only the 'material side' of communication, the diverse patterns of expressions, which as such have no 'meaning'! The phenomenon of 'meaning' is still rooted inside the brains. Until today there exists no kind of communication technology which enables dealing with 'inside meaning' while practicing 'external communications'. Thus the quantitative increase in communication entities does increase a space of 'possible meaning' with regard to possible real brains, if they would become connected to these communication entities. But real brains in real bodies have a 'limited size' with 'limited processing capacities'. Thus the increase in external communication technologies, associated with an increase in their 'material amount', is not automatically accompanied by an increased processing in the individual body-brain units. In 2006 the author of this text introduced for this phenomenon the term 'negative complexity' (cf. [29], [30]). This points to the fact that an increase in external-world complexity with regard to the processing of the communication entities can turn into a 'negative complexity' if the processing capacity is too small. Thus the sheer increase in the amount of communication entities, accompanied by distributed individual processing units, is losing its 'integration' with the 'active meanings' in these individual processing units. Because human brains, besides their 'cognitive dimension', are also equipped with an 'emotional dimension', this emotional dimension will usually react to this 'diminishing cognitive ordering of things' with different kinds of emotions; these will try to stabilize the cognitive dimension with cognitive states which 'appear as order' but are indeed 'fake constructions'. These can navigate the real bodies in the real world in a way which can increase a growing 'mismatch' between 'knowledge' and the 'real world'.

With these considerations a 'picture of a human actor' is induced which consists of (i) a body which has to 'consume' and which can 'induce effects' on the body-external world as well as onto the brain in this body, and (ii) a brain with at least two dimensions: (ii.1) an 'emotional' dimension which has the 'primary management' of the 'human system', and (ii.2) a 'cognitive' dimension which has the job of 'ordering' all the many and diverse signals flowing into the brain from the body, as well as of generating possible 'reactions' inside and outside the body, extended by the language component which enables an 'encoding' of cognitive entities in expressions. And, as we know today, (iii) the brain is a 'system' which includes an 'ontogenetic development' accompanied by a 'continuous adaptation' (often called 'learning') of the available signals by predefined processing patterns. Furthermore we know that about 99% of the brain activities are 'unconscious'.

In the real world there are never individual systems alone but always 'populations' of individual systems, which only together can survive. Thus humankind as a whole is acting in the body-external world as part of the even greater population of 'all living biological species' forming the 'biosphere', which is localized today on the planet earth. The planet earth is finite and follows certain patterns of change. What the earth is, how it 'dynamically behaves', which kind of 'future' the earth has — including the biosphere — is either somehow 'encoded' in the cognitive dimension of human brains or is not in existence. To get an 'all-embracing picture' of everything would require the integration of the cognitive dimension of all brains with their bodies.

Sustainable EMPIRICAL THEORY concept II

According to the short characterization of the concept of an ’empirical theory II’ above we have to take into account the ‘theory producers’, their ‘environment’, and the ‘procedure’, how the theory producers are ‘building’ a ‘theory concept I’. This theory concept I requires (i) those expressions which represent the hypotheses; (ii) a logical inference concept which enables inferences (deductions); (iii) the inferred inferences as candidates for forecasts; (iv) an experimental procedure to test whether one can find measurements which ‘confirm’ or ‘weaken’ a forecast.

If one accepts the idea of a population of brains which together have to find a 'sustainable path' into a 'life-supporting future', then the vision of a sustainable empirical theory can be reformatted as follows:

  1. As ‘theory producers’ the whole population of human actors is assumed.
  2. The 'environment' of these theory producers is the planet earth, located in the solar system as part of the Milky Way galaxy in the universe.
  3. The ‘theory I producing procedure’ by which the theory producers are acting is characterized as follows:
    1. The theory producers are using a ‘common language’ whose possible ‘meanings’ are encoded in their brains.
    2. The 'bodies' of the brains are collecting 'sensory data' from the body-external environment and 'sensory data' from the bodies themselves, in a process called 'sensory perception'. The perceived signals are processed by the brains.
    3. The brains can ‘store’ processed sensory input, can process these stored — ‘cognitive’ — entities, and can map between ‘cognitive elements’ and ‘language related elements’. The cognitive referents of language expressions are called ‘meaning’. The whole ‘process of mapping’ is a ‘dynamic’ process (adaptation, learning).
    4. The brains can stimulate their bodies to 'act' in the body-external environment based on the inner processes. The 'observable' acts are called in sum the 'behavior' of the actors.
    5. To ‘coordinate’ the behavior of the different human actors in a population the individual brains have to ‘synchronize’ their internal mappings between cognitive and language elements.
    6. The individual processing of perceptions, storing, cognitive processing, meaning generating mappings, and acting has in every individual actor ‘processing limits’ and ‘needs processing time’. The same holds for ‘synchronization processes’.
    7. The perception of the brain-external environment (bodies as well as body-external environments) as well as the communication by language can be enhanced by 'artifacts': certain observation patterns as well as certain communication tools. Such a usage should also be synchronized, by a process called 'standardization'.
  4. One outcome of such a ‘theory I producing process’ should be collections of expressions which are assumed by all participating actors as a ‘description’ of a ‘given situation (state)’ which is assumed to be ‘true by observation’. Such a collection of expressions can be understood as the primary ‘hypotheses’ of the theory process.
  5. As part of a ‘theory II producing process’ there has to be another set of expressions which by all participating actors is understood as the description of a ‘possible state in a possible future’, which these actors want to ‘achieve’ as their ‘goal’.
  6. To realize a 'path' from the given situation to the wanted future state the participating actors have to 'remember' or to 'invent' those actions which transform a given situation step by step into a situation which is 'judged' by the participating actors as 'including the wanted state'. These actions are called 'inference rules'. These inference rules tell how a given set of expressions can be transformed into another set of expressions.
  7. The exact process, how one can apply rules of inference onto a given set of expressions (the ‘assumptions’) to get a new set of expressions (the ‘inferences’, the ‘forecasts’) is called ‘inference mechanism’.
  8. Whether a reached forecast already includes the wanted future state can be decided by 'observation' of a given state together with a 'comparison' between the 'perceived' inner states and the 'meaning-associated' inner states of all participating actors. If they agree that the perceived given situation matches the 'expected wanted situation' sufficiently well, then the process has reached its 'goal'. The whole process up to this final decision is called an 'empirical experiment'.
  9. If after some finite number of steps no situation can be reached which matches sufficiently well the 'expectations' encoded in the inference rules and the wanted future state, the situation is 'undefined': it is not 'true by observation', but it is also not necessarily 'false by observation'. The only clear statement can be that the situation is, 'after finite steps' with the given conditions, not yet 'observationally true'.
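Steps 5–9 above describe, in effect, a simple goal-directed transformation process: apply inference rules to a described situation until the description of the wanted state is included, or stop after finitely many steps with an 'undefined' outcome. A minimal sketch of such an inference mechanism (all names hypothetical, invented for this illustration):

```python
# Minimal sketch (hypothetical names) of the 'inference mechanism' of
# steps 5-9: transform a described situation with inference rules until
# the wanted state is included, or give up after finitely many steps.

def run_forecast(start: frozenset, goal: set, rules, max_steps: int = 10) -> str:
    """Return 'goal reached' if the rules produce a situation including
    the goal description; else 'undefined after finite steps'
    (neither 'true by observation' nor necessarily false)."""
    state = set(start)
    for _ in range(max_steps):
        if goal <= state:              # wanted state included in situation?
            return "goal reached"
        derived = set()
        for rule in rules:             # rule: set of expressions -> new ones
            derived |= rule(state)
        if derived <= state:           # nothing new derivable: stop early
            break
        state |= derived
    return "goal reached" if goal <= state else "undefined after finite steps"

# Toy rules over expression strings describing a situation.
def boil(state):
    return {"water is hot"} if "kettle is on" in state else set()

def brew(state):
    return {"tea is ready"} if "water is hot" in state else set()

print(run_forecast(frozenset({"kettle is on"}), {"tea is ready"}, [boil, brew]))
# -> goal reached
print(run_forecast(frozenset({"kettle is off"}), {"tea is ready"}, [boil, brew]))
# -> undefined after finite steps
```

Note that the failure case is reported as 'undefined', not 'false', mirroring point 9: exhausting the step limit does not falsify the forecast by observation.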

!!! — will be continued — !!!


Figure 4: A second view pointing to ‘science’ and ‘engineering’ in parallel embedded in a human ‘society’, which in turn is part of the ‘biosphere’ hosted on the ‘planet earth’.

— Will be continued !!! —


wkp-en := English Wikipedia

/* Often people argue against the usage of the Wikipedia encyclopedia as not 'scientific' because the 'content' of an entry in this encyclopedia can 'change'. This presupposes the 'classical view' of scientific texts as 'stable', which presupposes further that such a 'stable text' describes some 'stable subject matter'. But this view of 'steadiness' as the major property of 'true descriptions' is in no correspondence with real scientific texts! The reality of empirical science — even in special disciplines like 'physics' — is 'change'. Looking at Aristotle's view of nature, at Galileo Galilei, at Newton, at Einstein and many others, you will not find a 'single steady picture' of nature and science, and physics is only a very simple strand of science compared to the life sciences and many others. Thus Wikipedia is a real scientific encyclopedia, giving you the breadth of world knowledge with all its strengths and limits at once. For another, more general argument, see In Favour for Wikipedia */

[*1] Meaning operator '…': In this text (and in nearly all other texts of this author) the 'inverted comma' is used quite heavily. In everyday language this is not common. In some special languages (the theory of formal languages, programming languages, meta-logic) the inverted comma is used in some special way. In this text, which is primarily a philosophical text, the inverted comma sign is used as a 'meta-language operator' to raise the attention of the reader to the fact that the 'meaning' of the word enclosed in the inverted commas is 'text specific': in everyday language usage the speaker uses a word and assumes tacitly that his 'intended meaning' will be understood by the hearer of his utterance 'as it is'. And the speaker will adhere to this assumption until some hearer signals that her understanding is different. That such a difference is signaled is quite normal, because the 'meaning' which is associated with a language expression can be diverse, and a decision which one of these multiple possible meanings is the 'intended one' in a certain context is often a bit 'arbitrary'. Thus it can be — but need not be — a meta-language strategy to signal to the hearer (or here: the reader) that a certain expression in a communication is 'intended' with a special meaning which perhaps is not the commonly assumed one. Nevertheless, because the 'common meaning' is no 'clear and sharp subject', a 'meaning operator' with the inverted commas also does not have a very sharp meaning. But in the 'game of language' it is more than nothing 🙂

[*1b] That the mainstream 'is biased' is not an accident, not a 'strange state', not a 'failure'; it is the 'normal state', based on the deeper structure of how human actors are 'built' and 'genetically' and 'culturally' 'programmed'. Thus the challenge to 'survive' as part of the 'whole biosphere' is not a 'partial task' of solving a single problem, but in some sense the problem of how to 'shape the whole biosphere' in a way which enables a life in the universe for the time beyond that point where the sun turns into a 'red giant', whereby life will become impossible on the planet earth (some billion years ahead). [22] A remarkable text supporting this 'complex view of sustainability' can be found in Clark and Harvey, summarized at the end of the text. [23]

[*2] The meaning of the expression 'normal' is comparable to a wicked problem. In a certain sense we act in our everyday world 'as if there exists some standard' for what is assumed to be 'normal'. Look for instance at houses and buildings: to a certain degree parts of a house have a 'standard format' assuming 'normal people'. The whole traffic system and most parts of our 'daily life' follow certain 'standards' making 'planning' possible. But there exists a certain percentage of human persons who are 'different' compared to these introduced standards. We say that they have a 'handicap' compared to this assumed 'standard', but this so-called 'standard' is neither 100% true nor is the 'given real world' in its properties a '100% subject'. We have learned that 'properties of the real world' are distributed in a rather 'statistical manner' with different probabilities of occurrence. To 'find our way' in these varying occurrences we try to 'mark' the main occurrences as 'normal' to enable a basic structure for expectations and planning. Thus, if in this text the expression 'normal' is used, it refers to the 'most common occurrences'.

[*3] Thus we have here a ‘threefold structure’ embracing ‘perception events, memory events, and expression events’. Perception events represent ‘concrete events’; memory events represent all kinds of abstract events but they all have a ‘handle’ which maps to subsets of concrete events; expression events are parts of an abstract language system, which as such is dynamically mapped onto the abstract events. The main source for our knowledge about perceptions, memory and expressions is experimental psychology enhanced by many other disciplines.

[*4] Characterizing language expressions by meaning – the fate of any grammar: the sentence "… 'words' (= expressions) of a language which can activate such abstract meanings are understood as 'abstract words', 'general words', 'category words' or the like." points to a deep property of every ordinary language, which represents the real power of language but at the same time its great weakness too: expressions as such have no meaning. Hundreds, thousands, millions of words arranged in 'texts' and 'documents' can show some 'statistical patterns', and as such these patterns can give some hint which expressions occur 'how often' and in 'which combinations', but they can never give a clue to the associated meaning(s). For more than three thousand years humans have tried to describe ordinary language in a more systematic way called 'grammar'. Due to this radical gap between 'expressions' as 'observable empirical facts' and 'meaning constructs' hidden inside the brain, it was all the time a difficult job to 'classify' expressions as representing a certain 'type' of expression like 'nouns', 'predicates', 'adjectives', 'definite articles' and the like. Without recourse to the assumed associated meaning such a classification is not possible. On account of the fuzziness of every meaning, 'sharp definitions' of such 'word classes' were never and are not yet possible. One of the last big — perhaps the biggest ever — projects of a complete systematic grammar of a language was the grammar project of the 'Akademie der Wissenschaften der DDR' ('Academy of Sciences of the GDR') from 1981 with the title "Grundzüge einer deutschen Grammatik" ("Basic features of a German grammar"). A huge team of scientists worked together using many modern methods. But in the preface you can read that many important properties of the language are still not sufficiently well describable and explainable.
See: Karl Erich Heidolph, Walter Flämig, Wolfgang Motsch et al.: Grundzüge einer deutschen Grammatik. Akademie, Berlin 1981, 1028 Seiten.

[*5] Differing opinions about a given situation, manifested in uttered expressions, are a very common phenomenon in everyday communication. In some sense this is 'natural', can happen, and it should be no substantial problem to 'solve the riddle of being different'. But as you can experience, the ability of people to resolve the occurrence of different opinions is often quite weak. Culture as a whole suffers from this.

[1] Gerd Doeben-Henisch, 2022, From SYSTEMS Engineering to THEORY Engineering, see: At the time of citation this post was not yet finished, because there are other posts 'corresponding' with that post, which are also not yet finished. (Knowledge is a dynamic network of interwoven views …)

[1d] ‘usual science’ is the game of science without having a sustainable format like in citizen science 2.0.

[2] Science, see e.g. wkp-en:

Citation = "Science is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe.[1][2]"

Citation = “In modern science, the term “theory” refers to scientific theories, a well-confirmed type of explanation of nature, made in a way consistent with the scientific method, and fulfilling the criteria required by modern science. Such theories are described in such a way that scientific tests should be able to provide empirical support for it, or empirical contradiction (“falsify“) of it. Scientific theories are the most reliable, rigorous, and comprehensive form of scientific knowledge,[1] in contrast to more common uses of the word “theory” that imply that something is unproven or speculative (which in formal terms is better characterized by the word hypothesis).[2] Scientific theories are distinguished from hypotheses, which are individual empirically testable conjectures, and from scientific laws, which are descriptive accounts of the way nature behaves under certain conditions.”

Citation = “New knowledge in science is advanced by research from scientists who are motivated by curiosity about the world and a desire to solve problems.[27][28] Contemporary scientific research is highly collaborative and is usually done by teams in academic and research institutions,[29] government agencies, and companies.[30][31] The practical impact of their work has led to the emergence of science policies that seek to influence the scientific enterprise by prioritizing the ethical and moral development of commercial products, armaments, health care, public infrastructure, and environmental protection.”

[2b] History of science in wkp-en:

[3] Theory, see wkp-en:

Citation = “A theory is a rational type of abstract thinking about a phenomenon, or the results of such thinking. The process of contemplative and rational thinking is often associated with such processes as observational study or research. Theories may be scientific, belong to a non-scientific discipline, or no discipline at all. Depending on the context, a theory’s assertions might, for example, include generalized explanations of how nature works. The word has its roots in ancient Greek, but in modern use it has taken on several related meanings.”

[4] Scientific theory, see: wkp-en:

Citation = “In modern science, the term “theory” refers to scientific theories, a well-confirmed type of explanation of nature, made in a way consistent with the scientific method, and fulfilling the criteria required by modern science. Such theories are described in such a way that scientific tests should be able to provide empirical support for it, or empirical contradiction (“falsify“) of it. Scientific theories are the most reliable, rigorous, and comprehensive form of scientific knowledge,[1] in contrast to more common uses of the word “theory” that imply that something is unproven or speculative (which in formal terms is better characterized by the word hypothesis).[2] Scientific theories are distinguished from hypotheses, which are individual empirically testable conjectures, and from scientific laws, which are descriptive accounts of the way nature behaves under certain conditions.”

[4b] Empiricism in wkp-en:

[4c] Scientific method in wkp-en:

Citation = ”The scientific method is an empirical method of acquiring knowledge that has characterized the development of science since at least the 17th century (with notable practitioners in previous centuries). It involves careful observation, applying rigorous skepticism about what is observed, given that cognitive assumptions can distort how one interprets the observation. It involves formulating hypotheses, via induction, based on such observations; experimental and measurement-based statistical testing of deductions drawn from the hypotheses; and refinement (or elimination) of the hypotheses based on the experimental findings. These are principles of the scientific method, as distinguished from a definitive series of steps applicable to all scientific enterprises.[1][2][3]”


Citation = “The purpose of an experiment is to determine whether observations agree with or conflict with the expectations deduced from a hypothesis.[6] Experiments can take place anywhere from a garage to a remote mountaintop to CERN’s Large Hadron Collider. There are difficulties in a formulaic statement of method, however. Though the scientific method is often presented as a fixed sequence of steps, it represents rather a set of general principles.[7] Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always in the same order.[8][9]”

[5] Gerd Doeben-Henisch, “Is Mathematics a Fake? No! Discussing N. Bourbaki, Theory of Sets (1968) – Introduction”, 2022,

[6] Logic, see wkp-en:

[7] W. C. Kneale, The Development of Logic, Oxford University Press (1962)

[8] Set theory, in wkp-en:

[9] N. Bourbaki, Theory of Sets, 1968, with a chapter about structures, see:

[10] = [5]

[11] Ludwig Josef Johann Wittgenstein (1889 – 1951):

[12] Ludwig Wittgenstein, Philosophische Untersuchungen [PU], 1953; Philosophical Investigations [PI], translated by G. E. M. Anscombe. /* For more details see: */

[13] Wikipedia EN, Speech acts:

[14] While the world view constructed in a brain is ‘virtual’ compared to the ‘real world’ outside the brain (where the body outside the brain also functions as ‘real world’ in relation to the brain), the ‘virtual world’ in the brain functions for the brain mostly ‘as if it were the real world’. Only under certain conditions can the brain realize a ‘difference’ between the triggering outside real world and the ‘virtual substitute for the real world’: you want to use your bicycle ‘as usual’ and then suddenly you have to notice that it is not at the place where it ‘should be’. …

[15] Propositional Calculus, see wkp-en:

[16] Boolean algebra, see wkp-en:

[17] Boolean (or propositional) Logic: As one can see in the mentioned articles of the English Wikipedia, the term ‘boolean logic’ is not common. The more logic-oriented authors prefer the term ‘propositional calculus’ [15], and the more math-oriented authors prefer the term ‘boolean algebra’ [16]. In the view of this author the general perspective is that of ‘language use’ with ‘logical inference’ as the leading idea. Therefore the main topic is ‘logic’, in the case of propositional logic reduced to a simple calculus whose similarity with ‘normal language’ is largely ‘reduced’ to a play with abstract names and operators. Recommended: the historical comments in [15].
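To illustrate the remark in [17] that propositional logic reduces to ‘a play with abstract names and operators’, here is a minimal sketch (not from the cited sources; the function names are choices of this sketch): a formula over abstract names like ‘p’ and ‘q’ is ‘logically valid’ exactly when it evaluates to true under every assignment of truth values.

```python
# Minimal sketch: propositional logic as a calculus of abstract names and operators.
from itertools import product

def implies(a, b):
    # Material implication: 'a -> b' is false only when a is true and b is false.
    return (not a) or b

def is_tautology(formula, num_names):
    """Return True if `formula` (a function over truth values) is true
    under every assignment of True/False to its `num_names` variables."""
    return all(formula(*values)
               for values in product([True, False], repeat=num_names))

# Modus ponens as a tautology: ((p -> q) and p) -> q
mp = lambda p, q: implies(implies(p, q) and p, q)
print(is_tautology(mp, 2))                  # True: valid under all 4 assignments
print(is_tautology(lambda p, q: p, 2))      # False: 'p' alone is not a tautology
```

This brute-force check over all assignments is exactly the ‘reduced’ play with names and operators the footnote describes; no appeal to the meaning of ‘p’ or ‘q’ is needed.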

[18] Clearly, thinking alone cannot necessarily induce a possible state which along the time line will become a ‘real state’. There are numerous factors ‘outside’ individual thinking which act as ‘driving forces’ pushing real states to change. But thinking can in principle synchronize with the thinking of other individuals and can, in some cases, get a ‘grip’ on real factors causing real changes.

[19] This kind of knowledge is not delivered by brain science alone but primarily by experimental (cognitive) psychology, which examines observable behavior and ‘interprets’ this behavior with functional models within an empirical theory.

[20] Predicate Logic or First-Order Logic or … see: wkp-en:

[21] Gerd Doeben-Henisch, In Favour of Wikipedia, 31 July 2022

[22] The sun, see wkp-en: (accessed 8 Aug 2022)

[23] Clark, William C., and Alicia G. Harley. 2020. “Sustainability Science: Toward a Synthesis.” Annual Review of Environment and Resources 45 (1): 331–86, CC BY-SA 4.0.

[24] Sustainability in wkp-en:

[25] Sustainable Development in wkp-en:

[26] Marope, P.T.M.; Chakroun, B.; Holmes, K.P. (2015). Unleashing the Potential: Transforming Technical and Vocational Education and Training (PDF). UNESCO. pp. 9, 23, 25–26. ISBN 978-92-3-100091-1.

[27] SDG 4 in wkp-en:

[28] Thomas Rid, Rise of the Machines. A Cybernetic History, W. W. Norton & Company, 2016, New York – London

[29] Doeben-Henisch, G., 2006, Reducing Negative Complexity by a Semiotic System In: Gudwin, R., & Queiroz, J., (Eds). Semiotics and Intelligent Systems Development. Hershey et al: Idea Group Publishing, 2006, pp.330-342

[30] Döben-Henisch, G.,  Reinforcing the global heartbeat: Introducing the planet earth simulator project, In M. Faßler & C. Terkowsky (Eds.), URBAN FICTIONS. Die Zukunft des Städtischen. München, Germany: Wilhelm Fink Verlag, 2006, pp.251-263

[31] The idea that individual disciplines are not good enough for the ‘whole of knowledge’ is expressed in a clear way in a video of the theoretical physicist and philosopher Carlo Rovelli: Carlo Rovelli on physics and philosophy, June 1, 2022, video from the Perimeter Institute for Theoretical Physics. Theoretical physicist, philosopher, and international bestselling author Carlo Rovelli joins Lauren and Colin for a conversation about the quest for quantum gravity, the importance of unlearning outdated ideas, and a unique way to get out of a speeding ticket.

[] By Azote for Stockholm Resilience Centre, Stockholm University, CC BY 4.0

[] Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) in wkp-en, URL:

[] IPBES (2019): Global assessment report on biodiversity and ecosystem services of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. E. S. Brondizio, J. Settele, S. Díaz, and H. T. Ngo (editors). IPBES secretariat, Bonn, Germany. 1148 pages.

[] Michaelis, L. & Lorek, S. (2004). “Consumption and the Environment in Europe: Trends and Futures.” Danish Environmental Protection Agency. Environmental Project No. 904.

[] Pezzey, John C. V.; Michael A., Toman (2002). “The Economics of Sustainability: A Review of Journal Articles” (PDF). Archived from the original (PDF) on 8 April 2014. Retrieved 8 April 2014.

[] World Business Council for Sustainable Development (WBCSD)  in wkp-en:

[] Sierra Club in wkp-en:

[] Herbert Bruderer, Where is the Cradle of the Computer?, June 20, 2022, URL: (accessed: July 20, 2022)

[] UN Secretary-General; World Commission on Environment and Development, 1987, Report of the World Commission on Environment and Development: note by the Secretary-General, (accessed: July 20, 2022) (a more readable format: )

/* Comment: Gro Harlem Brundtland (Norway) has been the main coordinator of this document */

[] Chaudhuri, S., et al. Neurosymbolic programming. Foundations and Trends in Programming Languages 7, 158-243 (2021).

[] Noam Chomsky, “A Review of B. F. Skinner’s Verbal Behavior”, in Language, 35, No. 1 (1959), 26-58.(Online:, accessed: July 21, 2022)

[] Churchman, C. West (December 1967). “Wicked Problems”. Management Science. 14 (4): B-141–B-146. doi:10.1287/mnsc.14.4.B141.

[-] Yen-Chia Hsu, Illah Nourbakhsh, “When Human-Computer Interaction Meets Community Citizen Science“, Communications of the ACM, February 2020, Vol. 63 No. 2, Pages 31-34, 10.1145/3376892,

[] Yen-Chia Hsu, Ting-Hao ‘Kenneth’ Huang, Himanshu Verma, Andrea Mauri, Illah Nourbakhsh, Alessandro Bozzon, Empowering local communities using artificial intelligence, DOI:, CellPress, Patterns, Volume 3, Issue 3, 100449, March 11, 2022

[] Nello Cristianini, Teresa Scantamburlo, James Ladyman, The social turn of artificial intelligence, in: AI & SOCIETY,

[] Carl DiSalvo, Phoebe Sengers, and Hrönn Brynjarsdóttir, Mapping the landscape of sustainable hci, In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’10, page 1975–1984, New York, NY, USA, 2010. Association for Computing Machinery.

[] Claude Draude, Christian Gruhl, Gerrit Hornung, Jonathan Kropf, Jörn Lamla, Jan Marco Leimeister, Bernhard Sick, Gerd Stumme, Social Machines, in: Informatik Spektrum,

[] EU: High-Level Expert Group on AI (AI HLEG), A definition of AI: Main capabilities and scientific disciplines, European Commission communications published on 25 April 2018 (COM(2018) 237 final), 7 December 2018 (COM(2018) 795 final) and 8 April 2019 (COM(2019) 168 final). For our definition of Artificial Intelligence (AI), please refer to our document published on 8 April 2019:

[] EU: High-Level Expert Group on AI (AI HLEG), Policy and investment recommendations for trustworthy Artificial Intelligence, 2019,

[] European Union. Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation); (effective from 25 May 2018) (accessed: 26 Feb 2022)

[] C.S. Holling. Resilience and stability of ecological systems. Annual Review of Ecology and Systematics, 4(1):1–23, 1973

[] J.A. Jacko and A. Sears, Eds., The Human-Computer Interaction Handbook. Fundamentals, Evolving Technologies, and emerging Applications. 1st edition, 2003.

[] LeCun, Y., Bengio, Y., & Hinton, G. Deep learning. Nature 521, 436-444 (2015).

[] Lenat, D. What AI can learn from Romeo & Juliet. Forbes (2019)

[] Pierre Lévy, Collective Intelligence: Mankind’s Emerging World in Cyberspace, Perseus Books, Cambridge (MA), 1997 (translated from the French edition 1994 by Robert Bonnono)

[] Lexikon der Nachhaltigkeit, ‘Starke Nachhaltigkeit‘, (accessed: July 21, 2022)

[] Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, and Toby Walsh. “Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report.” Stanford University, Stanford, CA, September 2021. Doc:

[] Markus Luczak-Roesch, Kieron O’Hara, Ramine Tinati, Nigel Shadbolt, Socio-technical Computation, CSCW’15 Companion, March 14–18, 2015, Vancouver, BC, Canada, ACM 978-1-4503-2946-0/15/03,

[] Marcus, G.F., et al. Overregularization in language acquisition. Monographs of the Society for Research in Child Development 57 (1998).

[] Gary Marcus and Ernest Davis, Rebooting AI, Pantheon, Sep 10, 2019, 288 pages

[] Gary Marcus, Deep Learning Is Hitting a Wall. What would it take for artificial intelligence to make real progress?, March 10, 2022, URL: (accessed: July 20, 2022)

[] Kathryn Merrick. Value systems for developmental cognitive robotics: A survey. Cognitive Systems Research, 41:38 – 55, 2017

[] Illah Reza Nourbakhsh and Jennifer Keating, AI and Humanity, MIT Press, 2020 /* An examination of the implications for society of rapidly advancing artificial intelligence systems, combining a humanities perspective with technical analysis; includes exercises and discussion questions. */

[] Olazaran, M. , A sociological history of the neural network controversy. Advances in Computers 37, 335-425 (1993).

[] Karl Popper, „A World of Propensities“, in: Karl Popper, „A World of Propensities“, Thoemmes Press, Bristol (lecture 1988, reprinted in slightly expanded form 1990, repr. 1995)

[] Karl Popper, „Towards an Evolutionary Theory of Knowledge“, in: Karl Popper, „A World of Propensities“, Thoemmes Press, Bristol (lecture 1989, printed 1990, repr. 1995)

[] Karl Popper, „All Life is Problem Solving“, article, originally a 1991 lecture in German, first published in the German book „Alles Leben ist Problemlösen“ (1994), then in the English book „All Life is Problem Solving“, 1999, Routledge, Taylor & Francis Group, London – New York

[] Rittel, Horst W.J.; Webber, Melvin M. (1973). “Dilemmas in a General Theory of Planning” (PDF). Policy Sciences. 4 (2): 155–169. doi:10.1007/bf01405730. S2CID 18634229. Archived from the original (PDF) on 30 September 2007. [Reprinted in Cross, N., ed. (1984). Developments in Design Methodology. Chichester, England: John Wiley & Sons. pp. 135–144.]

[] Ritchey, Tom (2013) [2005]. “Wicked Problems: Modelling Social Messes with Morphological Analysis”. Acta Morphologica Generalis 2 (1). ISSN 2001-2241. Retrieved 7 October 2017.

[] Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 4th US ed., 2021, URL: (accessed: July 20, 2022)

[] A. Sears and J.A. Jacko, Eds., The Human-Computer Interaction Handbook. Fundamentals, Evolving Technologies, and emerging Applications. 2nd edition, 2008.

[] Skaburskis, Andrejs (19 December 2008). “The origin of ‘wicked problems’”. Planning Theory & Practice 9 (2): 277–280. doi:10.1080/14649350802041654. At the end of Rittel’s presentation, West Churchman responded with that pensive but expressive movement of voice that some may well remember, ‘Hmm, those sound like “wicked problems.”‘

[] Tonkinwise, Cameron (4 April 2015). “Design for Transitions – from and to what?” Retrieved 9 November 2017.

[] Thoppilan, R., et al. LaMDA: Language models for dialog applications. arXiv 2201.08239 (2022).

[] Wurm, Daniel; Zielinski, Oliver; Lübben, Neeske; Jansen, Maike; Ramesohl, Stephan (2021): Wege in eine ökologische Machine Economy: Wir brauchen eine ‘Grüne Governance der Machine Economy’, um das Zusammenspiel von Internet of Things, Künstlicher Intelligenz und Distributed Ledger Technology ökologisch zu gestalten, Wuppertal Report, No. 22, Wuppertal Institut für Klima, Umwelt, Energie, Wuppertal

[] Aimee van Wynsberghe, Sustainable AI: AI for sustainability and the sustainability of AI, in: AI and Ethics (2021) 1:213–218, see:

[-] Sarah West, Rachel Pateman, 2017, “How could citizen science support the Sustainable Development Goals?“, SEI Stockholm Environment Institute, 2017, see:

[] Wikipedia, ‘Weak and strong sustainability’, (accessed: July 21, 2022)