Category Archives: empirical

LOGIC. The Theory Of Inquiry (1938) by John Dewey – An oksimo Review – Part 3

eJournal: uffmm.org, ISSN 2567-6458, Aug 19-20, 2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

SCOPE

In the uffmm review section the different papers and books are discussed from the point of view of the oksimo paradigm. [2] Here the author reads the book “Logic. The Theory Of Inquiry” by John Dewey, 1938. [1]

Part I – Chapter I

THE PROBLEM OF LOGICAL SUBJECT-MATTER

In this chapter Dewey tries to characterize the subject-matter of logic. Looking back from the year 1938 one surveys at least 2500 years of thought dealing in one sense or another with what has been called ‘logic’. His rough judgment is that the participants of the language game “proximate subject-matter of logic” seem to be widely in agreement about what it is, whereas in the case of the language game “ultimate subject-matter of logic” there seem to exist different or even conflicting opinions.(cf. p.8)

Logic as a philosophic theory

Dewey illustrates the variety of views about the ultimate subject-matter of logic by citing several different positions.(cf. p.10) Having done this, Dewey puts all these views together into a kind of ‘meta-view’, stating that logic “is a branch of philosophic theory and therefore can express different philosophies.”(p.10) But exercising philosophy “itself must satisfy logical requirements.”(p.10)

And in general he thinks that  “any statement that logic is so-and-so, can … be offered only as a hypothesis and an indication of a position to be developed.”(p.11)

Thus we see here that Dewey declares the ultimate logical subject-matter to be grounded in some philosophical perspective which should be able “to order and account for what has been called the proximate subject-matter.”(p.11) But the philosophical theory “must possess the property of verifiable existence in some domain, no matter how hypothetical it is in reference to the field in which it is proposed to apply it.”(p.11) This is an interesting point because it raises the question in which sense a philosophical foundation of logic can offer verifiable existence.

Inquiry

Dewey gives a hint towards a possible answer by stating “that all logical forms … arise within the operation of inquiry and are concerned with control of inquiry so that it may yield warranted assertions.”(p.11) While the inquiry as a process is real, the emergence of logical forms has to be located in the different kinds of interactions between the researchers and some additional environment within the process. Some verifiable reality should be involved here, which is reflected in the accompanying language expressions used by the researchers for communication. This implies further that the language expressions used — which can even talk about other language expressions — are associated with propositions which can be shown to be valid.[4]

And — with some interesting similarity to the modern concept of ‘diversity’ — he claims that, in order to avoid any kind of dogmatism, “any hypothesis, no matter how unfamiliar, should have a fair chance and be judged by its results.”(p.12)

While Dewey is quite clear in using the concept of inquiry as a process leading to results which depend on the starting point and the realized processes, he additionally mentions concepts like ‘methods’, ‘norms’, ‘instrumentalities’, and ‘procedures’, but these concepts remain rather fuzzy. (cf. p.14f)

Warranted assertibility

Part of an inquiry are the individual actors, who have psychological states like ‘doubt’ or ‘belief’ or ‘understanding’ (knowledge).(p.15) But from these concepts nothing follows about the needed logical forms or rules.(cf. p.16f) Instead Dewey repeats his requirement with the words: “In scientific inquiry, the criterion of what is taken to be settled, or to be knowledge, is being so settled that it is available as a resource in further inquiry; not being settled in such a way as not to be subject to revision in further inquiry.”(p.17) And therefore, instead of using fuzzy concepts like (subjective) ‘doubt’, ‘belief’ or ‘knowledge’, he prefers to use the concept “warranted assertibility”. This says not only that you can assert something, but that you can assert it with ‘warranty’, based on the known process which has led to this result.(cf. p.10)

Introducing rationality

At this point the story takes a first ‘new turn’ because Dewey now introduces a first characterization of the concept ‘rationality’ (which is for him synonymous with ‘reasonableness’). While the basic terms of the descriptions in an inquiry process are at least partially descriptive (empirical) expressions, they are not completely “devoid of rational standing”.(cf. p.17) Furthermore, the classification of final situations in an inquiry as ‘results’, which can be understood as ‘confirmations’ of initial assumptions, questions or problems, is only given in relations that talk about the whole process and thereby about matters which are not rooted in limited descriptive facts alone. Or, as Dewey states it, “relations which exist between means (methods) employed and conclusions attained as their consequence.”(p.17) Therefore the following practical principle holds: “It is reasonable to search for and select the means that will, with the maximum probability, yield the consequences which are intended.”(p.18) And: “Hence,… the descriptive statement of methods that achieve progressively stable beliefs, or warranted assertibility, is also a rational statement in case the relation between them as means and assertibility as consequence is ascertained.”(p.18)

Suggested framework for ‘rationality’

Although Dewey does not exactly define the format of the relations between selected means and successful consequences, it seems ‘intuitively’ clear that the researchers have to have some ‘idea’ of such a relation, which then serves as a new ‘ground for abstract meaning’ in their ‘thinking’. Within the oksimo paradigm [2] one could describe the problem at hand as follows:

  1. The researchers participating in an inquiry process have perceptions of the process.
  2. They have associated cognitive processing as well as language processing, where both are bi-directional mapped into each other, but not 1-to-1.
  3. They can describe the individual properties, objects, actors, actions etc. which are part of the process in temporal order.
  4. They can with their cognitive processing build more abstract concepts based on these primary concepts.
  5. They can encode these more abstract cognitive structures and processes in propositions (and expressions) which correspond to these more abstract cognitive entities.
  6. They can construct rule-like cognitive structures (within the oksimo paradigm  called ‘change rules‘) with corresponding propositions (and expressions).
  7. They can evaluate these change rules as to whether they describe ‘successful’ consequences.
  8. Change rules with successful consequences can become building blocks for those rules which can be used for inferences/ deductions.

Thus one can look at the formal aspect of relations which can be generated by an inference mechanism, but such a formal inference need not yield results which are empirically sound. Whether this is the case is a task of its own, dealing with the encoded meaning of the inferred expressions and the outcome of the inquiry.(cf. p.19,21)

Limitations of formal logic

From this it follows that the concrete logical operators, as part of the inference machinery, have to be qualified by their role within the more general relation between goals, means and success. The standard operators of modern formal logic are only a few, and they are designed for a domain with a meaning space of only two objects: ‘being true’ and ‘being false’. In the real world of everyday experience we have a nearly infinite space of meanings. To describe this large everyday meaning space the standard logic of today is too limited. Normal language shows us how we can generate as many operators as we need just by using normal language. Inferring operators directly from normal language is not only more powerful but at the same time much, much easier to apply.[2]

Inquiry process – re-formulated

Let us fix a first hypothesis here. The ideas of Dewey can be re-framed with the following assumptions:

  1. By doing an inquiry process with some problem (question, …) at the start and proceeding with clearly defined actions, we can reach final states which either are classified as a positive answer (success) to the problem of the beginning or not.
  2. If there exists a repeatable inquiry process with positive answers the whole process can be understood as a new ‘recipe’ (= complex operation, procedure, complex method, complex rule,law,  …) how to get positive answers for certain kinds of questions.
  3. If a recipe is available from preceding experiments one can use this recipe to ‘plan’ a new process to reach a certain ‘result’ (‘outcome’, ‘answer’, …).
  4. The ratio of failures to the total number of trials in applying a recipe can be used as a measure of the probability and quality of the recipe.
  5. The description of a recipe needs a meta-level of ‘looking at’ the process. This meta-level description is sound (‘valid’) through its interaction with reality, but as such the description includes some abstraction which enables a minimal rationality.

Habit

At this point Dewey introduces another term, ‘habit’, which is not really very clear and which does not really explain more; but — for whatever reason — he introduces it anyway.(cf. p.21f)

The intuition behind the term ‘habit’ is that, independent of the language dimension, there exists the real process driven by real actors doing real actions. It is further — tacitly — assumed that these real actors have some ‘internal processing’ which is ‘causing’ the observable actions. If these observable actions can be understood/ interpreted as an ‘inquiry process’ leading to some ‘positive answers’, then Dewey calls the underlying processes all together a ‘habit’: “Any habit is a way or manner of action, not a particular act or deed.”(p.20) If one observes such a real process one can describe it with language expressions; then it gets the format of a ‘rule’, a ‘principle’ or a ‘law’.(cf. p.20)

If one threw away the concept ‘habit’, nothing would be missing. Whichever internal processes are assumed, a description of these will be bound to their observability and will depend on some minimal language mechanisms. These must be explained. Everything beyond them is not necessary to explain rational behavior.[5]

At the end of chapter I Dewey points to some additional aspects in the context of logic. One aspect is the progressive character of logic as a discipline in the course of history.(cf. p.22)[6]

Operational

Another aspect is introduced by his statement “The subject-matter of logic is determined operationally.”(p.22) And he characterizes the meaning of the term ‘operational’ as representing the “conditions by which subject-matter is (1) rendered fit to serve as means and (2) actually functions as such means in effecting the objective transformation which is the end of the inquiry.”(p.22) Thus, again, the concept of inquiry is the general framework organizing means to get to a successful end. This inquiry has an empirical material (or ‘existential‘) basis which additionally can be described symbolically. The material basis can be characterized by parts of it called ‘means’ which are necessary to enable objective transformations leading to the end of the inquiry.(cf. p.22f)

One has to keep in mind at this point that the existential (empirical) basis of every inquiry process should not mislead one into the view that inquiry could work without a symbolic dimension! Except for extremely simple processes, every process needs, for its coordination between different brains, symbolic communication which has to use certain expressions of a language. Thus the cognitive concepts of the empirical means and the rules followed can only get ‘fixed’ and made ‘clear’ through the use of accompanying symbolic expressions.

Postulational logic

Another aspect mentioned by Dewey is given by the statement: “Logical forms are postulational.“(p.24) Embedded in the framework of an inquiry, Dewey identifies requirements (demands, postulates, …) at the beginning of the inquiry which have to be fulfilled through the inquiry process. And Dewey sees such requirements as part of the inquiry process itself.(cf. p.24f) If during such an inquiry process some kinds of logical postulates are used, they have no right of their own independent of the real process! They can only be used as long as they are in agreement with the real process. In the words of Dewey: “A postulate is thus neither arbitrary nor externally a priori. It is not the former because it issues from the relation of means to the end to be reached. It is not the latter, because it is not imposed upon inquiry from without, but is an acknowledgement of that to which the undertaking of inquiry commits us.”(p.26)

Logic naturalistic

Dewey comments further on the topic that “Logic is a naturalistic theory.“(p.27) In some sense this is trivial, because humans are biological systems and therefore every process is a biological (natural) process, logical thinking as part of it included.

Logic is social

Dewey mentions further that “Logic is a social discipline.“(p.27) This follows from the fact that “man is naturally a being that lives in association with others in communities possessing language, and therefore enjoying a transmitted culture. Inquiry is a mode of activity that is socially conditioned and that has cultural consequences.”(p.27) And therefore: “Any theory of logic has to take some stand on the question whether symbols are ready-made clothing for meanings that subsist independently, or whether they are necessary conditions for the existence of meanings — in terms often used, whether language is the dress of ‘thought’ or is something without which ‘thought’ cannot be.”(27f) Dewey also puts this into the following general formula: “…in every interaction that involves intelligent direction, the physical environment is part of a more inclusive social or cultural environment.”(p.28) The central means of culture is language, which “is the medium in which culture exists and through which it is transmitted. Phenomena that are not recorded cannot be even discussed. Language is the record that perpetuates occurrences and renders them amenable to public consideration. On the other hand, ideas or meanings that exist only in symbols that are not communicable are fantastic beyond imagination”.(p.28)

Autonomous logic

The final aspect about logic mentioned by Dewey concerns the position which states that “Logic is autonomous“.(p.29) Although the position of the autonomy of logic — in various varieties — is very common in history, Dewey argues against it. The main point is — as already discussed — that the open framework of an inquiry provides the main point of reference, and logic must fit into this framework.[7]

SOME DISCUSSION

For a discussion of these ideas of Dewey see the next upcoming post.

COMMENTS

[1] John Dewey, Logic. The Theory Of Inquiry, New York, Henry Holt and Company, 1938 (see: https://archive.org/details/JohnDeweyLogicTheTheoryOfInquiry with several formats; I am using the kindle (= mobi) format: https://archive.org/download/JohnDeweyLogicTheTheoryOfInquiry/%5BJohn_Dewey%5D_Logic_-_The_Theory_of_Inquiry.mobi , which is very convenient for working directly with the text. Additionally I am using the free reader ‘foliate’ under ubuntu 20.04: https://github.com/johnfactotum/foliate/releases/). The page numbers in the text of the review — like (p.13) — are the page numbers of the ebook as indicated in the ebook-reader foliate. (There exists no kindle version for linux, although amazon couldn’t work without linux servers!)

[2] Gerd Doeben-Henisch, 2021, uffmm.org, THE OKSIMO PARADIGM
An Introduction (Version 2), https://www.uffmm.org/wp-content/uploads/2021/03/oksimo-v1-part1-v2.pdf

[3] The new oksimo paradigm does exactly this. See oksimo.org

[4] For the conceptual framework for the term ‘proposition’ see the preceding part 2, where the author describes the basic epistemological assumptions of the oksimo paradigm.

[5] Clearly it is possible and desirable to extend our knowledge about the internal processing of human persons. This is mainly the subject-matter of biology, brain research, and physiology. Other disciplines are close by, like psychology, ethology, linguistics, phonetics etc. The main problem with all these disciplines is that they are methodologically disconnected: a really integrated theory is not yet possible and does not exist. Examples of integration like neuro-psychology are far from what they should be.

[6] A very good overview of the development of logic can be found in the book The Development of Logic by William and Martha Kneale. First published 1962, with many successive corrected reprints, by Clarendon Press, Oxford (and other cities).

[7] Today we have the general problem that formal logic has developed the concept of logical inference in so many divergent directions that it is no simple task to evaluate all these different ‘kinds of logic’.

MEDIA

This is another unplugged recording dealing with the main idea of Dewey in chapter I: what logic is and how logic relates to scientific inquiry.

HMI Analysis for the CM:MI paradigm. Part 3. Actor Story and Theories

Integrating Engineering and the Human Factor (info@uffmm.org)
eJournal uffmm.org ISSN 2567-6458, March 2, 2021,
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: March 2, 2021 13:59h (Minor corrections)

HISTORY

As described in the uffmm eJournal  the wider context of this software project is an integrated  engineering theory called Distributed Actor-Actor Interaction [DAAI] further extended to the Collective Man-Machine Intelligence [CM:MI] paradigm.  This document is part of the Case Studies section.

HMI ANALYSIS, Part 3: Actor Story and  Theories

Context

This text is preceded by the following texts:

Introduction

Having a vision is that moment where something really new in the whole universe gets an initial status in some real brain, which can enable other neural events, which can possibly be translated into bodily events, which finally can change the body-external outside world. If this possibility is turned into reality, then the outside world has been changed.

When human persons (groups of homo sapiens specimens) as experts — here acting as stakeholders and as intended users at once, but in different roles! — have stated a problem and a vision document, then they have to translate these inevitably more fuzzy than clear ideas into the concrete terms of an everyday world, into something which can really work.

To enable real cooperation the experts have to generate a symbolic description of their vision (called a specification) — using an everyday language, possibly enhanced by special expressions — in such a way that it can become clear to the whole group which kinds of real events, actions and processes are intended.

In the general case an engineering specification describes concrete forms of entanglements of human persons which enable  these human persons to cooperate   in a real situation. Thereby the translation of  the vision inside the brain  into the everyday body-external reality happens. This is the language of life in the universe.

WRITING A STORY

Elaborating a usable specification can metaphorically be understood as the writing of a new story: which kinds of actors will do something in certain situations, what kinds of other objects, instruments etc. will be used, what kinds of intrinsic motivations and experiences are pushing individual actors, what are possible outcomes of situations with certain actors, which kind of cooperation is helpful, and the like. Such a story is called here an Actor Story [AS].

COULD BE REAL

An Actor Story must be written in such a way that all participating experts can understand the language of the specification, so that the content, the meaning of the specification, is either decidably real or can eventually become real. At least the starting point of the story should be classifiable as being decidably actually real. What it means to be decidably actually real has to be defined and agreed upon by the participating experts before they start writing the Actor Story.

ACTOR STORY [AS]

An Actor Story assumes that the described reality is classifiable as a set of situations (states), and a situation as part of the Actor Story — abbreviated: situationAS — is understood as a set of expressions of some everyday language. Every expression which is part of a situationAS can be decided to be real (= true) in the understood real situation.

If the understood real situation is changing (by some event), then the describing situationAS has to be changed too; either some expressions have to be removed or have to be added.

Every kind of change in the real situation S* has to be represented in the actor story with the situationAS S symbolically in the format of a change rule:

X: If condition C is satisfied in S, then with probability π add Eplus to S and remove Eminus from S.

or as a formula:

S’ = S + Eplus – Eminus   (with probability π)

This reads as follows: If there is a situationAS S and there is a change rule X, then you can apply this change rule X with probability π to S if the condition of X is satisfied in S. In that case you have to add Eplus to S and remove Eminus from S. The result of these operations is the new (successor) state S’.

The expression C is satisfied in S means that all elements of C are also elements of S, written as C ⊆ S. The expression add Eplus to S means that the set Eplus is united with the set S, written as Eplus ∪ S (or here: Eplus + S). The expression remove Eminus from S means that the set Eminus is subtracted from the set S, written as S – Eminus.

The concept of applying a change rule X to a given state S, resulting in S’, is logically a kind of derivation. Given S and X, you derive the new S’ by applying X. One can write this as S,X ⊢X S’. The ‘meaning’ of the sign ⊢ is explained above.

Because every successor state S’ can again become a given state S to which change rules X can be applied — written shortly as X(S)=S’, X(S’)=S”, … — the repeated application of change rules X can generate a whole sequence of states, written as SQ(S,X) = <S’, S”, … Sgoal>.
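To make this set-theoretic reading concrete, the following is a minimal sketch in Python (an illustration only, not part of the oksimo software; the names State, ChangeRule, satisfied, apply_rule and derive_sequence are assumptions of this sketch). A state S is modelled as a frozenset of everyday-language expressions; a change rule X carries the sets C, Eplus, Eminus and the probability π:

import random
from dataclasses import dataclass

State = frozenset  # a situationAS: a set of everyday-language expressions

@dataclass(frozen=True)
class ChangeRule:
    condition: frozenset               # C
    e_plus: frozenset = frozenset()    # Eplus
    e_minus: frozenset = frozenset()   # Eminus
    pi: float = 1.0                    # probability that the rule fires

def satisfied(rule, s):
    # "C is satisfied in S" means: C is a subset of S (C ⊆ S)
    return rule.condition <= s

def apply_rule(rule, s):
    # one derivation step: S' = (S + Eplus) - Eminus
    return State((s | rule.e_plus) - rule.e_minus)

def derive_sequence(s, rules, steps):
    # repeated application of change rules, generating SQ(S,X) = <S', S'', ...>
    seq = []
    for _ in range(steps):
        candidates = [x for x in rules if satisfied(x, s)]
        if not candidates:
            break
        rule = random.choice(candidates)   # nondeterministic choice of a rule
        if random.random() <= rule.pi:     # the chosen rule fires with probability pi
            s = apply_rule(rule, s)
        seq.append(s)
    return seq

# Example: one change rule turning a dry street wet while it is raining
s0 = State({"It is raining.", "The street is dry."})
x1 = ChangeRule(condition=frozenset({"It is raining."}),
                e_plus=frozenset({"The street is wet."}),
                e_minus=frozenset({"The street is dry."}))
print(derive_sequence(s0, [x1], steps=1))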

To realize such a derivation in the real world, outside of the thinking of the experts, one needs a machine, a computer — formally an automaton — which can read S and X documents and can then compute the derivation leading to S’. An automaton which does such a job is often called a simulator [SIM], abbreviated here as ∑. We could then write, with more information:

S,X ⊢∑ S’

This reads: given a state S (a set of expressions) and a set X of change rules, we can derive with an actor story simulator ∑ a successor state S’.

A Model M=<S,X>

In this context of a set S and a set of change rules X we can speak of a model M which is defined by these two sets.

A Theory T=<M,∑>

Combining a model M with an actor story simulator enables a theory T which allows a set of derivations based on the model, written as SQ(S,X,⊢) = <S’, S”, … Sgoal>. Every derived final state Sgoal in such a derivation is called a theorem of T.

An Empirical Theory Temp

An empirical theory Temp is possible if there exists a theory T together with a group of experts who use this theory and who can interpret the expressions used in the theory T by their built-in meaning functions in such a way that they can always decide whether the expressions are related to a real situation or not.

Evaluation [ε]

If one generates an Actor Story Theory [TAS] then it can be of practical importance to get some measure of how good this theory is. Because measurement is always an operation of comparison between the subject x to be measured and some agreed standard s, one has to clarify which kind of standard for being good is available. In the general case the only possible source of standards are the experts themselves. In the context of an Actor Story the experts have agreed on some vision [V] which they consider to be a better state than a given state S classified as a problem [P]. These assumptions allow a possible evaluation of a given state S in the ‘light’ of an agreed vision V as follows:

ε: V × S —> [0 %, 100 %]
ε(V,S) = 100 · |V ∩ S| / |V|

This reads as follows: the evaluation ε maps the sets V and S onto the number of elements of V that are included in S, expressed as a percentage of the total number of elements of V. Thus if no element of V is included in the set S then 0% of the vision is realized; if all elements are included, then 100%, etc. The more ‘fine grained’ the set V is, the more ‘fine grained’ the evaluation can be.
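A minimal sketch of this evaluation in Python (an assumption for illustration; the function name evaluate is not prescribed by the oksimo software), treating the vision V and the state S as sets of expressions:

def evaluate(vision, state):
    # percentage of the vision expressions V that are contained in the state S
    if not vision:
        return 100.0
    return 100.0 * len(vision & state) / len(vision)

# Example: two of four vision expressions are realized in the state
V = frozenset({"v1", "v2", "v3", "v4"})
S = frozenset({"v1", "v3", "some other fact"})
print(evaluate(V, S))   # 50.0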

An Evaluated Theory Tε=<M,∑,ε>

If one combines the concept of a theory T with the concept of evaluation ε, then one can use the evaluation in combination with the derivation in such a way that every state in a derivation SQ(S,X,⊢) = <S’, S”, … Sgoal> is additionally evaluated; thus one gets sequences of pairs as follows:

SQ(S,X,⊢∑,ε) = <(S’,ε(V,S’)), (S”,ε(V,S”)), …, (Sgoal, ε(V,Sgoal))>

In the ideal case Sgoal is evaluated as 100% ‘good’. In real cases 100% is only an ideal value which usually will only be approximated up to some threshold.

An Evaluated Theory Tε with Algorithmic Intelligence Tε,α=<M,∑,ε,α>

Because every theory defines a so-called problem space, which is here enhanced by some evaluation function, one can add an additional operation α (realized by an algorithm) which can repeat the simulator-based derivations, enhanced with the evaluations, in order to identify those sets of theorems which qualify as the best theorems according to some given criteria. This operation α is here called the algorithmic intelligence of an actor story [αAS]. The existence of such an algorithmic intelligence of an actor story [αAS] allows the introduction of another derivation concept:

S,X ⊢∑,ε,α S* ⊆  S’

This reads as follows: Given a set S and a set X an evaluated theory with algorithmic intelligence Tε,α can derive a subset S* of all possible theorems S’ where S* matches certain given criteria within V.
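As a purely illustrative stand-in for such an operation α (not the actual oksimo algorithm), the following sketch reuses satisfied, apply_rule and evaluate from the sketches above, ignores the probability π, enumerates all rule applications up to a depth bound, and returns the reachable states with the highest evaluation:

def alpha(s0, rules, vision, max_depth=3):
    # brute-force enumeration of derivations up to max_depth steps;
    # keep the states with the highest evaluation ε(V,S)
    seen = {s0}
    frontier = {s0}
    for _ in range(max_depth):
        next_frontier = set()
        for s in frontier:
            for x in rules:
                if satisfied(x, s):
                    s_next = apply_rule(x, s)
                    if s_next not in seen:
                        seen.add(s_next)
                        next_frontier.add(s_next)
        if not next_frontier:
            break
        frontier = next_frontier
    best = max(evaluate(vision, s) for s in seen)
    return {s for s in seen if evaluate(vision, s) == best}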

WHERE WE ARE NOW

As should have become clear by now, the work of HMI analysis is the elaboration of a story, which can be done in the format of different kinds of theories, all of which can be simulated and evaluated. Even better, the only language you have to know is your everyday language, your mother tongue (mathematics is understood here as a sub-language of the everyday language, which in some special cases can be of some help). For this theory every human person — of all ages! — can be a valuable colleague helping you to better understand possible futures. Because all parts of an actor story theory are plain texts, everybody can read and understand everything. And if different groups of experts have investigated different aspects of a common field, you can merge all texts just by ‘pressing a button’ and you will immediately see how all these texts either work together or show discrepancies. The latter effect is a great opportunity to improve learning and understanding! Together we represent some of the power of life in the universe.

CONTINUATION

See here.


REVIEW OF MASLOW (1966) The Psychology of Science, Part II

eJournal: uffmm.org,
ISSN 2567-6458,
8.-21.June 2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

In this review I discuss the ideas of the book The Psychology of Science (1966) by A. Maslow. His book is in a certain sense outstanding because its point of view is in one respect inspired by an artificial borderline between the mainstream view of empirical science and the mainstream view of psychotherapy. In another respect the book discusses a possible integrated view of empirical science with psychotherapy as an integral part. The point of view of the reviewer is the new paradigm of a Generative Cultural Anthropology [GCA]. Part II of this review reports some considerations reflecting the relationship between the point of view of Maslow and the point of view of GCA.

This review is part of the general review section of the uffmm.org blog.

More extended version (21.June 2020): reviews-maslow1966-II-v09

See here (8.June 2020): reviews-maslow1966-II-v08

See here (7.June 2020): reviews-maslow1966-II-v07

 

REVIEW OF MASLOW (1966) The Psychology of Science

eJournal: uffmm.org,
ISSN 2567-6458, 1.June 2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This is part of the Review-Section of the uffmm-Blog.

ABSTRACT

In this review I discuss the ideas of the book The Psychology of Science (1966) by A. Maslow. His book is in a certain sense outstanding because its point of view is in one respect inspired by an artificial borderline between the mainstream view of empirical science and the mainstream view of psychotherapy. In another respect the book discusses a possible integrated view of empirical science with psychotherapy as an integral part. The point of view of the reviewer is the new paradigm of a Generative Cultural Anthropology [GCA]. Part I of this review gives a summary of the content of the book as understood by the reviewer, and part II reports some considerations reflecting the relationship between the point of view of Maslow and the point of view of GCA.

Part I (1.June 2020): reviews-maslow1966-v0.5

CASE STUDIES

eJournal: uffmm.org
ISSN 2567-6458, 4.May  – 16.March   2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

In this section several case studies will be presented. It will be shown how the DAAI paradigm can be applied to many different contexts. Since the original version of the DAAI theory of Jan 18, 2020, the concept has been further developed, centering around the concept of a Collective Man-Machine Intelligence [CM:MI], to address any kind of experts for any kind of simulation-based development, testing and gaming. Additionally, the concept can now be associated with any kind of embedded algorithmic intelligence [EAI] (different from the mainstream concept ‘artificial intelligence’). The new concept can be used with every normal language; no need for any special programming language! Go back to the overall framework.

COLLECTION OF PAPERS

There exists only a loose ordering between the different papers due to the character of this elaboration process: generally this is an experimental philosophical process. HMI Analysis applied for the CM:MI paradigm.

 

JANUARY 2021 – OCTOBER 2021

  1. HMI Analysis for the CM:MI paradigm. Part 1 (Febr. 25, 2021)(Last change: March 16, 2021)
  2. HMI Analysis for the CM:MI paradigm. Part 2. Problem and Vision (Febr. 27, 2021)
  3. HMI Analysis for the CM:MI paradigm. Part 3. Actor Story and Theories (March 2, 2021)
  4. HMI Analysis for the CM:MI paradigm. Part 4. Tool Based Development with Testing and Gaming (March 3-4, 2021, 16:15h)

APRIL 2020 – JANUARY 2021

  1. From Men to Philosophy, to Empirical Sciences, to Real Systems. A Conceptual Network. (Last Change Nov 8, 2020)
  2. FROM DAAI to GCA. Turning Engineering into Generative Cultural Anthropology. This paper gives an outline how one can map the DAAI paradigm directly into the GCA paradigm (April-19,2020): case1-daai-gca-v1
  3. CASE STUDY 1. FROM DAAI to ACA. Transforming HMI into ACA (Applied Cultural Anthropology) (July 28, 2020)
  4. A first GCA open research project [GCA-OR No.1].  This paper outlines a first open research project using the GCA. This will be the framework for the first implementations (May-5, 2020): GCAOR-v0-1
  5. Engineering and Society. A Case Study for the DAAI Paradigm – Introduction. This paper illustrates important aspects of a cultural process looking to the acting actors  where  certain groups of people (experts of different kinds) can realize the generation, the exploration, and the testing of dynamical models as part of a surrounding society. Engineering is clearly  not  separated from society (April-9, 2020): case1-population-start-part0-v1
  6. Bootstrapping some Citizens. This  paper clarifies the set of general assumptions which can and which should be presupposed for every kind of a real world dynamical model (April-4, 2020): case1-population-start-v1-1
  7. Hybrid Simulation Game Environment [HSGE]. This paper outlines the simulation environment by combining a usual web-conference tool with an interactive web-page of our own (23.May 2020): HSGE-v2 (May-5, 2020): HSGE-v0-1
  8. The Observer-World Framework. This paper describes the foundations of any kind of observer-based modeling or theory construction.(July 16, 2020)
  9. CASE STUDY – SIMULATION GAMES – PHASE 1 – Iterative Development of a Dynamic World Model (June 19.-30., 2020)
  10. KOMEGA REQUIREMENTS No.1. Basic Application Scenario (last change: August 11, 2020)
  11. KOMEGA REQUIREMENTS No.2. Actor Story Overview (last change: August 12, 2020)
  12. KOMEGA REQUIREMENTS No.3, Version 1. Basic Application Scenario – Editing S (last change: August 12, 2020)
  13. The Simulator as a Learning Artificial Actor [LAA]. Version 1 (last change: August 23, 2020)
  14. KOMEGA REQUIREMENTS No.4, Version 1 (last change: August 26, 2020)
  15. KOMEGA REQUIREMENTS No.4, Version 2. Basic Application Scenario (last change: August 28, 2020)
  16. Extended Concept for Meaning Based Inferences. Version 1 (last change: 30.April 2020)
  17. Extended Concept for Meaning Based Inferences – Part 2. Version 1 (last change: 1.September 2020)
  18. Extended Concept for Meaning Based Inferences – Part 2. Version 2 (last change: 2.September 2020)
  19. Actor Epistemology and Semiotics. Version 1 (last change: 3.September 2020)
  20. KOMEGA REQUIREMENTS No.4, Version 3. Basic Application Scenario (last change: 4.September 2020)
  21. KOMEGA REQUIREMENTS No.4, Version 4. Basic Application Scenario (last change: 10.September 2020)
  22. KOMEGA REQUIREMENTS No.4, Version 5. Basic Application Scenario (last change: 13.September 2020)
  23. KOMEGA REQUIREMENTS: From the minimal to the basic Version. An Overview (last change: Oct 18, 2020)
  24. KOMEGA REQUIREMENTS: Basic Version with optional on-demand Computations (last change: Nov 15,2020)
  25. KOMEGA REQUIREMENTS:Interactive Simulations (last change: Nov 12,2020)
  26. KOMEGA REQUIREMENTS: Multi-Group Management (last change: December 13, 2020)
  27. KOMEGA-REQUIREMENTS: Start with a Political Program. (last change: November 28, 2020)
  28. OKSIMO SW: Minimal Basic Requirements (last change: January 8, 2021)


AAI THEORY V2 – A Philosophical Framework

eJournal: uffmm.org,
ISSN 2567-6458, 22.February 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: 23.February 2019 (continued the text)

Last change: 24.February 2019 (extended the text)

CONTEXT

In the overview of the AAI paradigm version 2 you can find this section  dealing with the philosophical perspective of the AAI paradigm. Enjoy reading (or not, then send a comment :-)).

THE DAILY LIFE PERSPECTIVE

The perspective of Philosophy is rooted in the everyday life perspective. With our body we occur in a space with other bodies and objects; different features and properties are associated with the objects, as well as different kinds of relations and changes from one state to another.

From the empirical sciences we have learned to see more details of the everyday life with regard to detailed structures of matter and biological life, with regard to the long history of the actual world, with regard to many interesting dynamics within the objects, within biological systems, as part of earth, the solar system and much more.

A certain aspect of the empirical view of the world is the fact, that some biological systems called ‘homo sapiens’, which emerged only some 300.000 years ago in Africa, show a special property usually called ‘consciousness’ combined with the ability to ‘communicate by symbolic languages’.

Figure 1: General setting of the homo sapiens species (simplified)

As we know today the consciousness is associated with the brain, which in turn is embedded in the body, which  is further embedded in an environment.

Thus those ‘things’ about which we are ‘conscious’ are not ‘directly’ the objects and events of the surrounding real world but the ‘constructions of the brain’ based on actual external and internal sensor inputs as well as already collected ‘knowledge’. To qualify the ‘conscious things’ as ‘different’ from the assumed ‘real things’ ‘outside there’ it is common to speak of these brain-generated virtual things either as ‘qualia’ or — more often — as ‘phenomena’ which are  different to the assumed possible real things somewhere ‘out there’.

PHILOSOPHY AS FIRST PERSON VIEW

‘Philosophy’ has many facets. One enters the scene if we take the insight into the general virtual character of our primary knowledge as the primary and irreducible perspective of knowledge. Every other more special kind of knowledge is necessarily a subspace of this primary phenomenological knowledge.

There is already from the beginning a fundamental distinction possible in the realm of conscious phenomena (PH): there are phenomena which can be ‘generated’ by the consciousness ‘itself’ — mostly called ‘by will’ — and those which occur and disappear without a direct influence of the consciousness, which are in a certain basic sense ‘given’ and ‘independent’, which appear and disappear on ‘their own’. It is common to call these independent phenomena ’empirical phenomena’; they represent a true subset of all phenomena: PH_emp ⊂ PH. Attention: these ‘empirical phenomena’ are still ‘phenomena’, virtual entities generated by the brain inside the brain, not directly controllable ‘by will’.

There is a further basic distinction which differentiates the empirical phenomena into those PH_emp_bdy which are controlled by some processes in the body (being tired, being hungry, having pain, …) and those PH_emp_ext which are controlled by objects and events in the environment beyond the body (light, sounds, temperature, surfaces of objects, …). Both subsets of empirical phenomena are disjoint: PH_emp_bdy ∩ PH_emp_ext = ∅. Because phenomena usually occur associated with typical other phenomena, there are ‘clusters’/ ‘patterns’ of phenomena which ‘represent’ possible events or states.

Modern empirical science has ‘refined’ the concept of an empirical phenomenon by introducing ‘standard objects’ which can be used to ‘compare’ some empirical phenomenon with such an empirical standard object. Thus even when the perceptions of two different observers possibly differ somewhat with regard to a certain empirical phenomenon, the additional comparison with an ’empirical standard object’ which is the ‘same’ for both observers enhances the quality and improves the precision of the perception of the empirical phenomena.

From these considerations we can derive the following informal definitions:

  1. Something is ‘empirical‘ if it is the ‘real counterpart’ of a phenomenon which can be observed by other persons in my environment too.
  2. Something is ‘standardized empirical‘ if it is empirical and can additionally be associated with a before introduced empirical standard object.
  3. Something is ‘weak empirical‘ if it is the ‘real counterpart’ of a phenomenon which can potentially be observed by other persons in my body as causally correlated with the phenomenon.
  4. Something is ‘cognitive‘ if it is the counterpart of a phenomenon which is not empirical in one of the meanings (1) – (3).

It is a common task within philosophy to analyze the space of the phenomena with regard to its structure as well as its dynamics. Until today there exists no completely accepted theory for this subject. This indicates that it seems to be a ‘hard’ task.

BRIDGING THE GAP BETWEEN BRAINS

As one can see in figure 1 a brain in a body is completely disconnected from the brain in another body. There is a real, deep ‘gap’ which has to be overcome if the two brains want to ‘coordinate’ their ‘planned actions’.

Luckily the emergence of homo sapiens with the new extended property of ‘consciousness’ was accompanied by another exciting property, the ability to ‘talk’. This ability enabled the creation of symbolic languages which can help two disconnected brains to have some exchange.

But ‘language’ does not consist of sounds or a ‘sequence of sounds’ only; the special power of a language is the further property that sequences of sounds can be associated with ‘something else’ which serves as the ‘meaning’ of these sounds. Thus we can use sounds to ‘talk about’ other things like objects, events, properties etc.

The single brain ‘knows’ about the relationship between some sounds and ‘something else’ because the brain is able to ‘generate relations’ between brain structures for sounds and brain structures for something else. These relations are real connections in the brain. Therefore sounds can be related to ‘something else’, and certain objects, events etc. can become related to certain sounds. But these ‘meaning relations’ can only ‘bridge the gap’ to another brain if both brains are using the same ‘mapping’, the same ‘encoding’. This is only possible if the two brains with their bodies share a real world situation RW_S where the perceptions of both brains are associated with the same parts of the real world between both bodies. If this is the case, the perceptions P(RW_S) can become somehow ‘synchronized’ by the shared part of the real world, which in turn is transformed into brain structures, P(RW_S) —> B_S, which represent in the brain the stimulating aspects of the real world. These brain structures B_S can then be associated with some sound structures B_A, written as a relation MEANING(B_S, B_A). Such a relation realizes an encoding which can be used for communication. Communication uses sound sequences exchanged between brains via the body and the air of an environment as ‘expressions’ which can be recognized as part of a learned encoding, which enables the receiving brain to identify a possible meaning candidate.

DIFFERENT MODES TO EXPRESS MEANING

Following the evolution of communication one can distinguish four important modes of expressing meaning, which will be used in this AAI paradigm.

VISUAL ENCODING

A direct way to express the internal meaning structures of a brain is to use a ‘visual code’ which represents by some kinds of drawing the visual shapes of objects in the space, some attributes of  shapes, which are common for all people who can ‘see’. Thus a picture and then a sequence of pictures like a comic or a story board can communicate simple ideas of situations, participating objects, persons and animals, showing changes in the arrangement of the shapes in the space.

Figure 2: Pictorial expressions representing aspects of the visual and the auditory sense modes

Even with a simple visual code one can generate many sequences of situations which all together can ‘tell a story’. The basic elements are a presupposed ‘space’ with possible ‘objects’ in this space with different positions, sizes, relations and properties. One can even enhance these visual shapes with written expressions of a spoken language. The sequence of the pictures additionally represents some ‘temporal order’. ‘Changes’ can be encoded by ‘differences’ between consecutive pictures.

FROM SPOKEN TO WRITTEN LANGUAGE EXPRESSIONS

Later in the evolution of language, much later, homo sapiens learned to translate the spoken language L_s into a written format L_w using signs for parts of words or even whole words. The possible meaning of these written expressions was no longer directly ‘visible’. The meaning was now only available to those people who had learned how these written expressions are associated with intended meanings encoded in the heads of all language participants. Thus merely hearing or reading a language expression tells the reader either ‘nothing’, some ‘possible meanings’, or a ‘definite meaning’.

Figure 3: A written textual version in parallel to a pictorial version

If one has only the written expressions, then one has to ‘know’ with which ‘meaning in the brain’ the expressions have to be associated. And what is very special about the written expressions compared to the pictorial expressions is the fact that the elements of the pictorial expressions are always very ‘concrete’ visual objects, while the written expressions are ‘general’ expressions allowing many different concrete interpretations. Thus the expression ‘person’ can be associated with many thousands of different concrete objects; the same holds for the expressions ‘road’, ‘moving’, ‘before’ and so on. Thus the written expressions are like ‘manufacturing instructions’ to search for possible meanings and configure these meanings into a ‘reasonable’ complex matter. And because written expressions are in general rather ‘abstract’/ ‘general’, allowing numerous possible concrete realizations, they are very ‘economic’: they use minimal expressions to build many complex meanings. Nevertheless the daily experience with spoken and written expressions shows that they are continuously candidates for false interpretations.

FORMAL MATHEMATICAL WRITTEN EXPRESSIONS

Besides the written expressions of everyday languages one can observe, later in the history of written languages, the steady development of a specialized version called ‘formal languages’ L_f with many different domains of application. Here I am focusing on the formal written languages which are used in mathematics, as well as some pictorial elements to ‘visualize’ the intended ‘meaning’ of these formal mathematical expressions.

Fig. 4: Properties of an acyclic directed graph with nodes (vertices) and edges (directed edges = arrows)

One prominent concept in mathematics is the concept of a ‘graph’. In  the basic version there are only some ‘nodes’ (also called vertices) and some ‘edges’ connecting the nodes.  Formally one can represent these edges as ‘pairs of nodes’. If N represents the set of nodes then N x N represents the set of all pairs of these nodes.

In a more specialized version the edges are ‘directed’ (like a ‘one way road’) and also can be ‘looped back’ to a node   occurring ‘earlier’ in the graph. If such back-looping arrows occur a graph is called a ‘cyclic graph’.

Fig.5: Directed cyclic graph extended to represent ‘states of affairs’

If one wants to use such a graph to describe some ‘states of affairs’ with their possible ‘changes’ one can ‘interpret’ a ‘node’ as  a state of affairs and an arrow as a change which turns one state of affairs S in a new one S’ which is minimally different to the old one.

As a state of affairs I understand here a ‘situation’ embedded in some ‘context’ presupposing some common ‘space’. The possible ‘changes’ represented by arrows presuppose some dimension of ‘time’. Thus if a node n’ follows a node n, indicated by an arrow, then the state of affairs represented by the node n’ is to be interpreted as following the state of affairs represented by the node n ‘later’ with regard to the presupposed time T, or n < n’ with ‘<‘ as a symbol for a temporal ordering relation.

Fig.6: Example of a state of affairs with a 2-dimensional space configured as a grid with a black and a white token

The space can be any kind of space. If one assumes as an example a 2-dimensional space configured as a grid — as shown in figure 6 — with two tokens at certain positions, one can introduce a language to describe the ‘facts’ which constitute the state of affairs. In this example one needs ‘names for objects’, ‘properties of objects’ as well as ‘relations between objects’. A possible finite set of facts for situation 1 could be the following:

  1. TOKEN(T1), BLACK(T1), POSITION(T1,1,1)
  2. TOKEN(T2), WHITE(T2), POSITION(T2,2,1)
  3. NEIGHBOR(T1,T2)
  4. CELL(C1), POSITION(1,2), FREE(C1)

‘T1’, ‘T2’, as well as ‘C1’ are names of objects, ‘TOKEN’, ‘BLACK’ etc. are names of properties, and ‘NEIGHBOR’ is a relation between objects. This results in the equation:

S1 = {TOKEN(T1), BLACK(T1), POSITION(T1,1,1), TOKEN(T2), WHITE(T2), POSITION(T2,2,1), NEIGHBOR(T1,T2), CELL(C1), POSITION(1,2), FREE(C1)}

These facts describe the situation S1. If it is important to describe possible objects ‘external to the situation’ as important factors which can cause changes, then one can describe these objects as a set of facts in a separate ‘context’. In this example this could be two players who can move the black and white tokens and thereby cause a change of the situation. What counts as the situation and what belongs to a context is somewhat arbitrary. If one describes the agriculture of some region, one usually would not count the planets and the atmosphere as part of this region, but one knows that e.g. the sun, in combination with the atmosphere, can severely influence the situation.

Fig.7: Change of a state of affairs given as a state which will be enhanced by a new object

Let us stay with a state of affairs with only a situation without a context. The state of affairs is     a ‘state’. In the example shown in figure 6 I assume a ‘change’ caused by the insertion of a new black token at position (2,2). Written in the language of facts L_fact we get:

  1. TOKEN(T3), BLACK(T3), POSITION(2,2), NEIGHBOR(T3,T2)

Thus the new state S2 is generated out of the old state S1 by uniting S1 with the set of new facts: S2 = S1 ∪ {TOKEN(T3), BLACK(T3), POSITION(2,2), NEIGHBOR(T3,T2)}. All the other facts of S1 are still ‘valid’. In a more general manner one can introduce a change expression with the following format:

<S1, S2, add(S1,{TOKEN(T3), BLACK(T3), POSITION(2,2), NEIGHBOR(T3,T2)})>

This can be read as follows: The follow-up state S2 is generated out of the state S1 by adding to the state S1 the set of facts { … }.

This layout of a change expression can also be used if some facts have to be modified or removed from a state. If, for instance, for some reason the white token should be removed from the situation, one could write:

<S1, S2, subtract(S1,{TOKEN(T2), WHITE(T2), POSITION(2,1)})>

Another notation for this is S2 = S1 – {TOKEN(T2), WHITE(T2), POSITION(2,1)}.

The resulting state S2 would then look like:

S2 = {TOKEN(T1), BLACK(T1), POSITION(T1,1,1), CELL(C1), POSITION(1,2), FREE(C1)}

And a combination of subtraction of facts and addition of facts would read as follows:

<S1, S2, subtract(S1,{TOKEN(T2), WHITE(T2), POSITION(2,1)}), add(S1,{TOKEN(T3), BLACK(T3), POSITION(2,2)})>

This would result in the final state S2:

S2 = {TOKEN(T1), BLACK(T1), POSITION(T1,1,1), CELL(C1), POSITION(1,2), FREE(C1),TOKEN(T3), BLACK(T3), POSITION(2,2)}
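Read as plain set operations, the whole example can be reproduced with a few lines of Python (a sketch only; the fact spellings follow the state S1 above, and the dependent fact NEIGHBOR(T1,T2) is removed explicitly here, for the reason discussed below):

S1 = {
    "TOKEN(T1)", "BLACK(T1)", "POSITION(T1,1,1)",
    "TOKEN(T2)", "WHITE(T2)", "POSITION(T2,2,1)",
    "NEIGHBOR(T1,T2)",
    "CELL(C1)", "POSITION(1,2)", "FREE(C1)",
}

def change(state, subtract=frozenset(), add=frozenset()):
    # <S1, S2, subtract(S1,...), add(S1,...)>  realized as  S2 = (S1 - subtract) + add
    return (set(state) - set(subtract)) | set(add)

# remove the white token (including its dependent facts) and insert the black token T3
S2 = change(
    S1,
    subtract={"TOKEN(T2)", "WHITE(T2)", "POSITION(T2,2,1)", "NEIGHBOR(T1,T2)"},
    add={"TOKEN(T3)", "BLACK(T3)", "POSITION(2,2)"},
)
print(sorted(S2))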

These simple examples demonstrate another fact: while facts about objects and their properties are independent of each other, relational facts depend on the state of their object facts. The relation of neighborhood, e.g., depends on the participating neighbors. If — as in the example above — the object token T2 disappears, then the relation ‘NEIGHBOR(T1,T2)’ no longer holds. This points to a hierarchy of dependencies with the ‘basic facts’ at the ‘root’ of a situation and all the other facts ‘above’ the basic facts, or ‘higher’, depending on the basic facts. Thus ‘higher order’ facts should be added only for the actual state and have to be ‘re-computed’ anew for every follow-up state.

If one would specify a context for state S1 saying that there are two players and one allows for each player actions like ‘move’, ‘insert’ or ‘delete’ then one could make the change from state S1 to state S2 more precise. Assuming the following facts for the context:

  1. PLAYER(PB1), PLAYER(PW1), HAS-THE-TURN(PB1)

In that case one could enhance the change statement in the following way:

<S1, S2, PB1,insert(TOKEN(T3,2,2)),add(S1,{TOKEN(T3), BLACK(T3), POSITION(2,2)})>

This would read as follows: given state S1 the player PB1 inserts a  black token at position (2,2); this yields a new state S2.

With or without a specified context, but with regard to a set of possible change statements, it can be — and this is the usual case — that there is more than one option of what can be changed. Some of the main types of changes are the following:

  1. RANDOM
  2. NOT RANDOM, which can be specified as follows:
    1. With PROBABILITIES (classical, quantum probability, …)
    2. DETERMINISTIC

Furthermore, if the causing object is an actor which can adapt structurally or even learn locally, then this actor can appear in some time period like a deterministic system, in different collected time periods as an ‘oscillating system’ with different behavior, or even as a random system with changing probabilities. This makes the forecast of systems with adaptive and/or learning components rather difficult.

Another aspect results from the fact that there can be states either with one actor which can cause more than one action in parallel, or states with multiple actors which can act simultaneously. In both cases the resulting total change eventually has to be ‘filtered’ through some additional rules telling what is ‘possible’ in a state and what is not. Thus if, in the example of figure 6, both players want to insert a token at position (2,2) simultaneously, then either the rules of the game would forbid such a simultaneous action, or — as in a computer game — simultaneous actions are allowed but the ‘geometry of a 2-dimensional space’ would not allow two different tokens to be at the same position.

Another aspect of change is the dimension of time. If the time dimension is not explicitly specified, then a change from some state S_i to a state S_j only marks the follow-up state S_j as later. There is no specific ‘metric’ of time. If instead a certain ‘clock’ is specified, then all changes have to be aligned with this ‘overall clock’. Then one can specify at what ‘point in time t’ the change will begin and at what point in time t’ it will end. If there is more than one change specified, then these different changes can have different timings.

THIRD PERSON VIEW

Up until now the point of view for describing a state and the possible changes of states has been the so-called 3rd-person view: what a person can perceive if it is part of a situation and is looking into the situation. It is explicitly assumed that such a person can perceive only the ‘surface’ of objects, including all kinds of actors. Thus if the driver of a car steers his car in a certain direction, the ‘observing person’ can see what happens, but cannot ‘look into’ the driver to see ‘why’ he is steering this way or ‘what he is planning next’.

A 3rd-person view is assumed to be the ‘normal mode of observation’ and it is the normal mode of empirical science.

Nevertheless there are situations where one wants to ‘understand’ a bit more of ‘what is going on in a system’. Thus a biologist can be interested in understanding what mechanisms ‘inside a plant’ are responsible for the growth of the plant or for some kinds of plant dysfunctions. There are similar cases for understanding the behavior of animals and humans. For instance, it is an interesting question what kinds of ‘processes’ are available in an animal to ‘navigate’ in the environment across distances. Even if the biologist can look ‘into the body’, even ‘into the brain’, the cells as such do not tell a sufficient story. One has to understand the ‘functions’ which are enabled by the billions of cells; these functions are complex relations associated with certain ‘structures’ and certain ‘signals’. For this it is necessary to construct an explicit formal (mathematical) model/ theory representing all the necessary signals and relations which can be used to ‘explain’ the observable behavior and which ‘explains’ how the billions of cells enable such a behavior.

In a simpler, ‘relaxed’ kind of modeling one would not take into account the properties and behavior of the ‘real cells’ but would limit the scope to building a formal model which suffices to explain the observable behavior.

This kind of approach to set up models of possible ‘internal’ (as such hidden) processes of an actor can extend the 3rd-person view substantially. These models are called in this text ‘actor models (AM)’.

HIDDEN WORLD PROCESSES

In this text all reported 3rd-person observations are called an ‘actor story’, independent of whether they are done in a pictorial or a textual mode.

As has been pointed out such actor stories are somewhat ‘limited’ in what they can describe.

It is possible to extend such an actor story (AS)  by several actor models (AM).

An actor story defines the situations in which an actor can occur. This  includes all kinds of stimuli which can trigger the possible senses of the actor as well as all kinds of actions an actor can apply to a situation.

The actor model of such an actor has to enable the actor to handle all these assumed stimuli as well as all these actions in the expected way.

While the actor story can be checked as to whether it describes a process in an empirically ‘sound’ way, the actor models are either ‘purely theoretical’ but ‘behaviorally sound’, or they are also empirically sound with regard to the body of a biological or a technological system.

A serious challenge is the occurrence of adaptive and/or locally learning systems. While the actor story is a finite description of possible states and changes, adaptive and/or locally learning systems can change their behavior while ‘living’ in the actor story. These changes in behavior cannot be completely ‘foreseen’!

COGNITIVE EXPERT PROCESSES

According to the preceding considerations a homo sapiens as a biological system has besides many properties at least a consciousness and the ability to talk and by this to communicate with symbolic languages.

Looking to basic modes of an actor story (AS) one can infer some basic concepts inherently present in the communication.

Without having an explicit model of the internal processes in a homo sapiens system one can infer some basic properties from the communicative acts:

  1. Speaker and hearer presuppose a space within which objects with properties can occur.
  2. Changes can happen which presuppose some timely ordering.
  3. There is a distinction between concrete things and abstract concepts which correspond to many concrete things.
  4. There is an implicit hierarchy of concepts starting with concrete objects at the ‘root level’, given as occurrences in a concrete situation. Other concepts of ‘higher levels’ refer to concepts of lower levels.
  5. There are different kinds of relations between objects on different conceptual levels.
  6. The usage of language expressions presupposes structures which can be associated with the expressions as their ‘meanings’. The mapping between expressions and their meaning has to be learned by each actor separately, but in cooperation with all the other actors, with which the actor wants to share his meanings.
  7. It is assumed that all the processes which enable the generation of concepts, concept hierarchies, relations, meaning relations etc. are unconscious! In consciousness one can use parts of the unconscious structures and processes only under strictly limited conditions.
  8. To ‘learn’ dedicated matters and to be ‘critical’ about the quality of what one is learning requires some discipline, some learning methods, and a ‘learning-friendly’ environment. There is no guaranteed method of success.
  9. There are lots of unconscious processes which can influence understanding, learning, planning, decisions etc. and which until today are not yet sufficiently cleared up.