OKSIMO MEETS POPPER. Popper’s Position

eJournal: uffmm.org
ISSN 2567-6458, 31.March – 31.March  2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text is part of a philosophy of science  analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive  posts dedicated to the HMI-Analysis for  this software.

POPPERs POSITION IN THE CHAPTERS 1-17

In my reading of the chapters 1-17 of Popper’s The Logic of Scientific Discovery [1] I see the following three main concepts which are interrelated: (i) the concept of a scientific theory, (ii) the point of view of a meta-theory about scientific theories, and (iii) possible empirical interpretations of scientific theories.

Scientific Theory

A scientific theory is, according to Popper, a collection of universal statements AX, accompanied by a concept of logical inference which allows the deduction of a certain theorem t if one makes some additional concrete assumptions H.

Example: Theory T1 = <AX1,⊢>

AX1= {Birds can fly}

H1= {Peter is a bird}

{AX1, H1} ⊢ Peter can fly

Because there exists a concrete object which is classified as a bird, and this concrete bird with the name ‘Peter’ can fly, one can infer that the universal statement is verified by this concrete bird. But the question remains open whether all observable concrete objects classifiable as birds can fly.
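The inference pattern above can be sketched in a few lines of code. The following Python fragment is my own illustration, not Popper's formalism; the function name and the representation of classifications are assumptions made only for this example.

```python
# Illustrative sketch (my own, not Popper's formalism): applying the
# universal axiom "Birds can fly" to concrete assumptions like H1.
def deduce(axiom_property, individuals):
    """Derive a theorem for every individual classified as a bird."""
    return {name: axiom_property
            for name, kind in individuals.items()
            if kind == "bird"}

H1 = {"Peter": "bird"}
theorems = deduce("can fly", H1)
print(theorems)  # {'Peter': 'can fly'}
```

Adding more confirmed individuals only lengthens the finite conjunction; it never turns the universal axiom into a proven statement.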

One could continue with observations of several hundreds of concrete birds, but according to Popper this would not prove the theory T1 completely true. Such a procedure can only support a numerical universality understood as a conjunction of finitely many observations about concrete birds like ‘Peter can fly’ & ‘Mary can fly’ & … & ‘AH2 can fly’. (cf. p.62)

The only procedure which is applicable to a universal theory according to Popper is to falsify a theory by only one observation like ‘Doxy is a bird’ and ‘Doxy cannot fly’. Then one could construct the following inference:

AX1= {Birds can fly}

H2= {Doxy is a bird, Doxy cannot fly}

{AX1, H2} ⊢ ‘Doxy can fly’ & ~’Doxy can fly’

If a statement A can be inferred and simultaneously its negation ~A, then this is called a logical contradiction:

{AX1, H2} ⊢ ‘Doxy can fly’ & ~’Doxy can fly’

In this case the set {AX1, H2} is called inconsistent.

If a set of statements is classified as inconsistent, then everything can be derived from this set. In this case one can no longer distinguish between true and false statements.
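A minimal sketch of this inconsistency test, assuming statements are represented as plain strings and negation as a ‘~’ prefix (both representation choices are mine, not the text's):

```python
def is_inconsistent(statements):
    """A set of literal statements is inconsistent if it contains
    some statement A together with its negation ~A."""
    return any(("~" + s) in statements for s in statements)

assert not is_inconsistent({"Doxy is a bird", "Doxy can fly"})
assert is_inconsistent({"Doxy can fly", "~Doxy can fly"})
```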

Thus, while an increase in the number of confirmed observations can only increase the trust in the axioms of a scientific theory T without enabling an absolute proof, a falsification of a theory T can destroy the ability of this theory to distinguish between true and false statements.

Another idea associated with this structure of a scientific theory is that the universal statements using universal concepts are, strictly speaking, speculative ideas which require some faith that they will prove themselves every time one puts them to the test. (cf. p.33, 63)

Meta Theory, Logic of Scientific Discovery, Philosophy of Science

Talking about scientific theories has at least two aspects: scientific theories as objects and those who talk about these objects.

Those who talk about them are usually philosophers of science, who are a special kind of philosopher, e.g. a person like Popper.

Reading Popper’s text, one can identify the following elements which seem important for describing scientific theories in a broader framework:

From the point of view of philosophy of science, a scientific theory represents a structure like the following (minimal version):

MT=<S, A[μ], E, L, AX, ⊢, ET, E+, E-, true, false, contradiction, inconsistent>

In a shared empirical situation S there are some human actors A acting as experts who produce expressions E of some language L. Based on their built-in adaptive meaning function μ, the human actors A can relate properties of the situation S with expressions E of L. Those expressions E which are considered observable and classified as true are called true expressions E+; the others are called false expressions E-. Both sets are proper subsets of E: E+ ⊂ E and E- ⊂ E. Additionally the experts can define some special set of expressions called axioms AX, universal statements which allow the logical derivation of expressions called theorems ET of the theory T, which are called logically true. If one combines the set of axioms AX with some set of empirically true expressions E+ as {AX, E+}, then one can either derive only expressions which are logically true as well as empirically true, or one can derive logically true expressions which are empirically true and empirically false at the same time, as in the example of the preceding paragraph:

{AX1, H2} ⊢ ‘Doxy can fly’ & ~’Doxy can fly’

Such a case of a logically derived contradiction A and ~A tells us that the set of axioms AX, unified with the empirically true expressions and confronted with the known true empirical expressions, has become inconsistent: the axioms AX unified with the true empirical expressions can no longer distinguish between true and false expressions.
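The minimal meta-theoretical structure MT can be sketched as a data structure. This is only a reading aid; the field names and the simple inconsistency test (a derived theorem that is also classified as empirically false) are my own choices, not Popper's.

```python
from dataclasses import dataclass

@dataclass
class MetaTheory:
    """Minimal sketch of MT = <S, A[mu], E, L, AX, |-, ET, E+, E-, ...>."""
    E: set        # all expressions of the language L
    E_plus: set   # expressions classified as empirically true
    E_minus: set  # expressions classified as empirically false
    AX: set       # axioms (universal statements)
    ET: set       # logically derived theorems

    def is_inconsistent(self) -> bool:
        # a logically derived theorem which is empirically false
        # yields a contradiction A & ~A
        return bool(self.ET & self.E_minus)

mt = MetaTheory(E={"Doxy can fly"}, E_plus=set(),
                E_minus={"Doxy can fly"},
                AX={"Birds can fly"}, ET={"Doxy can fly"})
assert mt.is_inconsistent()
```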

Popper gives some general requirements for the axioms of a theory (cf. p.71):

  1. Axioms must be free from contradiction.
  2. The axioms must be independent, i.e. they must not contain any axiom deducible from the remaining axioms.
  3. The axioms should be sufficient for the deduction of all statements belonging to the theory which is to be axiomatized.

While the requirements (1) and (2) are purely logical and can be proved directly, requirement (3) is different: to know whether the theory covers all statements which are intended by the experts as the subject area presupposes that all aspects of the empirical environment are already known. In the case of true empirical theories this does not seem plausible. Rather we have to assume an open process which generates some hypothetical universal expressions which ideally will not be falsified; but if they are, then the theory has to be adapted to the new insights.

Empirical Interpretation(s)

Popper assumes that the universal statements of scientific theories are linguistic representations, and this means they are systems of signs or symbols. (cf. p.60) Expressions as such have no meaning. Meaning comes into play only if the human actors use their built-in meaning function and set up a coordinated meaning function which allows all participating experts to map properties of the empirical situation S into the used expressions as E+ (expressions classified as actually true), E- (expressions classified as actually false), or AX (expressions having an abstract meaning space which can become true or false depending on the activated meaning function).

Examples:

  1. Two human actors in a situation S agree about the fact that there is ‘something’ which they classify as a ‘bird’. Thus someone could say ‘There is something which is a bird’, ‘There is some bird’, or ‘There is a bird’. If there are two somethings which are ‘understood’ as being birds, then they could say ‘There are two birds’, or ‘There is a blue bird’ (if one has the color ‘blue’) and ‘There is a red bird’, or ‘There are two birds. The one is blue and the other is red’. This shows that human actors can relate their ‘concrete perceptions’ to more abstract concepts and can map these concepts into expressions. According to Popper, in this ‘bottom-up’ way only numerical universal concepts can be constructed. But logically there are only two cases: concrete (one) or abstract (more than one). To say that there is a ‘something’ or to say there is a ‘bird’ establishes a general concept which is independent of the number of its possible instances.
  2. These concrete somethings, each classified as a ‘bird’, can ‘move’ from one position to another by ‘walking’ or by ‘flying’. While ‘walking’ they change their position connected to the ‘ground’, while during ‘flying’ they ‘go up in the air’. If a human actor throws a stone up in the air, the stone will come back to the ground. A bird which goes up in the air can stay there and move around in the air for a long while. Thus ‘flying’ is different from ‘throwing something’ up in the air.
  3. The expression ‘A bird can fly’, understood as an expression which can be connected to the daily experience of bird-objects moving around in the air, can be empirically interpreted, but only if there exists such a mapping, called a meaning function. Without a meaning function the expression ‘A bird can fly’ has no meaning as such.
  4. Other expressions like ‘X can fly’, ‘A bird can Y’, or ‘Y(X)’ share the same fate: without a meaning function they have no meaning, but associated with a meaning function they can be interpreted. For instance, saying that the form of the expression ‘Y(X)’ shall be interpreted as ‘Predicate(Object)’, and that a possible ‘instance’ for a predicate could be ‘Can Fly’ and for an object ‘a Bird’, then we could get ‘Can Fly(a Bird)’, translated as ‘The object a Bird has the property can fly’, or shortly ‘A Bird can fly’. This would usually be used as a possible candidate for the daily meaning function which relates this expression to those somethings which can move up in the air.

Axioms and Empirical Interpretations

The basic idea of a system of axioms AX is — according to Popper — that the axioms as universal expressions represent a system of equations where the general terms should be substitutable by certain values. The set of admissible values is different from the set of inadmissible values. The relation between the values which can be substituted for the terms is called satisfaction: the values satisfy the terms with regard to the relations! And Popper introduces the term ‘model’ for the set of admissible terms which can satisfy the equations. (cf. p.72f)

But Popper has difficulties with an axiomatic system interpreted as a system of equations, since it cannot be refuted by the falsification of its consequences; for these too must be analytic. (cf. p.73) His main problem with axioms is that “the concepts which are to be used in the axiomatic system should be universal names, which cannot be defined by empirical indications, pointing, etc. They can be defined if at all only explicitly, with the help of other universal names; otherwise they can only be left undefined. That some universal names should remain undefined is therefore quite unavoidable; and herein lies the difficulty…” (p.74)

On the other hand Popper knows that “…it is usually possible for the primitive concepts of an axiomatic system such as geometry to be correlated with, or interpreted by, the concepts of another system , e.g . physics …. In such cases it may be possible to define the fundamental concepts of the new system with the help of concepts which were originally used in some of the old systems .”(p.75)

But the translation of the expressions of one system (geometry) into the expressions of another system (physics) does not necessarily solve his problem of the non-empirical character of universal terms. Physics especially also uses universal or abstract terms which as such have no meaning. To verify or falsify physical theories one has to show how the abstract terms of physics can be related to observable matters which can be decided to be true or not.

Thus the argument goes back to Popper’s primary problem: that universal names cannot be directly interpreted in an empirically decidable way.

As the preceding examples (1) – (4) show, for human actors it is no principal problem to relate any kind of abstract expression to some concrete real matters. The solution to the problem is given by the fact that expressions E of some language L are never used in isolation! The usage of expressions is always connected to human actors using expressions as part of a language L which comprises, together with the set of possible expressions E, also the built-in meaning function μ which can map expressions into internal structures IS related to perceptions of the surrounding empirical situation S. Although these internal structures are processed internally in highly complex manners and are — as we know today — no 1-to-1 mappings of the surrounding empirical situation S, they are related to S, and therefore every kind of expression — even those with so-called abstract or universal concepts — can be mapped into something real if the human actors agree about such mappings!

Example:

Let us have a look at another example.

Take the system of axioms AX as the following schema: AX = {a+b=c}. This schema as such has no clear meaning. But if the experts interpret it as an operation ‘+’ with some arguments as part of a mathematical theory, then one can construct a simple (partial) model m as follows: m = {<1,2,3>, <2,3,5>}. The values are again given as a set of symbols which as such need not have a meaning, but in common usage they will be interpreted as sets of numbers which can satisfy the general concept of the equation. In this secondary interpretation m becomes a logically true (partial) model for the axiom schema AX, whose empirical meaning is still unclear.
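The satisfaction relation can be made concrete with a few lines of Python (an illustration of the example above, with tuples standing for the triples <a,b,c>):

```python
def satisfies(triple):
    """Does the value triple <a, b, c> satisfy the equation a + b = c?"""
    a, b, c = triple
    return a + b == c

m = {(1, 2, 3), (2, 3, 5)}
assert all(satisfies(t) for t in m)   # m is a (partial) model of AX
assert not satisfies((1, 2, 2))       # inadmissible values
```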

It is conceivable that one uses this formalism to describe empirical facts like the activity of a group of humans collecting some objects. Different people bring objects; the individual contributions are recorded on a sheet of paper, and at the same time the objects are put into some box. From time to time someone looks into the box and counts its objects. If it has been noted that A brought 1 egg and B brought 2 eggs, then according to the theory there should be 3 eggs in the box. But perhaps only 2 can be found. Then there would be a difference between the logically derived forecast of the theory, 1+2 = 3, and the empirically measured value, 1+2 = 2. If one defined every measurement a+b=c’ as a contradiction in the case where a+b=c is theoretically given and c’ ≠ c, then with ‘1+2 = 3′ & ~’1+2 = 3’ we would have a logically derived contradiction which leads to the inconsistency of the assumed system. But in reality the usual reaction of the counting person would not be to declare the system inconsistent, but rather to suggest that some unknown actor has, against the agreed rules, taken one egg from the box. To prove this suggestion he would have to find this unknown actor and show that he has taken the egg … perhaps not a simple task … But what will the next authority do: will the authority believe the suggestion of the counting person, or will the authority blame the counter, claiming that he himself has taken the missing egg? But would this make sense? Why should the counter write the notes about how many eggs have been delivered, making a difference visible? …
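The egg-counting scenario can be sketched as a comparison between the theoretical forecast and the measured value; the function name and the textual verdicts are my own choices for the illustration:

```python
def check_count(a, b, c_measured):
    """Compare the forecast of the theory a + b = c with a measurement."""
    forecast = a + b
    if c_measured == forecast:
        return "confirmed"
    # In practice one suspects a rule violation (a missing egg) rather
    # than declaring the arithmetic theory inconsistent.
    return f"discrepancy: forecast {forecast}, measured {c_measured}"

assert check_count(1, 2, 3) == "confirmed"
print(check_count(1, 2, 2))  # discrepancy: forecast 3, measured 2
```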

Thus to interpret some abstract expression with regard to some observable reality is not a principal problem, but it can turn out to be unsolvable for purely practical reasons, leaving questions of empirical soundness open.

SOURCES

[1] Karl Popper, The Logic of Scientific Discovery. First published 1935 in German as Logik der Forschung, then 1959 in English by Basic Books, New York. (More editions have been published later; I am using the eBook version of Routledge (2002).)

THE OKSIMO CASE as SUBJECT FOR PHILOSOPHY OF SCIENCE. Part 1

eJournal: uffmm.org
ISSN 2567-6458, 22.March – 23.March 2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text is part of a philosophy of science  analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive  posts dedicated to the HMI-Analysis for  this software.

THE OKSIMO EVENT SPACE

The characterization of the oksimo software paradigm starts with an informal characterization  of the oksimo software event space.

EVENT SPACE

An event space is a space which can be filled up by observable events fitting the species-specific internally processed environment representations [1], [2], here called internal environments [ENVint]. Thus the same external environment [ENV] can be represented, in the presence of 10 different species, in 10 different internal formats. Thus the expression ‘environment’ [ENV] is an abstract concept assuming an objective reality which is common to all living species, but which is indeed processed by every species in a species-specific way.

In a human culture the usual point of view [ENVhum] exists simultaneously with all the other points of view [ENVa] of all the other species a.

In the ideal case it would be possible to translate all species-specific views ENVa into a symbolic representation which in turn could then be translated into the human point of view ENVhum. Then — in the ideal case — we could define the term environment [ENV] as the sum of all the different species-specific views translated in a human specific language: ∑ENVa = ENV.

But because such a generalized view of the environment is until today not really possible for practical reasons, we will for the beginning use here only expressions related to the human-specific point of view [ENVhum], using as language an ordinary language [L], here the English language [LEN]. Every scientific language — e.g. the language of physics — is understood here as a sublanguage of the ordinary language.

EVENTS

An event [EV] within an event space [ENVa] is a change [X] which can be observed at least by the members of that species [SP] a which is part of that environment ENV which enables a species-specific event space [ENVa]. Possibly there can be other actors around in the environment ENV from different species with their specific event spaces [ENVa], where the contents of the different event spaces can possibly overlap with regard to certain events.

A behavior is some observable movement of the body of some actor.

Changes X can be associated with certain behaviors of certain actors or with non-actor conditions.

Thus when there are some human or non-human actors in an environment which are moving, then they show a behavior which can eventually be associated with some observable changes.

CHANGE

Besides being associated with observable events in the (species-specific) environment, the expression change is understood here as a kind of inner state in an actor which can compare past (stored) states Spast with an actual state Snow. If the past and actual state differ in some observable aspect, Diff(Spast, Snow) ≠ 0, then there exists some change X, or Diff(Spast, Snow) = X. Usually the actor perceiving a change X will assume that this internal structure represents something external to the brain, but this need not necessarily be the case. It helps if there are other human actors who confirm such a change perception, although even this does not guarantee that there really is a change occurring. In the real world it is possible that a whole group of human actors has a wrong interpretation.
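The Diff notion can be illustrated as follows, with states modeled as simple attribute-value dictionaries (this state representation is an assumption of the sketch, not part of the paradigm itself):

```python
def diff(s_past, s_now):
    """Return the set of aspects in which past and actual state differ:
    Diff(S_past, S_now)."""
    return {k for k in s_past.keys() | s_now.keys()
            if s_past.get(k) != s_now.get(k)}

s_past = {"bird_position": "ground", "light": "on"}
s_now = {"bird_position": "air", "light": "on"}
assert diff(s_past, s_now) == {"bird_position"}   # a change X is perceived
assert diff(s_past, s_past) == set()              # no change
```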

SYMBOLIC COMMUNICATION AND MEANING

It is a specialty of human actors — to some degree shared by other non-human biological actors — that they not only can build up internal representations ENVint of the reality external to the brain (the body itself or the world beyond the body), which are mostly unconscious and only partially conscious, but that they can also build up structures of expressions of an internal language Lint which can be mimicked to a high degree by expressions in the body-external environment ENV, called expressions of an ordinary language L.

For this to work one has to assume that there exists an internal mapping from internal representations ENVint into the expressions of the internal language Lint as

meaning : ENVint <—> Lint.

and

speaking: Lint —> L

hearing: Lint <— L

Thus human actors can use their ordinary language L to activate internal encodings/decodings with regard to the internal representations ENVint gained so far. This is called here symbolic communication.
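The chain of mappings can be sketched with plain lookup tables; the concrete entries are invented placeholders, and real meaning functions are of course learned, adaptive structures rather than dictionaries:

```python
# meaning: ENVint <-> Lint ; speaking: Lint -> L ; hearing: Lint <- L
meaning = {"ENV_white_table": "Lint_white_table"}
speaking = {"Lint_white_table": "There is a white table"}
hearing = {utterance: inner for inner, utterance in speaking.items()}

def communicate(internal_repr):
    """Speaker encodes an internal representation into an expression of
    the ordinary language L; the hearer decodes it back into Lint."""
    uttered = speaking[meaning[internal_repr]]
    return hearing[uttered]

assert communicate("ENV_white_table") == "Lint_white_table"
```

Symbolic communication succeeds here only because speaker and hearer share the same tables; with diverging meaning functions the round trip would fail, which mirrors the similarity condition discussed later in the text.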

NO SPEECH ACTS

To classify the occurrences of symbolic expressions during a symbolic communication is a nearly infinite undertaking. First impressions of the unsolvability of such a classification task can be gained if one reads the Philosophical Investigations of Ludwig Wittgenstein. [5] Later attempts by different philosophers and scientists — e.g. under the heading of speech acts [3] — have not been fully convincing to date.

Instead of assuming here a complete scientific framework to classify occurrences of symbolic expressions of an ordinary language L, we will only look at some examples and discuss these.

KINDS OF EXPRESSIONS

In what follows we will look at some selected examples of symbolic expressions and discuss these.

(Decidable) Concrete Expressions [(D)CE]

It is assumed here that two human actors A and B speaking the same ordinary language L are capable, in a concrete situation S, of describing objects OBJ and properties PROP of this situation in a way that the hearer of a concrete expression E can decide whether the encoded meaning of that expression produced by the speaker is part of the observable situation S or not.

Thus, if A and B are together in a room with a wooden white table and there is enough light for an observation, then B can understand what A is saying if he states ‘There is a white wooden table’.

To understand means here that both human actors are able to perceive the wooden white table as an object with properties. Their brains will transform these external signals into internal neural signals forming an inner — not 1-to-1 — representation ENVint, which can further be mapped by the learned meaning function into expressions of the inner language Lint, and mapped further — by the speaker — into the external expressions of the learned ordinary language L. If the hearer can hear these spoken expressions, he can translate the external expressions into the internal expressions, which can be mapped onto the learned internal representations ENVint. In everyday situations there exists a high probability that the hearer can then respond with a spoken ‘Yes, that’s true’.

If some human actor utters a symbolic expression with regard to some observable property of the external environment, and the other human actor responds with a confirmation, then such an utterance is called here a decidable symbolic expression of the ordinary language L. In this case one can classify such an expression as being true. Otherwise the expression is classified as being not true.

The case of being not true is not a simple case. Being not true can mean: (i) it is actually simply not given; (ii) it is conceivable that the meaning could become true if the external situation were different; (iii) it is — in the light of the accessible knowledge — not conceivable that the meaning could become true in any situation; (iv) the meaning is too fuzzy to decide which of the cases (i) – (iii) fits.

Cognitive Abstraction Processes

Before we talk about (Undecidable) Universal Expressions [(U)UE] it has to be clarified that the internal mappings in a human actor are not only non-1-to-1 mappings but are additionally automatic transformation processes of the kind that concrete perceptions of concrete environmental matters are automatically transformed by the brain into different kinds of states which are abstracted states, using the concrete incoming signals as a trigger either to start a new abstracted state or to modify an existing abstracted state. Given such abstracted states, there exists a multitude of other neural processes to process these abstracted states further, embedded in numerous different relationships.

Thus the assumed internal language Lint does not map the neural processes which process the concrete events as such, but the processed abstracted states! Language expressions as such can never be related directly to concrete material because this concrete material has no direct neural basis. What works — completely unconsciously — is that the brain can detect that an actual neural pattern nn has some similarity with a given abstracted structure NN and that this concrete pattern nn is then internally classified as an instance of NN. That means we can recognize that a perceived concrete matter nn is, ‘in the light of’ our available (unconscious) knowledge, an NN, but we cannot argue explicitly why. The decision has been processed automatically (unconsciously), but we can become aware of the result of this unconscious process.

Universal (Undecidable) Expressions [U(U)E]

Let us repeat the expression ‘There is a white wooden table‘ which has been used before as an example of a concrete decidable expression.

If one looks at the different parts of this expression, then the partial expressions ‘white’, ‘wooden’, ‘table’ can be mapped by a learned meaning function φ into abstracted structures which are the result of internal processing. This means there can be countably infinitely many concrete instances in the external environment ENV which can be understood as being white. The same holds for the expressions ‘wooden’ and ‘table’. Thus the expressions ‘white’, ‘wooden’, ‘table’ are all related to abstracted structures, and therefore they have to be classified as universal expressions which as such are — strictly speaking — not decidable, because they can be true in many concrete situations with different concrete matters. Or take it otherwise: an expression with a meaning function φ pointing to an abstracted structure is asymmetric: one expression can be related to many different perceivable concrete matters, but certain members of a set of different perceived concrete matters can be related to one and the same abstracted structure on account of similarities based on properties embedded in the perceived concrete matter and being part of the abstracted structure.

From a cognitive point of view one can describe these matters such that the expression — like ‘table’ — which points to a cognitive abstracted structure ‘T’ includes a set of properties Π, and every concrete perceived structure ‘t’ (caused e.g. by some concrete matter in our environment which we would classify as a ‘table’) must have a ‘certain amount’ of properties Π* such that one can say that the properties Π* are entailed in the set of properties Π of the abstracted structure T, thus Π* ⊆ Π. In what circumstances some speaker-hearer will say that something perceived concrete ‘is’ a table or ‘is not’ a table will depend on the learning history of this speaker-hearer. A child at the beginning of learning a language L can perhaps call something a ‘chair’, and the parents will correct the child and will perhaps say ‘no, this is a table’.
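The entailment condition Π* ⊆ Π translates directly into a subset test; the property sets below are invented for the illustration:

```python
def is_instance(pi_star, pi):
    """Perceived properties Pi* count as an instance of the abstracted
    structure if Pi* is contained in its property set Pi."""
    return pi_star <= pi

PI_TABLE = {"flat top", "legs", "furniture"}
assert is_instance({"flat top", "legs"}, PI_TABLE)   # classified as a table
assert not is_instance({"backrest"}, PI_TABLE)       # e.g. a chair feature
```

A learning history would correspond here to gradually revising the property set Π, which is exactly what the parents' correction of the child amounts to.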

Thus the expression ‘There is a white wooden table’ as such is not true or false, because it is not clear which set of concrete perceptions shall be derived from the possible internal meaning mappings. But if a concrete situation S is given with a concrete object with concrete properties, then a speaker can ‘translate’ his/her concrete perceptions with his learned meaning function φ into a composed expression using universal expressions. In such a situation, where the speaker is part of the real situation S, he/she can recognize that the given situation is an instance of the abstracted structures encoded in the used expression. And recognizing this being an instance interprets the universal expression in a way that makes it fit the real given situation. Thereby the universal expression is transformed by interpretation with φ into a concrete decidable expression.

SUMMING UP

Thus the decisive moment in turning undecidable universal expressions U(U)E into decidable concrete expressions (D)CE is a human actor A behaving as a speaker-hearer of the used language L. Without a speaker-hearer every universal expression is undefined and neither true nor false.

makedecidable: S x Ahum x E —> E x {true, false}

This reads as follows: if you want to know whether an expression E is concrete and, as being concrete, is ‘true’ or ‘false’, then ask a human actor Ahum who is part of a concrete situation S; the human actor shall answer whether the expression E can be interpreted such that E can be classified as being either ‘true’ or ‘false’.

The function ‘makedecidable()’ is therefore the description (like a ‘recipe’) of a real process in the real world with real actors. The important factors in this description are the meaning functions inside the participating human actors. Although it is not possible to describe these meaning functions directly, one can check their behavior and define an abstract model which describes the observable behavior of speaker-hearers of the language L. This is an empirical model and represents the typical case of behavioral models used in psychology, biology, sociology, etc.
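A behavioral sketch of makedecidable(), with the internal meaning function modeled as a plain lookup table (a strong simplification; the real μ is an internal, learned mapping that cannot be written down directly):

```python
def makedecidable(situation, meaning_mu, expression):
    """S x Ahum x E -> E x {true, false}: ask an actor (modeled by
    meaning_mu) whether the expression is true in the situation."""
    denoted = meaning_mu.get(expression)
    if denoted is None:
        return expression, None   # no interpretation available: undecidable
    return expression, denoted in situation

S = {"white_wooden_table"}
mu = {"There is a white wooden table": "white_wooden_table"}
assert makedecidable(S, mu, "There is a white wooden table")[1] is True
```

The `None` verdict models the undecidable case: without an activated meaning function the expression simply has no truth value.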

SOURCES

[1] Jakob Johann Freiherr von Uexküll (German: [ˈʏkskʏl])(1864 – 1944) https://en.wikipedia.org/wiki/Jakob_Johann_von_Uexk%C3%BCll

[2] Jakob von Uexküll, 1909, Umwelt und Innenwelt der Tiere. Berlin: J. Springer. (Download: https://ia802708.us.archive.org/13/items/umweltundinnenwe00uexk/umweltundinnenwe00uexk.pdf )

[3] Wikipedia EN, Speech acts: https://en.wikipedia.org/wiki/Speech_act

[4] Ludwig Josef Johann Wittgenstein ( 1889 – 1951): https://en.wikipedia.org/wiki/Ludwig_Wittgenstein

[5] Ludwig Wittgenstein, 1953: Philosophische Untersuchungen [PU]; Philosophical Investigations [PI], translated by G. E. M. Anscombe. For more details see: https://en.wikipedia.org/wiki/Philosophical_Investigations

HMI Analysis for the CM:MI paradigm. Part 2. Problem and Vision

Integrating Engineering and the Human Factor (info@uffmm.org)
eJournal uffmm.org ISSN 2567-6458, February 27-March 16, 2021,
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: March 16, 2021 (minor corrections)

HISTORY

As described in the uffmm eJournal, the wider context of this software project is an integrated engineering theory called Distributed Actor-Actor Interaction [DAAI], further extended to the Collective Man-Machine Intelligence [CM:MI] paradigm. This document is part of the Case Studies section.

HMI ANALYSIS, Part 2: Problem & Vision

Context

This text is preceded by the following texts:

Introduction

Before one starts the HMI analysis, some stakeholders — in our case the users act as stakeholders as well as users in one role — have to present some given situation — classifiable as a ‘problem’ — to depart from, and a vision as the envisioned goal to be realized.

Here we give a short description of the problem for the CM:MI paradigm and of the vision of what should be gained.

Problem: Mankind on the Planet Earth

In this project mankind on the planet earth is understood as the primary problem. ‘Mankind’ is seen here as the life form called homo sapiens. Based on the findings of biological evolution one can state that homo sapiens has — besides many other wonderful capabilities — at least two extraordinary capabilities:

Outside to Inside

The whole body with the brain is able to convert continuously body-external events into internal, neural events. And the brain inside the body receives many events inside the body as external events too. Thus in the brain we can observe a mixup of body-external (outside 1) and body-internal events (outside 2), realized as a set of billions of neural processes, highly interrelated. Most of these neural processes are unconscious; a small part is conscious. Nevertheless these unconscious and conscious events are neurally interrelated. This overall conversion from outside 1 and outside 2 into neural processes can be seen as a mapping. As we know today from biology, psychology and the brain sciences, this mapping is not a 1-1 mapping. The brain performs all the time a kind of filtering — mostly unconscious — sorting out only those events which are judged by the brain to be important. Furthermore the brain is time-slicing all its sensory inputs and storing these time-slices (called ‘memories’), whereby these time-slices again are no 1-1 copies. The storing of time-slices is a complex (unconscious) process with many kinds of operations like structuring, associating, abstracting, evaluating, and more. From this one can deduce that the content of an individual brain and the surrounding reality of the own body as well as the world outside the own body can be highly different. All kinds of perceived and stored neural events which can be or can become conscious are here called conscious cognitive substrates or cognitive objects.

Inside to Outside (to Inside)

Generally it is known that homo sapiens can produce with its body events which have some impact on the world outside the body. One kind of such events is the production of all kinds of movements, including gestures, running, grasping with hands, painting and writing, as well as sounds by the voice. Of special interest here are forms of communication between different humans, and even more specifically those communications enabled by the spoken sounds of a language as well as the written signs of a language. Spoken sounds as well as written signs are here called expressions associated with a known language. Expressions as such have no meaning (a non-speaker of a language L can hear or see expressions of the language L but will never understand anything). But as everyday experience shows, nearly every child starts very soon to learn which kinds of expressions belong to a language and with what kinds of shared experiences they can be associated. This learning is related to many complex neural processes which map expressions internally onto conscious and unconscious cognitive objects (including expressions!). This mapping builds up an internal meaning function from expressions into cognitive objects and vice versa. Because expressions have a dual face (being internal neural structures as well as body-outside events by conversions from the inside to the body-outside) it is possible that a homo sapiens can transmit its internal encoding of cognitive objects into expressions from its inside to the outside, and thereby another homo sapiens can perceive the produced outside expression and can map this outside expression onto an internal expression.
As far as the meaning function of the receiving homo sapiens is sufficiently similar to the meaning function of the sending homo sapiens, there exists some probability that the receiving homo sapiens can activate from its memory cognitive objects which have some similarity with those of the sending homo sapiens.
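The 'meaning function' sketched above can be illustrated as a simple mapping from expressions to stand-ins for cognitive objects; two speakers understand each other to the degree their mappings overlap. A minimal sketch (all entries and the similarity measure are invented for this illustration, not part of the text):

```python
# Meaning functions of a sender and a receiver, as plain dictionaries
# mapping expressions of a language L to (stand-ins for) cognitive objects.
meaning_sender = {"bird": "OBJ_bird", "fly": "ACT_fly", "red": "PROP_red"}
meaning_receiver = {"bird": "OBJ_bird", "fly": "ACT_fly", "blue": "PROP_blue"}

# Expressions on which both meaning functions agree:
shared = {e for e in meaning_sender
          if meaning_receiver.get(e) == meaning_sender[e]}

# A crude similarity measure: fraction of the sender's expressions
# the receiver maps to the same cognitive object.
overlap = len(shared) / len(meaning_sender)
```

The higher the overlap, the higher the probability that a transmitted expression activates a similar cognitive object in the receiver.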

Although we know today of different kinds of animals having some form of language, there is no species known which is, with regard to language, comparable to homo sapiens. This explains to a large extent why the homo sapiens population was able to cooperate in a way which not only can include many persons but also can stretch through long periods of time and can include highly complex cognitive objects and associated behavior.

Negative Complexity

In 2006 I introduced the term negative complexity in my writings to describe the fact that in the world surrounding an individual person there is an amount of language-encoded meaning available which is beyond the capacity of an individual brain to process. Thus whatever kind of experience or knowledge is accumulated in libraries and databases, as the negative complexity grows, this knowledge can no longer help individual persons, whole groups or whole populations in any constructive way. What happens is that the intended well-structured 'sound' of knowledge is turned into a noisy environment which crashes all intended structures into nothing or badly deformed somethings.

Entangled Humans

From quantum mechanics we know the idea of entangled states. But we need not dig into quantum mechanics to find other phenomena which manifest entangled states. Look around in your everyday world. There exist many occasions where a human person is acting in a situation, but the bodily separateness is a fake. While sitting before a laptop in a room the person is communicating within an online session with other persons. And depending on the social role, the membership in some social institution, and being part of some project, this person will talk, perceive, feel, decide etc. with regard to the known rules of these social environments, which are represented as cognitive objects in its brain. Thus by knowledge, by cognition, the individual person is in its situation completely entangled with other persons who know these roles and rules and thereby follow these rules in their behavior too. Sitting with the body in a certain physical location somewhere on the planet does not matter in this moment. The primary reality is this cognitive space in the brains of the participating persons.

If you continue looking around in your everyday world you will probably detect that the everyday world is full of different kinds of cognitively induced entangled states of persons. These internalized structures function like protocols, like scripts, like rules in a game, telling everybody what is expected from him/her/x; and to the extent that people adhere to such internalized protocols, daily life has some structure, has some stability, and enables the planning of behavior where cooperation between different persons is necessary. In a cognitively enabled entangled state the individual person becomes a member of something greater, a super person. Entangled persons can do things which usually are not possible as long as one is acting as a pure individual person.[1]

Entangled Humans and Negative Complexity

Although entangled human persons can in principle enable more complex events, structures, processes, engineering and cultural work than single persons, human entanglement is still limited by the brain capacities as well as by the limits of normal communication. Increasing the amount of meaning-relevant artifacts or increasing the velocity of communication events makes things even worse. There are objective limits for human processing, which can run into negative complexity.

Future is not Waiting

The term ‘future‘ is cognitively empty: there exists nowhere an object which can  be called ‘future’. What we have is some local actual presence (the Now), which the body is turning into internal representations of some kind (becoming the Past), but something like a future does not exist, nowhere. Our knowledge about the future is radically zero.

Nevertheless, because our bodies are part of a physical world (planet, solar system, …), and because our entangled scientific work has identified some regularities of this physical world, these regularities can be used for some predictions of what could happen, with some probability, as assumed states at moments where our clocks show a different time stamp. But because there are many processes running in parallel, composed of billions of parameters which can be tuned in many directions, a really good forecast is not simple and depends on many presuppositions.

Since the appearance of homo sapiens some hundred thousand years ago in Africa, homo sapiens became a game changer which makes all computations nearly impossible. Not in the beginning of its appearance, but in the course of time homo sapiens enlarged its numbers, improved its skills in more and more areas, and meanwhile we know that homo sapiens has indeed started to crash more and more the conditions of its own life. And principled thinking points out that homo sapiens could even crash more than only planet earth. Every exemplar of a homo sapiens has a built-in freedom which at every time allows a decision to behave in a different way (although in everyday life we are mostly following some protocols). And this built-in freedom is guided by actual knowledge, by emotions, and by available resources. The same child can become a great musician, a great mathematician, a philosopher, a great political leader, an engineer, … but if one gives the child no resources, deprives it of important social contexts, or gives it the wrong knowledge, it cannot manifest its freedom in full richness. As a human population we need the best out of all children.

Because the processes of the planet, the solar system etc. are going on, we are in need of good forecasts of possible futures, beyond our classical concepts of sharing knowledge. This is where our vision enters.

VISION: DEVELOPING TOGETHER POSSIBLE FUTURES

To find possible and reliable shapes of possible futures we have to exploit all experiences, all knowledge, all ideas, all kinds of creativity by using maximal diversity. Because present knowledge can be false, as history tells us, we should not rule out those ideas which seem too crazy at first glance. Real innovations are always different from what we are used to at the time. Thus the following text is a first rough outline of the vision:

  1. Find a format
  2. which allows any kinds of people
  3. for any kind of given problem
  4. with at least one vision of a possible improvement
  5. together
  6. to search and to find a path leading from the given problem (Now) to the envisioned improved state (future).
  7. For all needed communication any kind of  everyday language should be enough.
  8. As needed this everyday language should be extendable with special expressions.
  9. These considerations about possible paths into the wanted envisioned future state should continuously be supported  by appropriate automatic simulations of such a path.
  10. These simulations should include automatic evaluations based on the given envisioned state.
  11. As far as possible adaptive algorithms should be available to support the search, finding and identification of the best cases (referenced by the visions)  within human planning.
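The numbered steps above can be read as a search problem: find a path of states leading from the given problem (Now) to the envisioned improved state, supported by simulation and evaluation. A minimal sketch in Python, assuming states are sets of everyday-language facts and changes are simple replacement rules (all facts and rules here are invented placeholders, not part of the vision text):

```python
from collections import deque

def apply_change(state, remove, add):
    """Follow-up state: drop the removed facts, then add the new ones."""
    return frozenset((state - remove) | add)

# The Now and the envisioned improvement, as sets of facts:
start = frozenset({"problem unsolved"})
goal = frozenset({"problem improved"})

# Hypothetical change rules: (facts required and removed, facts added).
changes = [
    (frozenset({"problem unsolved"}), frozenset({"measure proposed"})),
    (frozenset({"measure proposed"}), frozenset({"problem improved"})),
]

def find_path(start, goal, changes):
    """Breadth-first search for a sequence of states from start
    to a state in which the goal facts hold."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if goal <= state:            # vision reached: goal facts hold
            return path
        for remove, add in changes:
            if remove <= state:      # change applicable in this state
                nxt = apply_change(state, remove, add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [nxt]))
    return None                      # no path to the envisioned state

path = find_path(start, goal, changes)
```

The automatic simulations and evaluations of steps 9 and 10 would correspond to generating candidate follow-up states and checking them against the envisioned state, as the subset test does here in miniature.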

REFERENCES or COMMENTS

[1] One of the most common entangled states in daily life is the usage of normal language! A normal language L works only because the rules of usage of this language L are shared by all speaker-hearers of this language, and these rules are cognitive structures (not necessarily conscious, mostly unconscious!).

Continuation

Yes, it will happen 🙂 Here.

KOMEGA REQUIREMENTS No.3, Version 1. Basic Application Scenario – Editing S

ISSN 2567-6458, 26.July – 12.August 2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

As described in the uffmm eJournal the wider context of this software project is a generative theory of cultural anthropology [GCA] which is an extension of the engineering theory called Distributed Actor-Actor Interaction [DAAI]. In the section Case Studies of the uffmm eJournal there is also a section about Python co-learning, mainly dealing with Python programming, and a section about a web-server with Dragon. This document will be part of the Case Studies section.

PDF DOCUMENT

requirements-no3-v1-12Aug2020 (Last update: August 12, 2020)

REVIEWING TARSKI’s SEMANTIC and MODEL CONCEPT. 85 Years Later …

eJournal: uffmm.org, ISSN 2567-6458,
8.August  2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

85 Years Later

The two papers of Tarski which I discuss here were published in 1936. I had already read these papers many years ago, but at that time I could not really work with them. Formally they seemed to be 'correct', but in the light of my 'intuition' the message appeared to me somehow 'weird', not really in conformance with my experience of how knowledge and language work in the real world. But at that time I was not able to explain my intuition to myself sufficiently. Nevertheless, I kept these papers, and some more texts of Tarski, in my bookshelves for an unknown future when my understanding would eventually change…
This happened in the last days.

review-tarski-semantics-models-v1-printed

The Observer-World Framework. Part of Case-Studies Phase 1


ISSN 2567-6458, 16.July 2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

To work within the Generative Cultural Anthropology [GCA] Theory one needs a practical tool which allows the construction of dynamic world models, the storage of these models, and their usage within a simulation game environment together with an evaluation tool. Basic requirements for such a tool will be described here with the example called a Hybrid Simulation Game Environment [HSGE]. To prepare a simulation game one needs an iterative development process which follows some general assumptions. In this paper the subject of discussion is the observer-world framework.

PDF:observer-world-framework-v3 (Corrected Version UTC 08:40 + 2 for the author)


CASE STUDY – SIMULATION GAMES – PHASE 1 – Iterative Development of a Dynamic World Model

ISSN 2567-6458, 19.-30.June 2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

To work within the Generative Cultural Anthropology [GCA] Theory one needs a practical tool which allows the construction of dynamic world models, the storage of these models, and their usage within a simulation game environment together with an evaluation tool. To prepare a simulation game within a Hybrid Simulation Game Environment [HSGE] one needs an iterative development process which is described below.

CASE STUDY – SIMULATION GAMES – PHASE 1: Iterative Development of a Dynamic World Model – Part of the Generative Cultural Anthropology [GCA] Theory

Contents
1 Overview of the Whole Development Process
2 Cognitive Aspects of Symbolic Expressions
3 Symbolic Representations and Transformations
4 Abstract-Concrete Concepts
5 Implicit Structures Embedded in Experience
5.1 Example 1

daai-analysis-simgame-development-v3 (June-30, 2020)

daai-analysis-simgame-development-v2 (June-20, 2020)

daai-analysis-simgame-development-v1 (June-19,2020)


AAI THEORY V2 –A Philosophical Framework

eJournal: uffmm.org,
ISSN 2567-6458, 22.February 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: 23.February 2019 (continued the text)

Last change: 24.February 2019 (extended the text)

CONTEXT

In the overview of the AAI paradigm version 2 you can find this section  dealing with the philosophical perspective of the AAI paradigm. Enjoy reading (or not, then send a comment :-)).

THE DAILY LIFE PERSPECTIVE

The perspective of Philosophy is rooted in the everyday life perspective. With our body we occur in a space with other bodies and objects; different features and properties are associated with the objects, as well as different kinds of relations and changes from one state to another.

From the empirical sciences we have learned to see more details of the everyday life with regard to detailed structures of matter and biological life, with regard to the long history of the actual world, with regard to many interesting dynamics within the objects, within biological systems, as part of earth, the solar system and much more.

A certain aspect of the empirical view of the world is the fact that some biological systems called 'homo sapiens', which emerged only some 300,000 years ago in Africa, show a special property usually called 'consciousness' combined with the ability to 'communicate by symbolic languages'.

Figure 1: General setting of the homo sapiens species (simplified)

As we know today the consciousness is associated with the brain, which in turn is embedded in the body, which  is further embedded in an environment.

Thus those 'things' about which we are 'conscious' are not 'directly' the objects and events of the surrounding real world but 'constructions of the brain' based on actual external and internal sensor inputs as well as already collected 'knowledge'. To qualify the 'conscious things' as 'different' from the assumed 'real things' 'out there' it is common to speak of these brain-generated virtual things either as 'qualia' or, more often, as 'phenomena', which are different from the assumed possible real things somewhere 'out there'.

PHILOSOPHY AS FIRST PERSON VIEW

'Philosophy' has many facets. One of them enters the scene if we take the insight into the general virtual character of our primary knowledge as the primary and irreducible perspective of knowledge. Every other, more special kind of knowledge is necessarily a subspace of this primary phenomenological knowledge.

There is already from the beginning a fundamental distinction possible in the realm of conscious phenomena (PH): there are phenomena which can be 'generated' by the consciousness 'itself', mostly called 'by will', and those which occur and disappear without a direct influence of the consciousness, which are in a certain basic sense 'given' and 'independent', appearing and disappearing on 'their own'. It is common to call these independent phenomena 'empirical phenomena'; they represent a true subset of all phenomena: PH_emp ⊂ PH. Attention: these 'empirical phenomena' are still 'phenomena', virtual entities generated by the brain inside the brain, not directly controllable 'by will'.

There is a further basic distinction which differentiates the empirical phenomena into those PH_emp_bdy which are controlled by some processes in the body (being tired, being hungry, having pain, …) and those PH_emp_ext which are controlled by objects and events in the environment beyond the body (light, sounds, temperature, surfaces of objects, …). Both subsets of empirical phenomena are disjoint: PH_emp_bdy ∩ PH_emp_ext = ∅. Because phenomena usually occur associated with typical other phenomena, there are 'clusters'/'patterns' of phenomena which 'represent' possible events or states.

Modern empirical science has ‘refined’ the concept of an empirical phenomenon by introducing  ‘standard objects’ which can be used to ‘compare’ some empirical phenomenon with such an empirical standard object. Thus even when the perception of two different observers possibly differs somehow with regard to a certain empirical phenomenon, the additional comparison with an ’empirical standard object’ which is the ‘same’ for both observers, enhances the quality, improves the precision of the perception of the empirical phenomena.

From these considerations we can derive the following informal definitions:

  1. Something is ‘empirical‘ if it is the ‘real counterpart’ of a phenomenon which can be observed by other persons in my environment too.
  2. Something is ‘standardized empirical‘ if it is empirical and can additionally be associated with a before introduced empirical standard object.
  3. Something is 'weak empirical' if it is the 'real counterpart' of a phenomenon in my body which can potentially be observed by other persons only indirectly, as causally correlated with the phenomenon.
  4. Something is ‘cognitive‘ if it is the counterpart of a phenomenon which is not empirical in one of the meanings (1) – (3).

It is a common task within philosophy to analyze the space of the phenomena with regard to its structure as well as to its dynamics. Until today there exists no completely accepted theory for this subject. This indicates that it seems to be a 'hard' task.

BRIDGING THE GAP BETWEEN BRAINS

As one can see in figure 1 a brain in a body is completely disconnected from the brain in another body. There is a real, deep ‘gap’ which has to be overcome if the two brains want to ‘coordinate’ their ‘planned actions’.

Luckily the emergence of homo sapiens with the new extended property of ‘consciousness’ was accompanied by another exciting property, the ability to ‘talk’. This ability enabled the creation of symbolic languages which can help two disconnected brains to have some exchange.

But ‘language’ does not consist of sounds or a ‘sequence of sounds’ only; the special power of a language is the further property that sequences of sounds can be associated with ‘something else’ which serves as the ‘meaning’ of these sounds. Thus we can use sounds to ‘talk about’ other things like objects, events, properties etc.

The single brain 'knows' about the relationship between some sounds and 'something else' because the brain is able to 'generate relations' between brain-structures for sounds and brain-structures for something else. These relations are real connections in the brain. Therefore sounds can be related to 'something else', and certain objects, events, properties etc. can become related to certain sounds. But these 'meaning relations' can only 'bridge the gap' to another brain if both brains are using the same 'mapping', the same 'encoding'. This is only possible if the two brains with their bodies share a real world situation RW_S where the perceptions of both brains are associated with the same parts of the real world between both bodies. If this is the case the perceptions P(RW_S) can become somehow 'synchronized' by the shared part of the real world, which in turn is transformed into brain structures, P(RW_S) -> B_S, which represent in the brain the stimulating aspects of the real world. These brain structures B_S can then be associated with some sound structures B_A, written as a relation MEANING(B_S, B_A). Such a relation realizes an encoding which can be used for communication. Communication uses sound sequences exchanged between brains via the body and the air of an environment as 'expressions' which can be recognized as part of a learned encoding, enabling the receiving brain to identify a possible meaning candidate.

DIFFERENT MODES TO EXPRESS MEANING

Following the evolution of communication one can distinguish four important modes of expressing meaning, which will be used in this AAI paradigm.

VISUAL ENCODING

A direct way to express the internal meaning structures of a brain is to use a 'visual code' which represents by some kinds of drawing the visual shapes of objects in space and some attributes of these shapes, which are common to all people who can 'see'. Thus a picture, and then a sequence of pictures like a comic or a storyboard, can communicate simple ideas of situations, participating objects, persons and animals, showing changes in the arrangement of the shapes in the space.

Figure 2: Pictorial expressions representing aspects of the visual and the auditory sense modes

Even with a simple visual code one can generate many sequences of situations which all together can ‘tell a story’. The basic elements are a presupposed ‘space’ with possible ‘objects’ in this space with different positions, sizes, relations and properties. One can even enhance these visual shapes with written expressions of  a spoken language. The sequence of the pictures represents additionally some ‘timely order’. ‘Changes’ can be encoded by ‘differences’ between consecutive pictures.

FROM SPOKEN TO WRITTEN LANGUAGE EXPRESSIONS

Later in the evolution of language, much later, homo sapiens learned to translate the spoken language L_s into a written format L_w using signs for parts of words or even whole words. The possible meanings of these written expressions were no longer directly 'visible'. The meaning was now only available to those people who had learned how these written expressions are associated with intended meanings encoded in the heads of all language participants. Thus merely hearing or reading a language expression would tell the reader either 'nothing', some 'possible meanings', or a 'definite meaning'.

Figure 3: A written textual version in parallel to a pictorial version

If one has only the written expressions then one has to 'know' with which 'meaning in the brain' the expressions have to be associated. And what is very special with the written expressions compared to the pictorial expressions is the fact that the elements of the pictorial expressions are always very 'concrete' visual objects, while the written expressions are 'general' expressions allowing many different concrete interpretations. Thus the expression 'person' can be associated with many thousands of different concrete objects; the same holds for the expressions 'road', 'moving', 'before' and so on. Thus the written expressions are like 'manufacturing instructions' to search for possible meanings and to configure these meanings into a 'reasonable' complex matter. And because written expressions are in general rather 'abstract'/'general', allowing numerous possible concrete realizations, they are very 'economic': they use minimal expressions to build many complex meanings. Nevertheless the daily experience with spoken and written expressions shows that they are continuously candidates for false interpretations.

FORMAL MATHEMATICAL WRITTEN EXPRESSIONS

Besides the written expressions of everyday languages one can observe later in the history of written languages the steady development of a specialized version called ‘formal languages’ L_f with many different domains of application. Here I am  focusing   on the formal written languages which are used in mathematics as well as some pictorial elements to ‘visualize’  the intended ‘meaning’ of these formal mathematical expressions.

Fig. 4: Properties of an acyclic directed graph with nodes (vertices) and edges (directed edges = arrows)

One prominent concept in mathematics is the concept of a ‘graph’. In  the basic version there are only some ‘nodes’ (also called vertices) and some ‘edges’ connecting the nodes.  Formally one can represent these edges as ‘pairs of nodes’. If N represents the set of nodes then N x N represents the set of all pairs of these nodes.

In a more specialized version the edges are 'directed' (like a 'one-way road') and can also be 'looped back' to a node occurring 'earlier' in the graph. If such back-looping arrows occur, a graph is called a 'cyclic graph'.
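The graph notions used here, nodes N, directed edges as pairs from N x N, and the presence of back-looping arrows, can be made concrete in a short sketch. The depth-first cycle check is a standard technique, not taken from the text:

```python
def is_cyclic(nodes, edges):
    """Detect whether a directed graph (nodes, edges as pairs from
    N x N) contains a back-looping arrow, i.e. a cycle."""
    adjacency = {n: [] for n in nodes}
    for a, b in edges:
        adjacency[a].append(b)

    WHITE, GRAY, BLACK = 0, 1, 2     # unvisited, in progress, finished
    color = {n: WHITE for n in nodes}

    def visit(n):
        color[n] = GRAY
        for m in adjacency[n]:
            if color[m] == GRAY:     # back edge: arrow loops to an ancestor
                return True
            if color[m] == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in nodes)

acyclic_example = is_cyclic({1, 2, 3}, {(1, 2), (2, 3)})
cyclic_example = is_cyclic({1, 2, 3}, {(1, 2), (2, 3), (3, 1)})
```

The first graph has no back-looping arrow; adding the edge (3, 1) in the second graph closes a loop and makes it cyclic.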

Fig.5: Directed cyclic graph extended to represent 'states of affairs'

If one wants to use such a graph to describe some 'states of affairs' with their possible 'changes' one can 'interpret' a 'node' as a state of affairs and an arrow as a change which turns one state of affairs S into a new one S' which is minimally different from the old one.

As a state of affairs I understand here a 'situation' embedded in some 'context', presupposing some common 'space'. The possible 'changes' represented by arrows presuppose some dimension of 'time'. Thus if a node n' follows a node n, indicated by an arrow, then the state of affairs represented by the node n' is to be interpreted as following the state of affairs represented by the node n with regard to the presupposed time T 'later', or n < n' with '<' as a symbol for a temporal ordering relation.

Fig.6: Example of a state of affairs with a 2-dimensional space configured as a grid with a black and a white token

The space can be any kind of space. If one assumes as an example a 2-dimensional space configured as a grid, as shown in figure 6, with two tokens at certain positions, one can introduce a language to describe the 'facts' which constitute the state of affairs. In this example one needs 'names for objects', 'properties of objects' as well as 'relations between objects'. A possible finite set of facts for situation 1 could be the following:

  1. TOKEN(T1), BLACK(T1), POSITION(T1,1,1)
  2. TOKEN(T2), WHITE(T2), POSITION(T2,2,1)
  3. NEIGHBOR(T1,T2)
  4. CELL(C1), POSITION(1,2), FREE(C1)

'T1', 'T2', as well as 'C1' are names of objects, 'TOKEN', 'BLACK' etc. are names of properties, and 'NEIGHBOR' is a relation between objects. This results in the equation:

S1 = {TOKEN(T1), BLACK(T1), POSITION(T1,1,1), TOKEN(T2), WHITE(T2), POSITION(T2,2,1), NEIGHBOR(T1,T2), CELL(C1), POSITION(1,2), FREE(C1)}

These facts describe the situation S1. If it is important to describe possible objects 'external to the situation' as factors which can cause some changes, then one can describe these objects as a set of facts in a separate 'context'. In this example this could be two players which can move the black and white tokens and thereby cause a change of the situation. What counts as the situation and what belongs to a context is somewhat arbitrary. If one describes the agriculture of some region one usually would not count the planets and the atmosphere as part of this region, but one knows that e.g. the sun, in combination with the atmosphere, can severely influence the situation.
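For illustration, the fact language L_fact can be encoded directly as Python data, with each fact a tuple of a predicate name and its arguments. One assumption of this sketch: the cell position is written uniformly with the object name as first argument, POSITION(C1,1,2):

```python
# The state S1 as a set of facts; each fact is (predicate, arguments...).
S1 = {
    ("TOKEN", "T1"), ("BLACK", "T1"), ("POSITION", "T1", 1, 1),
    ("TOKEN", "T2"), ("WHITE", "T2"), ("POSITION", "T2", 2, 1),
    ("NEIGHBOR", "T1", "T2"),
    ("CELL", "C1"), ("POSITION", "C1", 1, 2), ("FREE", "C1"),
}

# Simple queries against the state:
tokens = {f[1] for f in S1 if f[0] == "TOKEN"}      # names of all tokens
free_cells = {f[1] for f in S1 if f[0] == "FREE"}   # names of free cells
```

Because a state is just a set, membership tests and set operations directly express questions about, and changes of, the state of affairs.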

Fig.7: Change of a state of affairs given as a state which will be enhanced by a new object

Let us stay with a state of affairs with only a situation, without a context. The state of affairs is then a 'state'. In the example shown in figure 7 I assume a 'change' caused by the insertion of a new black token at position (2,2). Written in the language of facts L_fact we get:

  1. TOKEN(T3), BLACK(T3), POSITION(T3,2,2), NEIGHBOR(T3,T2)

Thus the new state S2 is generated out of the old state S1 by unifying S1 with the set of new facts: S2 = S1 ∪ {TOKEN(T3), BLACK(T3), POSITION(T3,2,2), NEIGHBOR(T3,T2)}. All the other facts of S1 are still 'valid'. In a more general manner one can introduce a change expression with the following format:

<S1, S2, add(S1,{TOKEN(T3), BLACK(T3), POSITION(T3,2,2), NEIGHBOR(T3,T2)})>

This can be read as follows: The follow-up state S2 is generated out of the state S1 by adding to the state S1 the set of facts { … }.

This layout of a change expression can also be used if some facts have to be modified or removed from a state. If for instance for some reason the white token should be removed from the situation one could write:

<S1, S2, subtract(S1,{TOKEN(T2), WHITE(T2), POSITION(T2,2,1)})>

Another notation for this is S2 = S1 - {TOKEN(T2), WHITE(T2), POSITION(T2,2,1)}.

The resulting state S2 would then look like:

S2 = {TOKEN(T1), BLACK(T1), POSITION(T1,1,1), CELL(C1), POSITION(1,2), FREE(C1)}

And a combination of subtraction of facts and addition of facts would read as follows:

<S1, S2, subtract(S1,{TOKEN(T2), WHITE(T2), POSITION(T2,2,1)}), add(S1,{TOKEN(T3), BLACK(T3), POSITION(T3,2,2)})>

This would result in the final state S2:

S2 = {TOKEN(T1), BLACK(T1), POSITION(T1,1,1), CELL(C1), POSITION(1,2), FREE(C1), TOKEN(T3), BLACK(T3), POSITION(T3,2,2)}

These simple examples demonstrate another fact: while facts about objects and their properties are independent of each other, relational facts depend on the state of their object facts. The relation of neighborhood, for example, depends on the participating neighbors. If, as in the example above, the object token T2 disappears, then the relation 'NEIGHBOR(T1,T2)' no longer holds. This points to a hierarchy of dependencies with the 'basic facts' at the 'root' of a situation and all the other facts 'above' the basic facts, or 'higher', depending on them. Thus 'higher order' facts should be added only for the actual state and have to be 're-computed' anew for every follow-up state.
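The change expressions with subtract(…) and add(…) can be sketched as one small function over such fact sets (facts encoded as tuples, an assumption of this sketch). Note that the dependent fact NEIGHBOR(T1,T2) is subtracted explicitly here; in general, higher-order facts would have to be re-computed for every follow-up state:

```python
def change(state, subtract=frozenset(), add=frozenset()):
    """Follow-up state: first remove the subtracted facts,
    then add the new ones."""
    return (set(state) - set(subtract)) | set(add)

S1 = {
    ("TOKEN", "T1"), ("BLACK", "T1"), ("POSITION", "T1", 1, 1),
    ("TOKEN", "T2"), ("WHITE", "T2"), ("POSITION", "T2", 2, 1),
    ("NEIGHBOR", "T1", "T2"),
}

# Remove the white token T2 (and the NEIGHBOR fact depending on it),
# add a new black token T3 at position (2,2):
S2 = change(
    S1,
    subtract={("TOKEN", "T2"), ("WHITE", "T2"), ("POSITION", "T2", 2, 1),
              ("NEIGHBOR", "T1", "T2")},
    add={("TOKEN", "T3"), ("BLACK", "T3"), ("POSITION", "T3", 2, 2)},
)
```

All facts of S1 not subtracted remain 'valid' in S2, exactly as in the set notation of the change expressions.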

If one would specify a context for state S1 saying that there are two players, and one allows for each player actions like 'move', 'insert' or 'delete', then one could make the change from state S1 to state S2 more precise. Assuming the following facts for the context:

  1. PLAYER(PB1), PLAYER(PW1), HAS-THE-TURN(PB1)

In that case one could enhance the change statement in the following way:

<S1, S2, PB1, insert(TOKEN(T3,2,2)), add(S1,{TOKEN(T3), BLACK(T3), POSITION(T3,2,2)})>

This would read as follows: given state S1 the player PB1 inserts a  black token at position (2,2); this yields a new state S2.

With or without a specified context, but with regard to a set of possible change statements, it can be, and usually is, the case that there is more than one option for what can be changed. Some of the main types of changes are the following:

  1. RANDOM
  2. NOT RANDOM, which can be specified as follows:
    1. With PROBABILITIES (classical, quantum probability, …)
    2. DETERMINISTIC
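These modes of change can be pictured as a small dispatcher over the applicable change statements. A hedged Python sketch; the function name and mode labels are illustrative, not from the original text:

```python
import random

def choose_change(options, mode="DETERMINISTIC", probs=None, rng=None):
    """Select one change statement from the applicable options.

    RANDOM        - uniform choice among the options
    PROBABILISTIC - weighted choice according to 'probs'
    DETERMINISTIC - a fixed rule: always the first applicable option
    """
    rng = rng or random.Random()
    if mode == "RANDOM":
        return rng.choice(options)
    if mode == "PROBABILISTIC":
        return rng.choices(options, weights=probs, k=1)[0]
    return options[0]

# A deterministic system always picks the same change for the same state.
picked = choose_change(["insert", "move", "delete"])
```

Note that quantum probabilities, mentioned in the list above, would need a richer model (amplitudes instead of weights); the sketch covers only the classical case.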

Furthermore, if the causing object is an actor which can adapt structurally or even learn locally, then this actor can appear in one time period like a deterministic system, across different collected time periods as an 'oscillating system' with different behavior, or even as a random system with changing probabilities. This makes forecasting systems with adaptive and/or learning components rather difficult.

Another aspect results from the fact that there can be states with one actor which can cause more than one action in parallel, or states with multiple actors which can act simultaneously. In both cases the resulting total change may have to be 'filtered' through some additional rules telling what is 'possible' in a state and what is not. Thus, if in the example of figure 6 both players want to insert a token at position (2,2) simultaneously, then either the rules of the game would forbid such a simultaneous action, or, as in a computer game, simultaneous actions are allowed but the 'geometry of a 2-dimensional space' would not allow two different tokens at the same position.

Another aspect of change is the dimension of time. If the time dimension is not explicitly specified, then a change from some state S_i to a state S_j only marks the follow-up state S_j as later; there is no specific 'metric' of time. If instead a certain 'clock' is specified, then all changes have to be aligned with this 'overall clock'. One can then specify at which point of time t a change will begin and at which point of time t* it will end. If more than one change is specified, these different changes can have different timings.
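With an explicit overall clock, each change can carry its own begin and end time. A minimal sketch; the tuple layout (name, t_begin, t_end) is an assumption made for illustration:

```python
# A timed change is (name, t_begin, t_end): it starts at t_begin and is
# finished at t_end, both measured on one shared 'overall clock'.

def active_changes(changes, t):
    """Return the names of all changes running at clock time t."""
    return [name for (name, t_begin, t_end) in changes if t_begin <= t < t_end]

changes = [("move_token", 0, 2), ("insert_token", 1, 3)]
# At t=1 both changes overlap; at t=2 only the insertion is still running.
```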

THIRD PERSON VIEW

Up until now the point of view for describing a state and the possible changes of states has been the so-called 3rd-person view: what a person can perceive if it is part of a situation and is looking into the situation. It is explicitly assumed that such a person can perceive only the 'surface' of objects, including all kinds of actors. Thus, if the driver of a car steers the car in a certain direction, the 'observing person' can see what happens, but cannot 'look into' the driver to see 'why' he is steering this way or 'what he is planning next'.

A 3rd-person view is assumed to be the ‘normal mode of observation’ and it is the normal mode of empirical science.

Nevertheless there are situations where one wants to 'understand' a bit more of 'what is going on in a system'. Thus a biologist can be interested in understanding which mechanisms 'inside a plant' are responsible for its growth or for some kinds of plant dysfunctions. There are similar cases for understanding the behavior of animals and humans. For instance, it is an interesting question which kinds of 'processes' are available in an animal for 'navigating' in the environment across distances. Even if the biologist can look 'into the body', even 'into the brain', the cells as such do not tell a sufficient story. One has to understand the 'functions' which are enabled by the billions of cells; these functions are complex relations associated with certain 'structures' and certain 'signals'. For this it is necessary to construct an explicit formal (mathematical) model/theory representing all the necessary signals and relations, which can be used to 'explain' the observable behavior and which 'explains' how the billions of cells enable such behavior.

In a simpler, 'relaxed' kind of modeling one would not take into account the properties and behavior of the 'real cells' but would limit the scope to building a formal model which suffices to explain the observable behavior.

This kind of approach, setting up models of possible 'internal' (as such hidden) processes of an actor, can extend the 3rd-person view substantially. Such models are called in this text 'actor models (AM)'.

HIDDEN WORLD PROCESSES

In this text all reported 3rd-person observations are called an 'actor story', independently of whether they are given in a pictorial or a textual mode.

As has been pointed out such actor stories are somewhat ‘limited’ in what they can describe.

It is possible to extend such an actor story (AS)  by several actor models (AM).

An actor story defines the situations in which an actor can occur. This  includes all kinds of stimuli which can trigger the possible senses of the actor as well as all kinds of actions an actor can apply to a situation.

The actor model of such an actor has to enable the actor to handle all these assumed stimuli as well as all these actions in the expected way.

While the actor story can be checked as to whether it describes a process in an empirically 'sound' way, the actor models are either 'purely theoretical' but 'behaviorally sound', or they are also empirically sound with regard to the body of a biological or a technological system.

A serious challenge is the occurrence of adaptive and/or locally learning systems. While the actor story is a finite description of possible states and changes, adaptive and/or locally learning systems can change their behavior while 'living' in the actor story. These changes in behavior cannot be completely 'foreseen'!

COGNITIVE EXPERT PROCESSES

According to the preceding considerations, a homo sapiens as a biological system has, besides many other properties, at least a consciousness and the ability to talk and thereby to communicate with symbolic languages.

Looking at the basic modes of an actor story (AS), one can infer some basic concepts inherently present in the communication.

Without having an explicit model of the internal processes in a homo sapiens system one can infer some basic properties from the communicative acts:

  1. Speaker and hearer presuppose a space within which objects with properties can occur.
  2. Changes can happen which presuppose some temporal ordering.
  3. There is a distinction between concrete things and abstract concepts which correspond to many concrete things.
  4. There is an implicit hierarchy of concepts starting with concrete objects at the 'root level', given as occurrences in a concrete situation. Concepts of 'higher levels' refer to concepts of lower levels.
  5. There are different kinds of relations between objects on different conceptual levels.
  6. The usage of language expressions presupposes structures which can be associated with the expressions as their 'meanings'. The mapping between expressions and their meanings has to be learned by each actor separately, but in cooperation with all the other actors with which the actor wants to share his meanings.
  7. It is assumed that all the processes which enable the generation of concepts, concept hierarchies, relations, meaning relations etc. are unconscious! In consciousness one can use parts of the unconscious structures and processes only under strictly limited conditions.
  8. To 'learn' dedicated matters and to be 'critical' about the quality of what one is learning requires some discipline, some learning methods, and a 'learning-friendly' environment. There is no guaranteed method of success.
  9. There are lots of unconscious processes which can influence understanding, learning, planning, decisions etc. and which until today have not been sufficiently cleared up.

ACTOR-ACTOR INTERACTION ANALYSIS – A rough Outline of the Blueprint

eJournal: uffmm.org,
ISSN 2567-6458, 13.February 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last corrections: 14.February 2019 (added some more keywords; added emphasis for central words)

Change: 5.May 2019 (adding the aspect of simulation and gaming; extending the view of the driving actors)

CONTEXT

An overview of the enhanced AAI theory version 2 can be found here. In this post we talk about the blueprint of the whole AAI analysis process. Here I leave out the topic of actor models (AM); the aspect of simulation and gaming is only mentioned briefly. For these topics see other posts.

THE AAI ANALYSIS BLUEPRINT

Blueprint of the whole AAI analysis process including the epistemological assumptions. Not shown here are the topic of actor models (AM) as well as simulation.

The Actor-Actor Interaction (AAI) analysis is understood here as part of an embracing systems engineering process (SEP), which starts with the statement of a problem (P) which includes a vision (V) of an improved alternative situation. It has then to be analyzed what such a new improved situation S+ looks like and how one can realize certain tasks (T) in an improved way.

DRIVING ACTORS

The driving actors for such an AAI analysis are at least one stakeholder (STH) who communicates a problem P and an envisioned solution (ES) to an expert (EXPaai) with sufficient AAI experience. This expert will take the lead in the process of transforming the problem and the envisioned solution into a working solution (WS).

In the classical industrial case the stakeholder can be a group of managers from some company, and the expert is represented by a whole team of experts from different disciplines, with the AAI perspective as the leading perspective.

In another case, which I will call here the communal case (e.g. a whole city), the stakeholders as well as the experts are members of the communal entity. As in the before-mentioned cases there is some commonly accepted problem P combined with a first envisioned solution ES, which shall be analyzed: What is needed to make it work? Can it work at all? What are the costs? Many other questions can arise. The challenge of including the relevant experience and knowledge of all participants is at the center of the communication, and transforming this available knowledge into some working solution which satisfies all stated requirements for all participants is a central condition for the success of the project.

EPISTEMOLOGY

It has to be taken into account that the driving actors are able to do this job because they have in their bodies brains (BRs) which in turn include some consciousness (CNS). The processes and states outside of consciousness are here called 'unconscious', and the set of all these unconscious processes is called 'the Unconsciousness' (UCNS).

For more details on the cognitive processes see the post on the philosophical framework as well as the post on the bottom-up process. Both posts shall be integrated into one coherent view in the future.

SEMIOTIC SUBSYSTEM

An important set of substructures of the unconsciousness are those which enable symbolic language systems with so-called expressions (L) on one side and so-called non-expressions (~L) on the other. Embedded in a meaning relation (MNR), the set of non-expressions ~L functions as the meaning (MEAN) of the expressions L, written as a mapping MNR: L <—> ~L. Depending on the involved sensors, the expressions L can occur either as acoustic events L_spk, as visual patterns written L_txt, as visual patterns in pictures L_pict, or even in other formats, which will not be discussed here. The non-expressions can occur in every format which the brain can handle.
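The meaning relation MNR between expressions L and non-expressions ~L can be pictured as a learned, actor-specific mapping. A hedged Python sketch; the class name and the internal state labels are invented for illustration:

```python
class MeaningRelation:
    """MNR: L <-> ~L, learned separately by each actor (cf. the text)."""

    def __init__(self):
        self.to_meaning = {}      # expression -> internal (non-expression) state
        self.to_expression = {}   # internal state -> expression

    def learn(self, expression, meaning):
        # Each actor builds up its own encoding, ideally coordinated
        # with the other actors it wants to share meanings with.
        self.to_meaning[expression] = meaning
        self.to_expression[meaning] = expression

    def mean(self, expression):
        # Returns None for expressions this actor has not learned yet.
        return self.to_meaning.get(expression)

actor_a = MeaningRelation()
actor_a.learn("bird", "internal-state-017")
# Another actor may encode the same expression onto a different internal state.
```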

While written (symbolic) expressions L are associated with the intended meaning only through encoded mappings in the brain, the spoken expressions L_spk as well as the pictorial ones L_pict can show some similarities with the intended meaning. Within acoustic expressions one can 'imitate' some sounds which are part of a meaning; pictorial expressions can even 'imitate' the visual experience of the intended meaning to a high degree, but clearly not every kind of meaning.

DEFINING THE MAIN POINT OF REFERENCE

Because the space of possible problems and visions is nearly infinitely large, one has to define for a certain process the problem of the actual process together with the vision of a 'better state of affairs'. This is realized by a description of the problem in a problem document D_p as well as in a vision statement D_v. Because usually a vision does not come without a given context, one has to add all the constraints (C) which have to be taken into account for the possible solution. Examples of constraints are 'non-functional requirements' (NFRs) like 'safety', 'real time', or 'without barriers' (for handicapped people). Part of the non-functional requirements are also definitions of win-lose states as part of a game.

AAI ANALYSIS – BASIC PROCEDURE

If the AAI check has been successful, and there is at least one task T to be done in an assumed environment ENV with at least one executing actor A_exec for this task as well as an assisting actor A_ass, then the AAI analysis can start.

ACTOR STORY (AS)

The main task is to elaborate a complete description of a process which includes a start state S* and a goal state S+, where the participating executive actors A_exec can reach the goal state S+ by doing some actions. While the imagined process p_v is a virtual (= cognitive/mental) model of an intended real process p_e, this virtual model p_v can only be communicated by symbolic expressions L embedded in a meaning relation. Thus the elaboration/construction of the intended process will be realized by using appropriate expressions L embedded in a meaning relation. This can be understood as a basic mapping of sensor-based perceptions of the supposed real world into abstract virtual structures automatically (unconsciously) computed by the brain. A special kind of this mapping is the case of measurement.

In this text especially three types of symbolic expressions L will be used: (i) pictorial expressions L_pict, (ii) textual expressions of a natural language L_txt, and (iii) textual expressions of a mathematical language L_math. The meaning part of these symbolic expressions as well as the expressions themselves will be called here an actor story (AS) with the different modes pictorial AS (PAS), textual AS (TAS), and mathematical AS (MAS).

The basic elements of an actor story (AS) are states which represent sets of facts. A fact is an expression of some defined language L which can be decided as being true in a real situation or not (the past and the future are special cases for such truth clarifications). Some facts can be identified as actors which can act on their own. The transformation from one state to a follow-up state has to be described with sets of change rules. The combination of states and change rules mathematically defines a directed graph (G).

Based on such a graph it is possible to derive an automaton (A) which can be used as a simulator. A simulator allows simulations. A concrete simulation takes a start state S0 as the actual state S* and computes, with the aid of the change rules, one follow-up state S1. This follow-up state then becomes the new actual state S*. Thus the simulation constitutes a continuous process which in general can be infinite. To make the simulation finite one has to define some stop criteria (C*). A simulation can be passive, without any interruption, or interactive. The interactive mode allows different external actors to select certain real values for the available variables of the actual state.
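The simulation loop described above can be sketched in a few lines. This is a minimal sketch, assuming a single deterministic change rule per step and a stop criterion C* given as a predicate; the function names are illustrative:

```python
def simulate(start_state, change_rule, stop, max_steps=1000):
    """Run the loop: S0 -> S1 -> ... until stop(state) holds (the stop
    criterion C*) or max_steps is reached. Returns the trace of states."""
    state = start_state
    trace = [state]
    for _ in range(max_steps):
        if stop(state):
            break
        state = change_rule(state)   # compute the follow-up state
        trace.append(state)
    return trace

# Toy usage: states are integers, the change rule increments, stop at 3.
trace = simulate(0, lambda s: s + 1, stop=lambda s: s >= 3)
# trace == [0, 1, 2, 3]
```

An interactive simulation would replace the fixed `change_rule` with a function that first asks external actors for values of the free variables of the actual state.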

If in the problem definition certain win-lose states have been defined, then one can turn an interactive simulation into a game where the external actors can try to manipulate the process so as to reach one of the defined win-states. As soon as someone (which can be a team) has reached a win-state, the responsible actor (or team) has won. Such games can be repeated to allow the accumulation of wins (or losses).

Gaming allows a far better experience of the advantages or disadvantages of some actor story than a rather loose simulation. Therefore the probability of detecting critical aspects of an actor story with its given constraints is quite high in gaming, which increases the probability of improving the whole concept.

Based on an actor story with a simulator it is possible to increase the cognitive power of exploring the future even more. There exists the possibility to define an oracle algorithm as well as different kinds of intelligent algorithms to support the human actor further. This will be described in other posts.

TAR AND AAR

If the actor story is completed (in a certain version v_i), then one can extract from the story the input-output profiles of every participating actor. This list represents the task-induced actor requirements (TAR). If one is looking for concrete real persons to do the job of an executing actor, the TAR can be used as a benchmark for assessing candidates for this job. The profiles of the real persons are called here actor-actor induced requirements (AAR), that is, the real profile compared with the ideal profile of the TAR. If the 'distance' between AAR and TAR is above some threshold, then the candidate has either to be rejected, or one can offer some training to improve his AAR; the other option is to change the conditions of the TAR so that the TAR comes closer to the AARs.
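One way to make the 'distance' between TAR and AAR concrete is to compare the two profiles as sets of required versus offered capabilities. This is only one possible metric, chosen for this sketch; the text does not fix one, and the capability names below are invented:

```python
def profile_distance(tar, aar):
    """Fraction of the task-induced requirements (TAR) that a candidate
    profile (AAR) does not cover: 0.0 is a perfect match, 1.0 no match."""
    if not tar:
        return 0.0
    return len(tar - aar) / len(tar)

# Illustrative profiles (hypothetical capability names):
tar = {"read_display", "press_button", "react_within_2s"}
aar = {"read_display", "press_button"}

distance = profile_distance(tar, aar)   # one of three requirements missing
acceptable = distance <= 0.2            # the threshold is project-specific
```

In practice the profiles would be the full input-output descriptions extracted from the actor story, and the distance could weight requirements differently.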

The TAR is valid for the executive actors as well as for the assisting actors A_ass.

CONSTRAINTS CHECK

If the actor story in some version v_i has reached a certain completion, one has to check whether the different constraints which accompany the vision document are satisfied through the story: AS_vi |- C.

Such an evaluation is only possible if the constraints can be interpreted with regard to the actor story AS in version v_i in a way that makes the constraints decidable.

For many constraints it can happen that they cannot be decided, or not completely, on the level of the actor story, but only in a later phase of the systems engineering process, when the actor story is implemented in software and hardware.
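The constraints check AS_vi |- C can be sketched as evaluating the decidable constraints as predicates over the actor story and marking the rest as deferred to a later engineering phase. All names below are illustrative, not from the original text:

```python
def check_constraints(actor_story, constraints):
    """constraints: dict mapping a constraint name to a predicate over the
    actor story, or to None if it is not decidable on the AS level."""
    results = {}
    for name, predicate in constraints.items():
        results[name] = predicate(actor_story) if predicate else "deferred"
    return results

# Toy actor story: a list of states, each a set of facts.
story = [{"DOOR_CLOSED"}, {"DOOR_OPEN"}]

results = check_constraints(story, {
    "barrier-free": lambda s: "DOOR_OPEN" in s[-1],  # decidable on AS level
    "real-time": None,                               # only decidable later
})
# results == {"barrier-free": True, "real-time": "deferred"}
```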

MEASURING OF USABILITY

Using the actor story as a benchmark, one can test the usability of the whole process by doing usability tests.

AAI THEORY V2 – EPISTEMOLOGY OF THE AAI-EXPERTS

eJournal: uffmm.org,
ISSN 2567-6458, 26.January 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

An overview of the enhanced AAI theory version 2 can be found here. In this post we talk about the fourth chapter, dealing with the epistemology of actors within an AAI analysis process.

EPISTEMOLOGY AND THE EMPIRICAL SCIENCES

Epistemology is a sub-discipline of general philosophy. While a special discipline in empirical science is defined by a certain sub-set of the real world RW together with empirical measurement methods generating empirical data which can be interpreted by a formalized theory, philosophy is not restricted to a sub-field of the real world. This is important because an empirical discipline has no methods to define itself. Chemistry, e.g., can define by which kinds of measurement it is gaining empirical data, and it can offer different kinds of formal theories to interpret these data, including inferences to forecast certain reactions given certain configurations of matter; but chemistry is not able to explain the way a chemist is thinking, how the language works which a chemist is using, etc. Thus empirical science presupposes a general framework of bodies, sensors, brains, languages etc. to be able to do a very specialized, but as such highly important, job. One can then define 'philosophy' as that kind of activity which tries to clarify all these conditions which are necessary to do science, as well as how cognition works in the general case.

Given this, one can imagine that philosophy is in principle a nearly 'infinite' task. To avoid getting lost in this conceptual infinity it is recommended to start with concrete processes of communication which are oriented towards generating those kinds of texts which can be shown to be 'related to parts of the empirical world' in a decidable way. Such texts are here called 'empirically sound' or 'empirically true'. It is to be supposed that there will be texts for which it seems clear that they are empirically sound, while others will appear 'fuzzy' for such a criterion, and others will appear without any direct relation to empirical soundness.

In the empirical sciences one uses so-called empirical measurement procedures as benchmarks to decide whether one has empirical data or not, and it is commonly assumed that every 'normal observer' can use these data in the same way as every other 'normal observer'. But because individual, single data have nearly no meaning on their own, one needs relations, sets of relations (models) and even complete theories to integrate the data into a context which allows some interpretation and some inferences for forecasting. But these relations, models, or theories cannot directly be inferred from the real world. They have to be created by the observers as 'working hypotheses' which can fit with the data or not. And these constructions are grounded in highly complex cognitive processes which follow their own built-in rules and which are mostly not conscious. 'Cognitive processes' in biological systems, especially in human persons, are completely generated by a brain and constitute therefore a 'virtual world' of their own. This cognitive virtual world is not the result of a 1-to-1 mapping from the real world into the brain states. This becomes important the moment the brain maps this virtual cognitive world into some symbolic language L. While the symbols of a language (sounds or written signs or …) as such have no meaning, the brain enables a 'coding', a 'mapping' from symbolic expressions into different states of the brain. In the 'light' of such encodings the symbolic expressions have some meaning. Besides the fact that different observers can have different encodings, it is always an open question whether the encoded meaning of the virtual cognitive space has something to do with some part of the empirical reality. Empirical data generated by empirical measurement procedures can help to coordinate the virtual cognitive states of different observers with each other, but this coordination is not an automatic process.

Empirically sound language expressions are difficult to get and are therefore of high value for the survival of mankind. To generate empirically sound formal theories is even more demanding, and until today there exists no commonly accepted concept of the right format of an empirically sound theory. In an era which calls itself 'scientific' this is a very strange fact.

EPISTEMOLOGY OF THE AAI-EXPERTS

Applying these general considerations to the AAI experts trying to construct an actor story which describes at least one possible path from a start state to a goal state, one can pick up the different languages the AAI experts are using and ask under which conditions these languages have some 'meaning', and under which conditions these meanings can be called 'empirically sound'.

In this book three different ‘modes’ of an actor story will be distinguished:

  1. A textual mode using some ordinary everyday language, thus using spoken language (stored in an audio file) or written language as a text.
  2. A pictorial mode using a ‘language of pictures’, possibly enhanced by fragments of texts.
  3. A mathematical mode using graphical presentations of 'graphs', enhanced by symbolic expressions (text), or symbolic expressions only.

For every mode it has to be shown how an AAI expert can generate an actor story out of the virtual cognitive world of his brain and how it is possible to decide the empirical soundness of the actor story.

ACTOR-ACTOR INTERACTION [AAI] WITHIN A SYSTEMS ENGINEERING PROCESS (SEP). An Actor Centered Approach to Problem Solving

eJournal: uffmm.org, ISSN 2567-6458
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

ATTENTION: The current version can be found HERE.

Draft version 22.June 2018

Update 26.June 2018 (Chapter AS-AM Summary)

Update 4.July 2018 (Chapter 4 Actor Model; improving the terminology of environments with actors, actors as input-output systems, basic and real interface, a first typology of input-output systems…)

Update 17.July 2018 (Preface, Introduction new)

Update 19.July 2018 (Introduction final paragraph!, new chapters!)

Update 20.July 2018 (Disentanglement of chapter ‘Simulation & Verification’ into two independent chapters; corrections in the chapter ‘Introduction’; corrections in chapter ‘AAI Analysis’; extracting ‘Simulation’ from chapter ‘Actor Story’ to new chapter ‘Simulation’; New chapter ‘Simulation’; Rewriting of chapter ‘Looking Forward’)

Update 22.July 2018 (Rewriting the beginning of the chapter ‘Actor Story (AS)’, not completed; converting chapter ‘AS+AM Summary’ to ‘AS and AM Philosophy’, not completed)

Update 23.July 2018 (Attaching a new chapter with a Case Study illustrating an actor story (AS). This case study is still unfinished. It is a case study of  a real project!)

Update 7.August 2018 (Modifying chapter Actor Story, the introduction)

Update 8.August 2018 (Modifying chapter  AS as Text, Comic, Graph; especially section about the textual mode and the pictorial mode; first sketch for a mapping from the textual mode into the pictorial mode)

Update 9.August 2018 (Modification of the section ‘Mathematical Actor Story (MAS) in chapter 4).

Update 11.August 2018 (Improving chapter 3 ‘Actor Story; nearly complete rewriting of chapter 4 ‘AS as text, comic, graph’.)

Update 12.August 2018 (Minor corrections in the chapters 3+4)

Update 13.August 2018 (I am still caught up in chapters 3+4. In chapter  the cognitive structure of the actors has been further enhanced; in chapter 4 a complete example of a mathematical actor story could now be attached.)

Update 14.August 2018 (Minor corrections to chapters 4 + 5; change-statements define for each state individual combinatorial spaces (a little bit like a quantum state); whether and how these spaces will be concretized/realized depends completely on the participating actors)

Update 15.August 2018 (Canceled the appendix with the case study stub and replaced it with an overview for  a supporting software tool which is needed for the real usage of this theory. At the moment it is open who will write the software.)

Update 2.October 2018 (Configuring the whole book now with 3 parts: I. Theory, II. Application, III. Software. Gerd has his focus on part I, Zeynep will focus on part II and 'somebody' will focus on part III (in the worst case we will — nevertheless — have a minimal version :-)). For a first quick overview about everything read the 'Preface' and the 'Introduction'.)

Update 4.November 2018 (Rewriting the Introduction (and some minor corrections in the Preface). The idea of the rewriting was to address all the topics which will be discussed in the book and to point out the logical connections between them. This induces some wrong links in the following chapters, which are not yet updated. Some chapters are still completely missing. But improving the clearness of the focus and the logical inter-dependencies helps a lot to elaborate the missing texts. Another change is the wording of the title. Until now it has been difficult to find a title which exactly matches the content. The new proposal shows the focus 'AAI' but lists the keywords of the main topics within AAI analysis, because these topics are usually not associated with AAI.)

ACTOR-ACTOR INTERACTION [AAI]. An Actor Centered Approach to Problem Solving. Combining Engineering and Philosophy

by

GERD DOEBEN-HENISCH in cooperation with  LOUWRENCE ERASMUS, ZEYNEP TUNCER

LATEST  VERSION AS PDF

BACKGROUND INFORMATION 19.Dec.2018: Application domain ‘Communal Planning and e-Gaming’

BACKGROUND INFORMATION 24.Dec.2018: The AAI-paradigm and Quantum Logic

PRE-VIEW: NEW EXPANDED AAI THEORY 23.January 2019: Outline of the new expanded AAI paradigm. Before re-writing the main text with these ideas, the new advanced AAI theory will first be tested during the summer of 2019 within a lecture with student teams as well as in several workshops outside the Frankfurt University of Applied Sciences with members of different institutions.

AASE – Actor-Actor Systems Engineering. Theory & Applications. Micro-Edition (Vers.9)

eJournal: uffmm.org, ISSN 2567-6458
13.June  2018
Email: info@uffmm.org
Authors: Gerd Doeben-Henisch, Zeynep Tuncer,  Louwrence Erasmus
Email: doeben@fb2.fra-uas.de
Email: gerd@doeben-henisch.de

PDF

CONTENTS

1 History: From HCI to AAI …
2 Different Views …
3 Philosophy of the AAI-Expert …
4 Problem (Document) …
5 Check for Analysis …
6 AAI-Analysis …
6.1 Actor Story (AS)
6.1.1 Textual Actor Story (TAS)
6.1.2 Pictorial Actor Story (PAT)
6.1.3 Mathematical Actor Story (MAS)
6.1.4 Simulated Actor Story (SAS)
6.1.5 Task Induced Actor Requirements (TAR)
6.1.6 Actor Induced Actor Requirements (UAR)
6.1.7 Interface-Requirements and Interface-Design
6.2 Actor
6.2.1 Actor and Actor Story
6.2.2 Actor Model
6.2.3 Actor as Input-Output System
6.2.4 Learning Input-Output Systems
6.2.5 General AM
6.2.6 Sound Functions
6.2.7 Special AM
6.2.8 Hypothetical Model of a User – The GOMS Paradigm
6.2.9 Example: An Electronically Locked Door
6.2.10 A GOMS Model Example
6.2.11 Further Extensions
6.2.12 Design Principles; Interface Design
6.3 Simulation of Actor Models (AMs) within an Actor Story (AS)
6.4 Assistive Actor-Demonstrator
6.5 Approaching an Optimum Result
7 What Comes Next: The Real System
7.1 Logical Design, Implementation, Validation
7.2 Conceptual Gap In Systems Engineering?
8 The AASE-Paradigm …
References

Abstract

This text is based on the paper "AAI – Actor-Actor Interaction. A Philosophy of Science View" from 3.Oct.2017 and on version 11 of the paper "AAI – Actor-Actor Interaction. An Example Template", and it transforms these views into the new paradigm 'Actor-Actor Systems Engineering', understood as a theory as well as a paradigm for an infinite set of applications. In analogy to the slogan 'Object-Oriented Software Engineering (OO SWE)', one can understand the new acronym AASE as a systems engineering approach where actor-actor interactions are the base concepts for the whole engineering process. Furthermore it is a clear intention to view the topic AASE explicitly from the point of view of a theory (as understood in philosophy of science) as well as from the point of view of possible applications (as understood in systems engineering). Thus the classical term of Human-Machine Interaction (HMI), or even the older Human-Computer Interaction (HCI), is now embedded within the new AASE approach. The same holds for the fuzzy discipline of Artificial Intelligence (AI) and the subset of AI called Machine Learning (ML). Although the AASE approach is only at its beginning, one can already see how powerful this new conceptual framework is.

ACTOR-ACTOR INTERACTION. Philosophy of the Actor

eJournal: uffmm.org, ISSN 2567-6458
16.March 2018
Email: info@uffmm.org
Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de
Frankfurt University of Applied Sciences (FRA-UAS)
Institut for New Media (INM, Frankfurt)

PDF

CONTENTS

I    A Vision as a Problem to be Solved
II   Language, Meaning & Ontology
     II-A  Language Levels
     II-B  Common Empirical Matter
     II-C  Perceptual Levels
     II-D  Space & Time
     II-E  Different Language Modes
     II-F  Meaning of Expressions & Ontology
     II-G  True Expressions
     II-H  The Congruence of Meaning
III  Actor Algebra
IV   World Algebra
V    How to continue
VI   References

Abstract

As preparation for this text one should read the chapter about the basic layout of an Actor-Actor Analysis (AAA) as part of a systems engineering process (SEP). This text describes which internal conditions one has to assume for an actor who uses a language to talk about his observations of the world to someone else in a verifiable way. Topics explained in this text are e.g. ‘language’, ‘meaning’, ‘ontology’, ‘consciousness’, ‘true utterance’, and ‘synonymous expression’.

AAI – Actor-Actor Interaction. A Toy-Example, No.1

eJournal: uffmm.org, ISSN 2567-6458
13.Dec.2017
Email: info@uffmm.org

Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Contents

1 Problem
2 AAI-Check
3 Actor-Story (AS)
3.1 AS as a Text
3.2 Translation of a Textual AS into a Formal AS
3.3 AS as a Formal Expression
3.4 Translation of a Formal AS into a Pictorial AS
4 Actor-Model (AM)
4.1 AM for the User as a Text
4.2 AM for the System as a Text
5 Combined AS and AM as a Text
5.1 AM as an Algorithm
6 Simulation
6.1 Simulating the AS
6.2 Simulating the AM
6.3 Simulating AS with AM
7 Appendix: Formalisms
7.1 Set of Strings
7.2 Predicate Language
8 Appendix: The Meaning of Expressions
8.1 States
8.2 Changes by Events

Abstract

Following the general concepts of the paper ‘AAI – Actor-Actor Interaction. A Philosophy of Science View’ from 3.Oct.2017, this paper illustrates a simple application where the difference as well as the interaction between an actor story and several actor models is shown. The details of interface design as well as usability testing are not part of this example. (This example replaces the paper with the title ‘AAI – Case Study Actor Story with Actor Model. Simple Grid-Environment’ from 15.Nov.2017.) One special point is the meaning of the formal expressions of the actor story.

Attention: This toy example is not yet fully in conformance with the newly published Case-Study-Template

To read the full text see PDF

Clearly, one can debate whether a ‘toy example’ makes sense, but the complexity of the concepts in this AAI approach is too great to illustrate them from the beginning with a realistic example without losing the idea. The author of the paper has tried many, also very advanced, versions in recent years, and this is the first time that he himself has the feeling that at least the idea is now clear enough. And from teaching students it is very clear: if you cannot explain an idea in a toy example, you will never be able to apply it to really big problems…

 

AAI – Actor-Actor Interaction. A Philosophy of Science View

eJournal: uffmm.org, ISSN 2567-6458

Gerd Doeben-Henisch
info@uffmm.org
gerd@doeben-henisch.de

PDF

ABSTRACT

On the cover page of this blog you find a first general view of the subject matter of an integrated engineering approach for the future. Here we give a short description of the main idea of the analysis phase of systems engineering and how this will be realized within the actor-actor interaction paradigm as described in this text.

INTRODUCTION

Overview of the analysis phase of systems engineering as realized within an actor-actor interaction paradigm

As you can see in figure Nr.1, there are the following main topics within the Actor-Actor Interaction (AAI) paradigm as used in this text (Comment: the more traditional term is Human-Machine Interaction (HMI)):

Triggered by a problem document D_p from the problem phase (P) of the engineering process, the AAI experts have to analyze what the potential requirements following from this document are, all the while communicating with the stakeholder to keep in touch with the stakeholder’s hidden intentions.

The idea is to identify at least one task (T) with at least one goal state (G) which shall be reached after running the task.

A task is assumed to represent a sequence of states (at least a start state and a goal state) which can have more than one option in every state, not excluding repetitions.

Every task presupposes some context (C) which gives the environment for the task.

The number of tasks and their length is in principle not limited, but there can be certain constraints (CS) which have to be fulfilled, required by the stakeholder or by some other important rules or laws. Such constraints will probably limit the number of tasks as well as their length.
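The notions introduced above (a task T with a start state and a goal state G, a context C, and constraints CS) can be sketched as a minimal Python data structure. All concrete names and the example task are hypothetical illustrations, not part of the AAI papers:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a task (T) is a sequence of states from a start state
# to a goal state (G), embedded in a context (C) and subject to constraints
# (CS), here modelled as predicates over the task.

@dataclass
class Task:
    name: str
    states: list                                       # first = start, last = goal (G)
    context: dict = field(default_factory=dict)        # environment C
    constraints: list = field(default_factory=list)    # predicates CS: Task -> bool

    def goal_state(self):
        return self.states[-1]

    def satisfies_constraints(self) -> bool:
        return all(cs(self) for cs in self.constraints)

# Example: a two-state task whose only constraint limits its length.
t = Task("open-door", states=["door-closed", "door-open"],
         constraints=[lambda task: len(task.states) <= 10])
print(t.goal_state())              # door-open
print(t.satisfies_constraints())   # True
```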

Actor Story

Every task, as a sequence of states, can be viewed as a story which describes a process. A story is a text (TXT) which is static and hides the implicit meaning in the brains of the participating actors. Only if an actor has some (learned) understanding of the language used is the actor able to translate the perceptions of the process into an appropriate text and, vice versa, the text into corresponding perceptions, or equivalently ‘thoughts’ representing the perceptions.

In this text it is assumed that a story describes only the observable behavior of the participating actors, not their possible internal states (IS). To describe the internal states (IS) it is further assumed that one describes them in a new text called an actor model (AM). The usual story is called an actor story (AS). Thus the actor story (AS) is the environment for the actor models (AM).

In this text three main modes of actor stories are distinguished:

  1. An actor story written in some everyday language L_0 called AS_L0 .
  2. A translation of the everyday language L_0 into a mathematical language L_math which can represent graphs, called AS_Lmath.
  3. A translation of the hidden meaning which resides in the brains of the AAI-experts into a pictorial language L_pict (like a comic strip), called AS_Lpict.

To make the relationship between the graph version AS_Lmath and the pictorial version AS_Lpict visible one needs an explicit mapping Int from one version into the other, like Int : AS_Lmath <—> AS_Lpict. This mapping Int works like a lexicon from one language into another.
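The lexicon-like mapping Int can be sketched as a two-way dictionary between graph-node labels and picture identifiers. All concrete labels and file names here are invented for illustration:

```python
# Hypothetical sketch of Int : AS_Lmath <--> AS_Lpict as a two-way lexicon.
# The labels on the left and the panel identifiers on the right are invented.

math_to_pict = {
    "state:door-closed": "panel-01.png",
    "state:door-open":   "panel-02.png",
}
pict_to_math = {v: k for k, v in math_to_pict.items()}  # inverse direction

def Int(expr: str) -> str:
    """Translate in either direction, like a lexicon between two languages."""
    return math_to_pict.get(expr) or pict_to_math[expr]

print(Int("state:door-open"))   # panel-02.png
print(Int("panel-02.png"))      # state:door-open
```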

From a philosophy of science point of view one has to consider that the different kinds of actor stories have a meaning which is rooted in the intended processes assumed to be necessary for the realization of the different tasks. The processes as such are dynamic, but the stories as such are static. Thus a stakeholder (SH) or an AAI expert who wants to get some understanding of the intended processes has to rely on his internal brain simulations associated with the meaning of these stories. Because every actor has its own internal simulation, which cannot be perceived by the other actors, there is some probability that the simulations of the different actors differ. This can cause misunderstandings, errors, and frustration. (Comment: This problem has been discussed in [DHW07].)

One remedy to minimize such errors is the construction of automata (AT) derived from the math mode AS_Lmath of the actor stories. Because the math mode represents a graph, one can derive (Der) from this version directly (and automatically) the description of an automaton which can completely simulate the actor story; thus one can assume Der(AS_Lmath) = AT_AS_Lmath.

But, from the point of view of philosophy of science, this derived automaton AT_AS_Lmath is still only a static text. This text describes the potential behavior of an automaton AT. Taking a real computer (COMP), one can feed this real computer with the description of the automaton AT_AS_Lmath and make the real computer behave like the described automaton. If we do this, then we have a real simulation (SIM) of the theoretical behavior of the theoretical automaton AT, realized by the real computer COMP. Thus we have SIM = COMP(AT_AS_Lmath). (Comment: These ideas have been discussed in [EDH11].)
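The derivation Der and the simulation SIM described above can be sketched as follows. The encoding of AS_Lmath as labelled edges and all concrete state and event names are assumptions made for this illustration:

```python
# Hypothetical sketch: Der takes the graph version AS_Lmath (encoded here as
# labelled edges) and yields an automaton AT as a transition table; a program
# stepping through that table on a real computer is the real simulation SIM.

as_lmath = [                       # (state, event, successor-state) edges
    ("start", "approach", "at-door"),
    ("at-door", "open", "door-open"),
    ("door-open", "enter", "goal"),
]

def Der(graph):
    """Derive the automaton AT_AS_Lmath: a transition function as a dict."""
    return {(s, e): s2 for (s, e, s2) in graph}

def SIM(automaton, start, events):
    """Run the derived automaton: the dynamic, visible simulation."""
    state, trace = start, [start]
    for e in events:
        state = automaton[(state, e)]
        trace.append(state)
    return trace

at = Der(as_lmath)
print(SIM(at, "start", ["approach", "open", "enter"]))
# ['start', 'at-door', 'door-open', 'goal']
```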

Such a real simulation is dynamic and visible for everybody. All participating actors can see the same simulation and if there is some deviation from the intention of the stakeholder then this can become perceivable for everybody immediately.

Actor Model

As mentioned above the actor story (AS) describes only the observable behavior of some actor, but not possible internal states (IS) which could be responsible for the observable behavior.

If necessary it is possible to define for every actor an individual actor model; indeed one can define more than one model to explore the possibilities of different internal structures to enable a certain behavior.

The general pattern of actor models in this text follows the concept of input-output systems (IOSYS), which are in principle able to learn. What the term ‘learning’ designates concretely will be explained in later sections. The same holds for the terms ‘intelligent’ and ‘intelligence’.

The basic assumptions about input-output systems used here read as follows:

Def: Input-Output System (IOSYS)

IOSYS(x) iff x = <I, O, IS, phi>
phi : I x IS —> IS x O
I  := Input
O  := Output
IS := Internal States
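The signature phi : I x IS —> IS x O can be sketched directly in code. The counter behavior below is an invented example of a system whose internal state accumulates its input history, not a definition from the AAI papers:

```python
# Minimal sketch of an input-output system IOSYS(x) iff x = <I, O, IS, phi>
# with phi : I x IS --> IS x O. The concrete counting behavior is invented.

def phi(inp: str, internal_state: int):
    """One step: map (input, internal state) to (new internal state, output)."""
    new_is = internal_state + 1        # the system 'remembers' how many inputs it saw
    out = f"seen {new_is} inputs, last: {inp}"
    return new_is, out

# Drive the system: thread the internal state through successive steps.
is_ = 0
for inp in ["a", "b"]:
    is_, out = phi(inp, is_)
print(out)   # seen 2 inputs, last: b
```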

As in the case of the actor story (AS), the primary descriptions of actor models (AM) are static texts. To make the hidden meanings of these descriptions ‘explicit’ and ‘visible’ one has again to convert the static texts into descriptions of automata, which can be fed into real computers which in turn simulate the behavior of these theoretical automata as a real process.

Combining the real simulation of an actor story with the real simulations of all the participating actors described in the actor models can show a dynamic, impressive process which is fully visible to all collaborating stakeholders and AAI experts.

Testing

Having all actor stories and actor models at hand, ideally implemented as real simulations, one has to test the interaction of the elaborated actors with real actors who are intended to work within these explorative stories and models. This is done by actor tests (formerly: usability tests) where (i) real actors are confronted with real tasks and have to perform in the intended way, and (ii) real actors are interviewed with questionnaires about their subjective feelings during task completion.

Every such test will yield some new insights into how to change the settings a bit to eventually gain some improvements. Repeating these cycles of designing, testing, and modifying can generate a finite set of test results T, where possibly one subset is the ‘best’ compared to all the others. This can give some security that this design is probably the ‘relative best design’ with regard to T.
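The design-test-modify cycle and the selection of the ‘relative best design’ with regard to T can be sketched as follows. The design names and scores are invented placeholders for the outcomes of real actor tests:

```python
# Hypothetical sketch of the cycle above: run an actor test per design
# variant, collect the finite set of test results T, and select the
# 'relative best design' with regard to T. All scores are invented.

def run_actor_test(design: str) -> float:
    """Stand-in for a real actor test returning a performance score."""
    scores = {"design-A": 0.61, "design-B": 0.83, "design-C": 0.74}
    return scores[design]

T = {d: run_actor_test(d) for d in ["design-A", "design-B", "design-C"]}
best = max(T, key=T.get)   # relative best design with regard to T
print(best)                # design-B
```

Note that ‘best’ here is only relative to the finite set T; a further cycle with new designs or new test subjects could shift the result.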

Further Readings:

  1. Analysis
  2. Simulation
  3. Testing
  4. User Modeling
  5. User Modeling and AI

For a newer version of the AAI text see HERE.

REFERENCES

[DHW07] G. Doeben-Henisch and M. Wagner. Validation within safety critical systems engineering from a computation semiotics point of view. In Proceedings of the IEEE Africon 2007 Conference, pages 1–7, 2007.
[EDH11] Louwrence Erasmus and Gerd Doeben-Henisch. A theory of the system engineering process. In ISEM 2011 International Conference. IEEE, 2011.

EXAMPLE

For a toy-example to these concepts please see the post AAI – Actor-Actor Interaction. A Toy-Example, No.1