LOGIC. The Theory Of Inquiry (1938) by John Dewey – An oksimo Review – Part 1

eJournal: uffmm.org, ISSN 2567-6458, Aug 16 -Aug 18, 2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

SCOPE

In the uffmm review section, different papers and books are discussed from the point of view of the oksimo paradigm. [2] Here the author reads the book “Logic. The Theory Of Inquiry” by John Dewey (1938). [1]

PREFACE DEWEY 1938/9

If one looks at the time span between Dewey’s first published book from 1887 (Psychology) and 1938 (Logic), we have 51 years of experience. Thus this book about logic can be seen as a book digesting a manifold of knowledge from a very special point of view: logic as a theory of inquiry.

And because Dewey is qualified as one of the “primary figures associated with the philosophy of pragmatism” [3], it is no surprise that in his preface to the book ‘Logic …’ [1] he mentions not only as one interest the “… interpretation of the forms and formal relations that constitute the standard material of logical tradition” (cf. p.1), but within this perspective he particularly underlines the attention to “… the principle of the continuum of inquiry” (cf. p.1).

If one sees, like Dewey, the “basic conception of inquiry” as the “determination of an indeterminate situation” (cf. p.1), then the implicit relations can enable “a coherent account of the different propositional forms to be given”. This provides a theoretical interface to logical thinking as thinking in inferences, as well as a philosophical interface to pragmatism as a kind of inquiry which sees strong relations between the triggering assumptions and the possible consequences created by agreed procedures leading from the given and expected to the final consequences.

Dewey himself is very skeptical about the term ‘Pragmatism’, because
“… the word lends itself [perhaps] to misconception”, so “that it seemed advisable to avoid its use.” (cf. p.2) But Dewey does not stay with a simple avoidance; he gives a “proper interpretation” of the term ‘pragmatic’ in the way that “the function of consequences” can be interpreted as “necessary tests of the validity of propositions, provided these consequences are operationally instituted and are such as to resolve the specific problem evoking the operations.” (cf. p.2)

Thus Dewey assumes the following elements of a pragmatic minded process of inquiry:

  1. A pragmatic inquiry is a process leading to some consequences.
  2. These consequences can be seen as tests of the validity of propositions.
  3. As a necessary condition that a consequence can be qualified as a test of assumed propositions one has to assume  that “these consequences are operationally instituted and are such as to resolve the specific problem”.
  4. For consequences which differ from the assumed propositions [represented by some expressions] to be qualified as confirming an assumed validity of those propositions, the assumed validity must be representable as an expectation of possible outcomes which are observably decidable.

In other words: some researchers assume that some propositions represented by some expressions are valid, because they are convinced of this by their commonly shared observations. They associate these assumed propositions with an expected outcome, represented by some expressions which the researchers can interpret in such a way that they are able to decide whether an upcoming situation can be judged as that situation which is expected as a valid outcome (= consequence). Then there must exist some agreed procedures (operationally instituted) whose application to the given starting situation produces the expected outcome (= consequences). Thus the whole process of a start situation with a given expectation, together with given procedures, can generate a sequence of situations following one another, with an expected outcome after finitely many situations.

If one interprets these agreed procedures as inference rules and the assumed expressions as assumptions and expectations then the whole figure can be embedded in the usual pattern of inferential logic, but with some strong extensions.

Dewey is quite optimistic about the conformity of this pragmatic view of an inquiry with a view of logic: “I am convinced that acceptance of the general principles set forth will enable a more complete and consistent set of symbolizations than now exists to be made.” (cf. p.2) But he points to one aspect which would be necessary for a pragmatically inspired view of logic and which is not yet realized in ‘normal logic’: “the need for development of a general theory of language in which form and matter are not separated.” This is a very strong point, because the architecture of modern logic depends fundamentally on the complete abandonment of the meaning of language; the only remaining shadow of meaning resides in the assumption of the property of being ‘true’ or ‘false’ related to expressions (not propositions!). To re-introduce ‘meaning’ into logic by the usage of ‘normal language’ would require a complete rewriting of the whole of modern logic.

At the time Dewey wrote these lines in 1938 there was not the faintest idea in logic of what such a rewriting of the whole of logic could look like.

With the new oksimo paradigm there could perhaps exist a slight chance to do it. Why? Here are the main arguments:

  1. The oksimo paradigm assumes an inference process leading from some assumed starting situation to some consequences generated by the application of some agreed change-rules.
  2. All situations are assumed to have a twofold nature: (i) primarily they are given as expressions of some language (it can be a normal language!); (ii) secondarily these expressions are part of the used normal language, where every researcher is assumed to have a ‘built-in’ meaning function which has collected enough ‘meaning’ during his/her individual learning to allow a symbolically enabled cooperation with other researchers.
  3. Every researcher can judge at any time whether a given or inferred situation is in agreement with his interpretation of the expressions and their relation to the given or considered possible situation.
  4. If the researchers assume in the beginning additionally an expectation (goal/ vision) of a possible outcome (possible consequence), then it is possible at every point of the sequence to judge to which degree the actual situation corresponds to the expected situation.
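Point 4 above, the judgment of the degree to which an actual situation corresponds to the expected one, can be sketched in code. This is only an illustration under the assumption that situations and visions are represented as sets of language expressions; the function name `goal_degree` is my own, not part of the oksimo software.

```python
def goal_degree(situation: set, vision: set) -> float:
    """Fraction of the vision expressions already contained in the situation."""
    if not vision:
        return 1.0  # an empty vision is trivially satisfied
    return len(vision & situation) / len(vision)

state = {"Birds can fly", "James is a bird"}
vision = {"James can fly", "James is a bird"}
print(goal_degree(state, vision))  # 0.5: one of the two vision expressions holds
```

A degree of 1.0 would mean the expected situation is fully contained in the actual one.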

The second requirement of Dewey for the usage of logic for a pragmatic inquiry was given in the statement  “that an adequate set of symbols depends upon prior institution of valid ideas of the conceptions and relations that are symbolized.”(cf. p.2)

Thus not only the usage of normal language is required but also some presupposed knowledge.  Within the oksimo paradigm it is possible to assume as much presupposed knowledge as needed.

RESULTS SO FAR

After reading the preface to the book it seems that the pragmatic view of inquiry combined with some  idea of modern logic can directly be realized within the oksimo paradigm.

The following posts will show whether this is a good working hypothesis or not.

COMMENTS

[1] John Dewey, Logic. The Theory Of Inquiry, New York, Henry Holt and Company, 1938  (see: https://archive.org/details/JohnDeweyLogicTheTheoryOfInquiry with several formats; I am using the kindle (= mobi) format: https://archive.org/download/JohnDeweyLogicTheTheoryOfInquiry/%5BJohn_Dewey%5D_Logic_-_The_Theory_of_Inquiry.mobi . This is for the direct work with a text very convenient.  Additionally I am using a free reader ‘foliate’ under ubuntu 20.04: https://github.com/johnfactotum/foliate/releases/). The page numbers in the text of the review — like (p.13) — are the page numbers of the ebook as indicated in the ebook-reader foliate.(There exists no kindle-version for linux (although amazon couldn’t work without linux servers!))

[2] Gerd Doeben-Henisch, 2021, uffmm.org, THE OKSIMO PARADIGM
An Introduction (Version 2), https://www.uffmm.org/wp-content/uploads/2021/03/oksimo-v1-part1-v2.pdf

[3] John Dewey, Wikipedia [EN]: https://en.wikipedia.org/wiki/John_Dewey

Continuation

See part 2 HERE.

MEDIA

Here is a spontaneous recording of the author, talking ‘unplugged’ into a microphone about how he would describe the content of the text above in a few words. It’s not perfect, but it’s ‘real’: we all are real persons, not perfect, but we have to fight for ‘truth’ and a better life while being ‘imperfect’ … take it as ‘fun’ 🙂

OKSIMO MEETS POPPER. The Generalized Oksimo Theory Paradigm

eJournal: uffmm.org
ISSN 2567-6458, April 5, 2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last changes: Small corrections, April 8, 2021

CONTEXT

This text is part of a philosophy of science  analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive  posts dedicated to the HMI-Analysis for  this software.

THE GENERALIZED OKSIMO THEORY PARADIGM
Figure: Overview of the Generalized Oksimo Paradigm

In the preceding sections it has been shown that the oksimo paradigm fits in principle into the theory paradigm as discussed by Popper. This is possible because some of the concepts used by Popper have been re-interpreted by re-analyzing the functioning of the symbolic dimension. All the requirements of Popper could be shown to work, but now in an even more extended way.

SUSTAINABLE FUTURE

To describe the oksimo paradigm it is not necessary to mention, as a wider context, the general perspective of sustainability as described by the United Nations [UN]. [1] But if one understands the oksimo paradigm more deeply and knows that of the 17 sustainable development goals [SDGs] the fourth goal [SDG4] is understood by the UN as the central key for the development of all the other SDGs [2], then one can understand this as an invitation to think about that kind of knowledge which could be the ‘kernel technology’ for sustainability. A ‘technology’ is not simply ‘knowledge’; it is a process which enables the participants — here assumed as human actors with built-in meaning functions — to share their experience of the world as well as their hopes, wishes, and dreams to become true in a reachable future. To be ‘sustainable’ these visions have to be realized in a fashion which keeps the whole of biological life alive on earth as well as in the whole universe. Biological life is the highest known value with which the universe is gifted.

Knowledge as a kernel technology for a sustainable future of the whole biological life has to be a process where all human biological life-forms headed by the human actors have to contribute with their experience and capabilities to find those possible future states (visions, goals, …) which can really enable a sustainable future.

THE SYMBOLIC DIMENSION

To enable different isolated brains in different bodies to ‘cooperate’ and thereby to ‘coordinate’ their experience and their behavior, the most effective known way is ‘symbolic communication’: using expressions of some ordinary language whose ‘meaning’ has been learned by every member of the population, beginning with being born on this planet. Human actors (classified as the life-form ‘homo sapiens’) have the most elaborated known language capability, being able to associate all kinds of experience with expressions of an ordinary language. These ‘mappings’ between expressions and general experience take place ‘inside the brain’, and these mappings are highly ‘adaptive’; they can change over time and they are mostly ‘synchronized’ with the mappings taking place in other brains. Such a mapping is here called a ‘meaning function’ [μ].

DIFFERENT KINDS OF EXPRESSIONS

The different scientific disciplines today have developed many different views and models of how to describe the symbolic dimension, its ‘parts’, and its functioning. Here we assume only three different kinds of expressions, which can be analyzed further in nearly infinitely many details.

True Concrete Expressions [S_A]

The ‘everyday case’ occurs when human actors share a real actual situation and use their symbolic expressions to ‘talk about’ the shared situation, telling each other what is given according to their understanding, using their built-in meaning function μ. With regard to the shared knowledge and language these human actors can decide whether an expression E used in the description matches the observed situation or not. If the expression matches, then it is classified as a ‘true expression’. Otherwise it is either undefined or eventually ‘false’ if it directly ‘contradicts’. The set of all expressions assumed to be true in an actually given situation S is named here S_A. Let us look at an example: Peter says, “It is raining”, and Jenny says “It is not raining”. If all would agree that it is raining, then Peter’s expression is classified as ‘true’ and Jenny’s expression as ‘false’. If different views exist in the group, then it is not clear what is true, false, or undefined in this group! This problem belongs to the pragmatic dimension of communication, where human actors have to find a way to clarify their views of the world. The right view of the situation depends on the different individual views located in the individual brains, and these views can be wrong. There exists no automatic procedure to get a ‘true’ vision of the real world.

General Assumptions [S_U]

It is typical for human actors that they collect knowledge about the world, including general assumptions like “Birds can fly”, “Ice is melting in the sun”, “In certain cases the covid19-virus can bring people to death”, etc. These expressions are usually understood as ‘general’ rules because they do not describe a concrete single case but speak of many possible cases. Such a general rule can be used within some logical deduction, as demonstrated by classical Greek logic: ‘IF it is true that “Birds can fly” AND we have a certain fact “R2D2 is a bird” THEN we can deduce the fact “R2D2 can fly”‘. The expression “R2D2 can fly” claims to be true. Whether this is ‘really’ the case has to be shown in a real situation, either actually or at some point in the future. The set of all assumed general assumptions is named here S_U.

Possible Future States [S_V]

By experience and some ‘creative’ thinking human actors can imagine concrete situations which are not yet actually given but which are assumed to be ‘possible’; the possibility can be interpreted as some ‘future’ situation. If a real situation were reached which includes the envisioned state, then one could say that the vision has become ‘true’. Otherwise the envisioned state is ‘undefined’: perhaps it can become true, or not. In human culture there have existed many visions for hundreds or even thousands of years, of which people still ‘believe’ that they will become ‘true’ some day. The set of all expressions related to a vision is named here S_V.

REALIZING FUTURE [X, ⊢]

If the set of expressions S_V  related to a ‘vision’ (accompanied by many emotions, desires, details of all kinds) is not empty,  then it is possible to look for those ‘actions’ which with highest ‘probability’ π can ‘change’ a given situation S_A in a way that the new situation S’  is becoming more and more similar to the envisioned situation S_V. Thus a given goal (=vision) can inspire a ‘construction process’ which is typical for all kinds of engineering and creative thinking. The general format of an expression to describe a change is within the oksimo paradigm assumed as follows:

  1. With regard to a given situation S
  2. Check whether a certain set of expressions COND is a subset of the expressions of S
  3. If this is the case then with probability π:
  4. Remove all expressions of the set Eminus from S,
  5. Add all expressions of the set Eplus to S
  6. and update (compute) all parameters of the set Model

In a short format:

S’π = S – Eminus + Eplus & MODEL(S)
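The six steps above, condensed in the short format, can be sketched as follows. This is a hedged illustration, not the actual oksimo implementation: the rule representation, the `random` draw for the probability π, and the reduction of the Model update to a user-supplied callback are my own assumptions.

```python
import random

def apply_rule(S, cond, eminus, eplus, pi=1.0, model_update=None):
    """Apply one change rule to the situation S (a set of expressions)."""
    if not cond <= S:              # step 2: COND must be a subset of S
        return S                   # rule not applicable, situation unchanged
    if random.random() >= pi:      # step 3: the rule fires only with probability pi
        return S
    S_new = (S - eminus) | eplus   # steps 4 and 5: S' = S - Eminus + Eplus
    if model_update is not None:   # step 6: update the parameters of the Model
        model_update(S_new)
    return S_new

S = {"Birds can fly", "James is a bird"}
S2 = apply_rule(S, cond={"Birds can fly", "James is a bird"},
                eminus=set(), eplus={"James can fly"})
print(S2)  # now also contains 'James can fly'
```

With pi = 1.0 the rule fires deterministically whenever its condition is contained in S.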

All change rules together represent the set X. In the general theory paradigm the change rules X represent the inference rules, which together with a general ‘inference concept’ ⊢ constitute the ‘logic’ of the theory. This enables the following general logical relation:

{S_U, S_A} ⊢ <S_A, S1, S2, …, Sn>

with the continuous evaluation: |S_V ⊆ Si| > θ. During the whole construction it is possible to evaluate each individual state whether the expressions of the vision state S_V are part of the actual state Si and to which degree.

Such a logical deduction concept is called a ‘simulation’ by using a ‘simulator’ to repeat the individual deductions.
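Such a simulation, producing a sequence <S_A, S1, S2, …, Sn> with the continuous evaluation against S_V, could look roughly like this (a sketch only; the rules are assumed deterministic, i.e. π = 1, and the data representation is my own, not that of the oksimo simulator):

```python
def simulate(start, rules, vision, theta=1.0, max_steps=10):
    """Repeatedly apply change rules (cond, eminus, eplus) and evaluate each
    state against the vision until the degree of fulfilment reaches theta."""
    S = set(start)
    trace = [S]
    for _ in range(max_steps):
        changed = False
        for cond, eminus, eplus in rules:
            if cond <= S:                        # rule condition satisfied
                S_new = (S - eminus) | eplus     # S' = S - Eminus + Eplus
                if S_new != S:
                    S = S_new
                    trace.append(S)
                    changed = True
        degree = len(vision & S) / len(vision)   # continuous evaluation of S_V in Si
        if degree >= theta:
            return trace, degree
        if not changed:                          # no rule changed anything: stop
            break
    return trace, len(vision & S) / len(vision)

rules = [({"Birds can fly", "James is a bird"}, set(), {"James can fly"})]
trace, degree = simulate({"Birds can fly", "James is a bird"},
                         rules, vision={"James can fly"})
print(degree)  # 1.0
```

The returned trace corresponds to the sequence of situations, and the degree to the evaluation of the vision in the final state.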

POSSIBLE EXTENSIONS

The above outlined oksimo theory paradigm can easily be extended by some more features:

  1. AUTONOMOUS ACTORS: The change rules X so far are ‘static’ rules. But we know from everyday life that there are many dynamic sources around which can cause change, especially biological and non-biological actors. Every such actor can be understood as an input-output system with an adaptive ‘behavior function’ φ. Such a behavior cannot be modeled by ‘static’ rules alone. Therefore one can either define theoretical models of such ‘autonomous’ actors with their behavior and enlarge the set of change rules X with ‘autonomous change rules’ Xa, with Xa ⊆ X. The other variant is to include in real time ‘living autonomous’ actors as ‘players’ having the role of an ‘autonomous’ rule and being enabled to act according to their ‘will’.
  2. MACHINE INTELLIGENCE: Running a simulation will always give only ‘one path’ P in the space of possible states. Usually there are many more paths which can lead to a goal state S_V, and the accompanying parameters from Model can differ: more or less energy consumption, more or less financial losses, more or less time needed, etc. To improve the knowledge about the ‘good candidates’ in the possible state space one can introduce general machine-intelligence algorithms to evaluate the state space and make proposals.
  3. REAL-TIME PARAMETERS: The parameters of Model can be connected online with real measurements in near real time. This would allow using the collected knowledge to ‘monitor’ real processes in the world and, based on the collected knowledge, to recommend actions to react to certain states.

COMMENTS

[1] The 2030 Agenda for Sustainable Development, adopted by all United Nations Member States in 2015, provides a shared blueprint for peace and prosperity for people and the planet, now and into the future. At its heart are the 17 Sustainable Development Goals (SDGs), which are an urgent call for action by all countries – developed and developing – in a global partnership. They recognize that ending poverty and other deprivations must go hand-in-hand with strategies that improve health and education, reduce inequality, and spur economic growth – all while tackling climate change and working to preserve our oceans and forests. See PDF: https://sdgs.un.org/sites/default/files/publication/21252030%20Agenda%20for%20Sustainable%20Development%20web.pdf

[2] UN, SDG4, PDF, Argumentation why the SDG4 ist fundamental for all other SDGs: https://sdgs.un.org/sites/default/files/publications/2275sdbeginswitheducation.pdf


OKSIMO MEETS POPPER. The Oksimo Theory Paradigm

eJournal: uffmm.org
ISSN 2567-6458, April 2, 2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text is part of a philosophy of science  analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive  posts dedicated to the HMI-Analysis for  this software.

THE OKSIMO THEORY PARADIGM
Figure 1: The Oksimo Theory Paradigm

The following text is a short illustration of how the general theory concept, as extracted from the text of Popper, can be applied to the oksimo simulation software concept.

The starting point is the meta-theoretical schema:

MT=<S, A[μ], E, L, AX, ⊢, ET, E+, E-, true, false, contradiction, inconsistent>

In the oksimo case we also have a given empirical context S, a non-empty set of human actors A[μ] with a built-in meaning function for the expressions E of some language L, some axioms AX as a subset of the expressions E, an inference concept ⊢, and all the other concepts.

The human actors A[μ] can write some documents with the expressions E of language L. In one document S_U they can write down some universal facts which they believe to be true (e.g. ‘Birds can fly’). In another document S_E they can write down some empirical facts from the given situation S, like ‘There is something named James. James is a bird’. And somehow they wish that James should be able to fly, thus they write down a vision text S_V with ‘James can fly’.

The interesting question is whether it is possible to generate a situation S_E.i in the future, which includes the fact ‘James can fly’.

With the knowledge already given they can build the change rule: IF it is valid that {Birds can fly. James is a bird} THEN with probability π = 1 add the expression Eplus = {‘James can fly’} to the actual situation S_E.i; Eminus = {}. This rule is then an element of the set of change rules X.

The simulator works according to the schema S’ = S – Eminus + Eplus.

Because we have S = S_U + S_E we get:

S’ = {Birds can fly. Something is named James. James is a bird.} – Eminus + Eplus

S’ = {Birds can fly. Something is named James. James is a bird.} – {}+ {James can fly}

S’ = {Birds can fly. Something is named James. James is a bird. James can fly}

With regard to the vision which is used for evaluation one can state additionally:

|{James can fly} ⊆ {Birds can fly. Something is named James. James is a bird. James can fly}|= 1 ≥ 1

Thus the goal has been reached with degree 1, meaning 100%.
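The derivation above can also be checked mechanically. The following sketch uses plain set operations; the set literals only stand in for the oksimo documents S_U, S_E, and S_V:

```python
S_U = {"Birds can fly"}                                # universal facts
S_E = {"Something is named James", "James is a bird"}  # empirical facts
S_V = {"James can fly"}                                # vision

S = S_U | S_E                      # S = S_U + S_E
Eminus, Eplus = set(), {"James can fly"}

# the change rule fires because its condition is contained in S
assert {"Birds can fly", "James is a bird"} <= S
S_new = (S - Eminus) | Eplus       # S' = S - Eminus + Eplus

degree = len(S_V & S_new) / len(S_V)
print(degree)  # 1.0, i.e. the goal has been reached with 100%
```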

THE ROLE OF MEANING

What makes a certain difference between classical concepts of an empirical theory and the oksimo paradigm is the role of meaning in the oksimo paradigm. While the classical empirical theory concept uses formal (mathematical) languages for its descriptions, with the associated — nearly unsolvable — problem of how to relate these concepts to the intended empirical world, the oksimo paradigm assumes the opposite: the starting point is always the ordinary language as basic language, which on demand can be extended by special expressions (like e.g. set-theoretical expressions, numbers, etc.).

Furthermore it is assumed in the oksimo paradigm that the human actors with their built-in meaning function are nearly always able to decide whether an expression e of the used expressions E of the ordinary language L matches certain properties of the given situation S. Thus the human actors are those who have the authority to decide, by their meaning, whether some expression is actually true or not.

The same holds for possible goals (visions) and possible inference rules (= change rules). Whether some consequence Y shall happen if some condition X is satisfied by a given actual situation S can only be decided by the human actors. There is no other knowledge available than what is in the heads of the human actors. [1] This knowledge can be narrow, it can even be wrong, but human actors can only decide with that knowledge which is available to them.

If they use change rules (= inference rules) based on their knowledge and derive some follow-up situation as a theorem, then it can happen that there exists no empirical situation S which matches the theorem. This would be an undefined truth case. If the theorem t were a contradiction to the given situation S, then it would be clear that the theory is inconsistent and therefore something seems to be wrong. Another case could be that the theorem t matches a situation. This would confirm the belief in the theory.

COMMENTS

[1] Well-known knowledge tools have long been libraries and, more recently, databases. The expressions stored there can only be of use (i) if a human actor knows about them and (ii) knows how to use them. As the amount of stored expressions increases, the portion of expressions which can be cognitively processed by human actors decreases. This decrease in the usable portion can be used as a measure of negative complexity, which indicates a growing deterioration of the human knowledge space. The idea that certain kinds of algorithms can analyze these growing amounts of expressions instead of the human actors themselves is only constructive if the human actor can use the results of these computations within his knowledge space. For general reasons this possibility is very small, and with increasing negative complexity it is declining.


OKSIMO MEETS POPPER. Popper’s Position

eJournal: uffmm.org
ISSN 2567-6458, March 31, 2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text is part of a philosophy of science  analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive  posts dedicated to the HMI-Analysis for  this software.

POPPER’S POSITION IN THE CHAPTERS 1-17

In my reading of the chapters 1-17 of Popper’s The Logic of Scientific Discovery [1] I see the following three main concepts which are interrelated: (i) the concept of a scientific theory, (ii) the point of view of a meta-theory about scientific theories, and (iii) possible empirical interpretations of scientific theories.

Scientific Theory

A scientific theory is, according to Popper, a collection of universal statements AX, accompanied by a concept of logical inference ⊢, which allows the deduction of a certain theorem t if one makes some additional concrete assumptions H.

Example: Theory T1 = <AX1, ⊢>

AX1= {Birds can fly}

H1= {Peter is  a bird}

⊢: Peter can fly

Because there exists a concrete object which is classified as a bird, and this concrete bird with the name ‘Peter’ can fly, one can infer that the universal statement could be verified by this concrete bird. But the question remains open whether all observable concrete objects classifiable as birds can fly.

One could continue with observations of several hundreds of concrete birds, but according to Popper this would not prove the theory T1 completely true. Such a procedure can only support a numerical universality, understood as a conjunction of finitely many observations about concrete birds like ‘Peter can fly’ & ‘Mary can fly’ & … & ‘AH2 can fly’. (cf. p.62)

The only procedure which is applicable to a universal theory according to Popper is to falsify a theory by only one observation like ‘Doxy is a bird’ and ‘Doxy cannot fly’. Then one could construct the following inference:

AX1= {Birds can fly}

H2= {Doxy is  a bird, Doxy cannot fly}

⊢: ‘Doxy can fly’ & ~’Doxy can fly’

If a statement A can be inferred and simultaneously the negation ~A then this is called a logical contradiction:

{AX1, H2} ⊢ ‘Doxy can fly’ & ~’Doxy can fly’

In this case the set {AX1, H2} is called inconsistent.

If a set of statements is classified as inconsistent then you can derive from this set everything. In this case you cannot any more distinguish between true or false statements.

Thus, while the increase of the number of confirmed observations can only increase the trust in the axioms of a scientific theory T without enabling an absolute proof, a falsification of a theory T can destroy the ability of this theory to distinguish between true and false statements.
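Popper’s falsification case can be illustrated with a minimal check for a derived contradiction A & ~A. This is only a sketch: expressions are plain strings and negation is marked by a leading ‘~’, which is my own convention, not Popper’s notation.

```python
def find_contradictions(statements: set) -> set:
    """Return every statement A for which both A and ~A occur in the set."""
    return {s for s in statements if "~" + s in statements}

derived = {"Doxy is a bird", "Doxy can fly", "~Doxy can fly"}
print(find_contradictions(derived))  # {'Doxy can fly'}: the set is inconsistent
```

A non-empty result signals that the underlying set of axioms plus observations can no longer separate true from false statements.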

Another idea associated with this structure of a scientific theory is that the universal statements using universal concepts are, strictly speaking, speculative ideas which deserve some faith that these concepts will prove themselves every time one tries it. (cf. p.33, 63)

Meta Theory, Logic of Scientific Discovery, Philosophy of Science

Talking about scientific theories has at least two aspects: scientific theories as objects and those who talk about these objects.

Those who talk about them are usually philosophers of science, which are only a special kind of philosopher, e.g. a person like Popper.

Reading the text of Popper one can identify the following elements which seem to be important for describing scientific theories in a broader framework:

A scientific theory from a point of  view of Philosophy of Science represents a structure like the following one (minimal version):

MT=<S, A[μ], E, L, AX, ⊢, ET, E+, E-, true, false, contradiction, inconsistent>

In a shared empirical situation S there are some human actors A as experts producing expressions E of some language L. Based on their built-in adaptive meaning function μ the human actors A can relate properties of the situation S with expressions E of L. Those expressions E which are considered to be observable and classified to be true are called true expressions E+; others are called false expressions E-. Both sets of expressions are proper subsets of E: E+ ⊂ E and E- ⊂ E. Additionally the experts can define some special set of expressions called axioms AX, which are universal statements allowing the logical derivation of expressions called theorems of the theory T, named ET, which are called logically true. If one combines the set of axioms AX with some set of empirically true expressions E+ as {AX, E+}, then one can logically derive either only expressions which are logically true as well as empirically true, or logically true expressions which are empirically true and empirically false at the same time, see the example from the paragraph before:

{AX1, H2} ⊢ ‘Doxy can fly’ & ~’Doxy can fly’

Such a case of a logically derived contradiction A and ~A tells us that the set of axioms AX, unified with the empirically true expressions and confronted with the known true empirical expressions, is becoming inconsistent: the axioms AX unified with true empirical expressions cannot distinguish between true and false expressions.
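The minimal meta-theoretical structure MT could be sketched as a small data type. This is purely illustrative: the field names mirror the tuple MT above, but the class and the inconsistency check are my own construction, not Popper’s.

```python
from dataclasses import dataclass

@dataclass
class MetaTheory:
    """Sketch of a fragment of MT = <S, A[mu], E, L, AX, |-, ET, E+, E-, ...>."""
    expressions: set      # E, the expressions of the language L
    axioms: set           # AX, universal statements, a subset of E
    e_plus: set           # E+, expressions classified as empirically true
    e_minus: set          # E-, expressions classified as empirically false

    def is_inconsistent(self, derived: set) -> bool:
        # {AX, E+} becomes inconsistent if a derived theorem is also
        # a known empirically false expression (A and ~A at once).
        return bool(derived & self.e_minus)

mt = MetaTheory(expressions={"Birds can fly", "Doxy is a bird", "Doxy can fly"},
                axioms={"Birds can fly"},
                e_plus={"Doxy is a bird"},
                e_minus={"Doxy can fly"})
print(mt.is_inconsistent({"Doxy can fly"}))  # True
```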

Popper gives some general requirements for the axioms of a theory (cf. p.71):

  1. Axioms must be free from contradiction.
  2. The axioms must be independent, i.e. they must not contain any axiom deducible from the remaining axioms.
  3. The axioms should be sufficient for the deduction of all statements belonging to the theory which is to be axiomatized.

While the requirements (1) and (2) are purely logical and can be proved directly, requirement (3) is different: to know whether the theory covers all statements which are intended by the experts as the subject area presupposes that all aspects of an empirical environment are already known. In the case of true empirical theories this seems not to be plausible. Rather we have to assume an open process which generates some hypothetical universal expressions which ideally will not be falsified; but if so, then the theory has to be adapted to the new insights.

Empirical Interpretation(s)

Popper assumes that the universal statements of scientific theories are linguistic representations, and this means they are systems of signs or symbols. (cf. p.60) Expressions as such have no meaning. Meaning comes into play only if the human actors use their built-in meaning function and set up a coordinated meaning function which allows all participating experts to map properties of the empirical situation S into the used expressions as E+ (expressions classified as being actually true), or E- (expressions classified as being actually false), or AX (expressions having an abstract meaning space which can become true or false depending on the activated meaning function).

Examples:

  1. Two human actors in a situation S agree about the fact that there is 'something' which they classify as a 'bird'. Thus someone could say 'There is something which is a bird', 'There is some bird', or 'There is a bird'. If there are two somethings which are 'understood' as being birds, then they could say 'There are two birds', or 'There is a blue bird' (if one has the color 'blue') and 'There is a red bird', or 'There are two birds. The one is blue and the other is red'. This shows that human actors can relate their 'concrete perceptions' to more abstract concepts and can map these concepts into expressions. According to Popper, in this bottom-up way only numerically universal concepts can be constructed. But logically there are only two cases: concrete (one) or abstract (more than one). To say that there is a 'something' or that there is a 'bird' establishes a general concept which is independent of the number of its possible instances.
  2. These concrete somethings, each classified as a 'bird', can 'move' from one position to another by 'walking' or by 'flying'. While 'walking' they change their position in contact with the 'ground', while during 'flying' they 'go up in the air'. If a human actor throws a stone up in the air, the stone will come back to the ground. A bird which goes up in the air can stay there and move around in the air for a long while. Thus 'flying' is different from 'throwing something' up in the air.
  3. The expression 'A bird can fly', understood as an expression which can be connected to the daily experience of bird-objects moving around in the air, can be empirically interpreted, but only if there exists such a mapping, called a meaning function. Without a meaning function the expression 'A bird can fly' has no meaning as such.
  4. Other expressions like 'X can fly', 'A bird can Y', or 'Y(X)' share the same fate: without a meaning function they have no meaning, but associated with a meaning function they can be interpreted. For instance, saying that the form of the expression 'Y(X)' shall be interpreted as 'Predicate(Object)', and that a possible 'instance' for the predicate could be 'can fly' and for the object 'a bird', we could get 'Can Fly(a Bird)', translated as 'The object a Bird has the property can fly', or shortly 'A bird can fly'. This would usually be a candidate for the everyday meaning function which relates this expression to those somethings which can move up in the air.

Axioms and Empirical Interpretations

The basic idea of a system of axioms AX is — according to Popper — that the axioms, as universal expressions, represent a system of equations whose general terms can be substituted by certain values. The set of admissible values is different from the set of inadmissible values. The relation between the terms and those values which can be substituted for them is called satisfaction: the values satisfy the terms with regard to the relations. And Popper introduces the term 'model' for a set of admissible values which satisfies the equations. (cf. p.72f)

But Popper has difficulties with an axiomatic system interpreted as a system of equations, since it cannot be refuted by the falsification of its consequences; for these too must be analytic. (cf. p.73) His main problem with axioms is that “the concepts which are to be used in the axiomatic system should be universal names, which cannot be defined by empirical indications, pointing, etc. They can be defined if at all only explicitly, with the help of other universal names; otherwise they can only be left undefined. That some universal names should remain undefined is therefore quite unavoidable; and herein lies the difficulty…” (p.74)

On the other hand Popper knows that “…it is usually possible for the primitive concepts of an axiomatic system such as geometry to be correlated with, or interpreted by, the concepts of another system, e.g. physics …. In such cases it may be possible to define the fundamental concepts of the new system with the help of concepts which were originally used in some of the old systems.” (p.75)

But the translation of the expressions of one system (geometry) into the expressions of another system (physics) does not necessarily solve the problem of the non-empirical character of universal terms. Physics in particular also uses universal or abstract terms which as such have no meaning. To verify or falsify physical theories one has to show how the abstract terms of physics can be related to observable matters which can be decided to be true or not.

Thus the argument goes back to Popper's primary problem: that universal names cannot directly be interpreted in an empirically decidable way.

As the preceding examples (1) – (4) show, for human actors it is no principal problem to relate any kind of abstract expression to some concrete real matter. The solution is given by the fact that expressions E of a language L are never used in isolation. The usage of expressions is always tied to human actors using them as part of a language L, which comprises, together with the set of possible expressions E, the built-in meaning function μ that can map expressions into internal structures IS related to perceptions of the surrounding empirical situation S. Although these internal structures are processed internally in highly complex ways and are — as we know today — no 1-to-1 mappings of the surrounding empirical situation S, they are related to S. Therefore every kind of expression — even those with so-called abstract or universal concepts — can be mapped into something real if the human actors agree about such mappings.

Example:

Let us have a look at another example.

Take the system of axioms AX as the following schema: AX = {a+b=c}. This schema as such has no clear meaning. But if the experts interpret it as an operation '+' with some arguments as part of a mathematical theory, then one can construct a simple (partial) model m as follows: m = {<1,2,3>, <2,3,5>}. The values are again given as a set of symbols which as such need not have a meaning, but in common usage they will be interpreted as numbers which can satisfy the general schema of the equation. Under this secondary interpretation m becomes a logically true (partial) model for the axioms AX, whose empirical meaning is still unclear.
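The satisfaction relation of this example can be sketched in a few lines of Python; this is only a minimal illustration of the schema a+b=c, not part of Popper's text:

```python
def satisfies(triple):
    """True if the values <a, b, c> satisfy the axiom schema a + b = c."""
    a, b, c = triple
    return a + b == c

m = [(1, 2, 3), (2, 3, 5)]           # the candidate model from the text
print(all(satisfies(t) for t in m))  # -> True: m is a (partial) model of AX
print(satisfies((1, 2, 4)))          # -> False: inadmissible values
```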

It is conceivable that one uses this formalism to describe empirical facts, like the description of a group of humans collecting some objects. Different people bring objects; the individual contributions are recorded on a sheet of paper and at the same time the objects are put in some box. From time to time someone looks into the box and counts the objects. If it has been noted that A brought 1 egg and B brought 2 eggs, then according to the theory there should be 3 eggs in the box. But perhaps only 2 can be found. Then there would be a difference between the logically derived forecast of the theory, 1+2 = 3, and the empirically measured value, 1+2 = 2. If one defines every measurement a+b=c' with c' ≠ c as a contradiction of the theoretically given a+b=c, then we would have with '1+2 = 3' & ~'1+2 = 3' a logically derived contradiction which leads to the inconsistency of the assumed system. But in reality the usual reaction of the counting person would not be to declare the system inconsistent, but rather to suggest that some unknown actor has, against the agreed rules, taken one egg from the box. To prove this suggestion he would have to find this unknown actor and show that he has taken the egg … perhaps not a simple task … And what will the next authority do: will the authority believe the suggestion of the counting person, or will the authority blame the counter, claiming that perhaps he himself has taken the missing egg? But would this make sense? Why should the counter keep notes of how many eggs have been delivered, thereby making a difference visible? …

Thus to interpret some abstract expression with regard to some observable reality is not a problem in principle, but it can be unsolvable for purely practical reasons, leaving questions of empirical soundness open.

SOURCES

[1] Karl Popper, The Logic of Scientific Discovery. First published 1935 in German as Logik der Forschung, then 1959 in English by Basic Books, New York. More editions have been published later; I am using the eBook version of Routledge (2002).

THE OKSIMO CASE as SUBJECT FOR PHILOSOPHY OF SCIENCE. Part 5. Oksimo as Theory?

eJournal: uffmm.org
ISSN 2567-6458, 24.March – 24.March 2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text is part of a philosophy of science  analysis of the case of the  oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive  posts dedicated to the HMI-Analysis for  this software.

DERIVATION

In formal logic there exists the concept of logical derivation '⊢', written as

E ⊢X e

saying that one can derive the expression e from the set of expressions E by applying the rules X.

In the oksimo case we have sets of expressions ES to represent either a given starting state S or, as EV, a given vision V. Furthermore we have change rules X operating on sets of expressions, and we can derive sequences of expression sets <E1, E2, …, En> by applying the change rules X with the aid of a simulator Σ to these expressions, written as

ES ⊢Σ,X <E1, E2, …, En>

Thus given an initial set of expressions ES one can derive a whole sequence of expression sets Ei by applying the change rules.

While all individual expressions of the start set ES are by assumption classified as true, the derived sets of expressions Ei are only correct with regard to the used change rules X; whether these sets of expressions are also true with regard to a situation Si considered as a possible future state Sfuti has to be proved separately. The reason for this unclear status is that the change rules X represent changes which the authoring experts consider as possible changes they want to apply, but they cannot guarantee their empirical validity for all upcoming times by thinking alone. This implicit uncertainty can be handled to some degree with the probability factor π of an individual change rule. The different degrees of certainty in the application of a change rule give an approximation of this uncertainty. Thus the longer the chain of derivations becomes, the lower the resulting probability will be.
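Under the simplifying assumption that rule applications are independent, the decay of certainty along a derivation chain can be sketched as the product of the probability factors π of the applied rules; the numeric values are purely illustrative:

```python
from math import prod

def chain_probability(pi_factors):
    """Assumed certainty of the final derived state: the product of the
    probability factors pi of the applied rules, so the certainty drops
    as the derivation chain gets longer (assuming independence)."""
    return prod(pi_factors)

# Illustrative probability factors of rules applied in sequence:
print(round(chain_probability([0.9]), 3))             # -> 0.9
print(round(chain_probability([0.9, 0.8]), 3))        # -> 0.72
print(round(chain_probability([0.9, 0.8, 0.95]), 3))  # -> 0.684
```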

SIMPLE OKSIMO THEORY [TOKSIMO]

Thus if we have some human actors Ahum, an environment ENV, some starting situation S as part of the environment ENV, a first set of expressions ES representing only true expressions with regard to the starting situation S, a set of elaborated change rules X, and a simulator Σ then one can  define a simple  oksimo-like theory Toksimo as follows:

TOKSIMO(x) iff x = <ENV, S, Ahum, ES, X, Σ, ⊢Σ,X, speakL(), makedecidable()>

The human actors can describe a given situation S as part of an environment ENV as a set of expressions ES which can be proved with makedecidable() as true. By defining a set of change rules X and a simulator Σ one can define a formal derivation relation ⊢Σ,X which allows the derivation of a sequence of sets of expressions <E1, E2, …, En>, written as

ES ⊢Σ,X <E1, E2, …, En>

While the truth of the first set of expressions ES has been proved in the beginning, the truth of the derived sets of expressions has to be shown explicitly for each set Ei separately. Given is only the formal correctness of the derived expressions according to the change rules X and the working of the simulator.

VALIDATED SIMPLE OKSIMO THEORY [TOKSIMO.V]

One can extend the simple oksimo theory TOKSIMO to a validated oksimo theory TOKSIMO.V by including in the theory a set of vision expressions EV. Vision expressions describe a possible future situation Sfut which is declared as a goal to be reached. With a given vision document EV the simulator can check, for every newly derived set of expressions Ei, to which degree the individual expressions e of the set of vision expressions EV are already reached.
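This degree check can be sketched as the fraction of vision expressions already contained in a derived set Ei. This is a minimal sketch; the actual oksimo software may compute the degree differently:

```python
def vision_fulfillment(E_i, E_v):
    """Fraction of the vision expressions E_v already contained in the
    derived set of expressions E_i (1.0 for an empty vision)."""
    if not E_v:
        return 1.0
    return len(E_v & E_i) / len(E_v)

# Illustrative vision and derived state:
E_v = {"There is a table", "The table is black"}
E_1 = {"There is a table", "The table is white"}
print(vision_fulfillment(E_1, E_v))  # -> 0.5
```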

FROM THEORY TO ENGINEERING

But one has to keep in mind that the purely formal achievement of a given vision document EV does not imply that the corresponding situation Sfut is a real situation. The corresponding situation Sfut is first of all only an idea in the minds of the experts. To transfer this idea into the real environment as a real situation is a process of its own, known as engineering.


THE OKSIMO CASE as SUBJECT FOR PHILOSOPHY OF SCIENCE. Part 4. Describing Change

eJournal: uffmm.org
ISSN 2567-6458, 24.March – 24.March 2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text is part of a philosophy of science  analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive  posts dedicated to the HMI-Analysis for  this software.

CHANGE

As described in part 1 of the philosophy of science analysis of the oksimo behavior space it is assumed here — following the ideas of von Uexküll — that every biological species SP embedded in a real environment ENV transforms this environment into its species-specific internal representation ENVSP, which is no 1-to-1 mapping. Furthermore we know from modern biology and brain research that the human brain cuts its sensory perceptions P into time-slices P1, P2, … with durations of roughly 50 – 700 milliseconds, which are organized as multi-modal structures for further processing. The results of this processing are different kinds of abstracted structures which represent — not in a 1-to-1 fashion — different aspects of a given situation S which, in the moment of being processed and then stored, is no longer actual, 'not now', but 'gone', 'past'.

Thus if we as human actors speak about change, then we are primarily speaking about the difference which our brain can compute by comparing the actual situation S kept in the current time-slice P0 with those abstracted structures A(P) coming out of preceding time-slices, interacting in many ways with other available abstracted structures: Diff(A(P0), A(P)) = Δint. Usually we assume automatically that the perceived internal change Δint corresponds to a change in the actual situation S leading to a follow-up situation S' which differs, with regard to the species-specific perception represented in Δint, as Δext = Diff(S, S'). As psychological tests reveal, this automatic (unconscious) assumption that a perceived change Δint corresponds to a real external change Δext need not hold. There is a real difference between Δint and Δext, and on account of this difference there exists the possibility that we can detect an error by comparing our ideas with the real-world environment. Otherwise — in the absence of an error — a congruence can be interpreted as a confirmation of our ideas.

EXPRESSIONS CAN FOLLOW REAL PROPERTIES

As described in the preceding posts about a decidable start state S and a vision V, it is possible to map a perceived actual situation S into a set of expressions ES = {e1, e2, …, en}. This general assumption is valid for all real states S, so that a series of real states S1, S2, …, Sn is conceivable where every such real state Si can be associated with a set of expressions Ei whose individual expressions ei represent, according to the presupposed meaning function φ, certain aspects/properties Pi of the corresponding real situation Si. Thus, if two consecutive real states Si, Si+1 include perceived differences indicated by some properties, then it is possible to express these differences by corresponding expressions ei as part of the whole sets of expressions Ei and Ei+1. If e.g. in the successor of Si one property px, expressed by ex, is missing which is present in Si, then the corresponding set Ei+1 should not include the expression ex. Or if the successor state Si+1 contains a property py, expressed by the expression ey, which is not yet given in Si, then this fact too indicates a difference. Thus the differing pair (Si, Si+1) could correspond to the pair (Ei, Ei+1), with ex part of Ei but no longer of Ei+1, and the expression ey not part of Ei but then of Ei+1.

The general schema could be described as:

Si+1 = Si -{px} + {py} (the real dimension)

Ei+1 = Ei – {ex} + {ey} (the symbolic dimension)

Between the real dimension and the symbolic dimension is the body with the brain, offering all the neural processing which is necessary to enable such complex mappings. This can be expressed by the following pragmatic recipe:

symbolicarticulation: S x body[brain] —> E

symbolicarticulation(S,body[brain]) = E

Having a body with a brain embedded in an actual (real) situation S the body (with the brain) can produce symbolic expressions corresponding to certain properties of the situation S.
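The parallel change in the real and the symbolic dimension described above can be illustrated with plain set operations; all property and expression names here are hypothetical:

```python
# Real dimension: properties of a situation S_i; symbolic dimension:
# the corresponding expressions E_i. Both change in parallel:
# S_i+1 = S_i - {px} + {py} and E_i+1 = E_i - {ex} + {ey}.
S_i = {"table_is_white", "table_is_wooden"}
E_i = {"The table is white", "The table is wooden"}

# px = table_is_white disappears, py = table_is_black appears:
S_next = (S_i - {"table_is_white"}) | {"table_is_black"}
E_next = (E_i - {"The table is white"}) | {"The table is black"}

print(sorted(S_next))  # the changed real properties
print(sorted(E_next))  # the changed symbolic description
```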

DESCRIBING CHANGE

Assuming that symbolic articulation is possible and that there is some regular mapping between an actual situation S and a set of expressions E it is conceivable to describe the generation of two successive actual states S, S’  as follows:

Apply a Change Rule ξ of X
  • We have a given actual situation S.
  • We have a group of human actors Ahum which are using a language L.
  • The group generates a decidable description of S as a set of expressions ELS following the rules of language L.
  • Thus we have symbolicarticulation(S, Ahum) = ELS
  • The group of human actors defines a finite set of change rules X which describe which expressions Eminus should be removed from ES and which expressions Eplus should be added to ES to get the successor state  ES‘ represented in a symbolic space:
  • ES‘ = ES – Eminus + Eplus . An individual change rule ξ of X has the format:
  • IF COND THEN with probability π REMOVE Eminus and ADD Eplus.
  • COND is a set of expressions which shall be a subset of the given set ES saying: COND ⊆ ES. If this condition is satisfied (fulfilled) then the rule can be applied following probability  π.
  • Thus applying a change rule ξ to a given state S means to operate on the corresponding set of expressions ES of  S as follows:
  • applychange: S x ES x {ξ} —> ES‘
  • There can be more than only one change rule ξ as a finite set X = {ξ1, ξ2, …, ξn}. They have all to be applied in a random order. Thus we get:
  • applychange: S x ES x X   —> ES‘ or applychange(S,ES,X) = ES

Simulation

If one has a given actual state S with a finite set of change rules X, we now know how to apply this finite set of change rules X to a given state description ES. But if we enlarge the set of change rules X such that the set X* contains rules not only for the given actual state description ES but also for a finite number of other possible state descriptions ES*, then one can repeat the application of the change rules X* several times, using the last outcome ES‘ as the new actual state description ES. Proceeding in this way we can generate a whole sequence of state descriptions <ES.0, ES.1, …, ES.n>, where for each pair (ES.i, ES.i+1) it holds that applychange(Si, ES.i, X) = ES.i+1.

Such a repetitive application of applychange() we call here a simulation: S x ES x X —> <ES.0, ES.1, …, ES.n>, with the condition for each pair (ES.i, ES.i+1) that applychange(Si, ES.i, X) = ES.i+1, also written as: simulation(S, ES, X) = <ES.0, ES.1, …, ES.n>.

A device which can run a simulation is called a simulator Σ. A simulator is either a human actor or a computer with an appropriate algorithm.
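The recipes applychange() and simulation() can be sketched as follows; the rule content and the fixed random seed are illustrative assumptions, not taken from the oksimo software itself:

```python
import random

def apply_change(E_s, rules, rng):
    """applychange(): apply the change rules X in random order to the
    state description E_s. A rule (COND, pi, E_minus, E_plus) fires
    only if COND is a subset of the current expression set, and then
    only with probability pi."""
    E = set(E_s)
    order = list(rules)
    rng.shuffle(order)
    for cond, pi, e_minus, e_plus in order:
        if cond <= E and rng.random() <= pi:
            E = (E - e_minus) | e_plus
    return E

def simulation(E_s, rules, n, seed=0):
    """Repeated application of apply_change(): returns the sequence
    <E_S.0, E_S.1, ..., E_S.n> of state descriptions."""
    rng = random.Random(seed)
    states = [set(E_s)]
    for _ in range(n):
        states.append(apply_change(states[-1], rules, rng))
    return states

# Illustrative rule: if the table is white, repaint it black (pi = 1.0).
rules = [({"The table is white"}, 1.0,
          {"The table is white"}, {"The table is black"})]
E0 = {"There is a table", "The table is white"}
print(simulation(E0, rules, 1)[-1])  # the derived state description
```

Note that a second application of the rule would change nothing, since its condition is no longer satisfied in the derived state.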


THE OKSIMO CASE as SUBJECT FOR PHILOSOPHY OF SCIENCE. Part 3. Generate a Vision

eJournal: uffmm.org
ISSN 2567-6458, 23.March – 24.March 2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text is part of a philosophy of science  analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive  posts dedicated to the HMI-Analysis for  this software.

GENERATE A VISION

As explained in the preceding post, a basic idea of the oksimo behavior space is to bring together different human actors, let them share their knowledge and experience of some real part of their world, and then invite them to think about how one can improve this part.

In this text we deal with this improvement of a given situation S. It is assumed here that any kind of improvement needs some idea, a vision [V], of a possible real situation Sfut which is not yet real but which in principle could become real. The vision of a possible real situation can in the beginning exist only as a set of expressions ES whose meaning is accessible through the meaning function φ applied to the expressions: φ(ES) = Sfut = V. The vision V exists therefore as intended meaning only. An intended but not yet real meaning appears to us as an idea in our mind, which we can share with other human actors by expressions classified as visions.

Such an intended future situation Sfut, the vision V, can be said to be real or true if there will be a point in time in the future where Sfut exists as a given real situation S of which it can be said that S fits, as an instance, the meaning of the set of expressions ES describing the situation.

Let us for instance assume as a given real situation the situation S with the describing expression ES = {There is a white wooden table}.

Let us for instance assume as a vision V the describing expression EV = {There is a black metallic table}.

The expression EV alone gives no hint whether it describes a real situation or an intended possible future situation. This can only be decided on the basis of actual knowledge about the world KRW, which enables a human actor to classify a situation S either as actually given or as not actually given but generally possible. Depending on such a classification the human actor A can decide whether the expression ES = {There is a white wooden table} or the expression EV = {There is a black metallic table} is decidable as true. As long as the situation S is given as a real situation corresponding to the expression ES = {There is a white wooden table}, the other expression EV = {There is a black metallic table} can be classified as not yet given.

FORMAL LOGIC BEYOND MEANING

(Last change: March 24, 2021)

Until now it has been stressed that expressions of a language L — external as well as internal — can only be understood in connection with the assumed built-in meaning function φ, which enables a mapping inside a brain between different kinds of brain states NN and a subset of these brain states Lint representing the expressions of an inner language, Lint ⊆ NN.

Assuming this, we can nevertheless look at given sets of external expressions like E and E’ of the external language L in a purely formal way. Let us assume for instance the following two sets:

ES = {There is a table. The table is white. The table is quadratic.}

EV = {There is a table. The table is black. The table is round. The table allows four seats.}

If we look at both sets purely formally from the point of view of set theory, then we can apply set operations like the following ones:

  1. Cardinality of the sets (amount of members): |ES| = 3,  |EV| = 4
  2. Intersection (what is common to both): ES ∩ EV = {There is a table}
  3. Cardinality of the intersection: |{There is a table}| = 1
  4. Degree of sharing of EV with ES as a percentage: 1/4 = 25%

Thus purely formally, without looking at the presupposed meaning, we can say that the set EV representing the vision shares 25% of its content with the set ES representing the actually given real situation S.
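These set operations translate directly into a small sketch, treating every sentence as one atomic set member:

```python
# The two sets of expressions from the text:
E_S = {"There is a table.", "The table is white.", "The table is quadratic."}
E_V = {"There is a table.", "The table is black.", "The table is round.",
       "The table allows four seats."}

common = E_S & E_V               # intersection: what is shared
degree = len(common) / len(E_V)  # degree of sharing of E_V with E_S

print(len(E_S), len(E_V))  # cardinalities -> 3 4
print(common)              # -> {'There is a table.'}
print(f"{degree:.0%}")     # -> 25%
```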

If for some reason the actual situation S changes, and thereby the corresponding set of expressions ES changes, one can repeat the set operations and thereby monitor the relationship between the given actual situation S and the vision V. If for instance a young couple wants to buy a new table according to the vision EV, while actually owning a table according to the description ES, then it can happen that the young couple will find different kinds of tables t1, t2, …, tn in the furniture shops. The degree of similarity between the wanted table according to the vision V and the found tables ti in the furniture shops can vary between at least 25% and 100%. After 6 hours of looking around, with the result that the best candidate ti reached only 75%, it is conceivable that the young couple changes their goal from 100% fulfillment to only 75%, or not. She says: “No, I want 100%”.

MEANING IN THE BACKGROUND

What one can see here is that formal mechanisms can work with sets of expressions without looking at the actual meaning. But it is at the same time clear that these formal operations are only useful within a bigger framework where these expressions are clearly rooted in the meaning spaces of every human actor participating in a communication inside a group of human actors — experts, citizens, people … — where the group wants to clarify the relation between an actually given situation S and another not yet given situation Sfut which appears to the group as the vision of a possible situation which — for reasons only known to this group — seems more favorable.


THE OKSIMO CASE as SUBJECT FOR PHILOSOPHY OF SCIENCE. Part 2. makedecidable()

eJournal: uffmm.org
ISSN 2567-6458, 23.March – 23.March 2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text is part of a philosophy of science  analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive  posts dedicated to the HMI-Analysis for  this software.

STARTING WITH SOMETHING ‘REAL’

A basic idea of the oksimo behavior space is to bring together different human actors, let them share their knowledge and experience of some real part of their world, and then invite them to think about how one can improve this part.

What sounds so common — some real part of their world — isn’t necessarily  easy to define.

As has been discussed in the preceding post, making language expressions decidable is only possible if certain practical requirements are fulfilled. The ‘practical recipe’

makedecidable :  S x Ahum x E —> E x {true, false}

given in the preceding post claims that if you want to know whether an expression E is concrete and can be classified as ‘true’ or ‘false’, you have to ask a human actor Ahum who is part of the same concrete situation S as you, and he/she should confirm or disclaim whether the expression E can be interpreted as being ‘true’ or ‘false’ in this situation S.
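The recipe makedecidable() can be sketched as a function whose decisive step, the judgment of the human actor in the situation, is passed in as a callback. Everything here is an illustrative assumption about how one might operationalize the recipe:

```python
from typing import Callable, Tuple

def makedecidable(situation: str, expression: str,
                  judge: Callable[[str, str], bool]) -> Tuple[str, bool]:
    """makedecidable: S x Ahum x E -> E x {true, false}.
    The human actor Ahum is represented by the callback 'judge',
    who confirms or disclaims the expression in the situation."""
    return expression, judge(situation, expression)

# A hypothetical human actor who confirms this expression in situation S0:
human = lambda s, e: e == "There is a white wooden table"
print(makedecidable("S0", "There is a white wooden table", human))
```

The design point is that the truth value never comes from the expression itself but always from an actor embedded in the situation.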

Usually, if there is a real concrete situation S with you and some other human actor A, then you both will have a perception of the situation, you will both have internal abstraction processes with abstract states, you will have mappings from such abstracted states into expressions of your internal language Lint, and you and the other human actor A can exchange external expressions corresponding to the inner expressions and thereby to the internal abstracted states of the situation S. Even if the used language expressions E — like for instance ‘There is a white wooden table‘ — contain abstract/universal expressions like ‘white’, ‘wooden’, ‘table’, even then you and the other human actor will be able to decide whether there are properties of the concrete situation which fit, as accepted instances, the universal parts of the language expression ‘There is a white wooden table‘.

Thus being in a real situation S with the other human actors enables usually all participants of the situation to decide language expressions which are related to the situation.

But what consequences does it have if you are somewhere else, if you are not actually part of the situation S? Usually — if you hear or read an expression like ‘There is a white wooden table‘ — you will be able to get an idea of the intended meaning only through your learned meaning function φ, which maps the external expression into an internal expression and further maps the internal expression into the learned abstracted states. While the expressions ‘white’ and ‘wooden’ are perhaps rather ‘clear’, the expression ‘table’ is today associated with many, many different possible concrete matters, and by hearing or reading alone it is not possible to decide which of these is the intended concrete matter. Thus although you would be able to decide in the real situation S which of these many possible instances are given, with the expression alone, disconnected from the situation, you are not able to decide whether the expression is true or not. The expression thus has the cognitive status that it can perhaps be true, but actually you cannot decide.

REALITY SUPPORTERS

Between the two cases, (i) being part of the real situation S or (ii) being disconnected from the real situation S, there are many variants of situations which can be understood as giving some additional support for deciding whether an expression E is rather true or not.

The main weakness behind not being able to decide is the lack of hints to narrow down the set of possible interpretations of learned meanings by counterexamples. Thus while a human actor may have learned that the expression ‘table’ can be associated with, for instance, 25 different concrete matters, he/she needs some hints/clues as to which of these possibilities can be ruled out, whereby the actor could narrow down the set of possible learned meanings to, say, only 5 of the 25.

While the real situation S cannot be sent along with the expression, it is possible to send, for example, a drawing of the situation S or a photo. If properties are involved which require other senses, like smelling or hearing or touching, then a photo would not suffice.

Thus to narrow down the possible interpretations of an expression for someone who is not part of the situation, it can help to give additional ‘clues’ where possible, but this is not always possible, and moreover it is always more or less incomplete.


THE OKSIMO CASE as SUBJECT FOR PHILOSOPHY OF SCIENCE. Part 1

eJournal: uffmm.org
ISSN 2567-6458, 22.March – 23.March 2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text is part of a philosophy of science  analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive  posts dedicated to the HMI-Analysis for  this software.

THE OKSIMO EVENT SPACE

The characterization of the oksimo software paradigm starts with an informal characterization  of the oksimo software event space.

EVENT SPACE

An event space is a space which can be filled up by observable events fitting the species-specific internally processed environment representations [1], [2], here called internal environments [ENVint]. Thus the same external environment [ENV] can be represented, in the presence of 10 different species, in 10 different internal formats. The expression ‘environment’ [ENV] is therefore an abstract concept assuming an objective reality which is common to all living species but which is in fact processed by every species in a species-specific way.

In a human culture the usual point of view [ENVhum] coexists with the points of view [ENVa] of all the other species a.

In the ideal case it would be possible to translate all species-specific views ENVa into a symbolic representation which in turn could be translated into the human point of view ENVhum. Then, in the ideal case, we could define the term environment [ENV] as the sum of all the different species-specific views translated into a human-specific language: ∑ENVa = ENV.

But because such a generalized view of the environment is, for practical reasons, not yet possible, we will for the beginning use only expressions related to the human point of view [ENVhum], using as language an ordinary language [L], here the English language [LEN]. Every scientific language, e.g. the language of physics, is understood here as a sublanguage of the ordinary language.

EVENTS

An event [EV] within an event space [ENVa] is a change [X] which can be observed at least by the members of the species [SP] a for which the environment ENV constitutes the species-specific event space [ENVa]. There can be other actors in the environment ENV, from different species, with their own specific event spaces [ENVa], and the contents of the different event spaces can overlap with regard to certain events.

A behavior is some observable movement of the body of some actor.

Changes X can be associated with certain behavior of certain actors or with non-actor conditions.

Thus when human or non-human actors in an environment are moving, they show a behavior which can eventually be associated with some observable changes.

CHANGE

Besides being associated with observable events in the (species-specific) environment, the expression 'change' is understood here as a kind of inner state in an actor which can compare past (stored) states Spast with an actual state Snow. If the past and actual states differ in some observable aspect, Diff(Spast, Snow) ≠ 0, then there exists some change X, or Diff(Spast, Snow) = X. Usually the actor perceiving a change X will assume that this internal structure represents something external to the brain, but this need not be the case. It helps if other human actors confirm such a change perception, although even this does not guarantee that a change really occurred. In the real world a whole group of human actors can share a wrong interpretation.
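As an illustrative sketch (not part of any oksimo implementation), the change criterion above can be modelled with states as sets of observed facts; the set-based representation and the example facts are assumptions made purely for illustration:

```python
# Illustrative sketch: states as sets of observed facts (an assumed
# representation, not the internal neural format described in the text).

def diff(s_past: frozenset, s_now: frozenset) -> frozenset:
    """Symmetric difference: the observable change X between two states."""
    return s_past ^ s_now

s_past = frozenset({"table is white", "door is closed"})
s_now = frozenset({"table is white", "door is open"})

x = diff(s_past, s_now)
# x is non-empty, so Diff(Spast, Snow) ≠ 0: a change X has been perceived
```

If the two states are identical, `diff` returns the empty set, i.e. Diff(Spast, Snow) = 0 and no change is registered.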

SYMBOLIC COMMUNICATION AND MEANING

It is a specialty of human actors, shared to some degree by other non-human biological actors, that they can not only build up internal representations ENVint of the reality external to the brain (the body itself or the world beyond the body), which are mostly unconscious and only partially conscious, but can also build up structures of expressions of an internal language Lint which can be mimicked to a high degree by expressions in the body-external environment ENV, called expressions of an ordinary language L.

For this to work one  has  to assume that there exists an internal mapping from internal representations ENVint into the expressions of the internal language   Lint as

meaning : ENVint <—> Lint.

and

speaking: Lint —> L

hearing: Lint <— L

Thus human actors can use their ordinary language L to activate internal encodings/ decodings with regard to the internal representations ENVint  gained so far. This is called here symbolic communication.
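A minimal toy model of these three mappings can be sketched with lookup tables; all entries and the representation format are invented for illustration and stand in for the (inaccessible) learned meaning function:

```python
# Hypothetical toy model of meaning, speaking, and hearing as finite
# dictionaries. All names and entries are illustrative assumptions.

meaning = {"<repr:table>": "TABLE", "<repr:white>": "WHITE"}   # ENVint -> Lint
meaning_inv = {v: k for k, v in meaning.items()}               # Lint -> ENVint

speaking = {"TABLE": "table", "WHITE": "white"}                # Lint -> L
hearing = {v: k for k, v in speaking.items()}                  # L -> Lint

def utter(internal_repr: str) -> str:
    """Speaker: internal representation -> internal word -> spoken word."""
    return speaking[meaning[internal_repr]]

def understand(external_word: str) -> str:
    """Hearer: heard word -> internal word -> internal representation."""
    return meaning_inv[hearing[external_word]]
```

Symbolic communication succeeds in this toy model when a round trip preserves the representation: `understand(utter("<repr:table>"))` yields `"<repr:table>"` again.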

NO SPEECH ACTS

Classifying the occurrences of symbolic expressions during symbolic communication is a nearly infinite undertaking. A first impression of the unsolvability of such a classification task can be gained by reading the Philosophical Investigations of Ludwig Wittgenstein. [5] Later attempts by different philosophers and scientists, e.g. under the heading of speech acts [3], have not been fully convincing to this day.

Instead of assuming here a complete scientific framework to classify  occurrences of symbolic expressions of an ordinary language L we will only look to some examples and discuss these.

KINDS OF EXPRESSIONS

In what follows we will look to some selected examples of symbolic expressions and discuss these.

(Decidable) Concrete Expressions [(D)CE]

It is assumed here that two human actors A and B speaking the same ordinary language L are capable, in a concrete situation S, of describing objects OBJ and properties PROP of this situation in such a way that the hearer of a concrete expression E can decide whether the meaning encoded in that expression by the speaker is part of the observable situation S or not.

Thus, if A and B are together in a room with a white wooden table and there is enough light for observation, then B can understand what A is saying when he states 'There is a white wooden table.'

To understand means here that both human actors are able to perceive the white wooden table as an object with properties. Their brains transform these external signals into internal neural signals forming an inner, not 1-to-1, representation ENVint, which can further be mapped by the learned meaning function into expressions of the inner language Lint and then, by the speaker, into the external expressions of the learned ordinary language L. If the hearer can hear these spoken expressions, he can translate them back into internal expressions which can be mapped onto the learned internal representations ENVint. In everyday situations there is a high probability that the hearer can then respond with a spoken 'Yes, that's true'.

If a human actor utters a symbolic expression with regard to some observable property of the external environment and the other human actor responds with a confirmation, then such an utterance is called here a decidable symbolic expression of the ordinary language L. In this case one can classify such an expression as being true; otherwise the expression is classified as being not true.

The case of being not true is not a simple case. Being not true can mean: (i) it is actually simply not given; (ii) it is conceivable that the meaning could become true if the external situation were different; (iii) it is, in the light of the accessible knowledge, not conceivable that the meaning could become true in any situation; (iv) the meaning is too fuzzy to decide which of the cases (i) – (iii) fits.

Cognitive Abstraction Processes

Before we talk about (Undecidable) Universal Expressions [(U)UE] it has to be clarified that the internal mappings in a human actor are not only non-1-to-1 mappings; they are additionally automatic transformation processes: concrete perceptions of concrete environmental matters are automatically transformed by the brain into different kinds of abstracted states, with the concrete incoming signals acting as a trigger either to start a new abstracted state or to modify an existing one. Given such abstracted states, a multitude of other neural processes can process these abstracted states further, embedded in numerous different relationships.

Thus the assumed internal language Lint does not map the neural processes which process the concrete events as such, but the processed abstracted states! Language expressions can never be related directly to concrete material, because this concrete material has no direct neural basis. What works, completely unconsciously, is that the brain can detect that an actual neural pattern nn has some similarity with a given abstracted structure NN, and this concrete pattern nn is then internally classified as an instance of NN. That means we can recognize that a perceived concrete matter nn is, in 'the light of' our available (unconscious) knowledge, an NN, but we cannot argue explicitly why. The decision has been processed automatically (unconsciously), but we can become aware of the result of this unconscious process.

Universal (Undecidable) Expressions [U(U)E]

Let us repeat the expression ‘There is a white wooden table‘ which has been used before as an example of a concrete decidable expression.

If one looks at the different parts of this expression, then the partial expressions 'white', 'wooden', 'table' can be mapped by a learned meaning function φ into abstracted structures which are the result of internal processing. This means there can be countably infinitely many concrete instances in the external environment ENV which can be understood as being white; the same holds for the expressions 'wooden' and 'table'. Thus the expressions 'white', 'wooden', 'table' are all related to abstracted structures and therefore have to be classified as universal expressions, which as such are, strictly speaking, not decidable, because they can be true in many concrete situations with different concrete matters. Put otherwise: an expression whose meaning function φ points to an abstracted structure is asymmetric. One expression can be related to many different perceivable concrete matters, while certain members of a set of different perceived concrete matters can be related to one and the same abstracted structure, on account of similarities between properties embedded in the perceived concrete matter and properties which are part of the abstracted structure.

From a cognitive point of view one can describe these matters as follows: the expression, like 'table', which points to a cognitive abstracted structure 'T' includes a set of properties Π, and every concrete perceived structure 't' (caused e.g. by some concrete matter in our environment which we would classify as a 'table') must have a certain amount of properties Π* such that one can say the properties Π* are entailed in the set of properties Π of the abstracted structure T, thus Π* ⊆ Π. In which circumstances some speaker-hearer will say that something perceived concrete 'is' a table or 'is not' a table will depend on the learning history of this speaker-hearer. A child at the beginning of learning a language L may call something a 'chair', and the parents will correct the child and perhaps say 'no, this is a table'.
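The Π* ⊆ Π criterion can be sketched as a simple subset test; the concrete property names below are invented examples, not claims about how the brain actually encodes the abstracted structure 'table':

```python
# Sketch of the Π* ⊆ Π criterion: a perceived concrete structure counts
# as an instance of an abstracted structure if its relevant perceived
# properties are contained in the abstraction's property set Π.
# The property names are illustrative assumptions.

TABLE_PROPS = {"flat top", "legs", "supports objects"}  # Π of abstract 'table'

def is_instance(perceived_props: set, abstract_props: set) -> bool:
    """Π* ⊆ Π : every relevant perceived property belongs to the abstraction."""
    return perceived_props <= abstract_props

# a perceived thing with a flat top and legs passes as a 'table';
# one with a backrest does not (it would rather match 'chair')
table_like = is_instance({"flat top", "legs"}, TABLE_PROPS)
chair_like = is_instance({"flat top", "backrest"}, TABLE_PROPS)
```

Which properties end up in Π, and how strict the subset test is in practice, depends on the learning history of the speaker-hearer, as described above.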

Thus the expression 'There is a white wooden table' as such is neither true nor false, because it is not clear which set of concrete perceptions shall be derived from the possible internal meaning mappings. But if a concrete situation S is given, with a concrete object with concrete properties, then a speaker can 'translate' his or her concrete perceptions with the learned meaning function φ into a composed expression using universal expressions. In such a situation, where the speaker is part of the real situation S, he or she can recognize that the given situation is an instance of the abstracted structures encoded in the used expression. Recognizing this instance relation interprets the universal expression in a way that makes it fit the real given situation, and thereby the universal expression is transformed by interpretation with φ into a concrete decidable expression.

SUMMING UP

Thus the decisive moment of turning undecidable universal expressions U(U)E into decidable concrete expressions (D)CE is a human actor A behaving as a speaker-hearer of the used language L. Without a speaker-hearer every universal expression is undefined and neither true nor false.

makedecidable :  S x Ahum x E —> E x {true, false}

This reads as follows: If you want to know whether an expression E is concrete and, as being concrete, is 'true' or 'false', then ask a human actor Ahum who is part of a concrete situation S, and the human actor shall answer whether the expression E can be interpreted such that E can be classified as being either 'true' or 'false'.

The function 'makedecidable()' is therefore the description (like a 'recipe') of a real process in the real world with real actors. The important factors in this description are the meaning functions inside the participating human actors. Although it is not possible to describe these meaning functions directly, one can check their behavior and define an abstract model which describes the observable behavior of speaker-hearers of the language L. This is an empirical model and represents the typical case of behavioral models used in psychology, biology, sociology, etc.
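The 'recipe' character of makedecidable can be sketched by treating the human actor as a black-box judgement function; the toy actor below, which merely checks membership of the expression in the situation, is of course a stand-in assumption for the real, inaccessible meaning function:

```python
# Minimal sketch of makedecidable : S x Ahum x E -> E x {true, false}.
# The human actor is modelled as an opaque judgement function; this is
# an illustrative assumption, not a model of real meaning functions.

from typing import Callable, Tuple

def makedecidable(situation: set,
                  actor: Callable[[set, str], bool],
                  expression: str) -> Tuple[str, bool]:
    """Ask a human actor inside situation S whether expression E holds."""
    return expression, actor(situation, expression)

# A toy 'actor' that simply checks whether the expression occurs in S:
toy_actor = lambda s, e: e in s

situation = {"There is a white wooden table"}
expr, verdict = makedecidable(situation, toy_actor,
                              "There is a white wooden table")
```

Replacing `toy_actor` with observations of real speaker-hearers is exactly the step from this sketch to the empirical behavioral model described above.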

SOURCES

[1] Jakob Johann Freiherr von Uexküll (German: [ˈʏkskʏl])(1864 – 1944) https://en.wikipedia.org/wiki/Jakob_Johann_von_Uexk%C3%BCll

[2] Jakob von Uexküll, 1909, Umwelt und Innenwelt der Tiere. Berlin: J. Springer. (Download: https://ia802708.us.archive.org/13/items/umweltundinnenwe00uexk/umweltundinnenwe00uexk.pdf )

[3] Wikipedia EN, Speech acts: https://en.wikipedia.org/wiki/Speech_act

[4] Ludwig Josef Johann Wittgenstein ( 1889 – 1951): https://en.wikipedia.org/wiki/Ludwig_Wittgenstein

[5] Ludwig Wittgenstein, 1953: Philosophische Untersuchungen [PU]; Philosophical Investigations [PI], translated by G. E. M. Anscombe. For more details see: https://en.wikipedia.org/wiki/Philosophical_Investigations

OKSIMO (RELOADED) SOFTWARE

eJournal: uffmm.org
ISSN 2567-6458, 15.March 2021 – 1.April 2022
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This post is part of the theme called ‘Oksimo Software Structures‘ which in turn is part of the overall uffmm.org Blog.

OKSIMO SOFTWARE APPLICATION STRUCTURE from March 2021

Figure: General outline of oksimo applications.

Seen from the users

Everybody who has a device which can be connected to the internet and runs a browser can address the URL of an oksimo server. Multiple users from around the world can act as a 'virtual user group', as a 'team'.

Seen from smart devices

Every application which can interact with the internet can connect to an oksimo server and send measurement data to the server, or can even participate interactively within a simulation, acting as a smart actor.

This capability of using empirical data in real time during a simulation also allows an oksimo application to function within a smart city environment.

USER – SIMULATIONS

Users can start a given simulation either by loading a simulation presented as an HTML page or by loading a simulation after logging in to the server.

In case of a simulation as HTML-page the user needs a simple simulation application on his own device.

In case of a simulation by server login the user can simulate, and while doing so all additional resources (smart actors, external data sources, …) can be used.

User editing & simulation

A user can edit a new simulation on his or her local device if a text editor is available there. The edited simulation can be handed to the local oksimo-server app and simulated. In this case no team-work and no usage of external sources is possible.

Being logged in, a user can work together with other users while editing a new simulation. The edited simulation can be run at any time. Additional external resources can be activated if these are freely callable. Depending on the oksimo server, a set of 'standard smart actors' is located in the oksimo database and can be activated.

HMI ANALYSIS, Part 4: Tool based Actor Story Development with Testing and Gaming

Integrating Engineering and the Human Factor (info@uffmm.org)
eJournal uffmm.org ISSN 2567-6458, March 3-4, 2021,
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: March 4, 2021, 07:49h (Minor corrections; relating to the UN SDGs)

HISTORY

As described in the uffmm eJournal  the wider context of this software project is an integrated  engineering theory called Distributed Actor-Actor Interaction [DAAI] further extended to the Collective Man-Machine Intelligence [CM:MI] paradigm.  This document is part of the Case Studies section.

HMI ANALYSIS, Part 4: Tool based Actor Story Development with Testing and Gaming

Context

This text is preceded by the following texts:

INFO GRAPH

Overview about different scenarios which will be possible for the development, simulation, testing and gaming of actor stories using the oksimo software tool

Introduction

In the preceding post it has been explained, how one can format an actor story [AS] as a theory in the  format  of  an Evaluated Theory Tε with Algorithmic Intelligence:   Tε,α=<M,∑,ε,α>.

In the following text it will be explained which kinds of different scenarios will be possible to elaborate, to simulate, to test, and to enable gaming with  an actor story theory by using the oksimo software tool.

UNIVERSAL TEAM

The classical distinction between certain types of managers, special experts, and the rest of the world is given up here in favor of a stronger generalization: everybody is a potential expert with regard to a future which nobody knows. This is emphasized by the fact that everybody can use his or her usual mother tongue, a normal language, any language. Nothing more is needed.

BASIC MODELS (S, X)

As minimal elements for all possible applications it is assumed here that the experts define at least a given situation (state) [S] and a set of change rules [X].

The given state S is either (i) taken as it is or (ii) taken as a state which should be improved. In both cases the initial state S is called the start state [S0].

The change rules X describe possible changes which transform a given state S into a changed successor state S’.

A pair of S and X as (S,X) is called a basic model M(S,X). One can define as many models as one wants.

A DIRECTION BY A VISION V

A vision [V] can describe a possible state SV in an assumed future. If such a state SV is given, then this state becomes a goal state SGoal. In this case we assume V ≠ 0. If no explicit goal is given, then we assume V = 0.

DEVELOPMENT BY GOALS

If a vision is given (V ≠ 0), then the vision can be used to induce a direction which can or shall be approached by creating a set X which enables the generation of a sequence of states, with the start state S0 as the first state, followed by successor states Si, until the goal state SGoal has been reached, or at least the goal state is a subset of the reached state: SGoal ⊆ Sn.
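A goal-directed run of this kind can be sketched as a loop which applies change rules until SGoal ⊆ Sn holds; the states-as-sets representation and the (condition, Eplus, Eminus) rule format are assumptions for illustration, not the oksimo file format:

```python
# Sketch of a goal-directed run: apply change rules until the goal state
# is a subset of the reached state (SGoal ⊆ Sn). States are sets of
# expressions; a rule is an assumed (condition, E_plus, E_minus) triple.

def reached(state: set, goal: set) -> bool:
    return goal <= state  # SGoal ⊆ Sn

def run(s0: set, rules, goal: set, max_steps: int = 100):
    """Return the generated sequence of states <S0, S', S'', ...>."""
    state, trace = set(s0), [set(s0)]
    for _ in range(max_steps):
        if reached(state, goal):
            break
        for cond, e_plus, e_minus in rules:
            if cond <= state:                      # condition C ⊆ S
                state = (state | e_plus) - e_minus  # S' = S + E+ - E-
                trace.append(set(state))
                break
        else:
            break  # no rule applicable: the run stops short of the goal
    return trace

# toy example: seed -> plant -> plant + fruit
rules = [({"seed"}, {"plant"}, {"seed"}),
         ({"plant"}, {"fruit"}, set())]
trace = run({"seed"}, rules, goal={"fruit"})
```

Several such models M(S,X) with different goals Vi could be run side by side simply by calling `run` with different rule sets and goals, mirroring the pluralistic setting described next.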

It is possible to use many basic models M(S,X) in parallel and for each model Mi one can define a different goal Vi (the typical situation in a pluralistic society).

Thus there can be many basic theories T(M,V) in parallel.

STEADY STATES (V = 0)

If no explicit visions are defined (V = 0) then every direction of change is allowed. A basic steady state theory T(M,V) with V = 0 can   be written as T(M,0). Whether such a case can be of interest is not clear at the moment.

BASIC INTERACTION PATTERNS

The following interaction modes are assumed as typical cases:

  1. N-1: Within an online session an interactive webpage with the oksimo software is active and the whole group can interact with the oksimo software tool.
  2. N-N-1: N-many participants can individually log into the interactive oksimo website and, being logged in, collaborate within the oksimo software on one project.
  3. N-N-N: N-many participants can individually log into the interactive oksimo website, where everybody can run his or her own process or collaborate in various ways.

The default case is case (1). The exact dates for the availability of modes (2) – (3) depend on how fast the roadmap can be realized.

BASIC APPLICATIONS
  1. Exploring Simulation-Based Development [ESBD] (V ≠ 0): If the main goal is to find a path from a given state today S (Now) to an envisioned state V in the future, then one has to collect appropriate change rules X to approach the final goal state SGoal better and better. Activating the simulator ∑ at will during the search and construction phase can be of great help, especially as the documents (S, X, V) become more and more complex.
  2. Embedded Simulation-Based Testing [ESBT] (V ≠ 0): If a basic actor story theory T(M,V) is given with a given goal (V ≠ 0), then it is of great help if the simulation is done in interactive mode, where the simulator does not apply the change rules by itself but asks the different logged-in users which rule they want to apply and how. These tests show not only which kinds of errors occur; over n-many repetitions they can also show to which degree a user can learn to behave task-conform. If the tests do not show the expected outcomes, this can point to possible deficiencies of the software as well as to specialties of the user.
  3. Embedded Simulation-Based Gaming [ESBG] (V ≠ 0): The case of gaming is partially different from the case of testing. Although it is assumed here too that at least one vision (goal) is given, it is additionally assumed that there exists a competition between different players or teams. Different from testing, gaming has, according to the goal(s), the role of a winner: the player or team which reaches a defined goal state before the other players or teams has won. As a side-effect of gaming one can also evaluate the playing environment and give some feedback to the developers.
ALGORITHMIC INTELLIGENCE
  1. Case ESBD, T(S,X,V,∑,ε,α): Because a normal simulation with the simulator always produces only one path from the start state to the goal state, it is desirable to have an algorithm α which runs on demand as many times as wanted; the algorithm α would search for all possible paths and at the same time look for those derivations where the goal state satisfies, with ε, certain special requirements. Thus the result of applying α to a given model M with the vision V would be the set SV* of all those final states which satisfy the special requirements.
  2. Case ESBG, T(S,X,V,∑,ε,α): The case of gaming allows at least three kinds of interesting applications for algorithmic intelligence: (i) introduce non-biological players with learning capabilities which can act simultaneously with the biological players; (ii) introduce non-biological players with learning capabilities which have to learn how to support, assist, and train biological players. This second case addresses the challenging task of developing algorithmic tutors for several kinds of learning tasks. (iii) Another variant of case (ii) is to enable the development of a personal algorithmic assistant who works with one person on a long-term basis.

The kinds of algorithmic intelligence in (2)(i)-(iii) are different from the algorithmic intelligence α mentioned in (1).
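The path-searching α of case (1) can be sketched as an exhaustive search over rule applications; the rule format, the depth bound, and the goal test are illustrative assumptions (probabilities are ignored here for simplicity):

```python
# Sketch of the path-search α: enumerate all rule-application paths from
# S0 and collect every reachable state passing the goal test, yielding
# the set SV* of qualifying final states. Rule format (C, E_plus, E_minus)
# is an assumption; rule probabilities are ignored in this sketch.

def alpha(s0: frozenset, rules, satisfies, max_depth: int = 10):
    """Return SV*: all states reachable from s0 that satisfy the test."""
    results, seen = set(), set()

    def explore(state: frozenset, depth: int):
        if state in seen or depth > max_depth:
            return
        seen.add(state)
        if satisfies(state):
            results.add(state)
        for cond, e_plus, e_minus in rules:
            if cond <= state:  # rule applicable: C ⊆ S
                explore(frozenset((state | e_plus) - e_minus), depth + 1)

    explore(s0, 0)
    return results

rules = [(frozenset({"a"}), frozenset({"b"}), frozenset()),
         (frozenset({"a"}), frozenset({"c"}), frozenset({"a"}))]
finals = alpha(frozenset({"a"}), rules, lambda s: "b" in s or "c" in s)
```

In a full version the `satisfies` predicate would be built from the evaluation ε and the vision V, so that only goal states meeting the special requirements are collected.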

TYPES OF ACTORS

As the default standard case of an actor it is assumed that there are biological actors, usually human persons, which will not be analyzed with regard to their inner structure [IS]. While the behavior of every system, and therefore of any biological system too, can be described with a behavior function φ: I x IS —> IS x O (if one has all the necessary knowledge), in the default case of biological systems no behavior function φ is specified: φ = 0. During interactive simulations biological systems act by themselves.

If non-biological actors are used, e.g. automata with a certain machine program (an algorithm), then one can use these only if one has a fully specified behavior function φ. From this it follows that a change rule associated with a non-biological actor has in its Eplus and Eminus parts not a concrete expression but a variable, which will be computed during the simulation by the non-biological actor depending on its input and its behavior function φ: φ(input) = (Eplus, Eminus).
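A fully specified behavior function φ: I x IS —> IS x O can be sketched as an ordinary function whose output is the (Eplus, Eminus) pair; the door-closing automaton below is a hypothetical example invented for illustration:

```python
# Hypothetical sketch of a non-biological actor with a fully specified
# behavior function φ: I x IS -> IS x O. The output O is the pair
# (E_plus, E_minus) to be inserted into the actor's change rule.

def phi_door_closer(input_state: set, internal_state: dict):
    """A toy automaton that closes any open door it perceives."""
    if "door is open" in input_state:
        e_plus, e_minus = {"door is closed"}, {"door is open"}
    else:
        e_plus, e_minus = set(), set()
    # update the inner structure IS (here just a step counter)
    internal_state = dict(internal_state)
    internal_state["steps"] = internal_state.get("steps", 0) + 1
    return internal_state, (e_plus, e_minus)

new_is, (ep, em) = phi_door_closer({"door is open"}, {})
```

A biological actor, by contrast, corresponds to φ = 0 in this sketch: during an interactive simulation the (Eplus, Eminus) pair is supplied by the person instead of being computed.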

FINAL COMMENT

Everybody who has read parts (1) – (4) now has a general knowledge of the motivation to develop the oksimo software tool, which supports humankind in communicating and thinking better about possible futures, and a first understanding (hopefully :-)) of how this tool can work. Reading the UN Sustainable Development Goals [SDGs] [1] you will learn that SDG 4 (Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all) is fundamental to all other SDGs. The oksimo software tool is one tool which can help to reach these goals.

REFERENCES

[1] The 2030 Agenda for Sustainable Development, adopted by all United Nations Member States in 2015, provides a shared blueprint for peace and prosperity for people and the planet, now and into the future. At its heart are the 17 Sustainable Development Goals (SDGs), which are an urgent call for action by all countries – developed and developing – in a global partnership. They recognize that ending poverty and other deprivations must go hand-in-hand with strategies that improve health and education, reduce inequality, and spur economic growth – all while tackling climate change and working to preserve our oceans and forests. See PDF: https://sdgs.un.org/sites/default/files/publication/21252030%20Agenda%20for%20Sustainable%20Development%20web.pdf

[2] UN, SDG4, PDF, argumentation why SDG4 is fundamental for all other SDGs: https://sdgs.un.org/sites/default/files/publications/2275sdbeginswitheducation.pdf

HMI Analysis for the CM:MI paradigm. Part 3. Actor Story and Theories

Integrating Engineering and the Human Factor (info@uffmm.org)
eJournal uffmm.org ISSN 2567-6458, March 2, 2021,
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: March 2, 2021 13:59h (Minor corrections)

HISTORY

As described in the uffmm eJournal  the wider context of this software project is an integrated  engineering theory called Distributed Actor-Actor Interaction [DAAI] further extended to the Collective Man-Machine Intelligence [CM:MI] paradigm.  This document is part of the Case Studies section.

HMI ANALYSIS, Part 3: Actor Story and  Theories

Context

This text is preceded by the following texts:

Introduction

Having a vision is that moment where something really new in the whole universe gets an initial status in some real brain, which can enable other neural events, which can possibly be translated into bodily events, which finally can change the body-external outside world. If this possibility is turned into reality, then the outside world has been changed.

When human persons (groups of Homo sapiens specimens) acting as experts (here acting as stakeholders and intended users at once, but in different roles!) have stated a problem and a vision document, then they have to translate these inevitably more fuzzy than clear ideas into the concrete terms of an everyday world, into something which can really work.

To enable real cooperation the experts have to generate a symbolic description of their vision (called a specification), using an everyday language possibly enhanced by special expressions, in a way that makes clear to the whole group which kinds of real events, actions, and processes are intended.

In the general case an engineering specification describes concrete forms of entanglement of human persons which enable these persons to cooperate in a real situation. Thereby the translation of the vision inside the brain into the everyday body-external reality happens. This is the language of life in the universe.

WRITING A STORY

To elaborate a usable specification can metaphorically be understood  as the writing of a new story: which kinds of actors will do something in certain situations, what kinds of other objects, instruments etc. will be used, what kinds of intrinsic motivations and experiences are pushing individual actors, what are possible outcomes of situations with certain actors, which kind of cooperation is  helpful, and the like. Such a story is  called here  Actor Story [AS].

COULD BE REAL

An Actor Story must be written in such a way that all participating experts can understand the language of the specification, so that the content, the meaning of the specification, is either decidably real or can eventually become real. At least the starting point of the story should be classifiable as being decidably actually real. What it means to be decidably actually real has to be defined and agreed between the participating experts before they start writing the Actor Story.

ACTOR STORY [AS]

An Actor Story assumes that the described reality is classifiable as a set of situations (states); a situation as part of the Actor Story, abbreviated situationAS, is understood as a set of expressions of some everyday language. Every expression being part of a situationAS can be decided as being real (= being true) in the understood real situation.

If the understood real situation is changing (by some event), then the describing situationAS has to be changed too; either some expressions have to be removed or have to be added.

Every kind of change in the real situation S* has to be represented in the actor story with the situationAS S symbolically in the format of a change rule:

X: If condition  C is satisfied in S then with probability π  add to S Eplus and remove from  S Eminus.

or as a formula:

S’π = S + Eplus – Eminus

This reads as follows: If there is an situationAS S and there is a change rule X, then you can apply this change rule X with probability π onto S if the condition of X is satisfied in S. In that case you have to add Eplus to S and you have to remove Eminus from S. The result of these operations is the new (successor) state S’.

The expression 'C is satisfied in S' means that all elements of C are also elements of S, written C ⊆ S. The expression 'add Eplus to S' means that the set Eplus is unified with the set S, written Eplus ∪ S (or here: Eplus + S). The expression 'remove Eminus from S' means that the set Eminus is subtracted from the set S, written S – Eminus.
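Read as operations on sets of expressions, a single rule application can be sketched as follows; the rule tuple format (C, π, Eplus, Eminus) and the example facts are assumptions for illustration:

```python
# Sketch of one change-rule application: if C ⊆ S, then with probability π
# the successor is S' = S + E_plus - E_minus. The tuple format is assumed.

import random

def apply_rule(state: set, rule, rng=random.random) -> set:
    """rule = (C, pi, E_plus, E_minus); returns the successor state S'."""
    cond, pi, e_plus, e_minus = rule
    if cond <= state and rng() < pi:        # C ⊆ S and the rule fires
        return (state | e_plus) - e_minus   # S' = S + E_plus - E_minus
    return set(state)                       # not applicable / did not fire

# deterministic example rule (pi = 1.0): opening a closed door
rule = ({"door is closed"}, 1.0, {"door is open"}, {"door is closed"})
s_next = apply_rule({"door is closed", "light is on"}, rule)
```

With π < 1.0 repeated runs from the same state can yield different successor states, which is exactly what makes repeated simulation informative.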

The concept of applying a change rule X to a given state S, resulting in S', is logically a kind of derivation. Given S and X, you derive by applying X the new state S'. One can write this as S,X ⊢X S'. The 'meaning' of the sign ⊢ is explained above.

Because every successor state S’ can become again a given state S onto which change rules X can be applied — written shortly as X(S)=S’, X(S’)=S”, … — the repeated application of change rules X can generate a whole sequence of states, written as SQ(S,X) = <S’, S”, … Sgoal>.

To realize such a derivation in the real world outside of the thinking of the experts one needs a machine, a computer (formally an automaton) which can read S and X documents and can then compute the derivation leading to S'. An automaton doing such a job is often called a simulator [SIM], abbreviated here as ∑. We can then write with more information:

S,X ⊢∑ S'

This reads: Given a set S of states and a set X of change rules, we can derive by an actor story simulator ∑ a successor state S'.

A Model M=<S,X>

In this context of a set S and a set of change rules X we can speak of a model M which is defined by these two sets.

A Theory T=<M,∑>

Combining a model M with an actor story simulator ∑ yields a theory T which allows a set of derivations based on the model, written as SQ(S,X,⊢∑) = <S', S'', … Sgoal>. Every derived final state Sgoal in such a derivation is called a theorem of T.

An Empirical Theory Temp

An empirical theory Temp is possible if there exists a theory T together with a group of experts using this theory, where these experts can interpret the expressions used in theory T by their built-in meaning functions in such a way that they can always decide whether the expressions are related to a real situation or not.

Evaluation [ε]

If one generates an Actor Story Theory [TAS] then it can be of practical importance to get some measure of how good this theory is. Because measurement is always a comparison between the subject x to be measured and some agreed standard s, one has to clarify which kind of standard for 'being good' is available. In the general case the only possible source of standards are the experts themselves. In the context of an Actor Story the experts have agreed on some vision [V] which they consider a better state than a given state S classified as a problem [P]. These assumptions allow a possible evaluation of a given state S in the 'light' of an agreed vision V as follows:

ε: V × S —> [0, 100]
ε(V,S) = 100 · |V ∩ S| / |V| [%]

This reads as follows: the evaluation ε maps a vision V and a state S to the percentage of elements of the set V which are contained in the set S. Thus if no element of V is contained in S then 0% of the vision is realized; if all elements are contained, then 100%, etc. The more ‘fine-grained’ the set V is, the more ‘fine-grained’ the evaluation can be.
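A minimal sketch of ε in Python, assuming a vision V and a state S are both represented as sets of statements (the function name and the example facts are invented for illustration):

```python
def evaluate(vision, state):
    """ε(V, S): the percentage of vision facts in V that are contained in S."""
    if not vision:
        return 0.0
    return 100.0 * len(vision & state) / len(vision)

# Example: one of three vision facts is realized in the state.
V = {"kindergarten exists", "free for all children", "ecologically built"}
S = {"kindergarten exists", "budget approved"}
score = evaluate(V, S)   # one third of the vision, i.e. about 33.3 %
```

Facts in S that are not part of V (like the budget fact above) do not change the score; only the coverage of V counts.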

An Evaluated Theory Tε=<M,∑,ε>

If one combines the concept of a theory T with the concept of evaluation ε then one can use the evaluation in combination with the derivation in such a way that every state in a derivation SQ(S,X,⊢∑) = <S’, S”, … Sgoal> will additionally be evaluated; thus one gets sequences of pairs as follows:

SQ(S,X,⊢∑,ε) = <(S’,ε(V,S’)), (S”,ε(V,S”)), …, (Sgoal, ε(V,Sgoal))>

In the ideal case Sgoal is evaluated as 100% ‘good’. In real cases 100% is only an ideal value which usually will only be approximated up to some threshold.
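Assuming again that states are sets of facts and change rules are functions on them, the evaluated derivation — a sequence of (state, evaluation) pairs — could be sketched like this (illustrative only, not the oksimo code):

```python
def evaluated_run(state, rules, vision, steps):
    """SQ(S, X, ⊢∑, ε): derive successor states, pairing each with ε(V, S)."""
    def epsilon(v, s):
        return 100.0 * len(v & s) / len(v) if v else 0.0
    pairs = []
    for _ in range(steps):
        for rule in rules:
            state = rule(state)                    # X(S) = S'
        pairs.append((state, epsilon(vision, state)))
    return pairs

# Example: one rule realizes one of the two vision facts.
vision = frozenset({"fact a", "fact b"})
add_a = lambda s: s | {"fact a"}
run = evaluated_run(frozenset(), [add_a], vision, steps=1)
# run contains the pair (S', ε(V, S')) with ε = 50.0
```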

An Evaluated Theory Tε with Algorithmic Intelligence Tε,α=<M,∑,ε,α>

Because every theory defines a so-called problem space, here enhanced by an evaluation function, one can add an additional operation α (realized by an algorithm) which repeats the simulator-based derivations, enhanced with the evaluations, to identify those sets of theorems which qualify as the best theorems according to some given criteria. This operation α is here called the algorithmic intelligence of an actor story [αAS]. The existence of such an algorithmic intelligence of an actor story [αAS] allows the introduction of another derivation concept:

S,X ⊢∑,ε,α S* ⊆  S’

This reads as follows: given a state S and a set X, an evaluated theory with algorithmic intelligence Tε,α can derive a subset S* of all possible theorems S’ where S* matches certain given criteria within V.
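A brute-force sketch of what such an operation α could look like: try every combination of change rules, run the evaluated derivation for each, and keep the best-evaluated final states. This is only a toy illustration of the idea — a real implementation would search the problem space far more cleverly — and all names are assumptions:

```python
import itertools

def alpha(start, rules, vision, steps, top_k=1):
    """Toy α: enumerate rule subsets, rank final states by ε(V, S)."""
    def epsilon(v, s):
        return 100.0 * len(v & s) / len(v) if v else 0.0
    candidates = []
    for n in range(len(rules) + 1):
        for subset in itertools.combinations(rules, n):
            state = start
            for _ in range(steps):
                for rule in subset:
                    state = rule(state)
            candidates.append((epsilon(vision, state), state))
    candidates.sort(key=lambda pair: pair[0], reverse=True)
    return candidates[:top_k]        # S*: the best-evaluated theorems found

# Example: α identifies the rule that fully realizes the vision.
vision = frozenset({"goal reached"})
good = lambda s: s | {"goal reached"}
bad = lambda s: s | {"irrelevant fact"}
best = alpha(frozenset(), [good, bad], vision, steps=1)
# best[0] is a pair (100.0, S*) where S* contains the vision fact
```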

WHERE WE ARE NOW

As should have become clear by now, the work of HMI analysis is the elaboration of a story which can be done in the format of different kinds of theories, all of which can be simulated and evaluated. Even better, the only language you have to know is your everyday language, your mother tongue (mathematics is understood here as a sub-language of the everyday language, which in some special cases can be of some help). For this theory every human person — of all ages! — can be a valuable colleague to help you understand possible futures better. Because all parts of an actor story theory are plain texts, everybody can read and understand everything. And if different groups of experts have investigated different aspects of a common field you can merge all texts by only ‘pressing a button’ and you will immediately see how all these texts either work together or show discrepancies. The last effect is a great opportunity to improve learning and understanding! Together we represent some of the power of life in the universe.

CONTINUATION

See here.

HMI Analysis for the CM:MI paradigm. Part 2. Problem and Vision

Integrating Engineering and the Human Factor (info@uffmm.org)
eJournal uffmm.org ISSN 2567-6458, February 27-March 16, 2021,
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: March 16, 2021 (minor corrections)

HISTORY

As described in the uffmm eJournal  the wider context of this software project is an integrated  engineering theory called Distributed Actor-Actor Interaction [DAAI] further extended to the Collective Man-Machine Intelligence [CM:MI] paradigm.  This document is part of the Case Studies section.

HMI ANALYSIS, Part 2: Problem & Vision

Context

This text is preceded by the following texts:

Introduction

Before one starts the HMI analysis, some stakeholders — in our case the users are both stakeholders and users in one role — have to present some given situation — classifiable as a ‘problem’ — to depart from, and a vision as the envisioned goal to be realized.

Here we give a short description of the problem for the CM:MI paradigm and of the vision of what should be gained.

Problem: Mankind on the Planet Earth

In this project mankind on planet earth is understood as the primary problem. ‘Mankind’ is seen here as the life form called homo sapiens. Based on the findings of biological evolution one can state that homo sapiens has — besides many other wonderful capabilities — at least two extraordinary capabilities:

Outside to Inside

The whole body with the brain is able to continuously convert body-external events into internal, neural events. And the brain inside the body receives many events inside the body as external events too. Thus in the brain we can observe a mixup of body-external (outside 1) and body-internal events (outside 2), realized as a set of billions of highly interrelated neural processes. Most of these neural processes are unconscious; a small part is conscious. Nevertheless these unconscious and conscious events are neurally interrelated. This overall conversion from outside 1 and outside 2 into neural processes can be seen as a mapping. As we know today from biology, psychology and the brain sciences, this mapping is not a 1-1 mapping. The brain performs all the time a kind of filtering — mostly unconscious — sorting out only those events which are judged by the brain to be important. Furthermore the brain time-slices all its sensory inputs and stores these time-slices (called ‘memories’), whereby these time-slices again are not 1-1 copies. The storing of time-slices is a complex (unconscious) process with many kinds of operations like structuring, associating, abstracting, evaluating, and more. From this one can deduce that the content of an individual brain and the surrounding reality of the own body as well as the world outside the own body can be highly different. All kinds of perceived and stored neural events which are or can become conscious are here called conscious cognitive substrates or cognitive objects.

Inside to Outside (to Inside)

Generally it is known that homo sapiens can produce with its body events which have some impact on the world outside the body. One kind of such events is the production of all kinds of movements, including gestures, running, grasping with hands, painting and writing, as well as sounds by the voice. Of special interest here are forms of communication between different humans, and even more specifically those communications enabled by the spoken sounds of a language as well as the written signs of a language. Spoken sounds as well as written signs are here called expressions associated with a known language. Expressions as such have no meaning (a non-speaker of a language L can hear or see expressions of the language L but will never understand anything). But as everyday experience shows, nearly every child starts very soon to learn which kinds of expressions belong to a language and with which kinds of shared experiences they can be associated. This learning is related to many complex neural processes which map expressions internally onto — conscious and unconscious — cognitive objects (including expressions!). This mapping builds up an internal meaning function from expressions into cognitive objects and vice versa. Because expressions have a dual face (being internal neural structures as well as body-outside events by conversion from the inside to the body-outside), it is possible that a homo sapiens can transmit its internal encoding of cognitive objects as expressions from its inside to the outside, and thereby another homo sapiens can perceive the produced outside expression and can map it onto an internal expression.
As far as the meaning function of the receiving homo sapiens is sufficiently similar to the meaning function of the sending homo sapiens, there is some probability that the receiving homo sapiens can activate from its memory cognitive objects which have some similarity with those of the sending homo sapiens.

Although we know today of different kinds of animals having some form of language, there is no species known which is comparable to homo sapiens with regard to language. This explains to a large extent why the homo sapiens population was able to cooperate in a way which not only can include many persons but can also stretch through long periods of time and can include highly complex cognitive objects and associated behavior.

Negative Complexity

In 2006 I introduced the term negative complexity in my writings to describe the fact that in the world surrounding an individual person there is an amount of language-encoded meaning available which is beyond the capacity of an individual brain to process. Thus whatever kind of experience or knowledge is accumulated in libraries and databases: as the negative complexity grows higher and higher, this knowledge can no longer help individual persons, whole groups, or whole populations to make constructive use of all this. What happens is that the intended, well structured ‘sound’ of knowledge is turned into a noisy environment which crashes all kinds of intended structures into nothing or into badly deformed somethings.

Entangled Humans

From quantum mechanics we know the idea of entangled states. But we need not dig into quantum mechanics to find other phenomena which manifest entangled states. Look around in your everyday world. There exist many occasions where a human person is acting in a situation, but the bodily separateness is a fake. While sitting before a laptop in a room the person is communicating within an online session with other persons. And depending on the social role, the membership in some social institution and being part of some project, this person will talk, perceive, feel, decide etc. with regard to the known rules of these social environments, which are represented as cognitive objects in its brain. Thus by knowledge, by cognition, the individual person is in its situation completely entangled with other persons who know these roles and rules and thereby follow these rules in their behavior too. Sitting with the body in a certain physical location somewhere on the planet does not matter in this moment. The primary reality is the cognitive space in the brains of the participating persons.

If you continue looking around in your everyday world you will probably detect that the everyday world is full of different kinds of cognitively induced entangled states of persons. These internalized structures function like protocols, like scripts, like rules in a game, telling everybody what is expected from him/her/x; and to the extent that people adhere to such internalized protocols, daily life has some structure, has some stability, and enables the planning of behavior where cooperation between different persons is necessary. In a cognitively enabled entangled state the individual person becomes a member of something greater, becoming a super person. Entangled persons can do things which usually are not possible as long as you are working as a pure individual person.[1]

Entangled Humans and Negative Complexity

Although entangled human persons can in principle enable more complex events, structures, processes, engineering and cultural work than single persons, human entanglement is still limited by the capacities of the brain as well as by the limits of normal communication. Increasing the amount of meaning-relevant artifacts or increasing the velocity of communication events makes things even worse. There are objective limits for human processing, which can run into negative complexity.

Future is not Waiting

The term ‘future‘ is cognitively empty: there exists nowhere an object which can  be called ‘future’. What we have is some local actual presence (the Now), which the body is turning into internal representations of some kind (becoming the Past), but something like a future does not exist, nowhere. Our knowledge about the future is radically zero.

Nevertheless, because our bodies are part of a physical world (planet, solar system, …), our entangled scientific work has identified some regularities of this physical world which can be used for some predictions of what could happen, with some probability, as assumed states at which our clocks will show a different time stamp. But because there are many processes running in parallel, composed of billions of parameters which can be tuned in many directions, a really good forecast is not simple and depends on many presuppositions.

Since its appearance some hundred thousand years ago in Africa, homo sapiens became a game changer which makes all computations nearly impossible. Not in the beginning of its appearance, but in the course of time homo sapiens enlarged its numbers, improved its skills in more and more areas, and meanwhile we know that homo sapiens has indeed started to crash more and more the conditions of its own life. And principled thinking points out that homo sapiens could even crash more than only planet earth. Every exemplar of a homo sapiens has a built-in freedom which allows it at any time to decide to behave in a different way (although in everyday life we mostly follow some protocols). And this built-in freedom is guided by actual knowledge, by emotions, and by available resources. The same child can become a great musician, a great mathematician, a philosopher, a great political leader, an engineer, … but giving the child no resources, depriving it of important social contexts, or giving it the wrong knowledge, it cannot manifest its freedom in full richness. As a human population we need the best out of all children.

Because the processes of the planet, the solar system etc. are going on, we are in need of good forecasts of possible futures, beyond our classical concepts of sharing knowledge. This is where our vision enters.

VISION: DEVELOPING TOGETHER POSSIBLE FUTURES

To find possible and reliable shapes of possible futures we have to exploit all experiences, all knowledge, all ideas, all kinds of creativity, by using maximal diversity. Because present knowledge can be false — as history tells us — we should not rule out those ideas which seem too crazy at first glance. Real innovations are always different from what we are used to at the time. Thus the following text is a first rough outline of the vision:

  1. Find a format
  2. which allows any kinds of people
  3. for any kind of given problem
  4. with at least one vision of a possible improvement
  5. together
  6. to search and to find a path leading from the given problem (Now) to the envisioned improved state (future).
  7. For all needed communication any kind of  everyday language should be enough.
  8. As needed this everyday language should be extendable with special expressions.
  9. These considerations about possible paths into the wanted envisioned future state should continuously be supported  by appropriate automatic simulations of such a path.
  10. These simulations should include automatic evaluations based on the given envisioned state.
  11. As far as possible adaptive algorithms should be available to support the search, finding and identification of the best cases (referenced by the visions)  within human planning.

REFERENCES or COMMENTS

[1] One of the most common entangled states in daily life is the usage of normal language! A normal language L works only because the rules of usage of this language L are shared by all speakers-hearers of this language, and these rules are explicit cognitive structures (not necessarily conscious, mostly unconscious!).

Continuation

Yes, it will happen 🙂 Here.

HMI Analysis for the CM:MI paradigm. Part 1

Integrating Engineering and the Human Factor (info@uffmm.org)
eJournal uffmm.org ISSN 2567-6458, February 25, 2021
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de
Last change: March 16, 2021 (Some minor corrections)
HISTORY

As described in the uffmm eJournal  the wider context of this software project is an integrated  engineering theory called Distributed Actor-Actor Interaction [DAAI] further extended to the Collective Man-Machine Intelligence [CM:MI] paradigm.  This document is part of the Case Studies section.

HMI ANALYSIS, Part 1
Introduction

Since January 2021 an intense series of posts has been published about how the new ideas manifested in the new software published in this journal can adequately be reflected in the DAAI theoretical framework. Because these ideas in the beginning included parts of philosophy, philosophy of science, and philosophy of engineering, these posts were first published in the German blog of the author (cognitiveagent.org). This series of posts started with an online lecture for students of the University of Leipzig together with students of the ‘Hochschule für Technik, Wirtschaft und Kultur (HTWK)’ on January 12, 2021. Here is the complete list of posts:

What follows in this text is an English version of the following 5 posts. This is not a 1-to-1 translation but rather a new version:

HMI Analysis as Part of Systems Engineering

HMI analysis as part of systems engineering illustrated with the oksimo software
HMI analysis for the CM:MI paradigm illustrated with the oksimo software concept

As described in the original DAAI theory paper the whole topic of HMI is here understood as a job within the systems engineering paradigm.

The specification process is a kind of ‘test’ of whether the DAAI format of the HMI analysis works with this new application too.

To remember, the main points of the integrated engineering concept are the following ones:

  1. A philosophical  framework (Philosophy of Science, Philosophy of Engineering, …), which gives the fundamentals for such a process.
  2. The engineering process as such where managers and engineers start the whole process and do it.
  3. After the clarification of the problem to be solved and a minimal vision, where to go, it is the job of the HMI analysis to clarify which requirements have to be fulfilled, to find an optimal solution for the intended product/ service. In modern versions of the HMI analysis substantial parts of the context, i.e. substantial parts of the surrounding society, have to be included in the analysis.
  4. Based on the HMI analysis  in  the logical design phase a mathematical structure has to be identified, which integrates all requirements sufficiently well. This mathematical structure has to be ‘map-able’ into a set of algorithms written in  appropriate programming languages running on  an appropriate platform (the mentioned phases Problem, Vision, HMI analysis, Logical Design are in reality highly iterative).
  5. During the implementation phase the algorithms will be translated into a real working system.
Which Kinds of Experts?

While the original version of the DAAI paper assumes as ‘experts’ only the typical managers and engineers of an engineering process, including all the typical settings, the new extended version under the label CM:MI (Collective Man-Machine Intelligence) has been generalized to any kind of human person as an expert, which allows a maximum of diversity. No one is the ‘absolute expert’.

Collective Intelligence

‘Intelligence’ is understood here as the whole of knowledge, experience, and motivation which can be the moving momentum inside a human person. ‘Collective’ means the situation where more than one person communicates with other persons to share this intelligence.

Man-Machine Symbiosis

Today there are discussions going on about the future of man and (intelligent) machines. Most of these discussions are very weak because they lack clear concepts of intelligent machines as well as of what a human person is. In the CM:MI paradigm the human person (together with all other biological systems) is seen at the center of the future (for reasons based on modern theories of biological evolution) and intelligent machines are seen as supporting devices (although it is assumed here to use ‘strong’ intelligence compared to the actually ‘weak’ machine intelligence of today).

CM:MI by Design

Although we know that groups of many people are ‘in principle’ capable of sharing intelligence to define problems and visions, to construct solutions, to test the solutions etc., we know too that the practical limits of brains and communication are quite narrow. For special tasks a computer can be much, much better. Thus the CM:MI paradigm provides an environment for groups of people to do shared planning and testing in a new way, using only normal language. Thus the software is designed to enable new kinds of shared knowledge about shared common models of future worlds. Only with such a truly general framework can the vision of a sustainable society, as pointed out by the United Nations since 1992, become real.

Continuation

Look here.

OKSIMO SW – Minimal Basic Requirements

Integrating Engineering and the Human Factor (info@uffmm.org)
eJournal uffmm.org ISSN 2567-6458, January 8, 2021
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

As described in the uffmm eJournal  the wider context of this software project is an integrated  engineering theory called Distributed Actor-Actor Interaction [DAAI]. This includes Human Machine Intelligence [HMIntelligence]  as part of Human Machine Interaction [HMI]. In  the section Case Studies of the uffmm eJournal there is also a section about Python co-learning – mainly dealing with python programming – and a section about a web-server with Dragon. This document is part of the Case Studies section.

CONTENT

On the long way of making the theory as well as the software [SW] more concrete, we reached on January 5, 2021 a first published version on [www.]oksimo.com. This version contains a sub-part of the whole concept which I call here the Minimal Basic Version [MBV] of the oksimo SW. This minimal basic version will be tested until the end of February 2021. Then we will add stepwise all the other intended features.

THE MINIMAL BASIC VERSION

oksimo SW Minimal Basic Version Jan 3, 2021

If one compares this figure with the figure of the Multi-Group Management from Dec 5, 2020, one can easily detect simplifications for the first module, now called Vision [V], as well as for the last module, called Evaluation [EVAL].

While the basic modules States [S], Change Rules [X] and Simulator [SIM] stayed the same, the mentioned first and last modules have slightly changed in the sense that they have become simplified.

During the first tests with the oksimo reloaded SW it became clear that, for a simulation unified with evaluation, it is sufficient to have at least one vision V which is compared with an actual state S to see whether parts of the vision V are also part of the state S. This induced the requirement that a vision V has to be understood as a collection of statements where each statement describes some aspect of the vision as a whole.

Example 1: Thus a global vision of a city to have a ‘Kindergarten’ could be extended with facts like ‘It is free for all children’, ‘It is constructed in an ecologically acceptable manner’, …

Example 2: A global vision to have a system interface [SI] for the oksimo reloaded SW could include statements (facts) like: ‘The basic mode is text input in an everyday language’, ‘In an advanced mode you can use speech-recognition tools to enter a text into the system’, ‘The basic mode of the simulation output is text-based’, ‘In an advanced mode you can use text-to-speech SW to allow audio-output of the simulation’, ….

Vision V – Statement S: The citizen who will work with the oksimo reloaded SW now only has to distinguish between the vision V, which points into some — as such — unknown future, and the given situation S describing some part of the everyday world. The vision with all its possible different partial views (statements, facts) can then be used to evaluate a given state S with regard to whether the vision is already part of it or not. If during a simulation a state S* has been reached and the global vision ‘The city has a Kindergarten’ is part of S* but not the partial aspects ‘It is free for all children’ and ‘It is constructed in an ecologically acceptable manner’, then only one third of the vision has been fulfilled: eval(V,S*) = 33.3 %. As one can see, the number of vision facts determines the fineness of the evaluation.
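The counting behind eval(V,S*) can be reproduced with a few lines of Python. This is only an illustration of the idea with the Kindergarten example, not the oksimo code; the variable names are invented:

```python
# The vision V as a collection of statements, the reached state S* as facts.
vision = {"the city has a kindergarten",
          "it is free for all children",
          "it is constructed in an ecologically acceptable manner"}
s_star = {"the city has a kindergarten"}      # state reached by the simulation

# Percentage of vision statements that are part of S*.
fulfilled = 100.0 * len(vision & s_star) / len(vision)
# fulfilled is about 33.3: one of the three vision facts is part of S*
```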

Requirements Point of View: In Software Engineering [SWE] and — more generally — in Human-Machine Interaction [HMI] as part of Systems Engineering [SE], the analysis phase is characterized by a list of functional and non-functional requirements [FR, NFR]. Both concepts are part of the vision module in the oksimo SW. Everything you think to be important for your vision you can write down as some aspect of the vision. And if you want to structure your vision into several parts you can edit different vision documents, which for a simulation can be united into one document again.

Change Rules [X]: In the minimal basic version only three components of a change rule X will be considered: the condition [COND] part, which checks whether an actual state S satisfies (fulfills) the condition; the Eplus part, which contains facts which shall be added to the actual state S for the next turn; and the Eminus part, which contains facts which shall be removed from the actual state S for the next turn. Other components like Probability [PROB] or Model [MODEL] will be added in the future.
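Assuming a state S is represented as a set of facts, the effect of such a three-component change rule could be sketched as follows (an illustrative sketch, not the oksimo implementation; the rule content is invented):

```python
def apply_rule(state, rule):
    """Apply one change rule X = (COND, Eplus, Eminus) to a state S of facts."""
    if rule["cond"] <= state:                     # COND: does S satisfy the condition?
        return (state | rule["eplus"]) - rule["eminus"]
    return state                                  # condition not met: S stays unchanged

# Example rule: if it is raining, the street becomes wet and stops being dry.
S = {"it is raining", "the street is dry"}
X = {"cond":   {"it is raining"},
     "eplus":  {"the street is wet"},
     "eminus": {"the street is dry"}}
S_next = apply_rule(S, X)
# S_next contains "the street is wet" and no longer "the street is dry"
```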