
OKSIMO MEETS POPPER. The Oksimo Theory Paradigm

eJournal: uffmm.org
ISSN 2567-6458, April 2, 2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text is part of a philosophy of science analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive posts dedicated to the HMI analysis for this software.

THE OKSIMO THEORY PARADIGM

Figure 1: The Oksimo Theory Paradigm

The following text is a short illustration of how the general theory concept, as extracted from the text of Popper, can be applied to the oksimo simulation software concept.

The starting point is the following meta-theoretical schema:

MT=<S, A[μ], E, L, AX, ⊢, ET, E+, E-, true, false, contradiction, inconsistent>

In the oksimo case we also have a given empirical context S, a non-empty set of human actors A[μ] with a built-in meaning function for the expressions E of some language L, some axioms AX as a subset of the expressions E, an inference concept ⊢, and all the other concepts.

The human actors A[μ] can write some documents with the expressions E of language L. In one document S_U they can write down some universal facts which they believe to be true (e.g. ‘Birds can fly’). In another document S_E they can write down some empirical facts from the given situation S like ‘There is something named James. James is a bird’. And somehow they wish that James should be able to fly, thus they write down a vision text S_V with ‘James can fly’.

The interesting question is whether it is possible to generate a situation S_E.i in the future, which includes the fact ‘James can fly’.

With the knowledge already given they can build the change rule: IF it is valid that {Birds can fly. James is a bird} THEN with probability π = 1 add the expressions Eplus = {‘James can fly’} to the actual situation S_E.i and remove Eminus = {}. This rule is then an element of the set of change rules X.

The simulator works according to the schema S’ = S – Eminus + Eplus.

Because we have S = S_U + S_E we get:

S’ = {Birds can fly. Something is named James. James is a bird.} – Eminus + Eplus

S’ = {Birds can fly. Something is named James. James is a bird.} – {} + {James can fly}

S’ = {Birds can fly. Something is named James. James is a bird. James can fly}

With regard to the vision, which is used for evaluation, one can state additionally:

|{James can fly} ⊆ {Birds can fly. Something is named James. James is a bird. James can fly}| = 1 ≥ 1

Thus the goal has been reached with the value 1, meaning with 100%.
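
The following minimal sketch (in Python; all names are illustrative and not part of the oksimo software itself) simply replays this derivation and the goal check:

# Sketch of the schema S' = S - Eminus + Eplus applied to the James example.
S = {"Birds can fly.", "Something is named James.", "James is a bird."}   # S = S_U + S_E
V = {"James can fly."}                                                    # vision S_V

condition = {"Birds can fly.", "James is a bird."}   # IF-part of the change rule
Eplus     = {"James can fly."}                       # expressions to add (π = 1)
Eminus    = set()                                    # expressions to remove

S_next = set(S)
if condition <= S:                       # the condition is satisfied in S
    S_next = (S - Eminus) | Eplus        # S' = S - Eminus + Eplus

goal = 100 * len(V & S_next) / len(V)    # share of the vision realized in S'
print(S_next, goal)                      # goal == 100.0, i.e. the vision is fully reached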

THE ROLE OF MEANING

What makes a certain difference between classical concepts of an empirical theory and the oksimo paradigm is the role of meaning. While the classical empirical theory concept uses formal (mathematical) languages for its descriptions, with the associated (nearly unsolvable) problem of how to relate these formal concepts to the intended empirical world, the oksimo paradigm assumes the opposite: the starting point is always the ordinary language as basic language, which on demand can be extended by special expressions (e.g. set-theoretical expressions, numbers, etc.).

Furthermore, the oksimo paradigm assumes that the human actors with their built-in meaning function are nearly always able to decide whether an expression e of the used expressions E of the ordinary language L matches certain properties of the given situation S. Thus the human actors are those who have the authority to decide by their meaning function whether some expression is actually true or not.

The same holds for possible goals (visions) and possible inference rules (= change rules). Whether some consequence Y shall happen if some condition X is satisfied by a given actual situation S can only be decided by the human actors. There is no other knowledge available than that which is in the heads of the human actors. [1] This knowledge can be narrow, it can even be wrong, but human actors can only decide with the knowledge that is available to them.

If they use change rules (= inference rules) based on their knowledge and derive some follow-up situation as a theorem, then it can happen that there exists no empirical situation S which matches the theorem. This would be an undefined truth case. If the theorem t were a contradiction to the given situation S, then it would be clear that the theory is inconsistent and therefore something seems to be wrong. Another case could be that the theorem t matches a situation. This would confirm the belief in the theory.

COMMENTS

[1] Well-known knowledge tools have long been libraries and, more recently, databases. The expressions stored there can only be of use (i) if a human actor knows about them and (ii) knows how to use them. As the amount of stored expressions increases, the portion of expressions that can be cognitively processed by human actors decreases. This decrease in the usable portion can serve as a measure of negative complexity which indicates a growing deterioration of the human knowledge space. The idea that certain kinds of algorithms can analyze these growing amounts of expressions instead of the human actors themselves is only constructive if the human actor can use the results of these computations within his knowledge space. For general reasons this possibility is very small, and with increasing negative complexity it declines further.

HMI Analysis for the CM:MI paradigm. Part 3. Actor Story and Theories

Integrating Engineering and the Human Factor (info@uffmm.org)
eJournal uffmm.org ISSN 2567-6458, March 2, 2021,
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: March 2, 2021 13:59h (Minor corrections)

HISTORY

As described in the uffmm eJournal, the wider context of this software project is an integrated engineering theory called Distributed Actor-Actor Interaction [DAAI], further extended to the Collective Man-Machine Intelligence [CM:MI] paradigm. This document is part of the Case Studies section.

HMI ANALYSIS, Part 3: Actor Story and Theories

Context

This text is preceded by the following texts:

Introduction

Having a vision is that moment where something really new in the whole universe gets an initial status in some real brain, which can enable other neural events, which can possibly be translated into bodily events, which finally can change the body-external outside world. If this possibility is turned into reality, then the outside world has been changed.

When human persons (groups of homo sapiens specimens) as experts (here acting both as stakeholders and as intended users, but in different roles!) have stated a problem and a vision document, then they have to translate these inevitably more fuzzy than clear ideas into the concrete terms of an everyday world, into something which can really work.

To enable a real cooperation, the experts have to generate a symbolic description of their vision (called a specification), using an everyday language, possibly enhanced by special expressions, in a way that it becomes clear to the whole group which kinds of real events, actions and processes are intended.

In the general case an engineering specification describes concrete forms of entanglements of human persons which enable these human persons to cooperate in a real situation. Thereby the translation of the vision inside the brain into the everyday body-external reality happens. This is the language of life in the universe.

WRITING A STORY

Elaborating a usable specification can metaphorically be understood as the writing of a new story: which kinds of actors will do something in certain situations, what kinds of other objects, instruments etc. will be used, what kinds of intrinsic motivations and experiences push individual actors, what are possible outcomes of situations with certain actors, which kind of cooperation is helpful, and the like. Such a story is called here an Actor Story [AS].

COULD BE REAL

An Actor Story must be written in a way that all participating experts can understand the language of the specification, such that the content, the meaning of the specification, is either decidably real or can eventually become real. At least the starting point of the story should be classifiable as being decidably actually real. What it means to be decidably actually real has to be defined and agreed between the participating experts before they start writing the Actor Story.

ACTOR STORY [AS]

An Actor Story assumes that the described reality is classifiable as a set of situations (states), and a situation as part of the Actor Story (abbreviated: situationAS) is understood as a set of expressions of some everyday language. Every expression being part of a situationAS can be decided as being real (= being true) in the understood real situation.

If the understood real situation changes (by some event), then the describing situationAS has to be changed too: some expressions have to be removed or added.

Every kind of change in the real situation S* has to be represented symbolically in the actor story with the situationAS S, in the format of a change rule:

X: If condition C is satisfied in S then, with probability π, add Eplus to S and remove Eminus from S.

or as a formula:

S’π = S + Eplus – Eminus

This reads as follows: If there is a situationAS S and there is a change rule X, then you can apply this change rule X with probability π to S if the condition of X is satisfied in S. In that case you have to add Eplus to S and remove Eminus from S. The result of these operations is the new (successor) state S’.

The expression C is satisfied in S means that all elements of C are elements of S too, written as C ⊆ S. The expression add Eplus to S means that the set Eplus is united with the set S, written as Eplus ∪ S (or here: Eplus + S). The expression remove Eminus from S means that the set Eminus is subtracted from the set S, written as S – Eminus.

The concept of applying change rule X to a given state S, resulting in S’, is logically a kind of derivation. Given S and X you derive, by applying X, the new state S’. One can write this as S,X ⊢X S’. The ‘meaning’ of the sign ⊢ is explained above.

Because every successor state S’ can again become a given state S to which change rules X can be applied (written shortly as X(S)=S’, X(S’)=S”, …), the repeated application of change rules X can generate a whole sequence of states, written as SQ(S,X) = <S’, S”, …, Sgoal>.
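
A minimal sketch of this repeated application (Python; the rule representation and function names are assumptions made here for illustration, not the actual format of an actor story):

def apply_rules(S, X):
    """One derivation step: apply every change rule whose condition is satisfied in S (π = 1 assumed)."""
    S_next = set(S)
    for condition, Eplus, Eminus in X:
        if condition <= S:
            S_next = (S_next - Eminus) | Eplus
    return S_next

def derive_sequence(S, X, steps):
    """Repeated application: returns the sequence SQ(S, X) = [S', S'', ...]."""
    sequence = []
    for _ in range(steps):
        S = apply_rules(S, X)
        sequence.append(S)
    return sequence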

To realize such a derivation in the real world outside of the thinking of the experts, one needs a machine, a computer (formally an automaton) which can read S and X documents and can then compute the derivation leading to S’. An automaton which does such a job is often called a simulator [SIM], abbreviated here as ∑. We could then write, with more information:

S,X ⊢∑ S’

This reads: Given a state S and a set X of change rules, we can derive by means of an actor story simulator ∑ a successor state S’.

A Model M=<S,X>

In this context of a set S and a set of change rules X we can speak of a model M which is defined by these two sets.

A Theory T=<M,∑>

Combining a model M with an actor story simulator ∑ enables a theory T which allows a set of derivations based on the model, written as SQ(S,X,⊢) = <S’, S”, …, Sgoal>. Every derived final state Sgoal in such a derivation is called a theorem of T.

An Empirical Theory Temp

An empirical theory Temp is possible if there exists a theory T together with a group of experts who are using this theory and where these experts can interpret the expressions used in theory T by their built-in meaning functions in a way that they can always decide whether the expressions are related to a real situation or not.

Evaluation [ε]

If one generates an Actor Story Theory [TAS] then it can be of practical importance to get some measure of how good this theory is. Because measurement is always an operation of comparison between the subject x to be measured and some agreed standard s, one has to clarify which kind of standard for being good is available. In the general case the only possible source of standards are the experts themselves. In the context of an Actor Story the experts have agreed on some vision [V] which they think to be a better state than a given state S classified as a problem [P]. These assumptions allow a possible evaluation of a given state S in the ‘light’ of an agreed vision V as follows:

ε: V × S → |V ⊆ S|[%]
ε(V,S) = |V ⊆ S|[%]

This reads as follows: the evaluation ε is a mapping from the sets V and S into the number of elements from the set V included in the set S, converted into the percentage of the number of elements included. Thus if no element of V is included in the set S then 0% of the vision is realized, if all elements are included then 100%, etc. The more ‘fine-grained’ the set V is, the more ‘fine-grained’ the evaluation can be.
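
As a small sketch (Python, illustrative only), the evaluation ε of a state S against a vision V could be written as:

def evaluate(V, S):
    """ε(V, S): percentage of vision expressions V that are contained in state S."""
    if not V:
        return 100.0          # an empty vision is trivially realized
    return 100.0 * len(set(V) & set(S)) / len(V)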

An Evaluated Theory Tε=<M,∑,ε>

If one combines the concept of a theory T with the concept of evaluation ε, then one can use the evaluation in combination with the derivation in such a way that every state in a derivation SQ(S,X,⊢) = <S’, S”, …, Sgoal> will additionally be evaluated; thus one gets sequences of pairs as follows:

SQ(S,X,⊢∑,ε) = <(S’,ε(V,S’)), (S”,ε(V,S”)), …, (Sgoal, ε(V,Sgoal))>

In the ideal case Sgoal is evaluated as 100% ‘good’. In real cases 100% is only an ideal value which usually will only be approximated up to some threshold.
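
Continuing the sketches above (reusing the assumed helpers apply_rules and evaluate), the evaluated derivation could look like this:

def derive_evaluated_sequence(S, X, V, steps):
    """SQ(S, X, derivation, ε): pair every derived state with its evaluation against the vision V."""
    sequence = []
    for _ in range(steps):
        S = apply_rules(S, X)
        sequence.append((S, evaluate(V, S)))
    return sequence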

An Evaluated Theory Tε with Algorithmic Intelligence Tε,α=<M,∑,ε,α>

Because every theory defines a so-called problem space, which is here enhanced by some evaluation function, one can add an additional operation α (realized by an algorithm) which can repeat the simulator-based derivations, enhanced with the evaluations, to identify those sets of theorems which qualify as the best theorems according to some given criteria. This operation α is here called the algorithmic intelligence of an actor story [αAS]. The existence of such an algorithmic intelligence of an actor story [αAS] allows the introduction of another derivation concept:

S,X ⊢∑,ε,α S* ⊆ S’

This reads as follows: Given a state S and a set X, an evaluated theory with algorithmic intelligence Tε,α can derive a subset S* of all possible theorems S’, where S* matches certain given criteria within V.
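
To illustrate what such an operation α could do (only a sketch under the assumptions of the earlier sketches; the concrete criteria and search strategy are left open by the text), one could explore the derivable states and keep those with the highest evaluation:

def alpha(S, X, V, depth):
    """Sketch of α: explore states reachable by single rule applications and return the best-evaluated ones."""
    frontier = {frozenset(S)}
    best_value, best_states = evaluate(V, S), [set(S)]
    for _ in range(depth):
        successors = set()
        for state in frontier:
            for condition, Eplus, Eminus in X:
                if condition <= state:
                    successors.add(frozenset((state - Eminus) | Eplus))
        for state in successors:
            value = evaluate(V, state)
            if value > best_value:
                best_value, best_states = value, [set(state)]
        frontier = successors
    return best_states, best_value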

WHERE WE ARE NOW

As should have become clear by now, the work of HMI analysis is the elaboration of a story, which can be done in the format of different kinds of theories, all of which can be simulated and evaluated. Even better, the only language you have to know is your everyday language, your mother tongue (mathematics is understood here as a sub-language of the everyday language, which in some special cases can be of some help). For this theory every human person (of all ages!) can be a valuable colleague to help you better understand possible futures. Because all parts of an actor story theory are plain texts, everybody can read and understand everything. And if different groups of experts have investigated different aspects of a common field, you can merge all texts by merely ‘pressing a button’ and you will immediately see how all these texts either work together or show discrepancies. The latter effect is a great opportunity to improve learning and understanding! Together we represent some of the power of life in the universe.

CONTINUATION

See here.