OKSIMO.R – EVERYDAY SCENES – GO OUT TO EAT

eJournal: uffmm.org
ISSN 2567-6458, 6.November 2022 – 17.November 2022
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Parts of this text have been translated with www.DeepL.com/Translator (free version), afterwards only minimally edited.

CONTEXT

This post is part of the book project ‘oksimo.R Editor and Simulator for Theories’.

CONTENT

A normal everyday scene is used to illustrate some properties of modeling (theory building) in the oksimo.R paradigm. This case is about a person who works at a university, has an office there (shared with others), and ‘feels hungry’ around noon. This becomes the occasion for this person to decide to go out to eat, in this case ‘to the Greek around the corner’. The short story ends with this person no longer feeling hungry.

OKSIMO.R TEXT TYPES

Modeling (theory building) in the oksimo.R paradigm is carried out by a group of people who work together to formulate a text in a common language. In the concrete case this is German, but it could just as well be any other language.

Three types of texts are distinguished:

  • ACTUAL descriptions (initial situations)
  • TARGET descriptions (requirements)
  • CHANGE descriptions (rules for change)

These distinctions presuppose that a human actor can distinguish between ideas in his head that ‘correspond’ to experiences outside his brain (in ‘his own body’, in the ‘body world outside his body’) and ideas in his head that he thinks/remembers/dreams/fantasizes ‘alone’, ‘for himself’ …

ACTUAL situation

Here, ACTUAL descriptions refer to ideas that relate to the body world beyond one’s own body and that can be ‘shared’ by other human actors. For example, if someone stands outside and says “It is raining”, and all bystanders confirm this, then this is a case of an ACTUAL description that can be ‘confirmed’ by all. Most of the time people then also say that this description is ‘true’. If, in this situation where it is raining, someone were to say “It is not raining”, then everyone would usually say that this ‘statement’ is ‘false’. If someone says instead “It will rain soon”, then all bystanders who understand English can form an idea of rain in their brains, but at that moment there is no concrete counterpart to this idea in the shared interpersonal physical world. This statement would then be neither ‘true’ nor ‘false’. Its relation to the ‘common body world’ would be ‘indeterminate’: it may become true, but it need not.

TARGET description

TARGET descriptions (also in the form of requirements) refer to ‘imaginations in the minds of actors’ for which accepted linguistic expressions exist, but which at the moment of writing or speaking do not yet have a counterpart in the shared physical world. The ideas belonging to such a merely imagined goal description have a greater or lesser probability of occurring ‘sometime in the future’. Either there are ‘experiences’ from the past which suggest such an occurrence, or there is only the ‘wish’ that these conceptions become real.

CHANGE Descriptions

CHANGE descriptions refer to ‘events’ or ‘measures’ of which one knows (or strongly assumes) that their occurrence or implementation ‘changes’ a given (ACTUAL) situation in at least one property, so that after a ‘certain time’ (‘time interval’) the ‘old’ situation has turned into a ‘new’ situation, which then becomes the ‘new ACTUAL situation’ as ‘successor situation’. Further events or measures can in turn change this new ACTUAL situation again.

Required Text Sets

An oksimo.R modeling (theory building) needs at least one ACTUAL description and at least one CHANGE description; a TARGET description is optional. If no TARGET description is given, there is simply a more or less directed or open sequence of ACTUAL states, which arises from (possibly repeated) ‘applications’ of the CHANGE descriptions to a given ACTUAL situation. If at least one TARGET description is available, it can be used to ‘evaluate’ a current ACTUAL situation according to how many elements of the TARGET description are already present in the ACTUAL situation; this value lies between 0% and 100%.

Applying change descriptions to an ACTUAL situation

To apply a change description to a given ACTUAL situation, one must understand that in the oksimo.R paradigm a TEXT is nothing but a set of LANGUAGE EXPRESSIONS whose ‘meaning’ is known only to the speakers. Each linguistic expression is considered an ‘element’ of this ‘expression set’ called a text, and it is assumed that each linguistic expression describes some ‘property’ of the real ACTUAL situation. An imputed ACTUAL situation has exactly as many properties as the TEXT of the ACTUAL description contains linguistic expressions. The set of imputed properties of a situation represents only a proper subset of the properties of the real situation. If a certain expression is removed from the text, the associated property disappears; if a new linguistic expression is added, a new property is created in the imputed ACTUAL situation.
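To make this reading concrete, here is a minimal Python sketch (an illustration of my own, not the internal representation of the oksimo.R software): an ACTUAL description is treated as a set of expression strings, and adding or removing an expression creates or eliminates the corresponding imputed property.

# Illustrative sketch only, not the oksimo.R implementation:
# an ACTUAL description as a set of language expressions, where each
# expression stands for one imputed property of the real situation.

actual = {
    "Gerd is sitting in his office.",
    "Gerd is hungry.",
}

# Removing an expression makes the associated property disappear ...
actual.discard("Gerd is sitting in his office.")

# ... and adding a new expression creates a new imputed property.
actual.add("Gerd leaves his office.")

print(sorted(actual))
# ['Gerd is hungry.', 'Gerd leaves his office.']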

A CHANGE description (also ‘change rule’ or simply ‘rule’) must therefore minimally do the following:

  1. Specify which expressions are to be added (generate new properties).
  2. Specify which of the previous expressions should be removed (eliminate properties).

To keep the application of a rule ‘under control’, one makes the application of a change rule to a current ACTUAL situation dependent on CONDITIONS: the change specifications for ‘adding’ and ‘removing’ are prefixed with a set of expressions that must occur in the ACTUAL description; otherwise the change rule cannot become ‘active’.
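A minimal sketch of such a conditioned change rule, using the same set-of-expressions reading as above (the function and field names are illustrative assumptions, not the oksimo.R API):

# Sketch: a CHANGE rule with CONDITIONS. The rule can only become 'active'
# if all condition expressions occur in the current ACTUAL description;
# it then removes the 'remove' expressions and adds the 'add' expressions.

def apply_rule(state, rule):
    """Return the successor state, or the unchanged state if the
    conditions are not fulfilled (illustrative sketch)."""
    if not rule["conditions"] <= state:   # are all conditions present?
        return state                      # rule stays inactive
    return (state - rule["remove"]) | rule["add"]

rule = {
    "conditions": {"Gerd is hungry."},
    "add":        {"Gerd leaves his office."},
    "remove":     {"Gerd is sitting in his office."},
}

The simple example in the next section runs exactly this kind of rule against the ACTUAL description ‘Gerd is sitting in his office. Gerd is hungry.’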

Simple example

ACTUAL situation:

Gerd is sitting in his office. Gerd is hungry.

TARGET situation:

Gerd is not hungry.

CHANGE Description:

IF:
Gerd is hungry.
THEN:
Add as a property to the ACTUAL situation: Gerd leaves his office.
Remove as property from the ACTUAL situation: Gerd is sitting in his office.

APPLICATION of the change description:

The CONDITION is fulfilled.

THEN:

NEW ACTUAL situation:

Gerd leaves his office. Gerd is hungry.

EVALUATION:

The property from the TARGET description ‘Gerd is not hungry’ is not yet fulfilled; success so far: 0%.

REPEATED APPLICATION

Each change rule can in principle be applied arbitrarily often, but only as long as its CONDITION is fulfilled.

In the example above, the CONDITION ‘Gerd is hungry’ would continue to be fulfilled, but repeated application of the rule does not change the situation any further. It is thus foreseeable that the TARGET description can never be reached in this model (this theory).
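Under the same illustrative set representation, the stagnation described above can be made visible directly (a sketch of my own, not the oksimo.R simulator):

# Repeated application of the single rule from the simple example.
# Once the successor state equals the current state, further applications
# change nothing, and a TARGET expression that is still missing can never
# be reached with this rule alone.

state  = {"Gerd is sitting in his office.", "Gerd is hungry."}
target = {"Gerd is not hungry."}

conditions = {"Gerd is hungry."}
add        = {"Gerd leaves his office."}
remove     = {"Gerd is sitting in his office."}

for round_no in range(1, 4):
    successor = (state - remove) | add if conditions <= state else state
    fulfilled = 100 * len(target & successor) // len(target)
    print(f"Round {round_no}: {sorted(successor)}  target fulfillment: {fulfilled}%")
    if successor == state:   # fixed point: the state repeats
        print("The state repeats; the TARGET cannot be reached with this rule.")
        break
    state = successor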

Example with oksimo.R software

Contextualization of the software

The oksimo.R software is part of the ‘oksimo.R paradigm’. The oksimo.R paradigm comprises three components: (i) as an ‘application format’, a group of arbitrary citizens who see themselves as ‘natural experts’ and ‘work together scientifically’; in the context of the oksimo.R paradigm this format is called ‘citizen science 2.0’. (ii) The ‘oksimo.R software’, which citizens can use to formulate (‘edit’) their scientific descriptions of the experiential world in such a way that they ‘automatically’ meet the requirements of an ‘empirical theory’, so that ‘inferences’ can be drawn at any time, practiced as ‘simulations’. (iii) A clear concept of an ‘empirical theory’ that is compatible with all known forms of ‘empirical sciences’ (in fact, the general form of the oksimo.R theory concept can also represent all forms of non-empirical theories).

The oksimo.R software is currently being developed and deployed on a server in the Internet, accessible via the address oksimo.com.

Since the theoretical concept of the oksimo.R software covers almost everything we know so far as a software application in the Internet (including the various forms of ‘Artificial Intelligence (AI)’ and the ‘Internet of Things (IoT)’), the transformation of the theoretical concept into applicable software is in principle an ‘infinite process’. As of this writing (Nov 16, 2022), Level 2 is directly available and work is underway on Level 3 (and there will be many more levels in the future …).

An oksimo.R theory realized with the software (still Level 2)

The old menu – still in command line mode – shows up as follows after logging in:

Welcome to Oksimo v2.1 02 May 2022 (ed14)

MAIN MENU
1 is NEW VISION
2 is MANAGE VISIONS
3 is VISION COLLECTIONS
4 is NEW STATE
5 is MANAGE STATES
6 is STATE COLLECTIONS
7 is NEW RULE
8 is MANAGE RULES
9 is RULE DOCUMENT
10 is NEW SIMULATION
11 is MANAGE SIMULATIONS
12 is LOAD SIMULATION
13 is COMBINE SIMULATIONS
14 is SHARE
15 is EXIT SIMULATOR
Enter a Number [1-15] for Menu Option

See: oksimo.com (16.Nov. 2022)

In the old command line mode you have to enter the oksimo.R texts manually. For the ACTUAL state this looks as follows:

Enter ACTUAL description

Enter a Number [1-15] for Menu Option

4

Here you can describe an actual state S related to your problem.

Enter a NAME for the new state description:

Food1

Enter an expression for your state description in plain text:

Gerd is sitting in his office.

Expressions so far:
Gerd is sitting in his office.

Enter another expression or leave blank to proceed:

Gerd is hungry.

Expressions so far:
Gerd is sitting in his office.
Gerd is hungry.

Enter another expression or leave blank to proceed:

Name: Food1
Expressions:
Gerd is sitting in his office.
Gerd is hungry.

Note: In the Level 2 version an ACTUAL description is generally only called ‘state’.

Enter VISION text

Enter a Number [1-15] for Menu Option

1

Here you can describe your vision.

Enter a NAME for the new vision:

Food1-v1

Enter an expression for your vision in plain text:

Gerd is not hungry.

Expressions so far:
Gerd is not hungry.

Enter another expression or leave blank to proceed:

Your final vision document is now:
Name: Food1-v1
Expressions:
Gerd is not hungry.

Enter CHANGE rule

Enter the name of the new rules document:

Food1-Decision1

Enter condition:

Gerd is hungry.

Conditions so far:
Gerd is hungry.

Enter another condition or leave blank to proceed:

Enter a probability between 0.0 and 1.0:

1.0

(Comment: The ‘Probability’ feature at this point is now obsolete. Probabilities are handled more generally and flexibly. Examples follow.)

Enter positive effect:

Gerd leaves his office.

Positive Effects so far:
Gerd leaves his office.

Enter another positive effect or leave blank to proceed:

Enter negative effect:

Gerd is sitting in his office.

Negative Effects so far:
Gerd is sitting in his office.

Enter another negative effect or leave blank to proceed:

Summary:
Rule:Food1-Decision1
Conditions:Gerd is hungry.

Probability:
1.0
Positive Effects:
Gerd leaves his office.

Negative Effects:
Gerd is sitting in his office.
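The rule document collected by this dialog can be pictured, in simplified form, as a small record like the following (the field names are my own illustration, not the internal oksimo.R format):

# Illustrative record of the rule 'Food1-Decision1' entered above.
rule_food1_decision1 = {
    "name": "Food1-Decision1",
    "conditions": ["Gerd is hungry."],
    "probability": 1.0,  # entered here, but obsolete (see the comment above)
    "positive_effects": ["Gerd leaves his office."],         # expressions to add
    "negative_effects": ["Gerd is sitting in his office."],  # expressions to remove
}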

Test the effect of the theory

Test conclusions

The ‘core of an oksimo.R theory’ consists of the two components ACTUAL situation (here: ‘state’) and CHANGE rule (here: ‘rule’). By applying a rule to a state, a successor state is created, which is ultimately an ‘inference’ (a ‘theorem’) of the theory. The more complex the initial state is and the more change rules there are, the more diverse the set of possible consequences (‘inferences’, ‘theorems’) becomes. Keeping track of these consequences can become very difficult, especially when the change rules can be applied again and again to each successor state, so that an ever longer sequence of states emerges.

Test target fulfillment

If you use an oksimo.R theory kernel together with a TARGET description, then during the inference process (the ‘simulation’) you can also check at any point how many ‘elements of the TARGET description’ already ‘occur’ in an inferred state. If ‘all’ elements of the TARGET description occur, the theory is able to ‘infer’ 100% of the TARGET description; otherwise less, down to 0% target fulfillment.
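Under the set reading used earlier, this check amounts to counting how many TARGET expressions already occur in the current state. A small Python sketch (the ‘math visions’ reported by the software in the protocol below are not covered here):

def target_fulfillment(state, vision):
    """Percentage of vision expressions already contained in the state
    (illustrative sketch, not the oksimo.R implementation)."""
    if not vision:
        return 100.0
    return 100.0 * len(vision & state) / len(vision)

print(target_fulfillment({"Gerd is hungry.", "Gerd leaves his office."},
                         {"Gerd is not hungry."}))   # 0.0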

Start an oksimo.R simulation

Enter a Number [1-15] for Menu Option

10

Here you can run a simulation SIM to check what happens with your initial state S when the change rules X will be applied repeatedly on the state S.

Available vision descriptions:

Food1-v1

Enter a name for a vision description you want to load. Use prefix col to load a collection:

Food1-v1

Visions selected so far:
Food1-v1

Add another vision or leave blank to proceed:

Enter a name for a state description you want to load. Use prefix col to load a collection:

Food1

States selected so far:
Food1

Add another state or leave blank to proceed:

Selected states:
Food1

Available rules:

Food1-Decision1

Enter a name for a rule or a ruledocument (with prefix doc) you want to load:

Food1-Decision1

Rules selected so far:
Food1-Decision1
Add another rule or leave blank to proceed:

Selected visions:
Food1-v1
Selected states:
Food1
Selected rules:
Food1-Decision1

Enter maximum number of simulation rounds

3

Your vision:
Gerd is not hungry.

Initial states:
Gerd is hungry.,Gerd is sitting in his office.

Round 1

Round 3

Save Simulation [S], Rerun simulation [R], export as text [T] or exit [leave blank]:

S

Enter Name for Simulation:

Food1-sim1

Saved!

Enter a Number [1-15] for Menu Option

12

Here you can load a previously saved simulation and rerun it. Add prefix dev for detailed developer-mode.

List of your saved simulations:

Food1-sim1

Restart the saved simulation with 12 Load Simulation (Comment: the math elements have been removed from the protocol; they will only be used a little later):

Your vision:
Gerd is not hungry.

Initial states: 
Gerd is hungry.,Gerd is sitting in his office.

Round 1

Current states: Gerd is hungry.,Gerd leaves his office.
Current visions: Gerd is not hungry.
Current values:

0.00 percent of your vision was achieved by reaching the following states:
None
And the following math visions:
None

Round 2

Current states: Gerd is hungry.,Gerd leaves his office.
Current visions: Gerd is not hungry.
Current values:

0.00 percent of your vision was achieved by reaching the following states:
None
And the following math visions:
None

One can easily see that the state of round 2 merely repeats the state of round 1, and there is no reason why this should change in later rounds.

Rule application and inference concept

The preceding simple example was used to explain concretely what happens when a rule is applied to a given ACTUAL situation. The science that deals with such change processes by means of rule application(s) is ‘logic’. Logical considerations have been around for more than 2500 years, in many different forms. In retrospect, the most significant logic paradigms are possibly the logic associated with the name of Aristotle, in which logical expressions were not yet considered in isolation from possible linguistic meanings, and modern formal logic, in which logical expressions have no connection to any linguistic meaning other than abstract ‘truth values’. The history of modern formal logic began in the 19th century, about 150 years ago (Boole, De Morgan, Venn, Frege, Russell, …).

The central idea of any logic is to find a ‘procedure’ that allows the user to ‘derive’ from a set of statements ‘assumed to be (abstractly) true’ only those statements that are also ‘(abstractly) true’ again. The ‘abstract truth’ of modern formal logic is a ‘placeholder’ for an everyday-language truth which cannot be expressed as such within a formal logic. Formal logic presupposes that there are ‘actors’ who ‘know’ what they are saying when they speak of a ‘true’ statement. Whether the formalization of ‘truth relations’ between different sets of expressions in the format of modern formal logic ‘adequately’ represents the meaning knowledge of the actors can therefore not be decided ‘within the logical system’, but only ‘from outside’, from the perspective of the ‘meaning knowledge of the acting actors’.

If one calls the initial set of linguistic expressions ‘assumed to be abstractly true’ an IS-description (corresponding to the ACTUAL description in the oksimo.R paradigm) and the set of possible ‘derived expressions assumed to be abstractly true’ the ‘inferred abstractly true expressions’, then one could formulate this in the style of formal logic as follows:

IS-STATEMENTS ⊢CHANGE-RULES GENERATED-POTENTIAL-IS-STATEMENTS.

or abbreviated:

X ⊢R X’

The character ‘⊢’ represents the inference term. It consists of a text that describes how to apply a change rule from the set R to a given set of expressions X in such a way that a new set X’ results from the application to the given set X. The inference term must be formulated in such a way that it is completely unambiguous ‘what to do’.
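Read in this way, the inference term can be pictured as a function that maps a set of expressions X and a set of change rules R to a successor set X’. The following Python sketch applies all applicable rules of R once; whether the oksimo.R simulator proceeds exactly like this is not specified here, so it should be read as one possible illustration:

def infer(X, R):
    """One inference step X |- X' (illustrative sketch): every rule in R
    whose conditions are contained in X contributes its removals and
    additions; the result is the successor set X'."""
    X_next = set(X)
    for rule in R:
        if rule["conditions"] <= X:       # rule is applicable
            X_next -= rule["remove"]      # eliminate properties
            X_next |= rule["add"]         # generate new properties
    return X_next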

The claim of modern ‘pure formal logic’ that all expressions generated with the inference term also conform to the ‘assumed abstract truth value’ applies in the same way to the inference term of the oksimo.R theory software. With the oksimo.R inference term it is guaranteed that all ‘generated expressions’ are ‘true’ in the sense of the ‘linguistically founded meaning knowledge’ of the involved ‘actors’! However, linguistically grounded meaning knowledge is ‘knowledge dependent’ and can therefore be empirically ‘true’, ‘false’ or ‘indeterminate’. This points to the fact that, in general, the actors are the ‘gatekeepers of truth’. Actors formulate the change rules R based on their linguistic knowledge. If these change rules R are ‘true’, then this also holds for the linguistic expressions generated by means of inference. If the change rules R contain an ‘error’, then this error will necessarily be contained in the generated inference situation X’ as a description element E. This expression element E, as part of the prediction set X’, may then turn out to be ‘true’ in the further course of comparison with the commonly shared empirical reality, or it will remain ‘indeterminate’ in the long run, since it neither becomes ‘true’ nor can be directly classified as ‘false’. In the case of modern formal logic, by contrast, the empirical truth status of inferred expressions is completely indeterminate.

The oksimo.R inference concept unites the formal advantages of modern formal logic with the meaning reference of Aristotelian logic and is understood as a ‘natural means of expression’ for an empirical theory with a truth claim.

This post has a continuation (Part 2) HERE.