THE OKSIMO CASE as SUBJECT FOR PHILOSOPHY OF SCIENCE. Part 1

eJournal: uffmm.org
ISSN 2567-6458, 22.March – 23.March 2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text is part of a philosophy of science  analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive  posts dedicated to the HMI-Analysis for  this software.

THE OKSIMO EVENT SPACE

The characterization of the oksimo software paradigm starts with an informal characterization  of the oksimo software event space.

EVENT SPACE

An event space is a space which can be filled with observable events that fit the species-specific, internally processed environment representations [1], [2], here called internal environments [ENVint]. Thus the same external environment [ENV] can, in the presence of 10 different species, be represented in 10 different internal formats. The expression ‘environment’ [ENV] is therefore an abstract concept: it assumes an objective reality which is common to all living species, but this reality is processed by every species in a species-specific way.

In a human culture the usual point of view [ENVhum] exists simultaneously with the points of view [ENVa] of all the other species a.

In the ideal case it would be possible to translate all species-specific views ENVa into a symbolic representation which in turn could be translated into the human point of view ENVhum. Then — in the ideal case — we could define the term environment [ENV] as the sum of all the different species-specific views translated into a human-specific language: ∑ENVa = ENV.

But because such a generalized view of the environment is until today not really possible for practical reasons, we will for the beginning use only expressions related to the human-specific point of view [ENVhum], using as language an ordinary language [L], here the English language [LEN]. Every scientific language — e.g. the language of physics — is understood here as a sublanguage of the ordinary language.

EVENTS

An event [EV] within an event space [ENVa] is a change [X] which can be observed at least by the members of that species [SP] a which is part of that environment ENV which enables a species-specific event space [ENVa]. Possibly there can be other actors around in the environment ENV from different species, each with its specific event space [ENVa], where the contents of the different event spaces can possibly overlap with regard to certain events.

A behavior is some observable movement of the body of some actor.

Changes X can be associated with certain behavior of certain actors or with non-actor conditions.

Thus when there are some human or non-human actors in an environment which are moving, then they show a behavior which can eventually be associated with some observable changes.

CHANGE

Besides being associated with observable events in the (species-specific) environment, the expression change is understood here as a kind of inner state in an actor which can compare past (stored) states Spast with an actual state Snow. If the past and actual state differ in some observable aspect, Diff(Spast, Snow) ≠ 0, then there exists some change X, or Diff(Spast, Snow) = X. Usually the actor perceiving a change X will assume that this internal structure represents something external to the brain, but this need not be the case. It is of help if there are other human actors who confirm such a change perception, although even this does not guarantee that a change is really occurring. In the real world it is possible that a whole group of human actors has a wrong interpretation.
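To make the notion of a change X a bit more tangible, here is a minimal Python sketch (all names and data are illustrative assumptions, not part of the oksimo software) which compares a stored past state with an actual state, both modeled as sets of observable aspects:

# Minimal sketch: a 'state' is modeled as a set of observable aspects.
# Diff(Spast, Snow) is non-empty exactly if something has changed.

def diff(s_past: set, s_now: set) -> set:
    """Return the aspects in which the past and the actual state differ."""
    return s_past ^ s_now          # symmetric difference

S_past = {"door closed", "light off"}
S_now = {"door closed", "light on"}

X = diff(S_past, S_now)
print("change detected:", X != set())   # True
print("differing aspects:", X)          # the aspects that changed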

SYMBOLIC COMMUNICATION AND MEANING

It is a specialty of human actors — to some degree shared by other non-human biological actors — that they not only can build up internal representations ENVint of the reality external to the brain (the body itself or the world beyond the body), which are mostly unconscious and only partially conscious, but that they can also build up structures of expressions of an internal language Lint which can be mimicked to a high degree by expressions in the body-external environment ENV, called expressions of an ordinary language L.

For this to work one  has  to assume that there exists an internal mapping from internal representations ENVint into the expressions of the internal language   Lint as

meaning : ENVint <—> Lint.

and

speaking: Lint —> L

hearing: Lint <— L

Thus human actors can use their ordinary language L to activate internal encodings/ decodings with regard to the internal representations ENVint  gained so far. This is called here symbolic communication.
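As a purely illustrative toy model (the dictionaries and helper functions below are assumptions for the sake of the example, not the oksimo implementation), the three mappings meaning, speaking and hearing can be mimicked in Python:

# Hypothetical toy model of the three mappings.
# ENVint: internal representations, Lint: internal expressions, L: external expressions.

meaning = {"<repr:white-wooden-table>": "<int:WHITE WOODEN TABLE>"}   # ENVint <-> Lint
meaning_inv = {v: k for k, v in meaning.items()}

def speaking(lint_expr: str) -> str:          # Lint -> L
    return lint_expr.strip("<>").replace("int:", "").lower()

def hearing(l_expr: str) -> str:              # L -> Lint
    return "<int:" + l_expr.upper() + ">"

# Speaker: internal representation -> internal expression -> spoken expression
spoken = speaking(meaning["<repr:white-wooden-table>"])
# Hearer: spoken expression -> internal expression -> internal representation
understood = meaning_inv.get(hearing(spoken))

print(spoken)        # white wooden table
print(understood)    # <repr:white-wooden-table>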

NO SPEECH ACTS

To classify the occurrences of symbolic expressions during a symbolic communication is a nearly infinite undertaking. A first impression of the unsolvability of such a classification task can be gained by reading the Philosophical Investigations of Ludwig Wittgenstein. [5] Later attempts by different philosophers and scientists — e.g. under the heading of speech acts [3] — have not been fully convincing until today.

Instead of assuming here a complete scientific framework to classify occurrences of symbolic expressions of an ordinary language L, we will only look at some examples and discuss these.

KINDS OF EXPRESSIONS

In what follows we will look at some selected examples of symbolic expressions and discuss these.

(Decidable) Concrete Expressions [(D)CE]

It is assumed here that two human actors A and B speaking the same ordinary language L are capable, in a concrete situation S, of describing objects OBJ and properties PROP of this situation in a way that the hearer of a concrete expression E can decide whether the encoded meaning of that expression produced by the speaker is part of the observable situation S or not.

Thus, if A and B are together in a room with a wooden white table and there is enough light for an observation, then B can understand what A is saying if he states ‘There is a white wooden table’.

To understand means here that both human actors are able to perceive the wooden white table as an object with properties. Their brains will transform these external signals into internal neural signals forming an inner — not 1-to-1 — representation ENVint, which can further be mapped by the learned meaning function into expressions of the inner language Lint and mapped further — by the speaker — into the external expressions of the learned ordinary language L. If the hearer can hear these spoken expressions, he can translate the external expressions into the internal expressions which can be mapped onto the learned internal representations ENVint. In everyday situations there exists a high probability that the hearer can then respond with a spoken ‘Yes, that’s true’.

If it happens that some human actor utters a symbolic expression with regard to some observable property of the external environment and the other human actor responds with a confirmation, then such an utterance is called here a decidable symbolic expression of the ordinary language L. In this case one can classify such an expression as being true. Otherwise the expression is classified as being not true.

The case of being not true is not a simple case. Being not true can mean: (i) it is actually simply not given; (ii) it is conceivable that the meaning could become true if the external situation were different; (iii) it is — in the light of the accessible knowledge — not conceivable that the meaning could become true in any situation; (iv) the meaning is too fuzzy to decide which of the cases (i) – (iii) fits.

Cognitive Abstraction Processes

Before we talk about (Undecidable) Universal Expressions [(U)UE] it has to be clarified that the internal mappings in a human actor are not only non-1-to-1 mappings but additionally automatic transformation processes: concrete perceptions of concrete environmental matters are automatically transformed by the brain into different kinds of states, which are abstracted states, using the concrete incoming signals as a trigger either to start a new abstracted state or to modify an existing abstracted state. Given such abstracted states, there exists a multitude of other neural processes to process these abstracted states further, embedded in numerous different relationships.

Thus the assumed internal language Lint does not map the neural processes which are processing the concrete events as such but the processed abstracted states! Language expressions as such can never be related directly to concrete material because this concrete material has no direct neural basis. What works — completely unconsciously — is that the brain can detect that an actual neural pattern nn has some similarity with a given abstracted structure NN, and that this concrete pattern nn is then internally classified as an instance of NN. That means we can recognize that a perceived concrete matter nn is, in ‘the light of’ our available (unconscious) knowledge, an NN, but we cannot argue explicitly why. The decision has been processed automatically (unconsciously), but we can become aware of the result of this unconscious process.

Universal (Undecidable) Expressions [U(U)E]

Let us repeat the expression ‘There is a white wooden table‘ which has been used before as an example of a concrete decidable expression.

If one looks at the different parts of this expression, then the partial expressions ‘white’, ‘wooden’, ‘table’ can be mapped by a learned meaning function φ into abstracted structures which are the result of internal processing. This means there can be countably infinitely many concrete instances in the external environment ENV which can be understood as being white. The same holds for the expressions ‘wooden’ and ‘table’. Thus the expressions ‘white’, ‘wooden’, ‘table’ are all related to abstracted structures and therefore have to be classified as universal expressions which as such are — strictly speaking — not decidable, because they can be true in many concrete situations with different concrete matters. Or, put otherwise: an expression whose meaning function φ points to an abstracted structure is asymmetric: one expression can be related to many different perceivable concrete matters, while certain members of a set of different perceived concrete matters can be related to one and the same abstracted structure on account of similarities based on properties embedded in the perceived concrete matter and being part of the abstracted structure.

From a cognitive point of view one can describe these matters such that the expression — like ‘table’ — which is pointing to a cognitively abstracted structure ‘T’ includes a set of properties Π, and every concretely perceived structure ‘t’ (caused e.g. by some concrete matter in our environment which we would classify as a ‘table’) must have a ‘certain amount’ of properties Π* such that one can say that the properties Π* are entailed in the set of properties Π of the abstracted structure T, thus Π* ⊆ Π. Under what circumstances some speaker-hearer will say that something concretely perceived ‘is’ a table or ‘is not’ a table will depend on the learning history of this speaker-hearer. A child at the beginning of learning a language L can perhaps call something a ‘chair’ and the parents will correct the child and will perhaps say ‘no, this is a table’.
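The relation Π* ⊆ Π can be illustrated with a small Python sketch (the property sets are hypothetical examples): an abstracted structure is modeled as a set of properties, and a perceived concrete structure counts as an instance if its relevant properties are contained in that set.

# Hypothetical sketch: classify a perceived concrete structure 't' as an
# instance of an abstracted structure 'T' if its properties are a subset
# of the properties of 'T' (Pi* subset of Pi).

TABLE = {"flat top", "legs", "stable", "raised surface"}   # Pi of the abstracted structure T

def is_instance(perceived_properties: set, abstract_properties: set) -> bool:
    return perceived_properties <= abstract_properties     # Pi* ⊆ Pi

t1 = {"flat top", "legs"}            # a perceived wooden table
t2 = {"flat top", "backrest"}        # rather a chair-like percept

print(is_instance(t1, TABLE))   # True
print(is_instance(t2, TABLE))   # False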

Thus the expression ‘There is a white wooden table‘ as such is not true or false, because it is not clear which set of concrete perceptions shall be derived from the possible internal meaning mappings. But if a concrete situation S is given with a concrete object with concrete properties, then a speaker can ‘translate’ his/her concrete perceptions with his learned meaning function φ into a composed expression using universal expressions. In such a situation, where the speaker is part of the real situation S, he/she can recognize that the given situation is an instance of the abstracted structures encoded in the used expression. And recognizing this being an instance interprets the universal expression in a way that makes the universal expression fit the real given situation. Thereby the universal expression is transformed by interpretation with φ into a concrete decidable expression.

SUMMING UP

Thus the decisive moment of turning undecidable universal expressions U(U)E into decidable concrete expressions (D)CE is a human actor A behaving as a speaker-hearer of the used language L. Without a speaker-hearer every universal expression is undefined and neither true nor false.

makedecidable :  S x Ahum x E —> E x {true, false}

This reads as follows: If you want to know whether an expression E is concrete and, as being concrete, is ‘true’ or ‘false’, then ask a human actor Ahum who is part of a concrete situation S, and the human actor shall answer whether the expression E can be interpreted such that E can be classified as being either ‘true’ or ‘false’.
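A minimal Python sketch of the makedecidable() recipe could look as follows; the ‘human actor’ is simulated here by a simple membership test, whereas in the real process the answer comes from a real speaker-hearer in the situation S:

# Sketch of makedecidable: S x Ahum x E -> E x {true, false}.
# The 'human actor' is simulated by a membership test against the situation S;
# in reality the decision is made by a real speaker-hearer.

def makedecidable(S: set, ask_human, E: str):
    """Return the expression E together with the actor's classification."""
    return E, ask_human(S, E)

def simulated_human_actor(S: set, E: str) -> bool:
    # stands in for the internal meaning function of a real actor
    return E in S

S = {"There is a white wooden table", "There is enough light"}
print(makedecidable(S, simulated_human_actor, "There is a white wooden table"))
# ('There is a white wooden table', True)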

The function ‘makedecidable()’ is therefore the description (like a ‘recipe’) of a real process in the real world with real actors. The important factors in this description are the meaning functions inside the participating human actors. Although it is not possible to describe these meaning functions directly, one can check their behavior and one can define an abstract model which describes the observable behavior of speaker-hearers of the language L. This is an empirical model and represents the typical case of behavioral models used in psychology, biology, sociology etc.

SOURCES

[1] Jakob Johann Freiherr von Uexküll (German: [ˈʏkskʏl])(1864 – 1944) https://en.wikipedia.org/wiki/Jakob_Johann_von_Uexk%C3%BCll

[2] Jakob von Uexküll, 1909, Umwelt und Innenwelt der Tiere. Berlin: J. Springer. (Download: https://ia802708.us.archive.org/13/items/umweltundinnenwe00uexk/umweltundinnenwe00uexk.pdf )

[3] Wikipedia EN, Speech acts: https://en.wikipedia.org/wiki/Speech_act

[4] Ludwig Josef Johann Wittgenstein ( 1889 – 1951): https://en.wikipedia.org/wiki/Ludwig_Wittgenstein

[5] Ludwig Wittgenstein, 1953: Philosophische Untersuchungen [PU]; English: Philosophical Investigations [PI], translated by G. E. M. Anscombe. For more details see: https://en.wikipedia.org/wiki/Philosophical_Investigations

HMI ANALYSIS, Part 4: Tool based Actor Story Development with Testing and Gaming

Integrating Engineering and the Human Factor (info@uffmm.org)
eJournal uffmm.org ISSN 2567-6458, March 3-4, 2021,
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: March 4, 2021, 07:49h (Minor corrections; relating to the UN SDGs)

HISTORY

As described in the uffmm eJournal  the wider context of this software project is an integrated  engineering theory called Distributed Actor-Actor Interaction [DAAI] further extended to the Collective Man-Machine Intelligence [CM:MI] paradigm.  This document is part of the Case Studies section.

HMI ANALYSIS, Part 4: Tool based Actor Story Development with Testing and Gaming

Context

This text is preceded by the following texts:

INFO GRAPH

Overview of the different scenarios which will be possible for the development, simulation, testing and gaming of actor stories using the oksimo software tool

Introduction

In the preceding post it has been explained how one can format an actor story [AS] as a theory in the format of an Evaluated Theory Tε with Algorithmic Intelligence: Tε,α = <M,∑,ε,α>.

In the following text it will be explained which kinds of different scenarios will be possible to elaborate, to simulate, to test, and to enable gaming with  an actor story theory by using the oksimo software tool.

UNIVERSAL TEAM

The classical distinction between certain types of managers, special experts and the rest of the world is given up here in favor of a stronger generalization: everybody is a potential expert with regard to a future which nobody knows. This is emphasized by the fact that everybody can use his or her usual mother tongue, a normal language, any language. Nothing more is needed.

BASIC MODELS (S, X)

As minimal elements for all possible applications it is assumed here that the experts define at least a given situation (state) [S] and a set of change rules [X].

The given state S is either (i) taken as it is or (ii) taken as a state which should be improved. In both cases the initial state S is called the start state [S0].

The change rules X describe possible changes which transform a given state S into a changed successor state S’.

A pair of S and X as (S,X) is called a basic model M(S,X). One can define as many models as one wants.

A DIRECTION BY A VISION V

A vision [V] can describe a possible state SV in an assumed future. If such a state SV is given, then this state becomes a goal state SGoal. In this case we assume V ≠ 0. If no explicit goal is given, then we assume V = 0.

DEVELOPMENT BY GOALS

If a vision is given (V ≠ 0), then the vision can be used to induce a direction which can/shall be approached by creating a set X which enables the generation of a sequence of states, with the start state S0 as first state, followed by successor states Si, until the goal state SGoal has been reached or at least it holds that the goal state is a subset of the reached state: SGoal ⊆ Sn.
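A minimal sketch in Python (states as sets of expressions, change rules as simple functions; all details are illustrative assumptions, not the oksimo input format) shows how such a sequence S0, S1, …, Sn can be generated until SGoal ⊆ Sn:

# Illustrative sketch of goal-directed development:
# a state is a set of expressions, a change rule maps a state to a successor state.

S0 = {"problem stated"}
S_goal = {"problem stated", "solution agreed"}

def rule_discuss(S):   # adds an expression if its condition is satisfied
    return S | {"options discussed"} if "problem stated" in S else S

def rule_agree(S):
    return S | {"solution agreed"} if "options discussed" in S else S

X = [rule_discuss, rule_agree]

S = S0
sequence = [S]
for _ in range(10):                      # safety bound for this illustration
    if S_goal <= S:                      # stop when SGoal is a subset of the reached state
        break
    S = {expr for rule in X for expr in rule(S)}   # apply all applicable rules
    sequence.append(S)

for i, state in enumerate(sequence):
    print(f"S{i}:", sorted(state))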

It is possible to use many basic models M(S,X) in parallel and for each model Mi one can define a different goal Vi (the typical situation in a pluralistic society).

Thus there can be many basic theories T(M,V) in parallel.

STEADY STATES (V = 0)

If no explicit visions are defined (V = 0) then every direction of change is allowed. A basic steady state theory T(M,V) with V = 0 can   be written as T(M,0). Whether such a case can be of interest is not clear at the moment.

BASIC INTERACTION PATTERNS

The following interaction modes are assumed as typical cases:

  1. N-1: Within an online session an interactive webpage with the oksimo software is active and the whole group can interact with the oksimo software tool.
  2. N-N-1: N-many participants can individually log in to the interactive oksimo website and, being logged in, they can collaborate within the oksimo software on one project.
  3. N-N-N: N-many participants can individually log in to the interactive oksimo website, and there everybody can run his or her own process or can collaborate in various ways.

The default case is case (1). The exact dates for the availability of modes (2) – (3) depend on how fast the roadmap can be realized.

BASIC APPLICATIONS
  1. Exploring Simulation-Based Development [ESBD] (V ≠ 0): If the main goal is to find a path from a given state today S (Now) to an envisioned state V in the future, then one has to collect appropriate change rules X to approach the final goal state SGoal better and better. Activating the simulator ∑ at will during the search and construction phase can be of great help, especially if the documents (S, X, V) are becoming more and more complex.
  2. Embedded Simulation-Based Testing [ESBT] (V ≠ 0): If a basic actor story theory T(M,V) is given with a given goal (V ≠ 0), then it is of great help if the simulation is done in interactive mode, where the simulator does not apply the change rules by itself but asks the different logged-in users which rule they want to apply and how. These tests show not only which kinds of errors will occur, but they can also show, during n-many repetitions, to which degree a user can learn to behave task-conform. If the tests do not show the expected outcomes, then this can point to possible deficiencies of the software as well as to specialties of the user.
  3. Embedded Simulation-Based Gaming [ESBG] (V ≠ 0): The case of gaming is partially different from the case of testing. Although it is assumed here too that at least one vision (goal) is given, it is additionally assumed that there exists a competition between different players or different teams. In contrast to testing, gaming includes, according to the goal(s), the role of a winner: the player/team which has reached a defined goal state before the other players/teams has won. As a side effect of gaming one can also evaluate the playing environment and give some feedback to the developers.
ALGORITHMIC INTELLIGENCE
  1. Case ESBD, T(S,X,V,∑,ε,α): Because a normal simulation with the simulator ∑ always produces only one path from the start state to the goal state, it is desirable to have an algorithm α which runs on demand as many times as wanted; the algorithm α would thereby search for all possible paths and at the same time look for those derivations where the goal state satisfies, with ε, certain special requirements. Thus the result of applying α to a given model M with the vision V would be the set SV* of all those final states which satisfy the special requirements.
  2. Case ESBG, T(S,X,V,∑,ε,α): The case of gaming allows at least three kinds of interesting applications for algorithmic intelligence: (i) introduce non-biological players with learning capabilities which can act simultaneously with the biological players; (ii) introduce non-biological players with learning capabilities which have to learn how to support, assist and train biological players. This second case addresses the challenging task of developing algorithmic tutors for several kinds of learning tasks. (iii) Another variant of case (ii) is to enable the development of a personal algorithmic assistant who works only with one person on a long-term basis.

The kinds of algorithmic intelligence in (2)(i)-(iii) are different from the algorithmic intelligence α mentioned in (1).
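The idea behind the algorithm α in case (1) can be illustrated with a small sketch (states as sets of expressions, change rules as successor functions; this is only one possible, assumed realization): enumerate all derivation paths up to a depth limit and keep those final states whose evaluation ε reaches a given threshold.

# Illustrative sketch of an algorithm 'alpha' enumerating all derivation paths.

def successors(S, rules):
    """All successor states reachable from S by applying one applicable rule."""
    return [rule(S) for rule in rules if rule(S) != S]

def alpha(S0, rules, epsilon, threshold, max_depth=5):
    """Collect all reachable states whose evaluation reaches the threshold."""
    results, frontier = set(), [frozenset(S0)]
    for _ in range(max_depth):
        frontier = [frozenset(s) for S in frontier for s in successors(S, rules)]
        results |= {S for S in frontier if epsilon(S) >= threshold}
    return results

# toy vision, rules and evaluation
V = {"a", "b"}
rules = [lambda S: S | {"a"}, lambda S: S | {"b"}]
epsilon = lambda S: 100 * len(V & S) / len(V)   # percent of the vision reached

print(alpha(set(), rules, epsilon, threshold=100))   # the states realizing 100% of V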

TYPES OF ACTORS

As the default standard case of an actor it is assumed that there are biological actors, usually human persons, which will not be analyzed with respect to their inner structure [IS]. While the behavior of every system — and therefore of any biological system too — can be described with a behavior function φ: I x IS —> IS x O (if one has all the necessary knowledge), in the default case of biological systems no behavior function φ is specified, φ = 0. During interactive simulations biological systems act by themselves.

If non-biological actors are used — e.g. automata with a certain machine program (an algorithm) — then one can use these only if one has a fully specified behavior function φ. From this it follows that a change rule which is associated with a non-biological actor has in its Eplus and Eminus parts not a concrete expression but a variable, which will be computed during the simulation by the non-biological actor depending on its input and its behavior function φ: φ(input)IS = (Eplus, Eminus)IS.
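A purely illustrative sketch (the names phi and the rule format are assumptions, not the oksimo specification) of a non-biological actor whose change rule contains variables that are filled in by its behavior function φ during the simulation:

# Illustrative sketch: a non-biological actor computes the (Eplus, Eminus)
# parts of its change rule via its fully specified behavior function phi.

def phi(input_state: set):
    """Behavior function of an automaton: input -> (Eplus, Eminus)."""
    if "door closed" in input_state:
        return {"door open"}, {"door closed"}      # open the door
    return set(), set()                            # otherwise do nothing

def apply_actor_rule(S: set, behavior):
    E_plus, E_minus = behavior(S)                  # variables filled at run time
    return (S | E_plus) - E_minus

S = {"door closed", "light on"}
print(apply_actor_rule(S, phi))    # {'door open', 'light on'}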

FINAL COMMENT

Everybody who has read parts (1) – (4) now has a general knowledge of the motivation for developing the oksimo software tool, namely to support humankind in communicating and thinking better about possible futures, and a first understanding (hopefully :-)) of how this tool can work. Reading the UN Sustainable Development Goals [SDGs] [1] you will learn that SDG 4 (Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all) is fundamental to all other SDGs. The oksimo software tool is one tool intended to help reach these goals.

REFERENCES

[1] The 2030 Agenda for Sustainable Development, adopted by all United Nations Member States in 2015, provides a shared blueprint for peace and prosperity for people and the planet, now and into the future. At its heart are the 17 Sustainable Development Goals (SDGs), which are an urgent call for action by all countries – developed and developing – in a global partnership. They recognize that ending poverty and other deprivations must go hand-in-hand with strategies that improve health and education, reduce inequality, and spur economic growth – all while tackling climate change and working to preserve our oceans and forests. See PDF: https://sdgs.un.org/sites/default/files/publication/21252030%20Agenda%20for%20Sustainable%20Development%20web.pdf

[2] UN, SDG4, PDF, argumentation why SDG4 is fundamental for all other SDGs: https://sdgs.un.org/sites/default/files/publications/2275sdbeginswitheducation.pdf

HMI Analysis for the CM:MI paradigm. Part 3. Actor Story and Theories

Integrating Engineering and the Human Factor (info@uffmm.org)
eJournal uffmm.org ISSN 2567-6458, March 2, 2021,
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: March 2, 2021 13:59h (Minor corrections)

HISTORY

As described in the uffmm eJournal  the wider context of this software project is an integrated  engineering theory called Distributed Actor-Actor Interaction [DAAI] further extended to the Collective Man-Machine Intelligence [CM:MI] paradigm.  This document is part of the Case Studies section.

HMI ANALYSIS, Part 3: Actor Story and  Theories

Context

This text is preceded by the following texts:

Introduction

Having a vision is that moment where something really new in the whole universe gets an initial status in some real brain, which can enable other neural events, which can possibly be translated into bodily events, which finally can change the body-external outside world. If this possibility is turned into reality, then the outside world has been changed.

When human persons (groups of homo sapiens specimens) as experts — here acting as stakeholders and intended users in one, but in different roles! — have stated a problem and a vision document, then they have to translate these inevitably more fuzzy than clear ideas into the concrete terms of an everyday world, into something which can really work.

To enable a real cooperation the experts have to generate a symbolic description of their vision (called a specification) — using an everyday language, possibly enhanced by special expressions — in a way that it can become clear to the whole group which kind of real events, actions and processes are intended.

In the general case an engineering specification describes concrete forms of entanglements of human persons which enable  these human persons to cooperate   in a real situation. Thereby the translation of  the vision inside the brain  into the everyday body-external reality happens. This is the language of life in the universe.

WRITING A STORY

To elaborate a usable specification can metaphorically be understood  as the writing of a new story: which kinds of actors will do something in certain situations, what kinds of other objects, instruments etc. will be used, what kinds of intrinsic motivations and experiences are pushing individual actors, what are possible outcomes of situations with certain actors, which kind of cooperation is  helpful, and the like. Such a story is  called here  Actor Story [AS].

COULD BE REAL

An Actor Story must be written in a way that all participating experts can understand the language of the specification such that the content, the meaning of the specification is either decidably real or can eventually become real. At least the starting point of the story should be classifiable as being decidably actually real. What it means to be decidably actually real has to be defined and agreed between the participating experts before they start writing the Actor Story.

ACTOR STORY [AS]

An Actor Story assumes that the described reality is classifiable as a set of situations (states), and a situation as part of the Actor Story — abbreviated: situationAS — is understood as a set of expressions of some everyday language. Every expression being part of a situationAS can be decided as being real (= being true) in the understood real situation.

If the understood real situation is changing (by some event), then the describing situationAS has to be changed too; either some expressions have to be removed or have to be added.

Every kind of change in the real situation S* has to be represented in the actor story with the situationAS S symbolically in the format of a change rule:

X: If condition  C is satisfied in S then with probability π  add to S Eplus and remove from  S Eminus.

or as a formula:

S’π = S + Eplus – Eminus

This reads as follows: If there is an situationAS S and there is a change rule X, then you can apply this change rule X with probability π onto S if the condition of X is satisfied in S. In that case you have to add Eplus to S and you have to remove Eminus from S. The result of these operations is the new (successor) state S’.

The expression C is satisfied in S means that all elements of C are elements of S too, written as C ⊆ S. The expression add Eplus to S means that the set Eplus is united with the set S, written as Eplus ∪ S (or here: Eplus + S). The expression remove Eminus from S means that the set Eminus is subtracted from the set S, written as S – Eminus.

The concept of applying a change rule X to a given state S resulting in S’ is logically a kind of derivation. Given S and X, you derive the new S’ by applying X. One can write this as S,X ⊢X S’. The ‘meaning’ of the sign ⊢ is explained above.

Because every successor state S’ can become again a given state S onto which change rules X can be applied — written shortly as X(S)=S’, X(S’)=S”, … — the repeated application of change rules X can generate a whole sequence of states, written as SQ(S,X) = <S’, S”, … Sgoal>.

To realize such a derivation in the real world outside of the thinking of the experts one needs a machine, a computer — formally an automaton — which can read S and X documents and can then compute the derivation leading to S’. An automaton which is doing such a job is often called a simulator [SIM], abbreviated here as ∑. We could then write, with more information:

S,X ⊢∑ S’

This reads as follows: given a state S (a set of expressions) and a set X of change rules, we can derive by means of an actor story simulator ∑ a successor state S’.
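A minimal Python sketch of such a simulator step (states as sets of expressions, a change rule as a tuple (C, Eplus, Eminus, π); the concrete data format of the oksimo tool may differ) could look like this:

import random

# Sketch: apply a change rule X = (C, Eplus, Eminus, pi) to a state S.
# If C is a subset of S, then with probability pi:  S' = (S + Eplus) - Eminus.

def apply_rule(S: set, C: set, E_plus: set, E_minus: set, pi: float = 1.0) -> set:
    if C <= S and random.random() <= pi:
        return (S | E_plus) - E_minus
    return S

S = {"cup on table", "table in kitchen"}
X = ({"cup on table"}, {"cup in hand"}, {"cup on table"}, 1.0)

S_prime = apply_rule(S, *X)
print(S_prime)   # e.g. {'cup in hand', 'table in kitchen'}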

A Model M=<S,X>

In this context of a set S and a set of change rules X we can speak of a model M which is defined by these two sets.

A Theory T=<M,∑>

Combining a model M with an actor story simulator ∑ enables a theory T which allows a set of derivations based on the model, written as SQ(S,X,⊢∑) = <S’, S”, … Sgoal>. Every derived final state Sgoal in such a derivation is called a theorem of T.

An Empirical Theory Temp

An empirical theory Temp is possible if there exists a theory T with a group of experts which are using this theory and where these experts can interpret the expressions used in theory T by their built-in meaning functions in a way that they always can decide whether the expressions are related to a real situation or not.

Evaluation [ε]

If one generates an Actor Story Theory [TAS] then it can be of practical importance to get some measure of how good this theory is. Because measurement is always an operation of comparison between the subject x to be measured and some agreed standard s, one has to clarify which kind of standard for being good is available. In the general case the only possible source of standards are the experts themselves. In the context of an Actor Story the experts have agreed on some vision [V] which they think to be a better state than a given state S classified as a problem [P]. These assumptions allow a possible evaluation of a given state S in the ‘light’ of an agreed vision V as follows:

ε: V x S —> |V ⊆ S|[%]
ε(V,S) = |V ⊆ S|[%]

This reads as follows: the evaluation ε is a mapping from the sets V and S into the number of elements from the set V included in the set S, converted into the percentage of the number of elements included. Thus if no element of V is included in the set S then 0% of the vision is realized; if all elements are included, then 100%, etc. The more ‘fine-grained’ the set V is, the more ‘fine-grained’ the evaluation can be.
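A small Python sketch of this evaluation mapping (the expressions in V and S are invented examples) could read:

# Sketch of the evaluation: which percentage of the vision V is contained in S?

def epsilon(V: set, S: set) -> float:
    if not V:
        return 100.0                      # an empty vision is trivially realized
    return 100.0 * len(V & S) / len(V)

V = {"waste separated", "energy from renewables", "car-free city center"}
S = {"waste separated", "energy from renewables", "new bus line"}

print(round(epsilon(V, S), 1))   # 66.7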

An Evaluated Theory Tε=<M,∑,ε>

If one combines the concept of a theory T with the concept of evaluation ε then one can use the evaluation in combination with the derivation in the way that every state in a derivation SQ(S,X,⊢∑) = <S’, S”, … Sgoal> will additionally be evaluated; thus one gets sequences of pairs as follows:

SQ(S,X,⊢∑,ε) = <(S’,ε(V,S’)), (S”,ε(V,S”)), …, (Sgoal, ε(V,Sgoal))>

In the ideal case Sgoal is evaluated to 100% ‘good’. In real cases 100% is only an ideal value which usually will only be approximated up to some threshold.

An Evaluated Theory Tε with Algorithmic Intelligence Tε,α=<M,∑,ε,α>

Because every theory defines a so-called problem space, which is here enhanced by some evaluation function, one can add an additional operation α (realized by an algorithm) which can repeat the simulator-based derivations enhanced with the evaluations to identify those sets of theorems which are qualified as the best theorems according to some given criteria. This operation α is here called the algorithmic intelligence of an actor story [AS]. The existence of such an algorithmic intelligence of an actor story [αAS] allows the introduction of another derivation concept:

S,X ⊢∑,ε,α S* ⊆  S’

This reads as follows: Given a set S and a set X an evaluated theory with algorithmic intelligence Tε,α can derive a subset S* of all possible theorems S’ where S* matches certain given criteria within V.
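Building on the sketches above, the selection of S* could be illustrated as follows (assuming a list of already derived theorems and a simple ε-threshold as the criterion; both are assumptions for the example):

# Sketch: from all derived theorems S' select the subset S* which matches
# a given evaluation criterion with respect to the vision V.

def epsilon(V: set, S: set) -> float:
    return 100.0 * len(V & S) / len(V) if V else 100.0

def select_best(theorems, V, threshold=80.0):
    return [S for S in theorems if epsilon(V, S) >= threshold]

V = {"a", "b", "c"}
theorems = [{"a", "b", "c", "d"}, {"a", "x"}, {"a", "b"}]

print(select_best(theorems, V, threshold=66.0))   # the first and the third theorem qualify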

WHERE WE ARE NOW

As should have become clear by now, the work of HMI analysis is the elaboration of a story which can be done in the format of different kinds of theories, all of which can be simulated and evaluated. Even better, the only language you have to know is your everyday language, your mother tongue (mathematics is understood here as a sub-language of the everyday language, which in some special cases can be of some help). For this theory every human person — of all ages! — can be a valuable colleague to help you understand possible futures better. Because all parts of an actor story theory are plain texts, everybody can read and understand everything. And if different groups of experts have investigated different aspects of a common field, you can merge all texts by only ‘pressing a button’ and you will immediately see how all these texts either work together or show discrepancies. The last effect is a great opportunity to improve learning and understanding! Together we represent some of the power of life in the universe.

CONTINUATION

See here.

KOMEGA REQUIREMENTS No.4, Version 2. Basic Application Scenario

ISSN 2567-6458, 28.August 2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

As described in the uffmm eJournal  the wider context of this software project is a generative theory of cultural anthropology [GCA] which is an extension of the engineering theory called Distributed Actor-Actor Interaction [DAAI]. In  the section Case Studies of the uffmm eJournal there is also a section about Python co-learning – mainly
dealing with python programming – and a section about a web-server with
Dragon. This document will be part of the Case Studies section.

PDF DOCUMENT

requirements-no4-v2-27Aug2020

KOMEGA REQUIREMENTS No.4, Version 1

ISSN 2567-6458, 26.August 2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

As described in the uffmm eJournal  the wider context of this software project is a generative theory of cultural anthropology [GCA] which is an extension of the engineering theory called Distributed Actor-Actor Interaction [DAAI]. In  the section Case Studies of the uffmm eJournal there is also a section about Python co-learning – mainly
dealing with python programming – and a section about a web-server with
Dragon. This document will be part of the Case Studies section.

PDF DOCUMENT

requirements-no4-v1-26Aug2020

KOMEGA REQUIREMENTS No.3, Version 1. Basic Application Scenario – Editing S

ISSN 2567-6458, 26.July – 12.August 2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

As described in the uffmm eJournal  the wider context of this software project is a generative theory of cultural anthropology [GCA] which is an extension of the engineering theory called Distributed Actor-Actor Interaction [DAAI]. In  the section Case Studies of the uffmm eJournal there is also a section about Python co-learning – mainly
dealing with python programming – and a section about a web-server with
Dragon. This document will be part of the Case Studies section.

PDF DOCUMENT

requirements-no3-v1-12Aug2020 (Last update: August 12, 2020)

KOMEGA REQUIREMENTS No.2. Actor Story Overview

ISSN 2567-6458, 26.July – 12.August 2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

As described in the uffmm eJournal  the wider context of this software project is a generative theory of cultural anthropology [GCA] which is an extension of the engineering theory called Distributed Actor-Actor Interaction [DAAI]. In  the section Case Studies of the uffmm eJournal there is also a section about Python co-learning – mainly
dealing with python programming – and a section about a web-server with
Dragon. This document will be part of the Case Studies section.

PDF DOCUMENT

requirements-no2-v1-11Aug2020 (Last change: August 12, 2020)

KOMEGA REQUIREMENTS No.1. Basic Application Scenario

ISSN 2567-6458, 26.July – 11.August 2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

As described in the uffmm eJournal  the wider context of this software project is a generative theory of cultural anthropology [GCA] which is an extension of the engineering theory called Distributed Actor-Actor Interaction [DAAI]. In  the section Case Studies of the uffmm eJournal there is also a section about Python co-learning – mainly
dealing with python programming – and a section about a web-server with
Dragon. This document will be part of the Case Studies section.

PDF TEXT:

requirements-no1-v3-11Aug2020 (published: Aug-11, 2020; this version replaces the version from 7.August 2020)

requirements-no1-v2-2-7Aug2020 (published: Aug-7, 2020; this version replaces the version from 6.August 2020)

requirements-no1-v2-6Aug2020 (published: Aug-6, 2020; this version replaces the version from 25.July 2020)

requirements-no1-25july2020-v1-pub (published: July-26, 2020)

STARTING WITH PYTHON3 – The very beginning – part 9

Journal: uffmm.org,
ISSN 2567-6458, July 24-25, 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This is the next step in the python3 programming project. The overall context is still the python Co-Learning project.

SUBJECT

In this file you will see a first encounter between the AAI paradigm (described in the theory part of this uffmm blog) and some applications of the python programming language. A simple virtual world with objects and actors can be activated with a freely selectable size, amount of objects and amount of actors. In later posts lots of experiments with this virtual world will be described as well as many extensions.

SOURCE CODE
Main file: vw4.py

The main file ‘vw4.py’ describes the start of a virtual world and then allows a loop to run this world n-many times.

Import file: vwmanager.py

The main file ‘vw4.py’ is using many functions to enable the process. All these functions are collected in the file ‘vwmanager.py’. This file will automatically be loaded during run time of the program vw4.py.
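As a rough, purely illustrative sketch of what such a preparation step does (the real code in vw4.py and vwmanager.py may be organized quite differently), a 2D grid world with obstacles, food objects and actors could be set up like this:

import random

# Illustrative sketch of preparing a 2D grid world with obstacles ('O'),
# food objects ('F') and actors ('A'); empty cells are shown as '_'.

def make_world(n, p_obstacle, p_food, p_actor):
    world = [['_'] * n for _ in range(n)]
    cells = [(r, c) for r in range(n) for c in range(n)]
    random.shuffle(cells)
    counts = {'O': int(p_obstacle * n * n),
              'F': int(p_food * n * n),
              'A': int(p_actor * n * n)}
    for kind, amount in counts.items():
        for _ in range(amount):
            r, c = cells.pop()
            world[r][c] = kind
    return world

for row in make_world(4, 0.25, 0.30, 0.10):
    print(row)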

COMMENTS

comment-vw4

DEMO

TEST RUN AUG 19, 2019, 12:56h

gerd@Doeben-Henisch:~/code$ python3 vw4.py
Amount of information: 1 is maximum, 0 is minimum0
Number of columns (= equal to rows!) of 2D-grid ?4
[‘_’, ‘_’, ‘_’, ‘_’]

[‘_’, ‘_’, ‘_’, ‘_’]

[‘_’, ‘_’, ‘_’, ‘_’]

[‘_’, ‘_’, ‘_’, ‘_’]

Percentage (as integer) of obstacles in the 2D-grid?77
Percentage (as integer) of Food Objects in the 2D-grid ?44
Percentage (as integer) of Actor Objects in the 2D-grid ?15

Objects as obstacles

[0, 2, ‘O’]

[0, 3, ‘O’]

[1, 2, ‘O’]

[2, 3, ‘O’]

Objects as food

[0, 0, ‘F’, [0, 1000, 100]]

[1, 1, ‘F’, [1, 1000, 100]]

[2, 0, ‘F’, [2, 1000, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Objects as actor

[1, 3, ‘A’, [0, 1000, 100, 500, 0]]

[3, 2, ‘A’, [1, 1000, 100, 500, 0]]

[‘F’, ‘_’, ‘O’, ‘O’]

[‘_’, ‘F’, ‘O’, ‘A’]

[‘F’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘A’, ‘F’]

END OF PREPARATION

WORLD CYCLE STARTS

—————————————————-
Real percentage of obstacles = 25.0
Real percentage of food = 37.5
Real percentage of actors = 12.5
—————————————————-
How many CYCLES do you want?25
Singe Step = 1 or Continous = 0?1
Length of olA 2

—————————————————–

WORLD AT CYCLE = 0

[‘F’, ‘_’, ‘O’, ‘O’]

[‘_’, ‘F’, ‘O’, ‘A’]

[‘F’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘A’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[1, 3, ‘A’, [0, 1000, 100, 500, -1]]

[2, 1, ‘A’, [1, 1000, 100, 500, 8]]

[0, 0, ‘F’, [0, 1000, 100]]

[1, 1, ‘F’, [1, 1000, 100]]

[2, 0, ‘F’, [2, 1000, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 2

—————————————————–

WORLD AT CYCLE = 1

[‘F’, ‘_’, ‘O’, ‘O’]

[‘_’, ‘F’, ‘O’, ‘A’]

[‘F’, ‘A’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[1, 3, ‘A’, [0, 900, 100, 500, -1]]

[2, 1, ‘A’, [1, 900, 100, 500, 0]]

[0, 0, ‘F’, [0, 1000, 100]]

[1, 1, ‘F’, [1, 1000, 100]]

[2, 0, ‘F’, [2, 1000, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 2

—————————————————–

WORLD AT CYCLE = 2

[‘F’, ‘_’, ‘O’, ‘O’]

[‘_’, ‘F’, ‘O’, ‘A’]

[‘F’, ‘A’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[1, 3, ‘A’, [0, 800, 100, 500, -1]]

[1, 1, ‘A’, [1, 1300, 100, 500, 1]]

[0, 0, ‘F’, [0, 1000, 100]]

[1, 1, ‘F’, [1, 500, 100]]

[2, 0, ‘F’, [2, 1000, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 2

—————————————————–

WORLD AT CYCLE = 3

[‘F’, ‘_’, ‘O’, ‘O’]

[‘_’, ‘A’, ‘O’, ‘A’]

[‘F’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[1, 3, ‘A’, [0, 700, 100, 500, -1]]

[2, 0, ‘A’, [1, 1700, 100, 500, 6]]

[0, 0, ‘F’, [0, 1000, 100]]

[1, 1, ‘F’, [1, 600, 100]]

[2, 0, ‘F’, [2, 500, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 2

—————————————————–

WORLD AT CYCLE = 4

[‘F’, ‘_’, ‘O’, ‘O’]

[‘_’, ‘F’, ‘O’, ‘A’]

[‘A’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[1, 3, ‘A’, [0, 600, 100, 500, -1]]

[1, 0, ‘A’, [1, 1600, 100, 500, 1]]

[0, 0, ‘F’, [0, 1000, 100]]

[1, 1, ‘F’, [1, 700, 100]]

[2, 0, ‘F’, [2, 600, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 2

—————————————————–

WORLD AT CYCLE = 5

[‘F’, ‘_’, ‘O’, ‘O’]

[‘A’, ‘F’, ‘O’, ‘A’]

[‘F’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[1, 3, ‘A’, [0, 500, 100, 500, -1]]

[1, 1, ‘A’, [1, 2000, 100, 500, 3]]

[0, 0, ‘F’, [0, 1000, 100]]

[1, 1, ‘F’, [1, 300, 100]]

[2, 0, ‘F’, [2, 700, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 2

—————————————————–

WORLD AT CYCLE = 6

[‘F’, ‘_’, ‘O’, ‘O’]

[‘_’, ‘A’, ‘O’, ‘A’]

[‘F’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[1, 3, ‘A’, [0, 400, 100, 500, -1]]

[1, 1, ‘A’, [1, 1900, 100, 500, -1]]

[0, 0, ‘F’, [0, 1000, 100]]

[1, 1, ‘F’, [1, 400, 100]]

[2, 0, ‘F’, [2, 800, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 2

—————————————————–

WORLD AT CYCLE = 7

[‘F’, ‘_’, ‘O’, ‘O’]

[‘_’, ‘A’, ‘O’, ‘A’]

[‘F’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[1, 3, ‘A’, [0, 300, 100, 500, -1]]

[1, 1, ‘A’, [1, 1800, 100, 500, -1]]

[0, 0, ‘F’, [0, 1000, 100]]

[1, 1, ‘F’, [1, 500, 100]]

[2, 0, ‘F’, [2, 900, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 2

—————————————————–

WORLD AT CYCLE = 8

[‘F’, ‘_’, ‘O’, ‘O’]

[‘_’, ‘A’, ‘O’, ‘A’]

[‘F’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[1, 3, ‘A’, [0, 200, 100, 500, -1]]

[1, 1, ‘A’, [1, 1700, 100, 500, -1]]

[0, 0, ‘F’, [0, 1000, 100]]

[1, 1, ‘F’, [1, 600, 100]]

[2, 0, ‘F’, [2, 1000, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 2

—————————————————–

WORLD AT CYCLE = 9

[‘F’, ‘_’, ‘O’, ‘O’]

[‘_’, ‘A’, ‘O’, ‘A’]

[‘F’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[1, 3, ‘A’, [0, 100, 100, 500, 0]]

[1, 0, ‘A’, [1, 1600, 100, 500, 7]]

[0, 0, ‘F’, [0, 1000, 100]]

[1, 1, ‘F’, [1, 700, 100]]

[2, 0, ‘F’, [2, 1000, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 1

—————————————————–

WORLD AT CYCLE = 10

[‘F’, ‘_’, ‘O’, ‘O’]

[‘A’, ‘F’, ‘O’, ‘_’]

[‘F’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[1, 0, ‘A’, [1, 1500, 100, 500, -1]]

[0, 0, ‘F’, [0, 1000, 100]]

[1, 1, ‘F’, [1, 800, 100]]

[2, 0, ‘F’, [2, 1000, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 1

—————————————————–

WORLD AT CYCLE = 11

[‘F’, ‘_’, ‘O’, ‘O’]

[‘A’, ‘F’, ‘O’, ‘_’]

[‘F’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[1, 0, ‘A’, [1, 1400, 100, 500, -1]]

[0, 0, ‘F’, [0, 1000, 100]]

[1, 1, ‘F’, [1, 900, 100]]

[2, 0, ‘F’, [2, 1000, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 1

—————————————————–

WORLD AT CYCLE = 12

[‘F’, ‘_’, ‘O’, ‘O’]

[‘A’, ‘F’, ‘O’, ‘_’]

[‘F’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[1, 0, ‘A’, [1, 1300, 100, 500, -1]]

[0, 0, ‘F’, [0, 1000, 100]]

[1, 1, ‘F’, [1, 1000, 100]]

[2, 0, ‘F’, [2, 1000, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 1

—————————————————–

WORLD AT CYCLE = 13

[‘F’, ‘_’, ‘O’, ‘O’]

[‘A’, ‘F’, ‘O’, ‘_’]

[‘F’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[2, 0, ‘A’, [1, 1700, 100, 500, 5]]

[0, 0, ‘F’, [0, 1000, 100]]

[1, 1, ‘F’, [1, 1000, 100]]

[2, 0, ‘F’, [2, 500, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 1

—————————————————–

WORLD AT CYCLE = 14

[‘F’, ‘_’, ‘O’, ‘O’]

[‘_’, ‘F’, ‘O’, ‘_’]

[‘A’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[1, 1, ‘A’, [1, 2100, 100, 500, 2]]

[0, 0, ‘F’, [0, 1000, 100]]

[1, 1, ‘F’, [1, 500, 100]]

[2, 0, ‘F’, [2, 600, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 1

—————————————————–

WORLD AT CYCLE = 15

[‘F’, ‘_’, ‘O’, ‘O’]

[‘_’, ‘A’, ‘O’, ‘_’]

[‘F’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[0, 0, ‘A’, [1, 2500, 100, 500, 8]]

[0, 0, ‘F’, [0, 500, 100]]

[1, 1, ‘F’, [1, 600, 100]]

[2, 0, ‘F’, [2, 700, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 1

—————————————————–

WORLD AT CYCLE = 16

[‘A’, ‘_’, ‘O’, ‘O’]

[‘_’, ‘F’, ‘O’, ‘_’]

[‘F’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[0, 0, ‘A’, [1, 2400, 100, 500, -1]]

[0, 0, ‘F’, [0, 600, 100]]

[1, 1, ‘F’, [1, 700, 100]]

[2, 0, ‘F’, [2, 800, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 1

—————————————————–

WORLD AT CYCLE = 17

[‘A’, ‘_’, ‘O’, ‘O’]

[‘_’, ‘F’, ‘O’, ‘_’]

[‘F’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[0, 0, ‘A’, [1, 2300, 100, 500, -1]]

[0, 0, ‘F’, [0, 700, 100]]

[1, 1, ‘F’, [1, 800, 100]]

[2, 0, ‘F’, [2, 900, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 1

—————————————————–

WORLD AT CYCLE = 18

[‘A’, ‘_’, ‘O’, ‘O’]

[‘_’, ‘F’, ‘O’, ‘_’]

[‘F’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[0, 0, ‘A’, [1, 2200, 100, 500, -1]]

[0, 0, ‘F’, [0, 800, 100]]

[1, 1, ‘F’, [1, 900, 100]]

[2, 0, ‘F’, [2, 1000, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 1

—————————————————–

WORLD AT CYCLE = 19

[‘A’, ‘_’, ‘O’, ‘O’]

[‘_’, ‘F’, ‘O’, ‘_’]

[‘F’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[0, 0, ‘A’, [1, 2100, 100, 500, -1]]

[0, 0, ‘F’, [0, 900, 100]]

[1, 1, ‘F’, [1, 1000, 100]]

[2, 0, ‘F’, [2, 1000, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 1

—————————————————–

WORLD AT CYCLE = 20

[‘A’, ‘_’, ‘O’, ‘O’]

[‘_’, ‘F’, ‘O’, ‘_’]

[‘F’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[0, 0, ‘A’, [1, 2000, 100, 500, -1]]

[0, 0, ‘F’, [0, 1000, 100]]

[1, 1, ‘F’, [1, 1000, 100]]

[2, 0, ‘F’, [2, 1000, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 1

—————————————————–

WORLD AT CYCLE = 21

[‘A’, ‘_’, ‘O’, ‘O’]

[‘_’, ‘F’, ‘O’, ‘_’]

[‘F’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[0, 0, ‘A’, [1, 1900, 100, 500, 0]]

[0, 0, ‘F’, [0, 1000, 100]]

[1, 1, ‘F’, [1, 1000, 100]]

[2, 0, ‘F’, [2, 1000, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 1

—————————————————–

WORLD AT CYCLE = 22

[‘A’, ‘_’, ‘O’, ‘O’]

[‘_’, ‘F’, ‘O’, ‘_’]

[‘F’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[0, 1, ‘A’, [1, 1800, 100, 500, 3]]

[0, 0, ‘F’, [0, 1000, 100]]

[1, 1, ‘F’, [1, 1000, 100]]

[2, 0, ‘F’, [2, 1000, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 1

—————————————————–

WORLD AT CYCLE = 23

[‘F’, ‘A’, ‘O’, ‘O’]

[‘_’, ‘F’, ‘O’, ‘_’]

[‘F’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[1, 1, ‘A’, [1, 2200, 100, 500, 5]]

[0, 0, ‘F’, [0, 1000, 100]]

[1, 1, ‘F’, [1, 500, 100]]

[2, 0, ‘F’, [2, 1000, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

Length of olA 1

—————————————————–

WORLD AT CYCLE = 24

[‘F’, ‘_’, ‘O’, ‘O’]

[‘_’, ‘A’, ‘O’, ‘_’]

[‘F’, ‘_’, ‘F’, ‘O’]

[‘F’, ‘_’, ‘_’, ‘F’]

Press key c for continuation!c
EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE

Updated energy levels in olF and olA
[1, 1, ‘A’, [1, 2100, 100, 500, -1]]

[0, 0, ‘F’, [0, 1000, 100]]

[1, 1, ‘F’, [1, 600, 100]]

[2, 0, ‘F’, [2, 1000, 100]]

[2, 2, ‘F’, [3, 1000, 100]]

[3, 0, ‘F’, [4, 1000, 100]]

[3, 3, ‘F’, [5, 1000, 100]]

STARTING WITH PYTHON3 – The very beginning – part 5

Journal: uffmm.org,
ISSN 2567-6458, July 18-19, 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This is the next step in the python3 programming project. The overall context is still the python Co-Learning project.

SUBJECT

After a first clearing of the environment for python programming we have started with the structure of the python programming language. In this section we will continue dealing with the object types sequence and string, and more programming elements are shown in a simple example of a creative actor.

Remark: for general help information go directly to the python manuals, which you can find associated with the entry for python 3.7.3 if you press the Windows-Button, look to the list of Apps (= programs), and identify the entry for python 3.7.3. If you open the python entry by clicking you see the sub-entry python 3.7.3 Manuals. If you click on this sub-entry the python documentation will open. In this documentation you can find nearly everything you will need. For Beginners you even find a small tutorial.

SCENARIO

For the further discussion of additional properties of python string and sequence objects I will assume again a simple scenario. I will expand the last scenario with the simple input-output actor by introducing some creativity into the actor. This means that the actor again receives either one word or a sequence of words, but instead of classifying the word according to some categories or giving back the list of the multiple words as individual entities, the actor will change the input creatively.
In the case of a single word the actor will re-order the symbols of the string and additionally he can replace one individual symbol by some random symbol out of a finite alphabet.
In the case of multiple words the actor will first partition the sequence of words into the individual words in a list, then he will also re-order these items of the list, will then re-order the letters in the words, and finally he can replace in every word one individual symbol by some random symbol out of a finite alphabet. After these operations the list is again concatenated to one sequence of words.
In this version of the program one can repeat in two ways: either (i) input new words manually or (ii) redirect the output into the input so that the actor can continue to change the output further.
An interesting feature is cognitive entropy: If the user always selects the closed world option, then the set of available letters will not be expanded during the repetitions. After some repetitions this reveals the implicit tendency of all words to become more and more equal until only one type of word has ‘survived’. This depends on the random character of the process, which increases the chances of the bigger numbers to overrun the smaller ones. The other option is the open world option. This includes that in a repetition a completely new letter can be introduced into a single word. This opposes the implicit tendency of cognitive entropy to enforce the big numbers against the smaller ones.
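A rough sketch of the kind of transformation described above (illustrative only; the actual functions in stringDemo2.py may be implemented differently) could look like this:

import random

# Illustrative sketch of a 'creative' word transformation:
# re-order / re-sample the letters of a word and optionally replace one letter.

ALPHABET = 'abcdefghijklmnopqrstuvwxyz'

def worder(word, open_world=False):
    letters = [random.choice(word) for _ in word]      # re-order / re-sample letters
    pos = random.randrange(len(letters))
    pool = ALPHABET if open_world else word             # closed world: only known letters
    letters[pos] = random.choice(pool)
    return ''.join(letters)

word = 'abcde'
for _ in range(5):                  # repeated closed-world re-direction of the output
    word = worder(word, open_world=False)
    print(word)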

How can this scenario be realized?

ACTOR STORY

1. There is a user (as executive actor) who can enter single or multiple words into the input interface of an assisting interface.
2. After confirming the input the assisting actor will respond in a creative manner. This creativity is manifested in changed orders of symbols and words as well as in replaced symbols.
3. After the response the user can either repeat the sequence or he can stop. If repeating then he can select between two options: (i) enter manually new words as input or (ii) redirect the output of the system as new input. This allows a continuous creative change of the words.
4. The repeated re-direction offers again two options: (i) Closed world, no real input, or (ii) Open world; some real new input

IMPLEMENTATION

Download the python source code here. This text appears as an HTML document, because the blog software does not allow uploading a python program file directly.

stringDemo2.py
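
Because the blog can only show the source as an HTML document, here is a minimal, hypothetical sketch of what the central function worder() could look like (the real implementation in stringDemo2.py may differ in detail):

import random as rnd

# Hypothetical sketch of a creative re-ordering of a single word.
def worder(w, open_world=False):
    wl = list(w)                          # translate the string into a list of letters
    wll = []
    for i in range(len(wl)):
        r = rnd.randrange(0, len(wl))     # pick a random position in the old word
        wll.append(wl[r])                 # re-order by random picking; letters may repeat
    r = rnd.randrange(0, len(wll))        # position of the letter to be replaced
    if open_world:
        wll[r] = chr(rnd.randrange(ord('a'), ord('z') + 1))  # a completely new letter is possible
    else:
        wll[r] = rnd.choice(wl)           # closed world: only letters of the old word
    wnew = ''
    for i in range(len(wll)):
        wnew = wnew + wll[i]              # concatenate the letters back into one string
    return wnew

print(worder('abcde'))                    # e.g. 'ebaca'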

DEMOS

Single word in a closed world:

PS C:\Users\gerd_2\code> python stringDemo2.py
Single word = ‘1’ or Multiple words = ‘2’
1
New manual input =’1′ or Redirect the last output = ‘2’
1
Closed world =’1′ or Open world =’2′
1
Input a single word
abcde
Your input word is = abcde
New in-word order with worder():
ebaca
STOP = ‘N’, CONTINUE != ‘N’
y
Single word = ‘1’ or Multiple words = ‘2’
1
New manual input =’1′ or Redirect the last output = ‘2’
2
Closed world =’1′ or Open world =’2′
1
The last output was = ebaca
New in-word order with worder():
ccbaa
STOP = ‘N’, CONTINUE != ‘N’
y
Single word = ‘1’ or Multiple words = ‘2’
1
New manual input =’1′ or Redirect the last output = ‘2’
2
Closed world =’1′ or Open world =’2′
1
The last output was = ccbaa
New in-word order with worder():
ccccb
STOP = ‘N’, CONTINUE != ‘N’
y
Single word = ‘1’ or Multiple words = ‘2’
1
New manual input =’1′ or Redirect the last output = ‘2’
2
Closed world =’1′ or Open world =’2′
1
The last output was = ccccb
New in-word order with worder():
ccccc
STOP = ‘N’, CONTINUE != ‘N’

The original word 'abcde' has been changed to 'ccccc' in a closed world environment. In an open world scenario this monotony cannot happen, because completely new letters can always be introduced.

Multiple words in a closed world

PS C:\Users\gerd_2\code> python stringDemo2.py
Single word = ‘1’ or Multiple words = ‘2’
2
New manual input =’1′ or Redirect the last output = ‘2’
1
Closed world =’1′ or Open world =’2′
1
Input multiple words
abc def geh
Your input words are = abc def geh
List version of sqorder input =
[‘abc’, ‘def’, ‘geh’]
New word order in sequence with sqorder():
def geh geh
List version of input in mcworder()=
[‘def’, ‘geh’, ‘geh’]
New in-word order with worder():
fef
New in-word order with worder():
hee
New in-word order with worder():
ege
New word-sequence order :
fef hee ege
STOP = ‘N’, CONTINUE != ‘N’
y
Single word = ‘1’ or Multiple words = ‘2’
2
New manual input =’1′ or Redirect the last output = ‘2’
2
Closed world =’1′ or Open world =’2′
1
The last output was = fef hee ege
List version of sqorder input =
[‘fef’, ‘hee’, ‘ege’]
New word order in sequence with sqorder():
fef fef ege
List version of input in mcworder()=
[‘fef’, ‘fef’, ‘ege’]
New in-word order with worder():
fff
New in-word order with worder():
fee
New in-word order with worder():
eee
New word-sequence order :
fff fee eee
STOP = ‘N’, CONTINUE != ‘N’
y
Single word = ‘1’ or Multiple words = ‘2’
2
New manual input =’1′ or Redirect the last output = ‘2’
2
Closed world =’1′ or Open world =’2′
1
The last output was = fff fee eee
List version of sqorder input =
[‘fff’, ‘fee’, ‘eee’]
New word order in sequence with sqorder():
eee fee fee
List version of input in mcworder()=
[‘eee’, ‘fee’, ‘fee’]
New in-word order with worder():
eee
New in-word order with worder():
eef
New in-word order with worder():
eee
New word-sequence order :
eee eef eee
STOP = ‘N’, CONTINUE != ‘N’
y
Single word = ‘1’ or Multiple words = ‘2’
2
New manual input =’1′ or Redirect the last output = ‘2’
2
Closed world =’1′ or Open world =’2′
1
The last output was = eee eef eee
List version of sqorder input =
[‘eee’, ‘eef’, ‘eee’]
New word order in sequence with sqorder():
eee eee eee
List version of input in mcworder()=
[‘eee’, ‘eee’, ‘eee’]
New in-word order with worder():
eee
New in-word order with worder():
eee
New in-word order with worder():
eee
New word-sequence order :
eee eee eee
STOP = ‘N’, CONTINUE != ‘N’

You can see that the cognitive entropy shows up under the closed world assumption in the multi-word scenario too.

EXERCISES

Here are some details of objects and operations.

Letters and Numbers

With ord('a') one can get the decimal code of the letter 'a', which is 97, and the other way around one can translate the decimal number 97 back into a letter with chr(97), which gives 'a'. For ord('z') one gets 122. These numbers can be used to compute characters, which has been used in the program to find random characters to be inserted into a word.
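
For example, in an interactive python session (the random letter shown is just one possible outcome):

>>> ord('a')
97
>>> chr(97)
'a'
>>> ord('z')
122
>>> import random as rnd
>>> chr(rnd.randrange(ord('a'), ord('z') + 1))
'k'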

Strings and Lists

There are some operations which are only available for list objects and others which are only available for string objects. Thus it is not possible to change and re-arrange a string directly, but translating the string into a list, applying some operations, and then transferring the changed list back into a string works fine. Translating a word w into a list wl by wl = list(w) allows the re-ordering of its elements by appending them to a new list: wll.append(wl[r]). Afterwards I have translated the list back into a string by constructing a new string wnew and concatenating all letters step by step: wnew=wnew+wl[i]. If you try to convert the list directly as in the following example, then you get as a result only the printable representation of the list as a string, not the word:

>>> w=’abcd’
>>> wl=list(w)
>>> wl
[‘a’, ‘b’, ‘c’, ‘d’]
>>> wn=str(wl)
>>> wn
“[‘a’, ‘b’, ‘c’, ‘d’]”
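
To get back a real word from the list one has to concatenate the letters, either step by step as in the program or with the built-in join() method:

>>> wnew=''
>>> for i in range(len(wl)):
...     wnew=wnew+wl[i]
...
>>> wnew
'abcd'
>>> ''.join(wl)
'abcd'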

Immediate Help

If one needs direct information about the operations which are possible with a certain object, like here the string object 'w', then one can ask for all possible operations like this:

>>> dir(w)
[‘__add__’, ‘__class__’, ‘__contains__’, ‘__delattr__’, ‘__dir__’, ‘__doc__’, ‘__eq__’, ‘__format__’, ‘__ge__’, ‘__getattribute__’, ‘__getitem__’, ‘__getnewargs__’, ‘__gt__’, ‘__hash__’, ‘__init__’, ‘__init_subclass__’, ‘__iter__’, ‘__le__’, ‘__len__’, ‘__lt__’, ‘__mod__’, ‘__mul__’, ‘__ne__’, ‘__new__’, ‘__reduce__’, ‘__reduce_ex__’, ‘__repr__’, ‘__rmod__’, ‘__rmul__’, ‘__setattr__’, ‘__sizeof__’, ‘__str__’, ‘__subclasshook__’, ‘capitalize’, ‘casefold’, ‘center’, ‘count’, ‘encode’, ‘endswith’, ‘expandtabs’, ‘find’, ‘format’, ‘format_map’, ‘index’, ‘isalnum’, ‘isalpha’, ‘isascii’, ‘isdecimal’, ‘isdigit’, ‘isidentifier’, ‘islower’, ‘isnumeric’, ‘isprintable’, ‘isspace’, ‘istitle’, ‘isupper’, ‘join’, ‘ljust’, ‘lower’, ‘lstrip’, ‘maketrans’, ‘partition’, ‘replace’, ‘rfind’, ‘rindex’, ‘rjust’, ‘rpartition’, ‘rsplit’, ‘rstrip’, ‘split’, ‘splitlines’, ‘startswith’, ‘strip’, ‘swapcase’, ‘title’, ‘translate’, ‘upper’, ‘zfill’]
>>>

In the case that 'w' is a sequence of strings/words like w='abc def', the list() operation is of no help, because one gets a list of letters, not of words:

>>> wl2=list(w)
>>> wl2
[‘a’, ‘b’, ‘c’, ‘ ‘, ‘d’, ‘e’, ‘f’]

For the program one needs a list of single words. Looking at the possible operations with string objects from dir() above, one sees the name 'split'. We can ask what this 'split' is about:

>>> help(str.split)
Help on method_descriptor:

split(self, /, sep=None, maxsplit=-1)
Return a list of the words in the string, using sep as the delimiter string.

sep
The delimiter according which to split the string.
None (the default value) means split according to any whitespace,
and discard empty strings from the result.
maxsplit
Maximum number of splits to do.
-1 (the default value) means no limit.

This sounds as if it could be of help. Indeed, that is the mechanism I have used:

>>> w=’abc def’
>>> w
‘abc def’
>>> wl=w.split()
>>> wl
[‘abc’, ‘def’]

Function Definition

As you can see in the program text the minimal structure of a function definition is as follows:

def fname(Input-Arguments):
    some commands
    [return VarNames]

The name is needed for the identification of the function, the input variables get some values from the outside to work on, and finally, but optionally, you can return the values of some variables back to the outside of the function.
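
A small illustrative example (not taken from the program):

def double_word(w):        # 'w' is the input argument
    wnew = w + w           # some commands working on the input
    return wnew            # optionally return a value to the caller

print(double_word('abc'))  # prints 'abcabc'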

The For-Loop

Besides the loop organized with the while-command there is another command for a fixed number of repetitions, indicated by the for-command:

for i in range(n):
    commands

The operator 'range()' delivers a sequence of numbers from 0 to n-1 and binds them one after the other to the variable 'i'. Thus the variable i takes, one after the other, all the numbers from range(). During each repetition all the commands which are listed after the for-command will be executed.
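
A minimal example:

for i in range(3):
    print(i)               # prints 0, 1 and 2, one repetition per number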

Random Numbers

In this program I have used random numbers very heavily. To be able to do this one has to import the random number library before this usage. I did this with the call:

import random as rnd

This additionally introduces the abbreviation 'rnd'. Thus if one wants to call a certain operation from the random module one can write like this:

r=rnd.randrange(0,n)

In this example one uses the randrange() operation from random with the arguments (0,n), which means that an integer random number will be generated in the interval [0,n-1].
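
For example, picking a random letter out of a given word (a small sketch, not the original program code):

import random as rnd

w = 'abcde'
r = rnd.randrange(0, len(w))   # random index between 0 and len(w)-1
print(w[r])                    # prints one randomly selected letter of w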

If-Operator with Combined Conditions

In the program you can find statements like

if opt==’1′ and opt2==’1′ and opt3==’1′:

Following the if-keyword you see three different conditions

opt==’1′
opt2==’1′
opt3==’1′

which are put together into one expression by the logical operator 'and'. This means that all three conditions must be true simultaneously, otherwise the combined condition is false and the following block will not be executed.
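
A small illustration of such a combined condition:

opt, opt2, opt3 = '1', '1', '2'
if opt=='1' and opt2=='1' and opt3=='1':
    print('all three conditions are true')
else:
    print('at least one condition is false')   # this branch is taken in this example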

Introduce the Import Module Mechanism

See for this the two files:

stringDemo2b.py
stringDemos.py

stringDemo2b.py is the same as the stringDemo2.py discussed above, but all the supporting functions have been removed from the main file and stored in an extra file called 'stringDemos.py', which works for the main file stringDemo2b.py as a module file. For this to work there must be a special

import stringDemos as sd

command, and at each occurrence of a call to a function from the imported module in the main module stringDemo2b.py one has to add the prefix 'sd.', indicating that these functions are now located in a special place.

This import call only works if the path of the import module 'stringDemos.py' is visible to the python import mechanism. In this case the path with the module stringDemos.py is C:\Users\gerd_2\code. If one wants to know which path names are actually known to the python system one can use a system call:

>>> import sys
>>> sys.path
>>> …

If the wanted path is not yet part of these given paths one can append the new path like this:

>>> sys.path.append(‘C:\\Users\\gerd_2\\code’)

If all this has been done correctly one can work with the program like before. The main advantages of this splitting into the main program and the supporting functions are (i) a greater transparency of the main code and (ii) that the supporting functions can now easily be used from other programs too if needed.
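
A minimal sketch of this splitting with a made-up supporting function (the real stringDemos.py contains functions like worder(), sqorder() and mcworder() instead):

# File stringDemos.py (hypothetical sketch of a module file)
def reverse_word(w):
    return w[::-1]                               # stand-in for a real supporting function

# File stringDemo2b.py (main file which uses the module)
import sys
sys.path.append('C:\\Users\\gerd_2\\code')       # only needed if the path is not yet known
import stringDemos as sd

print(sd.reverse_word('abc'))                    # note the prefix 'sd.'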

A next possible continuation you can find HERE.

 

STARTING WITH PYTHON3 – The very beginning – part 4

Journal: uffmm.org,
ISSN 2567-6458, July 15, 2019 – May 9, 2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email:
gerd@doeben-henisch.de

Change: July 16, 2019 (Some re-arrangement of the content :-))

CONTEXT

This is the next step in the python3 programming project. The overall context is still the python Co-Learning project.

SUBJECT

After a first clearing of the environment for python programming we have started with the structure of the python programming language, and in this section we will deal with the object type string.

Remark: the following information about strings you can get directly from the python manuals, which you can find associated with the entry for python 3.7.3 if you press the Windows button, look at the list of apps (= programs), and identify the entry for python 3.7.3. If you open the python entry by clicking you see the sub-entry python 3.7.3 Manuals. If you click on this sub-entry the python documentation will open. In this documentation you can find nearly everything you will need. For beginners there is even a nice tutorial.

TOPIC: VALUES (OBJECTS) AS STRINGS

PROBLEM(s)

(1) When I see a single word (a string of symbols) I do not know which type it has in python. (2) If I have a statement with many words I would like to partition it into the single words for further processing.

VISION OF A SOLUTION

There is a simple software actor which can receive as input either single words or multiple words and which can respond by giving either the type of the received word or the list of the received multiple words.

ACTOR STORY (AS)

We assume a human user as executing actor (eA) and a piece of running software as an assisting actor (aA). For both of these we assume the following sequence of states:

  1. The user will start the program by calling python and the name of the program.
  2. The program offers the user two options: single word or multiple words.
  3. The user has to select one of these options.
  4. After the selection the user can enter accordingly either one  or multiple words.
  5. The program will respond either with the recognized type in python or with a list of words.
  6. Finally the program asks the user whether he/she wants to continue or stop.
  7. Depending on the answer of the user the program will continue or stop.

IMPLEMENTATION

Here you can download the sourcecode: stringDemo1

# File stringDemo1.py
# Author: G.Doeben-Henisch
# First date: July 15, 2019

##################
# Function definition sword()

def sword(w1):
    w=str(w1)
    if w.islower():
        print('Is lower\n')
    elif w.isalpha() :
        print('Is alpha\n')
    elif w.isdecimal():
        print('Is decimal\n')
    elif w.isascii():
        print('Is ascii\n')
    else : print('Is not lower, alpha, decimal, ascii\n')

##########################
# Main Program

###############
# Start main loop

loop='Y'
while loop=='Y':

    ###################
    # Ask for Options

    opt=input('Single word =1 or multiple words =2\n')

    if opt=='1':
        w1=input('Input a single word\n')
        sword(w1) # Call for new function defined above

    elif opt=='2':
        w1=input('Input multiple words\n')
        w2=w1.split() # Call for built-in method of class str
        print(w2)

    loop=input('To stop enter N or Y otherwise\n') # Check whether loop shall be repeated

DEMO

Here it is assumed that the code of the python program is stored in the folder 'code' in my home directory.

I am starting the windows power shell (PS) by clicking on the icon. Then I enter the command 'cd code' to enter the folder code. Then I call the python interpreter together with the demo program 'stringDemo1.py':

PS C:\Users\gerd_2\code> python stringDemo1.py
Single word =1 or multiple words =2

Then I select the first option 'Single word' by entering 1:

1
Input a single word
Abrakadabra
Is alpha

To stop enter N

After entering 1 the program asks me to enter a single word.

I am entering the fantasy word ‘Abrakadabra’.

Then the program responds with the classification 'Is alpha', which is correct. If I want to stop I have to enter 'N', otherwise it continues.

I want to try another word, therefore I am entering 'Y':

Y
Single word =1 or multiple words =2

I select again '1' and the menu appears again:

1
Input a single word
29282726
Is decimal

To stop enter N

I entered a sequence of digits which has been classified as ‘decimal’.

I want to continue with 'Y' and then enter '2':

Y
Single word =1 or multiple words =2
2
Input multiple words
Hans kommt meistens zu spät
[‘Hans’, ‘kommt’, ‘meistens’, ‘zu’, ‘spät’]
To stop enter N

I have entered a German sentence with 5 words. The response of the system is to identify every single word and generate a list of the individual words.

Thus, so far, the test works fine.

COMMENTS TO THE SOURCE CODE

Before the main program a new function ‘sword()’ has been defined:

def sword(w1):

The python keyword 'def' indicates that here the definition of a function takes place, 'sword' is the name of this new function, and 'w1' is the input argument for this function. 'w1' as such is the name of a variable pointing to some memory place, and the value of this variable at this place will depend on the context.

w=str(w1)

The input variable w1 is taken by the operator str, and str translates the input value into a python object of type 'string'. Thus the further operations with this object can assume that it is a string, and therefore one can apply all the operations to the object which can be applied to strings.

if w.islower():

One of these string-specific operations is islower(). Attached to the string object 'w' by the dot operator '.', the operation islower() will check whether the string object 'w' consists of lower case symbols. If yes, then the following 'print()' operation will send this message to the output, otherwise the program continues with the next 'elif' statement.

The 'if' keyword (and, following the if, the 'elif' keyword) states a condition (whether 'w' is of type 'lower case symbols'). The statement closes with the ':' sign. This statement can be 'true' or not. If it is true then the part after the ':' sign will be executed (the 'print()' action); if false then the next condition 'elif … :' will be checked.

If no condition would be true then the ‘else: …’ statement would be executed.

The main program is organized as a loop which can iterate as long as the user does not stop it. This entails that the user can enter as many words or multi-words as he/ she wants.

loop=’Y’
while loop==’Y’:

In the first line the variable 'loop' receives as a value the string 'Y' (short for 'yes'). In the next line the loop starts with the python keyword 'while' forming a condition statement 'while … :'. This is similar to the condition statements above with 'if … :' and 'elif … :'.

The condition depends on the expression ‘loop == ‘Y” which means that  as long as the variable loop is logically equal == to the value ‘Y’ the loop condition  is ‘true’ and the part after the ‘:’ sign will be executed. Thus if one wants to break this loop one has to change the value of the variable ‘loop’ before the while-statement ‘while … :’ will be checked again. This check is done in the last line of the while-execution part with the input command:

loop=input(‘To stop enter N\n’)

Before the while-condition will be checked again there is this input() operator asking the user to enter an 'N' if he/she wants to stop. If the user enters an 'N' in the input line, the result of this input will be stored in the variable called 'loop' and therefore the variable will have the value 'N', which is different from 'Y'. But what would happen if the user enters something different from 'N' and 'Y', given that 'Y' is expected for repetition?

Because the user does not know that he/she has to enter 'Y' to continue, the program will very probably stop even if the user does not want to stop. To avoid this unwanted case one should change the code for the while-condition as follows:

while loop!=’N’:

This states that the loop condition will be true as long as the value of the loop variable is different (!=) from the value 'N', which will be explicitly asked from the user at the end of the loop.
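
A minimal sketch of the more robust loop:

loop='Y'
while loop!='N':
    print('... main part of the loop ...')
    loop=input('To stop enter N\n')   # any input except 'N' repeats the loop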

The main part of the while-loop distinguishes two cases: single word or multiple words. This is realized by a new input() operation:

opt=input(‘Single word =1 or multiple words =2\n’)

The user can enter a '1' or a '2', which will be stored in the variable 'opt'. Then a construction with an if and an elif will test which one of these two cases applies. Depending on the option 1 or 2 the program asks the user again with an input() operation for the specific input (one word or multiple words).

sword(w1)

In the case of the one-word input the variable 'w1' contains as value the entered string, which will be delivered as input argument to the new function 'sword()' (explanation see above). In the case of input 2 the

w2=w1.split()

'split()' operation will be applied to the object 'w1' by the dot operator '.'. This operation takes every word separated by a blank and generates a list '[ … ]' with the individual words as elements.

A next possible continuation you can find HERE.

 

PHILOSOPHY LAB

eJournal: uffmm.org

ISSN 2567-6458, July 13,  2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Changes: July 20.2019 (Rewriting the introduction)

CONTEXT

This Philosophy Lab section of the uffmm science blog is the latest extension of the uffmm blog, added in July 2019. It has been provoked by the meta-reflections about the AAI engineering approach.

SCOPE OF SECTION

This section deals with  the following topics:

  1. How can we talk about science including the scientist (and engineer!) as the main actors? In a certain sense one can say that science is mainly a specific way how to communicate and to verify the communication content. This presupposes that there is something called knowledge located in the heads of the actors.
  2. The presupposed knowledge usually is targeting different scopes encoded in different languages. The language enables or delimits meaning, and meaning objects can either enable or delimit a certain language. As part of society and as exemplars of the species homo sapiens, scientists participate in the main behavior tendencies to assimilate majority behavior and majority meanings. This can reduce the realm of knowledge in many ways. Biological life in general is the opposite of physical entropy by generating autopoietically, during the course of time, more and more complexity. This is due to a built-in creativity and the freedom to select. Thus life is always oscillating between conformity and experiment.
  3. The survival of modern societies depends highly on the ability to communicate with maximal sharing of experience by exploring fast and extensively possible state spaces with their pros and cons. Knowledge must be visible to all around the clock, computable, modular, constructive, in the format of interactive games with transparent rules. Machines should be re-formatted as primarily helping humans, not the other way around.
  4. To enable such new open and dynamic knowledge spaces one has to redefine computing machines extending the Turing machine (TM) concept to a  world machine (WM) concept which offers several new services for social groups, whole cities or countries. In the future there is no distinction between man and machine because there is a complete symbiotic unification because  the machines have become an integral part of a personality, the extension of the body in some new way; probably  far beyond the cyborg paradigm.
  5. The basic creativity and freedom of biological life has been further developed in a fundamental all embracing spirituality of life in the universe which is targeting a re-creation of the whole universe by using the universe for the universe.

 

DAAI V4 FRONTPAGE

eJournal: uffmm.org,
ISSN 2567-6458, 12.May – 18.Jan 2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

HISTORY OF THIS PAGE

See end of this page.

CONTEXT

This Theory of Engineering section is part of the uffmm science blog.

HISTORY OF THE (D)AAI-TEXT

See below

ACTUAL VERSION

DISTRIBUTED ACTOR ACTOR INTERACTION [DAAI]. Version 15.06, From  Dec 13, 2019 until Jan 18, 2020

aaicourse-15-06-07(PDF, Chapter 8 new (but not yet completed))

aaicourse-15-06-05(PDF, Chapter 7 new)

aaicourse-15-06-04(PDF, Chapter 6 modified)

aaicourse-15-06-03(PDF, Chapter 5 modified)

aaicourse-15-06-02(PDF, Chapter 4 modified)

aaicourse-15-06-01(PDF, Chapter 1 modified)

aaicourse-15-06 (PDF, chapters 1-6)

aaicourse-15-05-2 (PDF, chapters 1-6; chapter 6 only as a first stub)

DISTRIBUTED ACTOR ACTOR INTERACTION [DAAI]. Version 15.05.1, Dec 2, 2019:

aaicourse-15-05-1(PDF, chapters 1-5; minor corrections)

aaicourse-15-05 (PDF, chapters 1-5 of the new version 15.05)

Changes: Extension of title, extension of preface!, extension of chapter 4, new: chapter 5 MAS, extension of bibliography and indices.

HISTORY OF UPDATES

ACTOR ACTOR INTERACTION [AAI]. Version: June 17, 2019 – V.7: aaicourse-17june2019-incomplete

Change: June 19, 2019 (Update  to version 8; chapter 5 has been rewritten completely).

ACTOR ACTOR INTERACTION [AAI]. Version: June 19, 2019 – V.8: aaicourse-june 19-2019-v8-incomplete

Change: June 19, 2019 (Update to version 8.1; minor corrections in chapter 5)

ACTOR ACTOR INTERACTION [AAI]. Version: June 19, 2019 – V.8.1: aaicourse-june19-2019-v8.1-incomplete

Change: June 23, 2019 (Update to version 9; adding chapter 6 (Dynamic AS) and chapter 7 (Example of dynamic AS with two actors)

ACTOR ACTOR INTERACTION [AAI]. Version: June 23, 2019 – V.9: aaicourse-June-23-2019-V9-incomplete

Change: June 25, 2019 (Update to version 9.1; minor corrections in chapters 1+2)

ACTOR ACTOR INTERACTION [AAI]. Version: June 25, 2019 – V.9.1: aaicourse-June25-2019-V9-1-incomplete

Change: June 29, 2019 (Update to version 10; rewriting of chapter 4 Actor Story on account of changes in the chapters 5-7)

ACTOR ACTOR INTERACTION [AAI]. Version: June 29, 2019 – V.10: aaicourse-June-29-2019-V10-incomplete

Change: June 30, 2019 (Update to version 11; completing chapter 3 Problem Definition)

ACTOR ACTOR INTERACTION [AAI]. Version: June 30, 2019 – V.11: aaicourse-June30-2019-V11-incomplete

Change: June 30, 2019 (Update to version 12; new chapter 5 for normative actor stories (NAS) Problem Definition)

ACTOR ACTOR INTERACTION [AAI]. Version: June 30, 2019 – V.12: aaicourse-June30-2019-V12-incomplete

Change: June 30, 2019 (Update to version 13; extending chapter 9 with the section about usability testing)

ACTOR ACTOR INTERACTION [AAI]. Version: June 30, 2019 – V.13: aaicourse-June30-2019-V13-incomplete

Change: July 8, 2019 (Update to version 13.1; some more references to chapter 4; formatting the bibliography alphabetically)

ACTOR ACTOR INTERACTION [AAI]. Version: July 8, 2019 – V.13.1: aaicourse-July8-2019-V13.1-incomplete

Change: July 15, 2019 (Update to version 13.3) (In chapter 9 Testing an AS, extending the description of Usability Testing with more concrete details of the test procedure)

ACTOR ACTOR INTERACTION [AAI]. Version: July 15, 2019 – V.13.3: aaicourse-13-3

Change: Aug 7, 2019 (Only some minor changes in Chapt. 1 Introduction, pp.15ff, but these changes make clear, that the scope of the AAI analysis can go far beyond the normal analysis. An AAI analysis without explicit actor models (AMs) corresponds to the analysis phase of a systems engineering process (SEP), but an AAI analysis including explicit actor models will cover 50 – 90% of the (logical) design phase too. How much exactly could only be answered if  there would exist an elaborated formal SEP theory with quantifications, but there exists  no such theory. The quantification here is an estimate.)

ACTOR ACTOR INTERACTION [AAI]. Version: Aug 7, 2019 – V.14:aaicourse-14

ACTOR ACTOR INTERACTION [AAI]. Version 15, Nov 9, 2019:

aaicourse-15(PDF, 1st chapter of the new version 15)

ACTOR ACTOR INTERACTION [AAI]. Version 15.01, Nov 11, 2019:

aaicourse-15-01 (PDF, 1st chapter of the new version 15.01)

ACTOR ACTOR INTERACTION [AAI]. Version 15.02, Nov 11, 2019:

aaicourse-15-02 (PDF, 1st chapter of the new version 15.02)

ACTOR ACTOR INTERACTION [AAI]. Version 15.03, Nov 13, 2019:

aaicourse-15-03 (PDF, 1st chapter of the new version 15.03)

ACTOR ACTOR INTERACTION [AAI]. Version 15.04, Nov 19, 2019:

(update of chapter 3, new created chapter 4)

aaicourse-15-04 (PDF, chapters 1-4 of the new version 15.04)

HISTORY OF CHANGES OF THIS PAGE

Change: May 20, 2019 (Stopping Circulating Acronyms :-))

Change: May 21,  2019 (Adding the Slavery-Empowerment topic)

Change: May 26, 2019 (Improving the general introduction of this first page)

HISTORY OF AAI-TEXT

After a previous post on the new AAI approach I started the first re-formulation of the general framework of the AAI theory, which was later replaced by a more advanced AAI version V2. But even this version became a change candidate and mutated into the Actor-Cognition Interaction (ACI) paradigm, which still was not the endpoint. Then new arguments emerged to talk rather of Augmented Collective Intelligence (ACI). Because even this view of the subject can change again, I stopped following the different aspects of the general Actor-Actor Interaction paradigm and decided to keep the general AAI paradigm as the main attractor capable of several more specialized readings.

ACI – TWO DIFFERENT READINGS

eJournal: uffmm.org
ISSN 2567-6458, 11.-12.May 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de
Change: May-17, 2019 (Some Corrections, ACI associations)
Change: May-20, 2019 (Reframing ACI with AAI)
CONTEXT

This text is part of the larger text dealing with the Actor-Actor Interaction (AAI)  paradigm.

HCI – HMI – AAI ==> ACI ?

Who has followed the discussion in this blog remembers several different phases in the conceptual frameworks used here.

The first paradigm, called Human-Computer Interface (HCI), has been mentioned only for historical reasons. The next phase, Human-Machine Interaction (HMI), was the main paradigm at the beginning of my lecturing in 2005. Later, around 2011/2012, I switched to the paradigm Actor-Actor Interaction (AAI) because I tried to generalize over the different participating machines, robots, smart interfaces, humans as well as animals. This worked quite nicely and for some time I thought that this is now the final formula. But reality is often different from our thinking. Many occasions showed up where the generalization beyond the human actor seemed to hide the real processes which are going on; especially I got the impression that very important factors rooted in the special human actor became invisible although they are playing a decisive role in many processes. Another punch against the AAI view came from application scenarios during the last year when I started to deal with whole cities as actors. In the end I got the feeling that the more specialized expressions like Actor-Cognition Interaction (ACI) or Augmented Collective Intelligence (ACI) can indeed help to stress certain special properties better than the more abstract AAI acronym, but when using structures like ACI within general theories and within complex computing environments it became clear that the more abstract acronym AAI is in the end more versatile and simplifies the general structures. ACI became a special sub-case.

HISTORY

To understand this oscillation between AAI and ACI one has to look back into the history of Human Computer/Machine Interaction, not only back to the end of World War II, but into the more extended evolutionary history of mankind on this planet.

It is a widespread opinion among researchers that the development of tools to help master material processes was one of the outstanding events which changed the path of evolution a lot. A next step was the development of tools to support human cognition like scripture, numbers, mathematics, books, libraries etc. In this last case of cognitive tools the material of the tools was not the primary subject of the processes but the cognitive contents, structures, and even processes encoded by the material structures of the tools.

Only slowly did mankind understand how the cognitive abilities and capabilities are rooted in the body, in the brain, and that the brain represents a rather complex biological machinery which enables a huge amount of cognitive functions, often interacting with each other; these cognitive functions show, in the light of observable behavior, clear limits with regard to the amount of features which can be processed in some time interval, with regard to precision, with regard to working interconnections, and more. And therefore it has been understood that the different kinds of cognitive tools are very important to support human thinking and to reinforce it in some ways.

Only in the 20th century was mankind able to build a cognitive tool called computer which could show capabilities that resembled some human cognitive capabilities and which even surpassed human capabilities in some limited areas. Since then these machines have developed a lot (not by themselves but by the thinking and the engineering of humans!), and meanwhile the number and variety of capabilities where the computer seems to resemble a human person or to surpass human capabilities have extended in a way that it has become common slang to talk about intelligent machines or smart devices.

While the original intention for the development of computers was to improve the cognitive tools with the intent to support human beings, one can today get the impression as if the computer has turned into a goal of its own: the intelligent and then — as supposed — the super-intelligent computer appears now as the primary goal, and mankind appears as some old relic which has to be surpassed soon.

As will be shown later in this text, this vision of the computer surpassing mankind rests on some assumptions which are questionable.

What seems possible and what seems to be a promising roadmap into the future is a continuous step-wise enhancement of the biological structure of mankind which absorbs the modern computing technology by new cognitive interfaces which in turn presuppose new types of physical interfaces.

To give a precise definition of these new upcoming structures and functions is not yet possible, but to identify the actual driving factors as well as the exciting combinations of factors seems possible.

COGNITION EMBEDDED IN MATTER
Actor-Cognition Interaction (ACI): A simple outline of the whole paradigm
Cognition within the Actor-Actor Interaction (AAI)  paradigm: A simple outline of the whole paradigm

The main idea is the shift of focus away from the physical grounding of the interaction between actors, looking instead more to the cognitive contents and processes which shall be mediated by the physical conditions. Clearly the analysis of the physical conditions as well as the optimal design of these physical conditions is still a challenge and a task, but without clear knowledge, manifested in a clear model, about the intended cognitive contents and processes one does not have enough knowledge for the design of the physical layout.

SOLVING A PROBLEM

Thus the starting point of an engineering process is a group of people (the stakeholders (SH)) which identify some problem (P) in their environment and which have some minimal idea of a possible solution (S) for this problem. This can be accompanied by some non-functional requirements (NFRs) articulating some more general properties which shall hold throughout the whole solution (e.g. 'being safe', 'being barrier-free', 'being real-time' etc.). If the description of the problem with a first intended solution including the NFRs contains at least one task (T) to be solved, minimal intended users (U) (here called executive actors (eA)), minimal intended assistive actors (aA) to assist the user in doing the task, as well as a description of the environment of the task to do, then the minimal ACI check can be passed and the ACI analysis process can be started.

COGNITION AND AUGMENTED COLLECTIVE INTELLIGENCE

If we talk about cognition then we usually think about cognitive processes in an individual person. But in the real world there is no cognition without an ongoing exchange between different individuals by communicative acts. Furthermore it has to be taken into account that the cognition of an individual person is in itself partitioned into two unequal parts: the unconscious part, which covers about 99% of all the processes in the body and in the brain, and the conscious part, which covers about 1%. For an individual person to consciously think something, this person has to trigger his own unconsciousness by stimuli to respond with some messages from his previously unknown knowledge. Thus even an individual person alone has to organize a communication with his own unconsciousness to be able to have some conscious knowledge about his own unconscious knowledge. And because no individual person has at a certain point of time a clear knowledge of his unconscious knowledge, the person does not even really know what to look for — if there is no event, no perception, no question and the like which triggers the person to interact with his unconscious knowledge (and experience) to get some messages from this unconscious machinery, which — as it seems — is working all the time.

On account of this logic of the individual internal communication with the individual cognition, an external communication with the world and with the manifested cognition of other persons appears as a possible enrichment through the interactions with the knowledge distributed over the different persons. While in the following approach it is assumed that the different knowledge responses are represented in a common symbolic representation viewable (and hearable) by all participating persons, there grows up a possible picture of something which is generally richer, having more facets than a picture generated by an individual person alone. Furthermore, such a procedure can help all participants to synchronize their different knowledge fragments in a bigger picture and use it further on as their own picture, which in turn can trigger even more aspects out of the distributed unconscious knowledge.

If one organizes this collective triggering of distributed unconscious knowledge within a communication process not only by static symbolic models but beyond this with dynamic rules for changes, which can be interactively simulated or even played with defined states of interest then the effect of expanding the explicit and shared knowledge will be boosted even more.

From this background it makes sense to turn the wording Actor-Cognition Interaction into the wording Augmented Collective Intelligence, where Intelligence is the component of dynamic cognition in a system — here a human person —, Collective means that different individual persons are sharing their unconscious knowledge by communicative interactions, and Augmented can be interpreted as enhancing and extending this sharing of knowledge by using new tools of modeling, simulation and gaming, which expands and intensifies the individual learning as well as the commonly shared opinions. For nearly all problems today this appears to be absolutely necessary.

ACI ANALYSIS PROCESS

Here it will be assumed that there exists a group of ACI experts which can supervise  other actors (stakeholders, domain experts, …) in a process to analyze the problem P with the explicit goal of finding a satisfying solution (S+).

For the whole ACI analysis process an appropriate ACI software should be available to support the ACI experts as well as all the other domain experts.

In this ACI analysis process one can distinguish two main phases: (1) Construct an actor story (AS) which describes all intended states and intended changes within the actor story. (2) Make several tests of the actor story to exploit its explanatory power.

ACTOR STORY (AS)

The actor story describes all possible states (S) of the tasks (T) which have to be realized to reach the intended goal states (S+). A mapping from one state to a follow-up state is described by a change rule (X). Thus, having a start state (S0) and appropriate change rules, one can construct the follow-up states from the actual state (S*) with the aid of the change rules. Formally the computation of the follow-up state (S') is done by a simulator function (σ), written as: σ: S* x X —> S.
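
A minimal sketch in python of such a simulator function, under the simplifying assumption that a state is a set of facts and a change rule is a triple (condition, facts to remove, facts to add); this is only an illustration, not the official DAAI formalism:

def simulator(state, change_rules):
    # Compute the follow-up state S' from the actual state S* and the change rules X
    new_state = set(state)
    for condition, remove, add in change_rules:
        if condition <= new_state:                 # a rule fires only if its condition holds
            new_state = (new_state - remove) | add
    return new_state

S0 = {'light is off'}
X  = [({'light is off'}, {'light is off'}, {'light is on'})]
print(simulator(S0, X))                            # {'light is on'}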

SEVERAL TESTS

With the aid of an explicit actor story (AS) one can define the non-functional requirements (NFRs) in a way that it will become decidable whether  a NFR is valid with regard to an actor story or not. In this case this test of being valid can be done as an automated verification process (AVP). Part of this test paradigm is the so-called oracle function (OF) where one can pose a question to the system and the system will answer the question with regard to all theoretically possible states without the necessity to run a (passive) simulation.

If the size of the group is large and it is important that all members of the group have sufficiently similar knowledge about the problem(s) in question (as is the usual case in a city with different kinds of citizens), then it can be very helpful to enable interactive simulations or even games, which allow a more direct experience of the possible states and changes. Furthermore, because the participants can act according to their individual reflections and goals, the process becomes highly uncertain and nearly unpredictable. Especially for these highly unpredictable processes interactive simulations (and games) can help to improve a common understanding of the involved factors and their effects. The difference between a normal interactive simulation and a game is that a game has explicit win-states whereas the interactive simulation doesn't. Explicit win-states can improve learning a lot.

The other interesting question is whether an actor story AS with a certain idea for an assistive actor (aA) is usable for the executive actors. This requires explicit measurements of usability, which in turn require a clear norm of reference with which the behavior of an executive actor (eA) during a process can be compared. Usually the actor story as such is the norm of reference with which the observable behavior of the executing actors will be compared. Thus for the measurement one needs real executive actors which represent the intended executive actors, and one needs a physical realization of the intended assistive actor called a mock-up. A mock-up is not yet the final implementation of the intended assistive actor but a physical entity which can show all important physical properties of the intended assistive actor in a way which allows a real test run. While in the past it has been assumed to be sufficient to test a test person only once, it is assumed here that a test person has to be tested at least three times. This follows from the assumption that every executive (biological) actor is inherently a learning system. This implies that the test person will behave differently in different tests. The degree of change can be a hint of the easiness and the learnability of the assistive actor.

COLLECTIVE MEMORY

If an appropriate ACI software is available then one can consider an actor story as a simple theory (ST) embracing a model (M) and a collection of rules (R) — ST(x) iff x = <M,R> — which can be used as a kind of building block which in turn can be combined with other such building blocks, resulting in a complex network of simple theories. If these simple theories are stored in a publicly available database (like a library of theories) then one can build up over time a large knowledge base of one's own.
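
A minimal, purely illustrative sketch of how such a simple theory <M,R> could be stored as a building block (the names are assumptions, not part of any existing ACI software):

from dataclasses import dataclass

@dataclass
class SimpleTheory:
    model: set      # M: a set of facts describing a situation
    rules: list     # R: a collection of change rules

library = []        # a public 'library of theories' could simply collect such blocks
library.append(SimpleTheory(model={'light is off'},
                            rules=[({'light is off'}, {'light is off'}, {'light is on'})]))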

 

 

AAI-THEORY V2 – BLUEPRINT: Bottom-up

eJournal: uffmm.org,
ISSN 2567-6458, 27.February 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: 28.February 2019 (Several corrections)

CONTEXT

You can find an overview of the enhanced AAI theory version 2 here. In this post we talk about the special topic of how to proceed in a bottom-up approach.

BOTTOM-UP: THE GENERAL BLUEPRINT
Figure 1: Outline of the process how to generate an AS with a bottom-up approach

As the introductory figure shows it is assumed here that there is a collection of citizens and experts which offer their individual knowledge, experiences, and skills to ‘put them on the table’ challenged by a given problem P.

This knowledge is in the beginning not structured. The first step in the direction of an actor story (AS) is to analyze the different contributions in a way which shows distinguishable elements with properties and relations. Such a set of first ‘objects’ and ‘relations’ characterizes a set of facts which define a ‘situation’ or a ‘state’ as a collection of ‘facts’. Such a situation/ state can also be understood as a first simple ‘model‘ as response to a given problem. A model is as such ‘static‘; it describes what ‘is’ at a certain point of ‘time’.

In a next step the group has to identify possible 'changes' which can be associated with at least one fact. There can be many possible changes which possibly need different durations to come into effect. These effects can happen as 'exclusive alternatives' or in 'parallel'. Applying the possible changes to a situation generates 'successors' of the actual situation. A sequence of situations generated by applied changes is usually called a 'simulation'.

If one allows the interaction between real actors with a simulation by associating  a real actor to one of the actors ‘inside the simulation’ one is turning the simulation into an ‘interactive simulation‘ which represents basically a ‘computer game‘ (short: ‘egame‘).

One can use interactive simulations e.g. to (i) learn about the dynamics of a model, to (ii) test the assumptions of a model, to (iii) test the knowledge and skills of the real actors.

Making new experiences with a  simulation allows a continuous improvement of the model and its change rules.

Additionally one can include more citizens and experts into this process and one can use available knowledge from databases and libraries.

EPISTEMOLOGY OF CONCEPTS
Fig.2: Epistemology of concepts used in an AAI Analysis process

As outlined in the preceding section about the blueprint of a bottom-up process, there will be a heavy usage of concepts to describe states of affairs.

The literature about this topic in philosophy as well as in many scientific disciplines is overwhelming, and therefore this small text here can only be a 'pointer' into a complex topic. Nevertheless I will use exactly this pointer to explore this topic further.

While the literature is mainly dealing with more or less specific partial models, I am trying here to point out a very general framework which fits a more general philosophical — especially epistemological — view as well as respects many results of scientific disciplines.

The main dimensions here are (i) the outside external empirical world, which connects via sensors to (ii) the internal body, especially the brain, which works largely 'unconsciously', and then (iii) the 'conscious' part of the brain.

The most important relationship between the 'conscious' and the 'unconscious' part of the brain is the ability of the unconscious brain to transform automatically incoming concrete sense experiences into more 'abstract' structures, which have at least three sub-dimensions: (i) different concrete material, (ii) a sub-set of extracted common properties, (iii) different sets of occurring contexts associated with the different subsets. This enables the brain to extract only a 'few' abstract structures (= abstract concepts) to deal with 'many' concrete events. Thus the abstract concept 'chair' can cover many different concrete chairs which have only a few properties in common. Additionally the chairs can occur in different 'contexts' associating them with different 'relations' which can specify possible different 'usages' of the concept 'chair'.

Thus, if the actor perceives something which 'matches' some 'known' concept, then the actor is not only conscious of the empirical concrete phenomenon but also simultaneously of the abstract concept which will automatically be activated. 'Immediately' the actor 'knows' that this empirical something is e.g. a 'chair'. Concretely: this concrete something matches an abstract concept 'chair', which as such can cover many other concrete things too, each of which can be partially different from another concrete something.

From this follows an interesting side effect: while an actor can easily decide whether a concrete something is there ('it is the case, that' = 'it is true') or not ('it is not the case, that' = 'it is not true' = 'it is false'), an actor cannot directly decide whether an abstract concept like 'chair' as such is 'true' in the sense that the concept 'as a whole' corresponds to concrete empirical occurrences. This depends on the fact that an abstract concept like 'chair' can match a nearly infinite set of possible concrete somethings, which are called 'possible instances' of the abstract concept. But a human actor can directly 'check' only a 'few' concrete somethings. Therefore the usage of abstract concepts like 'chair', 'house', 'bottle' etc. inherently implies an 'open set' of 'possible' concrete exemplars, and therefore the usage of such concepts is necessarily a 'hypothetical' usage. While we can 'in principle' check the real extensions of these abstract concepts in everyday life as long as there is the 'freedom' to do such checks, we are losing the 'truth' of our concepts, and thereby the basis for a realistic cooperation, if this 'freedom of checking' is not possible.

If some incoming perception is 'not yet known', because nothing given in the unconsciousness does 'match', it is in a basic sense 'new' and the brain will automatically generate a 'new concept'.

THE DIMENSION OF MEANING

In Figure 2 one can find two other components: the 'language expressions' and the 'meaning relation' which maps concepts onto these 'language expressions'.

Language expressions inside the brain correspond to a diversity of visual, auditory, tactile or other empirical event sequences, which are in use for communicative acts.

These language expressions are usually not ‘isolated structures’ but are embedded in relations which map the expression structures to conceptual structures including  the different substantiations of the abstract concepts and the associated contexts. By these relations the expressions are attached to the conceptual structures which are called the ‘meaning‘ of the expressions and vice versa the expressions are called the ‘language articulation’ of the meaning structures.

Insofar as conceptual structures are related via meaning relations to language expressions, a perception can automatically cause the 'activation' of the associated language expressions, which in turn can be uttered in some way. But conceptual structures can exist (especially with children) without an available meaning relation.

When language expressions are used within a communicative act, their usage can activate in all participants of the communication the 'learned' concepts as their intended meanings. Having the meaning activated in someone's 'consciousness', this is a real phenomenon for that actor. But from the occurrence of concepts alone it does not automatically follow that a concept is 'backed up' by some 'real matter' in the external world. Someone can utter that it is raining, and in the hearer of this utterance the intended concepts can become activated, but in the outside external world no rain is happening. In this case one has to state that the utterance of the language expression 'Look, it's raining' has no counterpart in the real world; therefore we call the utterance in this case 'false' or 'not true'.

THE DIMENSION OF TIME
Fig.3: The dimension of time based on past experience and combinatoric thinking

The preceding figure 2 of the conceptual space is not yet complete. There is another important dimension based on the ability of the unconscious brain to 'store' certain structures in a 'temporal order', which enables an actor — under certain conditions! — to decide whether a certain structure X occurred in consciousness 'before' or 'after' or 'at the same time' as another structure Y.

Evidently the unconscious brain is able to do exactly this: (i) it can arrange the different structures under certain conditions in a 'temporal order'; (ii) it can detect 'differences' between temporally succeeding structures; (iii) it can conceptualize these changes as 'change concepts' ('rules of change'), and it can classify different kinds of change like 'deterministic', 'non-deterministic' with different kinds of probabilities, as well as 'arbitrary' as in the case of 'free learning systems'. Free learning systems are able to behave in a 'deterministic-like manner', but they can also change their patterns on account of internal learning and decision processes in nearly any direction.

Based on memories of conceptual structures and derived change concepts (rules of change) the unconscious brain is able to generate different kinds of 'possible configurations', whose quality depends on the degree of dependencies within the 'generating criteria': (i) no special restrictions; (ii) empirical restrictions; (iii) empirical restrictions for 'upcoming states' (if all drinkable water were consumed, then one could not plan any further with drinkable water).