Category Archives: vision statement

OKSIMO APPLICATIONS – Simple Examples – Citizens of a County

eJournal: uffmm.org ISSN 2567-6458

March 27, 2022
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

BLOG-CONTEXT

This post is part of the Oksimo Application theme which is part of the uffmm blog.

PREFACE

This post shows a simple simulation example with the beta version of the new Version 2 of the oksimo programming environment. This example is intended to illustrate the concept of an ‘Everyday Empirical Theory’ as described in this blog eleven days earlier. It is intentionally kept as simple as possible. More examples will probably follow.

FROM THEORY TO AN APPLICATION

There are many possible formats for applying a theory concept to the everyday world. This text shows what such an application looks like when one uses the oksimo programming environment. Until now only a German blog (oksimo.org) has described the oksimo paradigm in some detail, and the examples there were written with oksimo version 1, which did not allow the use of math. In version 2 this is possible, accompanied by some visual graph features.

Everyday Experts – Basic Ideas

This figure shows a simple outline of the basic assumptions of the oksimo programming environment constituting the oksimo paradigm: (i) Every human person is assumed to be a ‘natural expert’, being a member of a bigger population which shares the same ‘everyday language’, including basic math. (ii) An actor is embedded in some empirical environment, including its own body and other human actors. (iii) Human actors are capable of elaborating, as inner states, different kinds of ‘mental (cognitive) models’ based on their experience of the environment and their own body. (iv) Human actors are further capable of using symbolic languages to ‘represent’ properties of these mental models, encoded in symbolic expressions. Such symbolic encoding presupposes an ‘inner meaning function’ which has to be learned. (v) In the oksimo programming environment one needs two kinds of symbolic expressions for the description of a ‘given state’: (v.1) language expressions to describe general properties and relations which are assumed to be ‘given’ (= ‘valid by experience’); (v.2) language expressions to name concrete quantitative properties (simple math expressions).

This figure shows the idea of how to change a given state (situation) by so-called ‘change rules’. A change rule encodes experience from the everyday world about the conditions under which some properties of a given situation S can be ‘changed’ in such a way that a ‘new situation’ S* comes into being. Generally, a given state can change if a language expression is either ‘deleted’ from the description or ‘added’ to it. Another possibility is realized if one of the given quantitative expressions changes its value. In the above simple situation the only change happens by changing the number of citizens through some growth effect. But, as other examples will demonstrate, everything that is possible in the empirical world can be modeled.

SOME MORE FEATURES

The basic schema of the oksimo paradigm assumes the following components:

  1. The description of a ‘given situation’ as a ‘start state’.
  2. The description of a ‘vision’ functioning as a ‘goal’, which allows a basic ‘benchmarking’.
  3. A list of ‘change rules’ which describe the assumed possible changes.
  4. An ‘inference engine’ called ‘simulator’: depending on the number of wanted ‘simulation cycles’ (‘inferences’), the simulator applies the change rules to a given state S, thereby producing a ‘follow-up state’ S*, which becomes the new given state. The series of generated states represents the ‘history’ of a simulation. Every follow-up state is an ‘inference’ and, by definition, also a ‘forecast’.

All these features (1) – (4) together constitute a full empirical theory in the sense of the theory post mentioned before.
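To make this schema more concrete, here is a minimal sketch in Python of how such a simulation cycle could be represented. All names and structures (State, ChangeRule, simulate) are illustrative assumptions of this sketch, not the actual oksimo implementation: a state is reduced to a set of language expressions plus a dictionary of named quantities, and a change rule is a pair of a condition function and an effect function.

# Minimal sketch of an oksimo-style simulation cycle (illustrative only,
# not the actual oksimo code). A state is reduced to a set of language
# expressions plus a dictionary of named quantities; a change rule maps
# a given state S to a follow-up state S*.

from dataclasses import dataclass
from typing import Callable

@dataclass
class State:
    expressions: set[str]       # language expressions assumed to be 'given'
    values: dict[str, float]    # named quantities (simple math expressions)

@dataclass
class ChangeRule:
    name: str
    condition: Callable[[State], bool]   # is the rule applicable to S?
    effect: Callable[[State], State]     # produce the follow-up state S*

def simulate(start: State, rules: list[ChangeRule], cycles: int) -> list[State]:
    """Apply every applicable rule once per cycle and collect the history of
    generated states; every follow-up state is an 'inference' (a 'forecast')."""
    history = [start]
    state = start
    for _ in range(cycles):
        for rule in rules:
            if rule.condition(state):
                state = rule.effect(state)
        history.append(state)
    return history

The real simulation below can be read against this sketch: the start state, the vision and the single change rule of the example map directly onto these three ingredients.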

Let us look at a real simulation.

A REAL SIMULATION

The following example has been run with Oksimo v2.0 (Pre-Release) (353e5). Hopefully we can turn the pre-release into a full release within the next few weeks.

A VISION

Name: v2026

Expressions:

The Main-Kinzig County exists.

Math expressions:

YEAR>2025 and YEAR<2027

This simple goal assumes the existence of the Main-Kinzig County for the year 2026.

GIVEN START STATE

Name: StartSimple1

Expressions:

The Main-Kinzig County exists.

The number of citizens is known.

Comparing the number of different years one has computed a growth rate.

Math expressions:

YEAR=2018Number

CITIZENS=418950Amount

GROWTH=0.0023Percentage

The start state makes some simple statements which are assumed to be ‘valid’ in a ‘real given situation’ by the participating natural experts.
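Written with the illustrative State structure sketched above (again an assumption of this sketch, not the oksimo input format), the vision and the start state of this example look as follows:

# The vision and the start state of the example, expressed with the
# illustrative State structure from the sketch above (not the oksimo format).

vision = State(
    expressions={"The Main-Kinzig County exists."},
    values={},  # the math part of the vision is the condition YEAR>2025 and YEAR<2027
)

def vision_v2026_math(state: State) -> bool:
    """Math condition of the vision v2026."""
    return 2025 < state.values["YEAR"] < 2027

start = State(
    expressions={
        "The Main-Kinzig County exists.",
        "The number of citizens is known.",
        "Comparing the number of different years one has computed a growth rate.",
    },
    values={"YEAR": 2018, "CITIZENS": 418950, "GROWTH": 0.0023},
)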

CHANGE RULES

In this example there is only one change rule (in principle there can be as many change rules as wanted).

Rule name: Growth1

Probability: 1.0

Conditions:

The Main-Kinzig County exists.

Math conditions:

CITIZENS < 450000

Effects plus:

Effects minus:

Effects math:

CITIZENS=CITIZENS+(CITIZENS*GROWTH)

YEAR=YEAR+1

This change rule is rather simple. It checks only whether the Main-Kinzig County exists and whether the number of citizens is still below 450000. If this is the case, then the year is incremented and the number of citizens is increased according to an extremely simple formula.
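Continuing the sketch from above, the Growth1 rule and a short run of nine cycles could be written as follows (again purely illustrative; the probability of 1.0 simply means the rule fires whenever its conditions hold). The numbers printed for rounds 7 to 9 can be checked against the quick log below.

# The Growth1 rule in the illustrative sketch from above.

def growth1_condition(s: State) -> bool:
    return ("The Main-Kinzig County exists." in s.expressions
            and s.values["CITIZENS"] < 450000)

def growth1_effect(s: State) -> State:
    v = dict(s.values)
    v["CITIZENS"] = v["CITIZENS"] + v["CITIZENS"] * v["GROWTH"]
    v["YEAR"] = v["YEAR"] + 1
    return State(expressions=set(s.expressions), values=v)

growth1 = ChangeRule("Growth1", growth1_condition, growth1_effect)

history = simulate(start, [growth1], cycles=9)
for state in history[7:10]:                     # rounds 7, 8, 9
    print(state.values["YEAR"], state.values["CITIZENS"])
# 2025 425741.81..., 2026 426721.02..., 2027 427702.47...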

For every named quantity in this simulation (YEAR, GROWTH, CITIZENS) the values are collected for every simulation cycle and can therefore be used for evaluations. In this simple case only the quantities YEAR and CITIZENS have changed:

Simple linear graph for the quantity named YEAR
Simple linear graph for the quantity named CITIZENS

Here is the quick log of simulation rounds 7 – 9:

Round 7

State rules:
Vision rules:
Current states: The number of citizens is known.,Comparing the number of different years one has computed a growth rate.,The Main-Kinzig County exists.
Current visions: The Main-Kinzig County exists.
Current values:
YEAR: 2025Number
CITIZENS: 425741.8149741673Amount
GROWTH: 0.0023Percentage

50.00 percent of your vision was achieved by reaching the following states:
The Main-Kinzig County exists.,
And the following math visions:
None

Round 8

State rules:
Vision rules:
Current states: The number of citizens is known.,Comparing the number of different years one has computed a growth rate.,The Main-Kinzig County exists.
Current visions: The Main-Kinzig County exists.
Current values:
YEAR: 2026Number
CITIZENS: 426721.0211486079Amount
GROWTH: 0.0023Percentage

100.00 percent of your vision was achieved by reaching the following states:
The Main-Kinzig County exists.,
And the following math visions:
YEAR>2025 and YEAR<2027,

Round 9

State rules:
Vision rules:
Current states: The number of citizens is known.,Comparing the number of different years one has computed a growth rate.,The Main-Kinzig County exists.
Current visions: The Main-Kinzig County exists.
Current values:
YEAR: 2027Number
CITIZENS: 427702.4794972497Amount
GROWTH: 0.0023Percentage

50.00 percent of your vision was achieved by reaching the following states:
The Main-Kinzig County exists.,
And the following math visions:
None

In round 8 one can see that the simulation announces:

100.00 percent of your vision was achieved by reaching the following states: The Main-Kinzig County exists., And the following math visions: YEAR>2025 and YEAR<2027

From this the natural expert can conclude that the requirements given in the vision are ‘fulfilled’/‘satisfied’.
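One plausible reading of the announced percentage (an assumption of this sketch, not the documented oksimo formula) is the share of vision components, language expressions plus math conditions, that are satisfied in the current state. Continuing the sketch from above:

# Benchmarking sketch (assumed reading, not the documented oksimo formula):
# the percentage of vision components satisfied in the current state.

def vision_achievement(state: State, vision: State, math_conditions: list) -> float:
    satisfied = sum(1 for e in vision.expressions if e in state.expressions)
    satisfied += sum(1 for cond in math_conditions if cond(state))
    total = len(vision.expressions) + len(math_conditions)
    return 100.0 * satisfied / total

print(vision_achievement(history[7], vision, [vision_v2026_math]))  # 50.0
print(vision_achievement(history[8], vision, [vision_v2026_math]))  # 100.0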

WHAT COMES NEXT?

More examples will follow in a loose order. Here you find the next one.

HMI Analysis for the CM:MI paradigm. Part 2. Problem and Vision

Integrating Engineering and the Human Factor (info@uffmm.org)
eJournal uffmm.org ISSN 2567-6458, February 27-March 16, 2021,
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: March 16, 2021 (minor corrections)

HISTORY

As described in the uffmm eJournal, the wider context of this software project is an integrated engineering theory called Distributed Actor-Actor Interaction [DAAI], further extended to the Collective Man-Machine Intelligence [CM:MI] paradigm. This document is part of the Case Studies section.

HMI ANALYSIS, Part 2: Problem & Vision

Context

This text is preceded by the following texts:

Introduction

Before one starts the HMI analysis, some stakeholders (in our case the users act both as stakeholders and as users in one role) have to present some given situation, classifiable as a ‘problem’, to depart from, and a vision as the envisioned goal to be realized.

Here we give a short description of the problem for the CM:MI paradigm and of the vision of what should be gained.

Problem: Mankind on the Planet Earth

In this project mankind on the planet earth is understood as the primary problem. ‘Mankind’ is seen here as the life form called homo sapiens. Based on the findings of biological evolution one can state that homo sapiens has, besides many other wonderful capabilities, at least two extraordinary capabilities:

Outside to Inside

The whole body with the brain is able to continuously convert body-external events into internal, neural events. And the brain inside the body also receives many events inside the body as external events. Thus in the brain we can observe a mix of body-external (outside 1) and body-internal events (outside 2), realized as a set of billions of highly interrelated neural processes. Most of these neural processes are unconscious, a small part is conscious. Nevertheless these unconscious and conscious events are neurally interrelated. This overall conversion from outside 1 and outside 2 into neural processes can be seen as a mapping. As we know today from biology, psychology and the brain sciences, this mapping is not a 1-1 mapping. The brain does a kind of filtering all the time, mostly unconsciously, sorting out only those events which are judged by the brain to be important. Furthermore the brain is time-slicing all its sensory inputs and storing these time-slices (called ‘memories’), whereby these time-slices again are no 1-1 copies. The storing of time-slices is a complex (unconscious) process with many kinds of operations like structuring, associating, abstracting, evaluating, and more. From this one can deduce that the content of an individual brain and the surrounding reality of the own body, as well as the world outside the own body, can be highly different. All kinds of perceived and stored neural events which can be or can become conscious are here called conscious cognitive substrates or cognitive objects.

Inside to Outside (to Inside)

Generally it is known that homo sapiens can produce with its body events which have some impact on the world outside the body. One kind of such events is the production of all kinds of movements, including gestures, running, grasping with the hands, painting and writing, as well as sounds produced by the voice. Of special interest here are forms of communication between different humans, and even more specifically those communications enabled by the spoken sounds of a language as well as the written signs of a language. Spoken sounds as well as written signs are here called expressions associated with a known language. Expressions as such have no meaning (a non-speaker of a language L can hear or see expressions of the language L, but he/she/x will never understand anything). But as everyday experience shows, nearly every child starts very soon to learn which kinds of expressions belong to a language and with what kinds of shared experiences they can be associated. This learning is related to many complex neural processes which map expressions internally onto conscious and unconscious cognitive objects (including expressions!). This mapping builds up an internal meaning function from expressions into cognitive objects and vice versa. Because expressions have a dual face (being internal neural structures as well as body-outside events by conversion from the inside to the body-outside), it is possible that a homo sapiens can transmit its internal encoding of cognitive objects into expressions from its inside to the outside, and thereby another homo sapiens can perceive the produced outside expression and map this outside expression into an internal expression. As far as the meaning function of the receiving homo sapiens is sufficiently similar to the meaning function of the sending homo sapiens, there exists some probability that the receiving homo sapiens can activate from its memory cognitive objects which have some similarity with those of the sending homo sapiens.

Although we know today of different kinds of animals having some form of language, there is no known species which is comparable to homo sapiens with regard to language. This explains to a large extent why the homo sapiens population was able to cooperate in a way which not only can include many persons but can also stretch through long periods of time and include highly complex cognitive objects and associated behavior.

Negative Complexity

In 2006 I introduced the term negative complexity in my writings to describe the fact that in the world surrounding an individual person there is an amount of language-encoded meaning available which is beyond the capacity of an individual brain to process. Thus whatever kind of experience or knowledge is accumulated in libraries and databases, if the negative complexity grows higher and higher, then this knowledge can no longer help individual persons, whole groups, or whole populations to make constructive use of it. What happens is that the intended, well-structured ‘sound’ of knowledge is turned into a noisy environment which crashes all kinds of intended structures into nothing or into badly deformed somethings.

Entangled Humans

From quantum mechanics we know the idea of entangled states. But we need not dig into quantum mechanics to find other phenomena which manifest entangled states. Look around in your everyday world. There exist many occasions where a human person is acting in a situation, but the bodily separateness is a fake. While sitting in front of a laptop in a room, the person is communicating within an online session with other persons. And depending on the social role, the membership in some social institution, and being part of some project, this person will talk, perceive, feel, decide etc. with regard to the known rules of these social environments, which are represented as cognitive objects in its brain. Thus by knowledge, by cognition, the individual person is in its situation completely entangled with other persons who know these roles and rules and thereby also follow these rules in their behavior. Sitting with the body in a certain physical location somewhere on the planet does not matter in this moment. The primary reality is the cognitive space in the brains of the participating persons.

If you continue looking around in your everyday world you will probably detect that the everyday world is full of different kinds of cognitively induced entangled states of persons. These internalized structures function like protocols, like scripts, like rules in a game, telling everybody what is expected from him/her/x, and to the extent that people adhere to such internalized protocols, daily life has some structure, has some stability, and enables the planning of behavior where cooperation between different persons is necessary. In a cognitively enabled entangled state the individual person becomes a member of something greater, becoming a super person. Entangled persons can do things which usually are not possible as long as you are working as a pure individual person.[1]

Entangled Humans and Negative Complexity

Although entangled human persons can in principle enable more complex events, structures, processes, engineering and cultural work than single persons, human entanglement is still limited by the brain's capacities as well as by the limits of normal communication. Increasing the amount of meaning-relevant artifacts or increasing the velocity of communication events makes things even worse. There are objective limits to human processing, which can run into negative complexity.

Future is not Waiting

The term ‘future’ is cognitively empty: there exists nowhere an object which can be called ‘future’. What we have is some local actual presence (the Now), which the body is turning into internal representations of some kind (becoming the Past), but something like a future does not exist anywhere. Our knowledge about the future is radically zero.

Nevertheless, because our bodies are part of a physical world (planet, solar system, …), our entangled scientific work has identified some regularities of this physical world which can be used for predictions, with some probability, of assumed states at which our clocks will show a different time stamp. But because there are many processes running in parallel, composed of billions of parameters which can be tuned in many directions, a really good forecast is not simple and depends on so many presuppositions.

Since the appearance of homo sapiens some hundred thousand years ago in Africa, homo sapiens has become a game changer which makes all such computations nearly impossible. Not at the beginning of its appearance, but in the course of time homo sapiens enlarged its numbers, improved its skills in more and more areas, and meanwhile we know that homo sapiens has indeed started to crash more and more the conditions of its own life. And principled thinking points out that homo sapiens could even crash more than only planet earth. Every exemplar of a homo sapiens has a built-in freedom which allows it at any time to decide to behave in a different way (although in everyday life we are mostly following some protocols). And this built-in freedom is guided by actual knowledge, by emotions, and by available resources. The same child can become a great musician, a great mathematician, a philosopher, a great political leader, an engineer, … but if you give the child no resources, deprive it of important social contexts, or give it the wrong knowledge, it cannot manifest its freedom in full richness. As a human population we need the best out of all children.

Because the processing of the planet, the solar system etc. is going on, we are in need of good forecasts of possible futures, beyond our classical concepts of sharing knowledge. This is where our vision enters.

VISION: DEVELOPING TOGETHER POSSIBLE FUTURES

To find possible and reliable shapes of possible futures we have to exploit all experiences, all knowledge, all ideas, all kinds of creativity by using maximal diversity. Because present knowledge can be false, as history tells us, we should not rule out those ideas which seem too crazy at first glance. Real innovations are always different from what we are used to at the time. Thus the following text is a first rough outline of the vision:

  1. Find a format
  2. which allows any kinds of people
  3. for any kind of given problem
  4. with at least one vision of a possible improvement
  5. together
  6. to search and to find a path leading from the given problem (Now) to the envisioned improved state (future).
  7. For all needed communication any kind of everyday language should be enough.
  8. As needed, this everyday language should be extendable with special expressions.
  9. These considerations about possible paths into the wanted envisioned future state should be continuously supported by appropriate automatic simulations of such a path.
  10. These simulations should include automatic evaluations based on the given envisioned state.
  11. As far as possible, adaptive algorithms should be available to support the search for, finding of, and identification of the best cases (referenced by the visions) within human planning.

REFERENCES or COMMENTS

[1] One of the most common entangled states in daily life is the usage of normal language! A normal language L works only because the rules of usage of this language L are shared by all speaker-hearers of this language, and these rules are explicit cognitive structures (not necessarily conscious, mostly unconscious!).

Continuation

Yes, it will happen 🙂 Here.


KOMEGA REQUIREMENTS: Start with a Political Program

Integrating Engineering and the Human Factor (info@uffmm.org) eJournal uffmm.org ISSN 2567-6458, Nov 23-28, 2020
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

As described in the uffmm eJournal, the wider context of this software project is a generative theory of cultural anthropology [GCA] which is an extension of the engineering theory called Distributed Actor-Actor Interaction [DAAI]. In the Case Studies section of the uffmm eJournal there is also a section about Python co-learning, mainly dealing with Python programming, and a section about a web server with Dragon. This document is part of the Case Studies section.

CONTENT

When applying the original P-V-Pref document structure to real cases, it became clear that the everyday classification of facts into problems [P] or visions [V] follows a kind of logic hidden in the semantic space of the expressions used. This text explains this hidden logic and what it means for our application.

PDF DOCUMENT

VIDEO [DE]

REMARK

(After first presentations of this video)

(Last change: November 28, 2020)

Confusion by different meanings

While the general view of the whole process is quite clear, a rather hot debate arose about the everyday situation of the experts (here: citizens) and the concepts ‘reality [R]’, ‘vision [V]’ (the imagination of a state which is not yet real), ‘problem [P]’, and ‘preference [Pref]’. The members of my zevedi working group (located at the INM, Frankfurt, Hessen, Germany) as well as a citizen from Dieburg (Hessen, Germany) also associated with ‘reality’ the different kinds of emotions being active in a person, and they classified an imagination about a future state as also being real in a concrete person. With such a setting of the concepts it became difficult to motivate the logic illustrated in the video. The video, based on the preceding paper, talks about a vision v which can turn a reality r into a problem p, thereby generating a preference Pref = (v,r). A preference can possibly become a trigger of some change process.

Looking ahead

Before clarifying this discussion let us have a look ahead at the overall change process which constitutes the heart of the komega software. Beginning with October 18, 2020, the idea of this overall change process has been described in this blog. Having some given situation S, the komega software allows the construction of change rules X which can be applied to the given situation S, and a built-in simulator [sim] will generate a follow-up situation S' as sim(X,S) = S' (or, for short, X(S) = S'), a process which can be repeated by using the output S' as new input for a new cycle. At any time of this cyclic process one can ask whether the actual output S' can be classified as successful. What is called ‘successful’ depends on the applied criteria. For the komega software at least two criteria are used. The most basic one looks at the actual end state S' of the simulation and computes the difference between the occurrences of vision statements V in S' and the occurrences of real statements R which were declared at the beginning as problems P as part of the start situation S. Ideally the real statements classified as problems should have disappeared and the vision statements should be present. If the difference is bigger than some previously agreed threshold theta, then the actual end state S' will be classified as a success, as a goal state in the light of the visions of the preferences which triggered the change process.
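A minimal sketch of this basic success criterion, under the simplifying assumption that a situation is just a set of statements; the names (is_success, theta) are illustrative, not the komega API:

# Sketch of the basic komega success criterion described above (illustrative
# names, not the komega API). A situation is reduced to a set of statements.

def is_success(end_state: set, visions: set, problems: set, theta: float) -> bool:
    """Compare the occurrences of vision statements V in the end state S' with
    the occurrences of the real statements R declared as problems P in S."""
    visions_present = len(visions & end_state)   # vision statements now realized
    problems_left = len(problems & end_state)    # problem statements still present
    return (visions_present - problems_left) > theta

# Schematic usage (hypothetical statements): ideally the problems have
# disappeared from S' and the vision statements are present.
S_prime = {"The traffic in this part of the city is lowered by X percent."}
V = {"The traffic in this part of the city is lowered by X percent."}
P = {"This part of the city has too much traffic."}
print(is_success(S_prime, V, P, theta=0))   # True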

Vision statement

In the context of the whole change process a vision statement is an expression e associated with some everyday language L which describes, in the understanding of the experts, a state which is conceivable and imaginable in our minds, which is not given as a real state, but which could possibly become a real state in some future. This distinction presupposes that the expert can distinguish between an idea in his consciousness which is associated with some real state outside his consciousness, and an idea which exists only inside his consciousness, associated with an imagined state. Looking from a second person at the expert, this second person can observe the body of the expert and the world surrounding the body and can speak of the real world and the real body of the expert, but the inner states of the expert are hidden from this second person. Thus from the point of view of this second person there are no real imaginations, no real future states. But the expert can utter some expression e which has a meaning describing some state which as such is not yet real, but which could possibly become real if one were to change the actual reality (the actual everyday life, the actual city …) accordingly. Thus a vision statement is understood here as an expression e from the everyday language L, uttered by some expert, having a meaning which can be understood by the other persons and describing some imagined state which is not yet real but could possibly become real in some future ahead.

Creating problems, composing preferences

If at least one vision statement v is known by some experts, then it can happen that an expert relates this vision to some given reality r as part of everyday life, or to some absent reality r. Example: if an expert classifies some part of the city as having too much traffic (r1) and he has the vision of changing this into a situation where the traffic is lowered by X% (v1), then this vision statement v1 can help other experts to interpret the reality r1, in the light of the vision v1, as a problem v1(r1) = p1. Classifying some reality r1 as a problem p1 is understood in the context of the komega software as making the reality r1 a candidate for a possible change, in the sense that r1 should be replaced by v1. Having taken this stance (seeing the reality r1 as a problem p1 by the vision v1), the experts have created a so-called preference Pref = (v1, p1), saying that the experts prefer the imagined possible future state v1 over the actual problem p1.

There is the special case that an expert has uttered a vision statement v but there is no given reality which can be stated in a real statement r. Example: A company thinks that it can produce some vaccine against the disease Y two years from now, like v2 = ‘there is a vaccine against disease Y in yy’. Actually there exists no vaccine, but a disease is attacking the people. Because it is known that the people can be made immune against the disease by an appropriate vaccine, it makes sense to state r2 = ‘There is no vaccine against the disease Y available’. Having the vision v2, this can turn the reality r2 into a problem p2, allowing the preference Pref = (v2,p2).
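Both cases can be written down schematically. The following tiny sketch (illustrative names, not komega code) only records how a vision statement v turns a real statement r into a problem p = v(r) and how the pair becomes a preference Pref = (v, p); the statements are paraphrases of the two examples above:

# Tiny sketch (illustrative, not komega code): a vision statement v turns a
# real statement r into a problem p = v(r); the pair (v, p) is a preference.

from typing import NamedTuple

class Preference(NamedTuple):
    vision: str    # v: imagined, not-yet-real state
    problem: str   # p: the reality r, re-read as a problem in the light of v

def make_preference(vision: str, reality: str) -> Preference:
    problem = reality            # p = v(r): the same statement in a new role
    return Preference(vision, problem)

# Traffic example:
pref1 = make_preference(
    vision="The traffic in this part of the city is lowered by X percent.",
    reality="This part of the city has too much traffic.",
)

# Vaccine example (the reality is the absence of something):
pref2 = make_preference(
    vision="There is a vaccine against disease Y in yy.",
    reality="There is no vaccine against the disease Y available.",
)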

Triggering actions

If a group of experts has generated a vision v (for several different reasons, including emotions), has associated this vision with some given reality r, and has decided to generate, by v(r) = p, a preference Pref = (v,p), then it can happen that these experts decide to start a change process, beginning now with the given problem p and ending up with a situation in some future where the problem p has disappeared and the vision has become real.

Summing up

The komega software allows the planning and testing of change processes if the acting experts have at least one preference Pref based on at least one vision statement v and at least one real statement r.

BITS OF PHILOSOPHY

Figure: Philosophical point of view. The framework for the concepts used, from the point of view of philosophy.

The above video (in German, DE) and the lengthy remark following the video on how to understand the basic concepts vision statement [v], real statement [r], problem statement [p], as well as preference [Pref], both presuppose a certain kind of philosophy. This philosophical point of view is outlined above in a simple drawing.

Basically there is a real human person (an actor) with a real brain embedded in some everyday world. The person can perceive parts of the everyday world at every point in time. The most important reference point in time is the actual moment called NOW.

Inside the brain the human person can generate some cognitive structure triggered by perception, by memory and by some thinking. Having learned some everyday language L, the human person can map the cognitive structure into an expression E associated with the language L. If the cognitive structure correlates with some real situation outside the body, then the meaning of the expression E is classified as a real statement, here named E1. But the brain can also generate cognitive structures and map them into expressions E without their being actually correlated with some real situation outside. Such a statement is here called a vision statement, here named E2. A vision statement can possibly become correlated with some real situation outside at some point in the future. In that case the vision statement transforms into a real statement E2, while the previously mentioned real statement E1 can lose its correlation with a real situation.

FURTHER DISCUSSIONS

For further discussions have a look at this page too.


KOMEGA REQUIREMENTS: From the minimal to the basic version

ISSN 2567-6458, October 18, 2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

As described in the uffmm eJournal, the wider context of this software project is a generative theory of cultural anthropology [GCA] which is an extension of the engineering theory called Distributed Actor-Actor Interaction [DAAI]. In the Case Studies section of the uffmm eJournal there is also a section about Python co-learning, mainly dealing with Python programming, and a section about a web server with Dragon. This document is part of the Case Studies section.

CONTENT

Here we present the ideas on how to extend the minimal version to a first basic version. At least two more advanced levels will follow.

VIDEO (EN)

(Last change: Oct 17, 2020)

VIDEO(DE)

(last change: Oct 18, 2020)

CASE STUDIES

eJournal: uffmm.org
ISSN 2567-6458, May 4 – March 16, 2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

In this section several case studies will be presented. It will be shown how the DAAI paradigm can be applied to many different contexts. Since the original version of the DAAI theory of Jan 18, 2020, the concept has been further developed, centering around the concept of a Collective Man-Machine Intelligence [CM:MI], in order to now address any kind of experts for any kind of simulation-based development, testing and gaming. Additionally, the concept can now be associated with any kind of embedded algorithmic intelligence [EAI] (different from the mainstream concept ‘artificial intelligence’). The new concept can be used with every normal language; no need for any special programming language! Go back to the overall framework.

COLLECTION OF PAPERS

There exists only a loose order between the different papers due to the character of this elaboration process: generally this is an experimental philosophical process. HMI Analysis applied to the CM:MI paradigm.


JANUARY 2021 – OCTOBER 2021

  1. HMI Analysis for the CM:MI paradigm. Part 1 (Febr. 25, 2021)(Last change: March 16, 2021)
  2. HMI Analysis for the CM:MI paradigm. Part 2. Problem and Vision (Febr. 27, 2021)
  3. HMI Analysis for the CM:MI paradigm. Part 3. Actor Story and Theories (March 2, 2021)
  4. HMI Analysis for the CM:MI paradigm. Part 4. Tool Based Development with Testing and Gaming (March 3-4, 2021, 16:15h)

APRIL 2020 – JANUARY 2021

  1. From Men to Philosophy, to Empirical Sciences, to Real Systems. A Conceptual Network. (Last Change Nov 8, 2020)
  2. FROM DAAI to GCA. Turning Engineering into Generative Cultural Anthropology. This paper gives an outline of how one can map the DAAI paradigm directly into the GCA paradigm (April 19, 2020): case1-daai-gca-v1
  3. CASE STUDY 1. FROM DAAI to ACA. Transforming HMI into ACA (Applied Cultural Anthropology) (July 28, 2020)
  4. A first GCA open research project [GCA-OR No.1].  This paper outlines a first open research project using the GCA. This will be the framework for the first implementations (May-5, 2020): GCAOR-v0-1
  5. Engineering and Society. A Case Study for the DAAI Paradigm – Introduction. This paper illustrates important aspects of a cultural process by looking at the acting actors, where certain groups of people (experts of different kinds) can realize the generation, the exploration, and the testing of dynamical models as part of a surrounding society. Engineering is clearly not separated from society (April 9, 2020): case1-population-start-part0-v1
  6. Bootstrapping some Citizens. This paper clarifies the set of general assumptions which can and should be presupposed for every kind of real-world dynamical model (April 4, 2020): case1-population-start-v1-1
  7. Hybrid Simulation Game Environment [HSGE]. This paper outlines the simulation environment by combining a usual web-conference tool with an interactive web page of our own (May 23, 2020): HSGE-v2 (May 5, 2020): HSGE-v0-1
  8. The Observer-World Framework. This paper describes the foundations of any kind of observer-based modeling or theory construction.(July 16, 2020)
  9. CASE STUDY – SIMULATION GAMES – PHASE 1 – Iterative Development of a Dynamic World Model (June 19.-30., 2020)
  10. KOMEGA REQUIREMENTS No.1. Basic Application Scenario (last change: August 11, 2020)
  11. KOMEGA REQUIREMENTS No.2. Actor Story Overview (last change: August 12, 2020)
  12. KOMEGA REQUIREMENTS No.3, Version 1. Basic Application Scenario – Editing S (last change: August 12, 2020)
  13. The Simulator as a Learning Artificial Actor [LAA]. Version 1 (last change: August 23, 2020)
  14. KOMEGA REQUIREMENTS No.4, Version 1 (last change: August 26, 2020)
  15. KOMEGA REQUIREMENTS No.4, Version 2. Basic Application Scenario (last change: August 28, 2020)
  16. Extended Concept for Meaning Based Inferences. Version 1 (last change: 30.April 2020)
  17. Extended Concept for Meaning Based Inferences – Part 2. Version 1 (last change: 1.September 2020)
  18. Extended Concept for Meaning Based Inferences – Part 2. Version 2 (last change: 2.September 2020)
  19. Actor Epistemology and Semiotics. Version 1 (last change: 3.September 2020)
  20. KOMEGA REQUIREMENTS No.4, Version 3. Basic Application Scenario (last change: 4.September 2020)
  21. KOMEGA REQUIREMENTS No.4, Version 4. Basic Application Scenario (last change: 10.September 2020)
  22. KOMEGA REQUIREMENTS No.4, Version 5. Basic Application Scenario (last change: 13.September 2020)
  23. KOMEGA REQUIREMENTS: From the minimal to the basic Version. An Overview (last change: Oct 18, 2020)
  24. KOMEGA REQUIREMENTS: Basic Version with optional on-demand Computations (last change: Nov 15,2020)
  25. KOMEGA REQUIREMENTS:Interactive Simulations (last change: Nov 12,2020)
  26. KOMEGA REQUIREMENTS: Multi-Group Management (last change: December 13, 2020)
  27. KOMEGA-REQUIREMENTS: Start with a Political Program. (last change: November 28, 2020)
  28. OKSIMO SW: Minimal Basic Requirements (last change: January 8, 2021)