Category Archives: unconscious knowledge

OKSIMO.R – Start. The ‘inside’ of the ‘outside’ – Part 2

eJournal: uffmm.org
ISSN 2567-6458, 13 January 2023 – 18 January 2023, 08:08 p.m.
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Parts of this text have been translated with www.DeepL.com/Translator (free version), afterwards only minimally edited.

CONTEXT

This post is part of the book project ‘oksimo.R Editor and Simulator for Theories’.

Part 2

(This text is a direct continuation of the text “The ‘inside’ of the ‘outside’. Basic Building Blocks”.)

Establishment of First Structures

At first sight, the previously described galactic cell association of a human body does not provide a natural clue for a ‘center’ of some kind. Which cell should be more important than others? Each one is active, each one does its ‘job’. Many ‘talk’ to many. Chemical substances are exchanged, or by means of chemical substance exchange ‘electrical potentials’ are generated which can travel ‘faster’ and which can generate ‘impulse-like events’, which in turn activate chemical substances again. If one were to make this ‘talking with chemical substances and electrical potentials’ artificially audible, we would hear a symphony of 127 trillion (127 x 10^12) single voices …

And yet, when we experience our human bodies in everyday life, we do not see a huge cloud of galactic proportions of individual cells; we see a ‘delineated object’ with a surface that is ‘visible’; an object that can make ‘sounds’, that ‘smells’, that is ‘touchable’, that can ‘change’ and ‘move’. Moreover, it can ‘stuff things into itself’, and ‘gases’, ‘liquids’, and ‘more solid components’ also come out of it. Furthermore, it becomes obvious with longer observation that there are areas of the body which react to ‘light’ (eyes), to ‘sounds’ (ears), to ‘smells’ (nose), to ‘touch’ (skin), to ‘body positions’ (among other things the sense of balance), to ‘temperature’ (skin), to ‘chemical compositions of substances in the mouth’ (taste organs in the mouth), and some more.

This everyday ‘experience’ suggests the assumption that the cells of our human body have spatially arranged themselves into ‘special networks’ [1] with such a high ‘degree of organization’ that these networks appear like ‘one unit’, like a ‘single system’ with ‘input’ and ‘output’, between which complex processes take place. This opens up the possibility of viewing the galactic space of autonomous cells in a human body as a ‘collective of organized systems’ that appear to be in active exchange with each other. [2],[4],[5]

In modern technical systems such as a car, an airplane, or a computer, there is a ‘meta-level’ from which the whole system can be ‘controlled’: in the car the steering wheel, the brake, the gear shift, etc.; similarly, in the airplane the cockpit with its multiplicity of instruments; and in the computer the input and output devices. However, for years an increasing ‘autonomy’ of these technical devices has been emerging, insofar as many control decisions of humans are shifted to ‘subsystems’, which thereby ‘self-perform’ more and more of the classical control performance of humans.[6]

In a human body there exists, ‘parallel’ to the different body systems, among other things the ‘nervous system’ with the ‘brain’ as its central area, in which many ‘signals from the body systems’ converge and from which ‘signals to the body systems’ are sent out. The brain with the nervous system appears to be a system of its own, which processes the incoming signals in different ‘neuronal processes’ and also sends out signals which can cause ‘effects in the body systems’.[7] From the point of view of ‘functioning’, the brain with the nervous system can be understood as a kind of ‘meta-system’ in which properties of all other ‘body systems’ are ‘mapped’, find a ‘process-like interpretation’, and can be influenced (= ‘controlled’) to a certain degree with the help of these mappings and interpretations.

As the modern empirical sciences make increasingly visible through their investigations and subsequent ‘interpretations’ (e.g. [4],[5]), the distinguishable body systems themselves have a very high complexity with their own ‘autonomy’ (stomach, liver, kidney, heart, …), which can be influenced by the brain only to a limited degree, but which conversely can also influence the brain. In addition, there is a hardly manageable number of mutual influences via the immense ‘material flows’ in the blood circulation and in the body fluids.

For the context of this book, of particular interest are those structures that are important for the ‘coordination of the different brains’ by means of ‘language’, and, closely related to this, the ‘cognitive’ and ‘emotional’ processes in the brain that are responsible for which ‘cognitive images are created in the mind’ with which a brain ‘interprets’ ‘itself’ and ‘everything else’.

How to describe the Human Being?

The description of the human cell galaxy in terms of ‘subsystems’ with their own ‘input’ and ‘output’ and including ‘inner processes’ – here simply called the ‘system function’ – can appear ‘simple’ at first sight, ‘normal’, or something else. With this we enter the fundamental question of how we can describe the human cell galaxy – i.e. ‘ourselves’! – at all, and furthermore perhaps how we ‘should’ describe it: are there any criteria on the basis of which we should prefer a ‘certain way of description’?

In the case of the description of ‘nature’, of the ‘real world’, we may still be able to distinguish between ‘us’ and ‘nature’ (which, however, will later turn out to be a fallacy); it becomes somewhat more difficult with the ‘description of ourselves’. If one wants to describe something, one needs certain conditions to be able to make a description. But what are these conditions if we want to describe ourselves? Isn’t this the famous case of the ‘cat biting its own tail’?

In ‘normal everyday life’ [8], typical forms with which we describe things are, e.g., ‘pictures’, ‘photographs’, ‘videos’, ‘music’, ‘body movements’, and others, but above all linguistic expressions (spoken, written; everyday language, technical language; …).

Let’s stay for a moment with ‘everyday language’ (German, English, Italian, …).

As children we are born into a certain, already existing world with a respective ‘everyday life’ distinctive for each human person. At least one language is spoken in such an environment; if the parents are bilingual, even two languages in parallel. If the language of the environment is different from the language of the parents, then perhaps even three languages. And today, as the environment becomes more and more ‘multicultural’, maybe even more than three languages are practiced.

No matter how many languages occur simultaneously for a person, each language has its own ‘rules’, its own ‘pronunciation’, its own ‘contextual reference’, its own ‘meanings’. These contexts can change; the language itself can change. And if someone grows up not with just one language but with more than one, then ‘in the person’, in the ‘speaker-listener’, there can naturally be multiple interactions between the different languages. Since this happens today in many places at the same time with more and more people, hardly sufficient research results are yet available that adequately describe this diversity in its specifics.

So, if we want to describe ‘ourselves’ as ‘part of the real world’, we should first of all accept and ‘consciously assume’ that at the moment of describing we do not start at ‘point zero’, not as a ‘blank sheet’, but as a biological system which has a more or less long ‘learning process’ behind it. Thereby the word ‘learning process’, as part of the language the author uses, is not a ‘neutral set of letters’ but likewise a ‘word’ of his language, which he shares with many other speakers of ‘German’. One must assume that each ‘speaker of German’ associates his own ‘individual conceptions’ with the word ‘learning process’. And this word ‘conception’ is itself such a word which, as part of the spoken (and written) language, normally does not come along ‘meaning-free’. In short, as soon as we speak, as soon as we link words in larger units to statements, we activate a set of ‘knowledge and skills’ that are somehow ‘present in us’, that we use ‘automatically’, and whose use is normally largely ‘unconscious’. [9],[10]

When I, as the author of this text, now write down statements in the German language, I let myself be carried, so to speak, by a ‘wave of language usage’ whose exact nature and effectiveness I cannot fully grasp at the moment of use (and this is the case for every language user). I can, however, once I have expressed myself, look more consciously at what has been expressed and then — perhaps — see more clearly whether and how I can place it in contexts known to me. Since also what is ‘known to me’ is largely ‘unconscious’ and only gradually passes from ‘unconscious knowledge’ into ‘conscious’ knowledge, the task of a ‘clarification of speaking’ and of the ‘meaning’ connected with it is always only fragmentarily, partially possible. The ‘conscious eye of knowledge’ is therefore perhaps comparable to a ‘shining knowledge bubble’ in the black sea of ‘unconscious knowledge’, which may seem close to ‘not-knowing’ but is not ‘not-knowing’: ‘unconscious knowledge’ is, inside the brain, ‘real knowledge’ which ‘works’.

… to be continued …

COMMENTS

wkp := Wikipedia

[1] In microbiology, as a part of evolutionary biology, it has been recognized in a rudimentary way how the individual cells, during the ‘growth process’, ‘communicate’ possible cooperations with other cells via chemical substances, ‘controlled’ by their respective individual ‘genetic program’. These processes can very well be described as an ‘exchange of signals’, where these ‘signals’ do not occur in isolation but are ‘related’ by the genetic program to other chemical substances and process steps. Through this ‘relating’, the chemical signal carriers, isolated in themselves, are embedded in a ‘space of meanings’ from which they receive an ‘assignment’. This overall process fulfills all requirements of ‘communication’. In this respect it seems justified to speak of an ‘agreement’ between the individual cells, an ‘understanding’ about whether and how they want to ‘cooperate’ with each other.

[2] When thinking of complex connections between cells, one may first think of the cells in the brain (‘neurons’), certain types of which may have as many as 1,000 dendrites (:= branched projections of a neuron that receive input, while the ‘axon’ is the ‘output’ of a neuron), each dendrite housing multiple synapses.[3] Since each synapse can be the endpoint of a connection to another neuron, this suggests that a complex network on the order of trillions (10^12) of connections may exist in a brain. In addition, there is the system of blood vessels that runs through the entire body and supplies the approximately 36 trillion (36 x 10^12) body cells with various chemical substances.

[3] wkp [EN], Neuron, URL: https://en.wikipedia.org/wiki/Neuron, section ‘Connectivity’, citation: “The human brain has some 8.6 x 10^10 (eighty-six billion) neurons. Each neuron has on average 7,000 synaptic connections to other neurons. It has been estimated that the brain of a three-year-old child has about 10^15 synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10^14 to 5 x 10^14 synapses (100 to 500 trillion).”

[4] Robert F. Schmidt, Gerhard Thews (Eds.), 1995, Physiologie des Menschen, 25th edition, Springer

[5] Niels Birbaumer, Robert F. Schmidt, 2006, Biologische Psychologie, 6th edition, Springer

[6] A famous example is the ‘autopilot’ of an airplane: software that can ‘steer’ the entire plane without human intervention.

[7] Thus, the position of the joints is continuously sent to the brain and, in the case of a ‘directed movement’, the set of current joint positions is used to trigger an ‘appropriate movement’ by sending appropriate signals ‘from the brain to the muscles’.

[8] Of course, this too is a certain fiction, because everyone ultimately experiences ‘his own everyday life’, which overlaps only partially with the ‘everyday life of another’.

[9] When children at school are confronted for the first time with the concept of a ‘grammar’, with ‘grammatical rules’, they will not understand what it is. Using concrete examples of language, they will be able to ‘link’ one or another ‘grammatical expression’ with linguistic phenomena, but they will not really understand the concept of grammar. This is due to the fact that the entire set of processes taking place ‘inside a human being’ has been researched only in a very rudimentary way up to today; it is by no means sufficient for the formulation of a grammar close to everyday life.

[10] Karl Erich Heidolph, Walter Flämig, Wolfgang Motsch (eds.), 1980, Grundzüge einer deutschen Grammatik, Akademie-Verlag, Berlin. Note: Probably the most systematic grammar of German to date, compiled by a German authors’ collective (at that time still in the eastern part of Germany, the ‘German Democratic Republic’ (GDR)). Precisely because the approach was very systematic, the authors could clearly see that grammar as a description of ‘regular forms’ reaches its limits where the ‘meaning’ of expressions comes into play. Since ‘meaning’ refers to a state of affairs that takes place ‘inside the human being’ (of course in intensive interaction of the body with the environment), a comprehensive objective description of the factor ‘meaning’ in interaction with the forms is always only partially possible.

CASE STUDIES

eJournal: uffmm.org
ISSN 2567-6458, 4 May – 16 March 2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

In this section several case studies will be presented. It will be shown how the DAAI paradigm can be applied to many different contexts. Since the original version of the DAAI theory from Jan 18, 2020, the concept has been further developed, centering around the concept of a Collective Man-Machine Intelligence [CM:MI], in order to address all kinds of experts for any kind of simulation-based development, testing and gaming. Additionally, the concept can now be associated with any kind of embedded algorithmic intelligence [EAI] (as distinct from the mainstream concept ‘artificial intelligence’). The new concept can be used with every normal language; there is no need for any special programming language! Go back to the overall framework.

COLLECTION OF PAPERS

There is only a loose ordering among the different papers, due to the character of this elaboration process: generally, this is an experimental philosophical process. HMI Analysis applied for the CM:MI paradigm.


JANUARY 2021 – OCTOBER 2021

  1. HMI Analysis for the CM:MI paradigm. Part 1 (Febr. 25, 2021)(Last change: March 16, 2021)
  2. HMI Analysis for the CM:MI paradigm. Part 2. Problem and Vision (Febr. 27, 2021)
  3. HMI Analysis for the CM:MI paradigm. Part 3. Actor Story and Theories (March 2, 2021)
  4. HMI Analysis for the CM:MI paradigm. Part 4. Tool Based Development with Testing and Gaming (March 3-4, 2021, 16:15h)

APRIL 2020 – JANUARY 2021

  1. From Men to Philosophy, to Empirical Sciences, to Real Systems. A Conceptual Network. (Last Change Nov 8, 2020)
  2. FROM DAAI to GCA. Turning Engineering into Generative Cultural Anthropology. This paper gives an outline of how one can map the DAAI paradigm directly into the GCA paradigm (April 19, 2020): case1-daai-gca-v1
  3. CASE STUDY 1. FROM DAAI to ACA. Transforming HMI into ACA (Applied Cultural Anthropology) (July 28, 2020)
  4. A first GCA open research project [GCA-OR No.1].  This paper outlines a first open research project using the GCA. This will be the framework for the first implementations (May-5, 2020): GCAOR-v0-1
  5. Engineering and Society. A Case Study for the DAAI Paradigm – Introduction. This paper illustrates important aspects of a cultural process by looking at the acting actors, where certain groups of people (experts of different kinds) can realize the generation, the exploration, and the testing of dynamical models as part of a surrounding society. Engineering is clearly not separated from society (April 9, 2020): case1-population-start-part0-v1
  6. Bootstrapping some Citizens. This  paper clarifies the set of general assumptions which can and which should be presupposed for every kind of a real world dynamical model (April-4, 2020): case1-population-start-v1-1
  7. Hybrid Simulation Game Environment [HSGE]. This paper outlines the simulation environment by combining a usual web-conference tool with an interactive web page of our own (May 23, 2020): HSGE-v2 (May 5, 2020): HSGE-v0-1
  8. The Observer-World Framework. This paper describes the foundations of any kind of observer-based modeling or theory construction.(July 16, 2020)
  9. CASE STUDY – SIMULATION GAMES – PHASE 1 – Iterative Development of a Dynamic World Model (June 19.-30., 2020)
  10. KOMEGA REQUIREMENTS No.1. Basic Application Scenario (last change: August 11, 2020)
  11. KOMEGA REQUIREMENTS No.2. Actor Story Overview (last change: August 12, 2020)
  12. KOMEGA REQUIREMENTS No.3, Version 1. Basic Application Scenario – Editing S (last change: August 12, 2020)
  13. The Simulator as a Learning Artificial Actor [LAA]. Version 1 (last change: August 23, 2020)
  14. KOMEGA REQUIREMENTS No.4, Version 1 (last change: August 26, 2020)
  15. KOMEGA REQUIREMENTS No.4, Version 2. Basic Application Scenario (last change: August 28, 2020)
  16. Extended Concept for Meaning Based Inferences. Version 1 (last change: 30.April 2020)
  17. Extended Concept for Meaning Based Inferences – Part 2. Version 1 (last change: 1.September 2020)
  18. Extended Concept for Meaning Based Inferences – Part 2. Version 2 (last change: 2.September 2020)
  19. Actor Epistemology and Semiotics. Version 1 (last change: 3.September 2020)
  20. KOMEGA REQUIREMENTS No.4, Version 3. Basic Application Scenario (last change: 4.September 2020)
  21. KOMEGA REQUIREMENTS No.4, Version 4. Basic Application Scenario (last change: 10.September 2020)
  22. KOMEGA REQUIREMENTS No.4, Version 5. Basic Application Scenario (last change: 13.September 2020)
  23. KOMEGA REQUIREMENTS: From the minimal to the basic Version. An Overview (last change: Oct 18, 2020)
  24. KOMEGA REQUIREMENTS: Basic Version with optional on-demand Computations (last change: Nov 15,2020)
  25. KOMEGA REQUIREMENTS:Interactive Simulations (last change: Nov 12,2020)
  26. KOMEGA REQUIREMENTS: Multi-Group Management (last change: December 13, 2020)
  27. KOMEGA-REQUIREMENTS: Start with a Political Program. (last change: November 28, 2020)
  28. OKSIMO SW: Minimal Basic Requirements (last change: January 8, 2021)


ACI – TWO DIFFERENT READINGS

eJournal: uffmm.org
ISSN 2567-6458, 11.-12.May 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de
Change: May-17, 2019 (Some Corrections, ACI associations)
Change: May-20, 2019 (Reframing ACI with AAI)
CONTEXT

This text is part of the larger text dealing with the Actor-Actor Interaction (AAI)  paradigm.

HCI – HMI – AAI ==> ACI ?

Whoever has followed the discussion in this blog will remember several different phases in the conceptual frameworks used here.

The first paradigm, called Human-Computer Interface (HCI), has been mentioned only for historical reasons. The next phase, Human-Machine Interaction (HMI), was the main paradigm at the beginning of my lecturing in 2005. Later, around 2011/2012, I switched to the paradigm Actor-Actor Interaction (AAI), because I tried to generalize over the different participating machines, robots, smart interfaces, humans as well as animals. This worked quite nicely, and for some time I thought that this was the final formula. But reality is often different from our thinking. Many occasions showed up where the generalization beyond the human actor seemed to hide the real processes which are going on; especially, I got the impression that very important factors rooted in the special human actor became invisible although they play a decisive role in many processes. Another punch against the AAI view came from application scenarios during the last year, when I started to deal with whole cities as actors. In the end I got the feeling that the more specialized expressions like Actor-Cognition Interaction (ACI) or Augmented Collective Intelligence (ACI) can indeed help to stress certain special properties better than the more abstract AAI acronym; but when using structures like ACI within general theories and within complex computing environments it became clear that the more abstract acronym AAI is in the end more versatile and simplifies the general structures. ACI became a special sub-case.

HISTORY

To understand this oscillation between AAI and ACI one has to look back into the history of Human-Computer/Machine Interaction, not only to the end of World War II but into the more extended evolutionary history of mankind on this planet.

It is a widespread opinion among researchers that the development of tools to help master material processes was one of the outstanding events which changed the path of evolution a lot. A next step was the development of tools to support human cognition, like script, numbers, mathematics, books, libraries, etc. In this last case of cognitive tools, the material of the cognitive tools was not the primary subject of the processes, but rather the cognitive contents, structures, and even processes encoded by the material structures of the tools.

Only slowly did mankind understand how cognitive abilities and capabilities are rooted in the body, in the brain, and that the brain represents a rather complex biological machinery which enables a huge number of cognitive functions, often interacting with each other; these cognitive functions show, in the light of observable behavior, clear limits with regard to the number of features which can be processed in a given time interval, with regard to precision, with regard to working interconnections, and more. And therefore it has been understood that the different kinds of cognitive tools are very important to support human thinking and to reinforce it in some ways.

Only in the 20th century was mankind able to build a cognitive tool, called the computer, which could show capabilities that resembled some human cognitive capabilities and which even surpassed human capabilities in some limited areas. Since then these machines have developed a lot (not by themselves, but by the thinking and the engineering of humans!), and meanwhile the number and variety of capabilities in which the computer seems to resemble a human person or surpass human capabilities has extended in such a way that it has become common parlance to talk about intelligent machines or smart devices.

While the original intention for the development of computers was to improve the cognitive tools with the intent to support human beings, one can today get the impression that the computer has turned into a goal of its own: the intelligent and then — as supposed — super-intelligent computer now appears as the primary goal, and mankind appears as some old relic which has to be surpassed soon.

As will be shown later in this text, this vision of the computer surpassing mankind rests on assumptions which are questionable.

What seems possible, and what seems to be a promising roadmap into the future, is a continuous, step-wise enhancement of the biological structure of mankind which absorbs modern computing technology through new cognitive interfaces, which in turn presuppose new types of physical interfaces.

It is not yet possible to give a precise definition of these new upcoming structures and functions, but it seems possible to identify the actual driving factors as well as the exciting combinations of factors.

COGNITION EMBEDDED IN MATTER

Actor-Cognition Interaction (ACI): A simple outline of the whole paradigm
Cognition within the Actor-Actor Interaction (AAI)  paradigm: A simple outline of the whole paradigm

The main idea is to shift the focus away from the physical grounding of the interaction between actors and to look instead more at the cognitive contents and processes which shall be mediated by the physical conditions. Clearly, the analysis of the physical conditions as well as the optimal design of these physical conditions is still a challenge and a task, but without clear knowledge, manifested in a clear model of the intended cognitive contents and processes, one does not have enough knowledge for the design of the physical layout.

SOLVING A PROBLEM

Thus the starting point of an engineering process is a group of people (the stakeholders (SH)) who identify some problem (P) in their environment and who have some minimal idea of a possible solution (S) for this problem. This can be complemented by some non-functional requirements (NFRs) articulating more general properties which shall hold throughout the whole solution (e.g. ‘being safe’, ‘being barrier-free’, ‘being real-time’, etc.). If the description of the problem with a first intended solution, including the NFRs, contains at least one task (T) to be solved, minimal intended users (U) (here called executive actors (eA)), minimal intended assistive actors (aA) to assist the user in doing the task, as well as a description of the environment of the task, then the minimal ACI-Check can be passed and the ACI analysis process can be started, as sketched below.
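To make this minimal ACI-Check concrete, here is a small sketch in Python. The data structure and all field names are illustrative assumptions made for this post, not part of the ACI paradigm or of any existing software; the function merely tests whether the minimal ingredients named above (task, executive actors, assistive actors, environment) are present in a problem description.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical container for a minimal problem description;
# the field names are illustrative assumptions, not the author's terminology.
@dataclass
class ProblemDescription:
    problem: str                                               # problem P
    solution_idea: str                                         # first intended solution S
    nfrs: List[str] = field(default_factory=list)              # non-functional requirements
    tasks: List[str] = field(default_factory=list)             # tasks T
    executive_actors: List[str] = field(default_factory=list)  # users U / eA
    assistive_actors: List[str] = field(default_factory=list)  # aA
    environment: str = ""                                      # environment of the task

def minimal_aci_check(pd: ProblemDescription) -> bool:
    """Passes only if every minimal ingredient named in the text is present."""
    return all([
        pd.problem, pd.solution_idea,
        len(pd.tasks) >= 1,
        len(pd.executive_actors) >= 1,
        len(pd.assistive_actors) >= 1,
        pd.environment,
    ])

# Example usage with purely hypothetical content.
pd = ProblemDescription(
    problem="long waiting times at a public office",
    solution_idea="online appointment system",
    nfrs=["being safe", "being barrier-free"],
    tasks=["book an appointment"],
    executive_actors=["citizen"],
    assistive_actors=["web portal"],
    environment="municipal administration",
)
print(minimal_aci_check(pd))  # True
```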

COGNITION AND AUGMENTED COLLECTIVE INTELLIGENCE

If we talk about cognition, we usually think about cognitive processes in an individual person. But in the real world there is no cognition without an ongoing exchange between different individuals through communicative acts. Furthermore, it has to be taken into account that the cognition of an individual person is in itself partitioned into two unequal parts: the unconscious part, which covers about 99% of all the processes in the body and in the brain, and the conscious part, which covers about 1%. For an individual person to be able to think anything at all, this person has to trigger his own unconsciousness with stimuli so that it responds with messages from his previously unknown knowledge. Thus even an individual person alone has to organize a communication with his own unconsciousness in order to gain some conscious knowledge about his own unconscious knowledge. And because no individual person has at a certain point of time a clear knowledge of his unconscious knowledge, the person does not even really know what to look for — if there is no event, no perception, no question and the like which triggers the person to interact with his unconscious knowledge (and experience) to get some messages from this unconscious machinery, which — as it seems — is working all the time.

Given this logic of the individual’s internal communication with his own cognition, an external communication with the world and with the manifested cognition of other persons appears as a possible enrichment of the interactions with the knowledge distributed over the different persons. If, as assumed in the following approach, the different knowledge responses are represented in a common symbolic representation viewable (and hearable) by all participating persons, a possible picture emerges of something which is generally richer, having more facets than a picture generated by an individual person alone. Furthermore, such a procedure can help all participants to synchronize their different knowledge fragments into a bigger picture and to use it further on as their own picture, which in turn can trigger even more aspects out of the distributed unconscious knowledge.

If one organizes this collective triggering of distributed unconscious knowledge within a communication process not only by static symbolic models but, beyond this, with dynamic rules for changes, which can be interactively simulated or even played with defined states of interest, then the effect of expanding the explicit and shared knowledge will be boosted even more.

Against this background it makes sense to turn the wording Actor-Cognition Interaction into the wording Augmented Collective Intelligence, where Intelligence is the component of dynamic cognition in a system — here a human person —, Collective means that different individual persons are sharing their unconscious knowledge through communicative interactions, and Augmented can be interpreted as meaning that one enhances and extends this sharing of knowledge by using new tools of modeling, simulation and gaming, which expand and intensify individual learning as well as the commonly shared opinions. For nearly all problems today this appears to be absolutely necessary.

ACI ANALYSIS PROCESS

Here it will be assumed that there exists a group of ACI experts who can supervise other actors (stakeholders, domain experts, …) in a process of analyzing the problem P, with the explicit goal of finding a satisfying solution (S+).

For the whole ACI analysis process an appropriate ACI software should be available to support the ACI experts as well as all the other domain experts.

In this ACI analysis process one can distinguish two main phases: (1) construct an actor story (AS) which describes all intended states and intended changes within the actor story; (2) make several tests of the actor story to exploit its explanatory power.

ACTOR STORY (AS)

The actor story describes all possible states (S) of the tasks (T) to be realized in order to reach intended goal states (S+). A mapping from one state to a follow-up state is described by a change rule (X). Thus, given a start state (S0) and appropriate change rules, one can construct the follow-up states from the actual state (S*) with the aid of the change rules. Formally, this computation of the follow-up state (S’) is done by a simulator function (σ), written as: σ: S* x X —> S.
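As a minimal illustration of such a simulator function, here is a short Python sketch. The representation of a state as a set of facts and of a change rule as a pair consisting of a condition and an effect is an assumption made only for this example; it is not the actual oksimo or ACI implementation.

```python
# A minimal sketch of the simulator function sigma: S* x X -> S described above.
from typing import Callable, FrozenSet, List, Tuple

State = FrozenSet[str]                       # a state: a set of facts that hold
Rule = Tuple[Callable[[State], bool],        # condition: does the rule apply in S*?
             Callable[[State], State]]       # effect: how S* is transformed

def sigma(current: State, rules: List[Rule]) -> State:
    """Apply all applicable change rules X to the actual state S* and
    return the resulting follow-up state S'."""
    follow_up = current
    for applies, effect in rules:
        if applies(current):
            follow_up = effect(follow_up)
    return follow_up

# Example: one hypothetical rule that 'moves' an actor from the street into a building.
rules: List[Rule] = [
    (lambda s: "actor_on_street" in s,
     lambda s: (s - {"actor_on_street"}) | {"actor_in_building"}),
]
s0: State = frozenset({"actor_on_street", "door_open"})
print(sigma(s0, rules))  # 'actor_in_building' replaces 'actor_on_street'
```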

SEVERAL TESTS

With the aid of an explicit actor story (AS) one can define the non-functional requirements (NFRs) in such a way that it becomes decidable whether an NFR is valid with regard to an actor story or not. In this case the test of validity can be done as an automated verification process (AVP). Part of this test paradigm is the so-called oracle function (OF), where one can pose a question to the system and the system will answer the question with regard to all theoretically possible states, without the necessity to run a (passive) simulation.
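The following Python sketch illustrates one way such an automated verification could look on top of the simulator sketch above: all states reachable from a start state are enumerated, and an NFR, expressed here as a predicate over states, is checked in every one of them. Both the breadth-first enumeration and the predicate form of an NFR are assumptions for illustration, not the actual AVP or oracle function of the ACI software.

```python
# A minimal sketch of an automated verification process (AVP):
# check whether a given NFR holds in every state reachable from S0.
from collections import deque
from typing import Callable, FrozenSet, List, Set, Tuple

State = FrozenSet[str]
Rule = Tuple[Callable[[State], bool], Callable[[State], State]]

def reachable_states(start: State, rules: List[Rule]) -> Set[State]:
    """Enumerate all states reachable from 'start' by applying rules one at a time."""
    seen: Set[State] = {start}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for applies, effect in rules:
            if applies(state):
                nxt = effect(state)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return seen

def verify_nfr(start: State, rules: List[Rule],
               nfr: Callable[[State], bool]) -> bool:
    """True iff the NFR holds in every theoretically reachable state."""
    return all(nfr(s) for s in reachable_states(start, rules))

# Example NFR (purely hypothetical facts): 'being safe' read here as
# 'the alarm is never off while the door is open'.
nfr_safe = lambda s: not ("door_open" in s and "alarm_off" in s)
start: State = frozenset({"door_open", "alarm_on"})
print(verify_nfr(start, [], nfr_safe))  # True
```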

If the size of the group is large and it is important that all members of the group have sufficiently similar knowledge about the problem(s) in question (as is usually the case in a city with different kinds of citizens), then it can be very helpful to enable interactive simulations or even games, which allow a more direct experience of the possible states and changes. Furthermore, because the participants can act according to their individual reflections and goals, the process becomes highly uncertain and nearly unpredictable. Especially for these highly unpredictable processes, interactive simulations (and games) can help to improve a common understanding of the involved factors and their effects. The difference between a normal interactive simulation and a game is that a game has explicit win-states whereas an interactive simulation does not. Explicit win-states can improve learning a lot.

The other interesting question is whether an actor story (AS) with a certain idea for an assistive actor (aA) is usable by the executive actors. This requires explicit measurements of usability, which in turn requires a clear norm of reference with which the behavior of an executive actor (eA) during a process can be compared. Usually the actor story as such is the norm of reference with which the observable behavior of the executing actors is compared. Thus, for the measurement one needs real executive actors who represent the intended executive actors, and one needs a physical realization of the intended assistive actor, called a mock-up. A mock-up is not yet the final implementation of the intended assistive actor but a physical entity which shows all important physical properties of the intended assistive actor in a way which allows a real test run. While in the past it has been assumed to be sufficient to test a test person only once, it is assumed here that a test person has to be tested at least three times. This follows from the assumption that every executive (biological) actor is inherently a learning system. This implies that the test person will behave differently in different tests. The degree of change can be a hint of the easiness and the learnability of the assistive actor.
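As a purely hypothetical operationalization of this measurement idea, the following Python sketch compares observed test runs with the actor story as the norm of reference and uses the change across at least three runs as a rough hint of learnability. The scoring scheme is an assumption made for illustration, not a validated usability metric.

```python
# Hypothetical usability measurement: reference = step sequence from the actor story,
# runs = observed step sequences of one test person over (at least) three test runs.
from typing import List, Sequence

def run_score(reference: Sequence[str], observed: Sequence[str]) -> float:
    """Fraction of reference steps reproduced by the observed run, in order."""
    idx = 0
    for step in observed:
        if idx < len(reference) and step == reference[idx]:
            idx += 1
    return idx / len(reference) if reference else 1.0

def learnability_hint(reference: Sequence[str], runs: List[Sequence[str]]) -> float:
    """Difference between the last and the first run's score: a rough hint of
    how quickly the assistive actor can be learned."""
    scores = [run_score(reference, r) for r in runs]
    return scores[-1] - scores[0]

reference = ["open_portal", "choose_date", "confirm"]
runs = [["open_portal", "confirm"],
        ["open_portal", "choose_date"],
        ["open_portal", "choose_date", "confirm"]]
print(learnability_hint(reference, runs))  # about 0.67: clear improvement across runs
```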

COLLECTIVE MEMORY

If appropriate ACI software is available, then one can consider an actor story as a simple theory (ST) embracing a model (M) and a collection of rules (R) — ST(x) iff x = &lt;M,R&gt; — which can be used as a kind of building block, which in turn can be combined with other such building blocks, resulting in a complex network of simple theories. If these simple theories are stored in a publicly available database (like a library of theories), then a large knowledge base can be built up over time.
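A minimal sketch of this building-block idea, under the same illustrative assumptions as the sketches above (model M as a set of facts, rules R as condition/effect pairs): two simple theories are combined into a larger one, and a plain dictionary stands in for the ‘library of theories’. A real library would of course need richer metadata (authors, versions, provenance, …).

```python
# Hypothetical representation of 'simple theories' ST = <M, R> as combinable building blocks.
from dataclasses import dataclass
from typing import Callable, FrozenSet, List, Tuple

State = FrozenSet[str]
Rule = Tuple[Callable[[State], bool], Callable[[State], State]]

@dataclass
class SimpleTheory:
    name: str
    model: State          # M: the facts assumed by this theory
    rules: List[Rule]     # R: the change rules of this theory

def combine(a: SimpleTheory, b: SimpleTheory, name: str) -> SimpleTheory:
    """Combine two building blocks into a larger theory by uniting M and R."""
    return SimpleTheory(name, a.model | b.model, a.rules + b.rules)

# A 'library of theories' can then simply be a dictionary of named building blocks.
library = {}
t1 = SimpleTheory("traffic", frozenset({"bus_line_exists"}), [])
t2 = SimpleTheory("housing", frozenset({"vacant_flats"}), [])
library["city_core"] = combine(t1, t2, "city_core")
print(library["city_core"].model)  # union of both models
```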