In this section several case studies are presented. They show how the DAAI paradigm can be applied to many different contexts. Since the original version of the DAAI theory from Jan 18, 2020 the concept has been developed further, centered on the concept of a Collective Man-Machine Intelligence [CM:MI], which now addresses any kind of expert for any kind of simulation-based development, testing, and gaming. Additionally the concept can now be associated with any kind of embedded algorithmic intelligence [EAI] (as distinct from the mainstream concept 'artificial intelligence'). The new concept can be used with every ordinary language; there is no need for any special programming language! Go back to the overall framework.
COLLECTION OF PAPERS
There exists only a loose order between the different papers, due to the character of this elaboration process: generally it is an experimental philosophical process. HMI Analysis applied to the CM:MI paradigm.
FROM DAAI to GCA. Turning Engineering into Generative Cultural Anthropology. This paper outlines how one can map the DAAI paradigm directly onto the GCA paradigm (April 19, 2020): case1-daai-gca-v1
A first GCA open research project [GCA-OR No.1]. This paper outlines a first open research project using the GCA. This will be the framework for the first implementations (May-5, 2020): GCAOR-v0-1
Engineering and Society. A Case Study for the DAAI Paradigm – Introduction. This paper illustrates important aspects of a cultural process by looking at the acting actors, where certain groups of people (experts of different kinds) can realize the generation, the exploration, and the testing of dynamical models as part of a surrounding society. Engineering is clearly not separated from society (April 9, 2020): case1-population-start-part0-v1
Bootstrapping some Citizens. This paper clarifies the set of general assumptions which can and should be presupposed for every kind of real-world dynamical model (April 4, 2020): case1-population-start-v1-1
Hybrid Simulation Game Environment [HSGE]. This paper outlines the simulation environment by combining a standard web-conference tool with an interactive web page of our own (May 23, 2020): HSGE-v2 (May 5, 2020): HSGE-v0-1
The Observer-World Framework. This paper describes the foundations of any kind of observer-based modeling or theory construction (July 16, 2020).
The last official update of the AAI theory dates back to Oct 2, 2018. Since that time many new thoughts have emerged and have been configured for further extensions and improvements. Here I try to give an overview of all the currently known aspects of the expanded AAI theory as a possible guide for the further elaboration of the main text.
CLARIFYING THE PROBLEM
Generally it is assumed that the AAI theory is embedded in a general systems engineering approach starting with the clarification of a problem.
Two cases will be distinguished:
A stakeholder is associated with a certain domain of affairs with some prominent aspect/parameter P, and the stakeholder wants to clarify whether P poses a 'problem' in this domain. This presupposes some explicated 'expectations' E of how it should be, and some 'findings' x pointing to the fact that P is 'sufficiently different' from some expected value y > x. If the stakeholder judges this difference to be 'important', then P matching x will be classified as a problem, which will be documented in a 'problem document D_p'. One can interpret this analysis as a 'measurement' M, written as M(P,E) = x with x < y.
Given a problem document D_p, a stakeholder invites some experts to find a 'solution' which transfers the old 'problem P' into a 'configuration S' which should at least 'minimize the problem P'. Thus there must exist some 'measurements' of the given problem P with regard to certain 'expectations E' functioning as a 'norm', M(P,E) = x, some measurements of the new configuration S with regard to the same expectations E, M(S,E) = y, and a metric which allows the judgment y > x.
From this it follows that already at the beginning of the analysis of a possible solution one has to refer to some measurement process M; otherwise there exists no problem P.
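A minimal sketch of this measurement logic in Python; the function name is_problem and the numeric values are invented for illustration and are not part of the theory's formal apparatus:

    # Sketch of the problem-measurement logic described above.
    def is_problem(x: float, y: float, judged_important: bool) -> bool:
        """M(P,E) = x against an expected value y: P counts as a problem
        iff x < y and the stakeholder judges this difference important."""
        return x < y and judged_important

    # Example: the expectations E fix y = 0.9, the findings give x = 0.6.
    if is_problem(x=0.6, y=0.9, judged_important=True):
        print("Record P in the problem document D_p")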
CHECK OF FRAMING CONDITIONS
The definition of a problem P presupposes a domain of affairs which has to be characterized in at least two respects:
A minimal description of an environment ENV of the problem P and
a list of so-called non-functional requirements (NFRs).
Within the environment it must be possible to identify at least one task T to be realized from some start state to some end state.
Additionally it must be possible to identify at least one executing actor A_exec doing this task and at least one assisting actor A_ass supporting the executing actor in fulfilling the task.
For the following analysis of a possible solution one can distinguish two strategies:
Top-down: There exists a group of experts EXPs which will analyze a possible solution, test it, and then propose it as a solution for others.
Bottom-up: There exists a group of experts EXPs too, but additionally there exists a group of customers CTMs who will be guided by the experts to use their own experience to find a possible solution.
ACTOR STORY (AS)
The goal of an actor story (AS) is a full specification of all identified necessary tasks T which lead from a start state q* to a goal state q+, including all possible and necessary changes between the different states.
A state is here considered as a finite set of facts (F) which are structured as expressions from some language L distinguishing names of objects (like 'd1', 'u1', …) as well as properties of objects (like 'being open', 'being green', …) or relations between objects (like 'the user stands before the door'). There can also be a 'negation' like 'the door is not open'. Thus a collection of facts like 'There is a door D1' and 'The door D1 is open' can represent a state.
Changes from one state q to another successor state q' are described by specifying the object whose action deletes previous facts or creates new facts.
In this approach at least three different modes of an actor story will be distinguished:
A pictorial mode generating a Pictorial Actor Story (PAS). In a pictorial mode the drawings represent the main objects with their properties and relations in an explicit visual way (like a Comic Strip).
A textual mode generating a Textual Actor Story (TAS): In a textual mode a text in some everyday language (e.g. English) describes the states and changes in plain prose. Because in the case of a written text the meaning of the symbols is hidden in the heads of the writers, it can be helpful to parallelize the written text with the pictorial mode.
A mathematical mode generating a Mathematical Actor Story (MAS): In the mathematical mode the pictorial and the textual modes are translated into sets of formal expressions forming a graph whose nodes are sets of facts and whose edges are labeled with change-expressions (see the sketch after this list).
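A small Python sketch of one such graph fragment, assuming a set-of-facts representation of states; the concrete facts and the helper apply_change are invented for illustration, not prescribed by the theory:

    # Illustrative MAS fragment: a state is a set of facts, a change
    # expression deletes old facts and creates new ones.
    start = frozenset({"door(D1)", "closed(D1)", "user(U1)", "before(U1,D1)"})

    # A change expression as a pair (facts to delete, facts to create).
    changes = {
        "open_door": (frozenset({"closed(D1)"}), frozenset({"open(D1)"})),
    }

    def apply_change(state, change):
        delete, create = change
        return frozenset((state - delete) | create)

    successor = apply_change(start, changes["open_door"])
    print(sorted(successor))  # the successor state: the door D1 is now open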
TASK INDUCED ACTOR-REQUIREMENTS (TAR)
If an actor story AS is completed, then one can infer from this story all the requirements which are directed at the executing as well as the assisting actors of the story. These requirements target the needed input and output behavior of the actors from a 3rd-person point of view (e.g. what kinds of perception are required, what kinds of motor reactions, etc.).
ACTOR INDUCED ACTOR-REQUIREMENTS (AAR)
Depending on the kinds of actors planned for the real work (biological systems, animals or humans; machines, different kinds of robots), one has to analyze the required internal structures of the actors needed to enable the required perceptions and responses. This has to be done from a 1st-person point of view.
ACTOR MODELS (AMs)
Based on the AARs one has to construct explicit actor models which fulfill the requirements.
USABILITY TESTING (UTST)
Using the actor model as a 'norm' for the measurement, one has to organize a 'usability test' in such a way that a real executing test actor with the required profile has to use a real assisting actor in the context of the specified actor story. Placed in a start state of the actor story, the executing test actor has to show that and how he will reach the defined goal state of the actor story. For this he has to use a real assisting actor, which usually is an experimental device (a mock-up) that allows the test of the story.
Because an executing actor is usually a 'learning actor', one has to repeat the usability test n times to see whether the learning curve approaches a minimum. In addition to such objective tests one should also organize an interview to get some judgments about the subjective states of the test persons.
SIMULATION
With increasing complexity of an actor story AS it becomes important to build a simulator (SIM) which can take as input the start state of the actor story together with all possible changes. The simulator can then compute, beginning with the start state, all possible successor states. In the interactive mode, participating actors will explicitly be asked to interact with the simulator.
Having a simulator, one can use it as part of a usability test to mimic the behavior of an assisting actor. This mode can also be used for training new executing actors.
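A minimal simulator sketch in Python along these lines, assuming states are sets of facts and changes are triples of preconditions, deletions, and creations; all names and the breadth-first strategy are assumptions of this sketch, not prescriptions of the theory:

    # Compute all states reachable from the start state by applying
    # every applicable change, breadth-first.
    from collections import deque

    def simulate(start, changes):
        """`changes` maps a label to a (preconditions, delete, create) triple."""
        seen, queue = {start}, deque([start])
        while queue:
            state = queue.popleft()
            for label, (pre, delete, create) in changes.items():
                if pre <= state:  # change applicable in this state
                    succ = frozenset((state - delete) | create)
                    if succ not in seen:
                        seen.add(succ)
                        queue.append(succ)
        return seen

    start = frozenset({"closed(D1)"})
    changes = {
        "open":  (frozenset({"closed(D1)"}), frozenset({"closed(D1)"}), frozenset({"open(D1)"})),
        "close": (frozenset({"open(D1)"}),   frozenset({"open(D1)"}),   frozenset({"closed(D1)"})),
    }
    print(simulate(start, changes))  # the two reachable states of this toy story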
A TOP-DOWN ACTOR STORY
The elaboration of an actor story will usually be realized in a top-down style: some AAI experts develop the actor story based on their experience and only ask for test persons once they have elaborated everything far enough to define some tests.
A BOTTOM-UP ACTOR STORY
In a bottom-up style the AAI experts collaborate from the beginning with a group of common users from the application domain. To do this they will (i) extract the knowledge which is distributed among the different users, then (ii) start some modeling from these different facts to (iii) enable some basic simulations. This simple simulation (iv) will be enhanced to an interactive simulation which allows serious gaming, either (iv.a) to test the model or (iv.b) to enable the users to learn the space of possible states. The test case will (v) generate some data which can be used to evaluate the model with regard to pre-defined goals. Depending on these findings, (vi) one can try to improve the model further.
THE COGNITIVE SPACE
To be able to construct executing as well as assisting actors which are close to the way human persons communicate, one has to set up actor models which are as close as possible to the human style of cognition. This requires the analysis of phenomenal experience as well as psychological behavior, together with the analysis of the needed neuro-physiological structures.
STATE DYNAMICS
To model in an actor story the possible changes from one given state to another (or to many successor states), one eventually needs, besides explicit deterministic changes, different kinds of random rules together with adaptive rules or decision-based behavior depending on a whole network of changing parameters.
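A small Python sketch of how deterministic, random, and parameter-dependent rules could be mixed in one state dynamics; the concrete rule shapes, the integer state, and the parameter names are invented for illustration:

    # Three kinds of transition rules acting on a toy integer state.
    import random

    def deterministic_rule(state):
        return state + 1                       # always the same successor

    def random_rule(state):
        return state + random.choice([-1, 1])  # stochastic successor

    def adaptive_rule(state, params):
        # decision-based behavior depending on a changing parameter network
        return state * 2 if params["mode"] == "grow" else state - 1

    state, params = 0, {"mode": "grow"}
    state = deterministic_rule(state)
    state = random_rule(state)
    state = adaptive_rule(state, params)
    print(state)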
1 History: From HCI to AAI
2 Different Views
3 Philosophy of the AAI-Expert
4 Problem (Document)
5 Check for Analysis
6 AAI-Analysis
6.1 Actor Story (AS)
6.1.1 Textual Actor Story (TAS)
6.1.2 Pictorial Actor Story (PAS)
6.1.3 Mathematical Actor Story (MAS)
6.1.4 Simulated Actor Story (SAS)
6.1.5 Task Induced Actor Requirements (TAR)
6.1.6 Actor Induced Actor Requirements (AAR)
6.1.7 Interface-Requirements and Interface-Design
6.2 Actor
6.2.1 Actor and Actor Story
6.2.2 Actor Model
6.2.3 Actor as Input-Output System
6.2.4 Learning Input-Output Systems
6.2.5 General AM
6.2.6 Sound Functions
6.2.7 Special AM
6.2.8 Hypothetical Model of a User – The GOMS Paradigm
6.2.9 Example: An Electronically Locked Door
6.2.10 A GOMS Model Example
6.2.11 Further Extensions
6.2.12 Design Principles; Interface Design
6.3 Simulation of Actor Models (AMs) within an Actor Story (AS)
6.4 Assistive Actor-Demonstrator
6.5 Approaching an Optimum Result
7 What Comes Next: The Real System
7.1 Logical Design, Implementation, Validation
7.2 Conceptual Gap In Systems Engineering?
8 The AASE-Paradigm
References
Abstract
This text is based on the paper "AAI – Actor-Actor Interaction. A Philosophy of Science View" from 3 Oct 2017 and version 11 of the paper "AAI – Actor-Actor Interaction. An Example Template", and it transforms these views into the new paradigm 'Actor-Actor Systems Engineering', understood as a theory as well as a paradigm for an infinite set of applications. In analogy to the slogan 'Object-Oriented Software Engineering (OO SWE)' one can understand the new acronym AASE as a systems engineering approach where actor-actor interactions are the base concepts for the whole engineering process. Furthermore it is a clear intention to view the topic AASE explicitly from the point of view of a theory (as understood in philosophy of science) as well as from the point of view of possible applications (as understood in systems engineering). Thus the classical term Human-Machine Interaction (HMI), or even the older Human-Computer Interaction (HCI), is now embedded within the new AASE approach. The same holds for the fuzzy discipline of Artificial Intelligence (AI) and the subset of AI called Machine Learning (ML). Although the AASE approach is completely at its beginning, one can already see how powerful this new conceptual framework is.
eJournal: uffmm.org, ISSN 2567-6458 16.March 2018 Email: info@uffmm.org Gerd Doeben-Henisch Email: gerd@doeben-henisch.de Frankfurt University of Applied Sciences (FRA-UAS) Institut for New Media (INM, Frankfurt)
I A Vision as a Problem to be Solved
II Language, Meaning & Ontology
II-A Language Levels
II-B Common Empirical Matter
II-C Perceptual Levels
II-D Space & Time
II-E Different Language Modes
II-F Meaning of Expressions & Ontology
II-G True Expressions
II-H The Congruence of Meaning
III Actor Algebra
IV World Algebra
V How to continue
VI References
Abstract
As preparation for this text one should read the chapter about the basic layout of an Actor-Actor Analysis (AAA) as part of a systems engineering process (SEP). In this text it is described which internal conditions one has to assume for an actor who uses a language to talk about his observations of the world to someone else in a verifiable way. Topics explained in this text are e.g. 'language', 'meaning', 'ontology', 'consciousness', 'true utterance', 'synonymous expression'.
1 Problem
2 AAI-Check
3 Actor-Story (AS)
3.1 AS as a Text
3.2 Translation of a Textual AS into a Formal AS
3.3 AS as a Formal Expression
3.4 Translation of a Formal AS into a Pictorial AS
4 Actor-Model (AM)
4.1 AM for the User as a Text
4.2 AM for the System as a Text
5 Combined AS and AM as a Text
5.1 AM as an Algorithm
6 Simulation
6.1 Simulating the AS
6.2 Simulating the AM
6.3 Simulating AS with AM
7 Appendix: Formalisms
7.1 Set of Strings
7.2 Predicate Language
8 Appendix: The Meaning of Expressions
8.1 States
8.2 Changes by Events
Abstract
Following the general concepts of the paper "AAI – Actor-Actor Interaction. A Philosophy of Science View" from 3 Oct 2017, this paper illustrates a simple application where the difference as well as the interaction between an actor story and several actor models is shown. The details of interface design as well as usability testing are not part of this example. (This example replaces the paper with the title "AAI – Case Study Actor Story with Actor Model. Simple Grid-Environment" from 15 Nov 2017.) One special point is the meaning of the formal expressions of the actor story.
Clearly, one can debate whether a 'toy example' makes sense, but the complexity of the concepts in this AAI approach is too great to illustrate them at the beginning with a realistic example without losing the idea. The author of the paper has tried many, also very advanced, versions in the last years, and this is the first time that he himself has the feeling that at least the idea is now clear enough. And from teaching students it is very clear: if you cannot explain an idea in a toy example, you will never be able to apply it to real big problems.
eJournal: uffmm.org, ISSN 2567-6458, 09.Oct 2017 – April 9, 2022, 13:30 h Email: info@uffmm.org Author: Gerd Doeben-Henisch Email: gerd@doeben-henisch.de
Remark April 2022
This post from Oct 2017 will be reviewed in the new conceptual framework of an Applied Empirical Theory [AET] with an additional Dynamic Format [DF]. For more details see HERE.
OVERVIEW
A short story telling you (i) how we interface the intelligent machines (IM) part with the actor-actor interaction (AAI) part, (ii) a first working definition of intelligent machines (IM) in this text, and (iii) how one can define and measure intelligence.
IM WITHIN AAI
In this blog we see IM not isolated, as a stand-alone endeavor, but as embedded in a discipline called actor-actor interaction (AAI) (later called DAAI := Distributed Actor-Actor Interaction). AAI investigates complex tasks and examines how different kinds of actors interact in these contexts with technical systems. As far as the participating systems are technical systems, one speaks here of a system interface (SI) as that part of a technical system which interacts with the human actor. In the case of biological systems (mostly humans, but it could be animals as well), one speaks of the user interface (UI). In this text we generalize both cases by the general concept of an actor, biological and non-biological, which has some actor interface (ActI), and this actor interface embraces all properties which are relevant for the interactions of the actor.
For the analysis of the behavior of actors in such task-environments one can distinguish two important concepts: the actor story (AS) describing the context as an observable process, as well as different actor models (AM). Actor models are special extensions of an actor story because an actor model describes the observable behavior of actors as a behavior function (BF) with a set of assumptions about possible internal states of the actors. The assumptions about possible internal states (IS) are either completely arbitrary or empirically motivated.
The embedding of IM within AAI can be realized through the concepts of an actor model (AM) and an actor story (AS). Whatever is important for something which is called an intelligent machine application (IMA) can be defined as an actor model within an actor story. This embedding of IM within AAI offers many advantages.
This has to be explained with some more details.
An Intelligent Machine (IM) in an Actor Story
Let us assume that there exists a mathematical graph representation of an actor story, written AS_{L_{ε}}. Such a graph has nodes which represent situations. Formally these are sets of properties, possibly structured more finely by subsets which represent the different kinds of actors embedded in this situation as well as different kinds of non-actors.
Actors can be classified (as introduced above) as either biological actors (BA) or non-biological actors (NBA). Both kinds of actors can, in another reading, be subsumed under the general term of input-output systems (IO-SYS). An input-output system can be a learning system or a non-learning one. Another basic property is that of being intelligent or non-intelligent. Being a learning system and being an intelligent system are usually strongly connected, but this need not be so: being a learning system can be associated with being non-intelligent, and being intelligent can be connected with being non-learning. (cf. Figure 1)
While biological systems are always learning and intelligent, one can find non-biological systems of all types: non-learning and non-intelligent, non-intelligent and learning, non-learning and intelligent, and learning and intelligent.
Learning System
To classify a system as a learning system requires the general ability to change the behavior of this system in time such that there exists a time-span (t1, t2) after which the behavior in response to certain critical stimuli has changed compared to the time before. [1] From this requirement it follows that a learning system is an input-output system with at least one internal state which can change. Thus we have the general assumption:
Def: Learning System (LS)
LS(x) iff x = ⟨I, O, IS, φ⟩
φ : I × IS → IS × O
I := Input
O := Output
IS := Internal states
Some x is a learning system (LS) if it is a structure containing sets for input (I), output (O), as well as internal states (IS). These sets are related by a behavior function φ which maps inputs and current internal states to new internal states as well as outputs. The set of possible learning functions is infinite.
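A minimal Python sketch of one such learning function under the definition above; this particular φ (a counter whose response changes after enough stimuli) is only one invented member of the infinite set of possible learning functions:

    # phi : I x IS -> IS x O, here with integer internal states and
    # string inputs/outputs chosen purely for illustration.
    def phi(inp: str, internal_state: int) -> tuple[int, str]:
        """Map an input and the current internal state to a new internal
        state and an output."""
        new_state = internal_state + 1 if inp == "stimulus" else internal_state
        output = "new response" if new_state > 3 else "old response"
        return new_state, output

    state = 0
    for t in range(5):
        state, out = phi("stimulus", state)
        print(t, out)  # the response changes after a time-span (t1, t2)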
Intelligent System
The terms 'intelligent' and 'intelligence' are until now not standardized. This means that everybody uses them a little bit arbitrarily.
In this text we take the basic idea of a scientific usage of the term 'intelligence' from experimental psychology, which has developed clearly defined operational concepts since the end of the 19th century, concepts which have proved quite stable in their empirical applications. [2a,b,c]
The central idea of the psychological concept of 'intelligence' is to associate the usage of the term with the observable behavior of those actors which shall be classified according to defined methods of measurement.
In the case of experimental psychology the actors have been biological systems, mainly humans; in the first years of the research, school children of certain ages. Because nobody knew what 'intelligence' means 'as such', one agreed to accept the observable behavior of children in certain task environments as 'manifestations' of a 'presupposed unknown intelligence'. Thus the ability of children to solve defined tasks in a certain defined manner became a norm for what is called 'intelligence'. Solving the tasks in a certain time with less than a certain number of errors was used as a 'baseline', and all behavior deviating from the baseline was 'better' or 'poorer'.
Thus the ‘content’ of the ‘meaning’ of the term ‘intelligence’ has been delegated to historical patterns of behavior which were common in a certain time-span in a certain geographical and cultural region.
While these behavior patterns can change during the course of time the general method of measurement is invariant.
In the time since then experimental psychology has modified and elaborated this first concept in some directions.
One direction is the modification of the kind of tasks used for the tests. With regard to the cultural context one has modified the content, thereby looking for those kinds of tasks which seem to be 'invariant' with regard to the presupposed intelligence factor. This is an ongoing process.
The other direction is the focus on the actors as such. Because biological systems like humans change the development of their intelligence with age, one has tried to find 'typical tasks for every age'. This too is an ongoing process.
This history of experimental psychology gives very interesting examples of how one can approach the problem of the usage and the measurement of some X which we call 'intelligence'.
In the context of an AAI-approach we have not only biological systems, but also non-biological systems. Thus most of the elaborated parameters of psychology for human actors are not general enough.
One possible strategy to generalize the intelligence paradigm of experimental psychology could be to 'free' the selection of task sets from the narrow human cultures of the past and require only 'clearly defined task sets with defined interfaces and defined contexts'. All these task sets can be arranged either in one super-set or in a parameterized field of sets. The sum of all these sets then defines a space of possible behavior and, associated with this, a space of possible measurable intelligence.
A task has then to be given as an actor story according to the AAI-paradigm. Such a specified actor story allows the formal definition of a complexity measure which can be used to measure the ‘amount of intelligence necessary to solve such a task’.
With such a more general and extendable approach to the measurement of observable intelligence one can compare all kinds of systems with each other. One can further show objectively where biological and non-biological systems differ in general, where they are similar, and to which extent they differ with regard to concrete circumstances.
Measuring Intelligence by Actor Stories
Presupposing actor stories (AS) (ideally formalized as mathematical graphs), one can define a first operational general measurement of intelligence.
Def: Task-Intelligence of a task τ (TInt(τ))
Every defined task τ represents a graph g with one shortest path pmin(τ) = π_min from a start node to a goal node.
Every such shortest path π_min has a certain number of nodes, path-nodes(π_min) = ν.
To every task a maximal duration Δ_max is attached; all nodes which are solved within this maximal duration Δ_max are declared 'solved', all others 'unsolved'.
The number of solved nodes ν_solved is related to the total number of nodes ν as the ratio ν_solved/ν. We take TInt(τ) = ν_solved/ν. It follows that TInt(τ) lies between 0 and 1: 0 ≤ TInt(τ) ≤ 1.
The usual case will require more than one task to be realized. Thus we introduce the concept of a task field (TF).
Def: Task-Field of type x (TF_x); Def: Task-Field Intelligence (TFInt)
A task field TF of type x includes a finite set of individual tasks, TF_x = { τ_{x.1}, τ_{x.2}, …, τ_{x.n} } with n ≥ 2. The sum of all individual task intelligence values TInt(τ_{x.i}) is normalized by the number of tasks, i.e. TFInt(TF_x) = (TInt(τ_{x.1}) + TInt(τ_{x.2}) + … + TInt(τ_{x.n}))/n (with n ≥ 2 the denominator can never be 0). Thus the intelligence value of a task field of type x, TFInt(TF_x), is again in the domain [0,1].
Because the different tasks in a task field TF can be of different difficulty, it should be possible to introduce some weighting for the individual task intelligence values. This would not change the general mechanism.
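A worked Python sketch of the two measures just defined; the function names t_int and tf_int and all numbers are invented for illustration ('solved' means solved within the maximal duration Δ_max):

    def t_int(nodes_total: int, nodes_solved: int) -> float:
        """TInt(tau) = nu_solved / nu, always in [0, 1]."""
        return nodes_solved / nodes_total

    def tf_int(task_values: list[float]) -> float:
        """TFInt(TF_x): mean of the individual TInt values, n >= 2."""
        assert len(task_values) >= 2
        return sum(task_values) / len(task_values)

    # A shortest path with 5 nodes of which 4 are solved within D_max:
    t1 = t_int(5, 4)         # 0.8
    t2 = t_int(10, 10)       # 1.0
    print(tf_int([t1, t2]))  # 0.9 -> intelligence value of this task field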
Def: Combined Task-Fields (TF)
In the face of the huge variety of possible task fields in this world it can make sense to introduce more general layers by grouping task fields of different types together into larger combined fields, like TF_{x,…,z} = TF_x ∪ TF_y ∪ … ∪ TF_z. The task field intelligence TFInt of such combined task fields is computed as before.
Def: Omega Task-Field at time t (TF_{ω}(t))
The most comprehensive assembly of such combinations shall here be called the Omega task field at time t, TF_ω(t). This indicates the known maximum of intelligence measurements at that point in time.
Measurement Comments
With these assumptions the term intelligence is restricted to clearly defined domains: an individual task, a task field of type x, some grouped task fields, or the actual Omega task field. In every such domain the intelligence value lies in the realm [0,1], or written as some value between 0 and 100%.
Independent of the type of an actor — biological or not — one can measure the intelligence of such an actor with the same domains of defined tasks. As a result one can easily compare all known actors with regard to such defined task domains.
Because the acting actors can be quite different in their input-output capabilities, it follows that every actor has to organize some interface which enables him to use the defined task. There are no special restrictions on the format of such an interface, but there is one requirement which has to be observed strictly: the interface as such is not allowed to do any kind of computation beyond providing the necessary input from the task domain or providing the necessary output to the domain. Only then are the different tests able to reveal differences between the different actors.
If the tests show differences between certain types of actors with regard to a certain task or task field, then this is a chance to develop smart assistive interfaces which can help the actor in question to overcome his weakness compared to the other type of actor. Thus this kind of measuring intelligence can be a strong supporter of a better world in the future.
Another consequence of differing intelligence values can be to look at the inner structure of an actor with weaker values and ask how one could improve his capabilities. This can be done e.g. by different kinds of training or by improving his system structures.
COMMENTS
[1] Sara J. Shettleworth, Biological Approaches to the Study of Learning, pp. 185–219, in: N.J. Mackintosh (Ed.), Animal Learning and Cognition, Academic Press, San Diego, New York, London et al., 1994
[2a] Ernest R. Hilgard, Rita L. Atkinson, Richard C. Atkinson, Introduction to Psychology, 7th ed., Harcourt Brace Jovanovich, Inc., New York, San Diego, Chicago et al., 1979
On the cover page of this blog you find a first general view of the subject matter of an integrated engineering approach for the future. Here we give a short description of the main idea of the analysis phase of systems engineering and how this will be realized within the actor-actor interaction paradigm as described in this text.
INTRODUCTION
As you can see in figure no. 1, the following main topics appear within the Actor-Actor Interaction (AAI) paradigm as used in this text (comment: the more traditional formula is known as Human-Machine Interaction (HMI)):
Triggered by a problem document D_p from the problem phase (P) of the engineering process, the AAI experts have to analyze what the potential requirements following from this document are, all the time also communicating with the stakeholder to keep in touch with the hidden intentions of the stakeholder.
The idea is to identify at least one task (T) with at least one goal state (G) which shall be reached after running a task.
A task is assumed to represent a sequence of states (at least a start state and a goal state) which can have more than one option in every state, not excluding repetitions.
Every task presupposes some context (C) which gives the environment for the task.
The number of tasks and their length are in principle not limited, but there can be certain constraints (CS) which have to be fulfilled, required by the stakeholder or by some other important rules/laws. Such constraints will probably limit the number of tasks as well as their length.
Actor Story
Every task, as a sequence of states, can be viewed as a story which describes a process. A story is a text (TXT) which is static and hides the implicit meaning in the brains of the participating actors. Only if an actor has some (learned) understanding of the used language is the actor able to translate the perceptions of the process into an appropriate text, and vice versa the text into corresponding perceptions, or equivalently 'thoughts' representing the perceptions.
In this text it is assumed that a story describes only the observable behavior of the participating actors, not their possible internal states (IS). To describe the internal states (IS) it is further assumed that one writes a new text called an actor model (AM). The usual story is called an actor story (AS). Thus the actor story (AS) is the environment for the actor models (AM).
In this text three main modes of actor stories are distinguished:
An actor story written in some everyday language L_0 called AS_L0 .
A translation of the everyday language L_0 into a mathematical language L_math which can represent graphs, called AS_Lmath.
A translation of the hidden meaning which resides in the brains of the AAI-experts into a pictorial language L_pict (like a comic strip), called AS_Lpict.
To make the relationship between the graph version AS_Lmath and the pictorial version AS_Lpict visible one needs an explicit mapping Int from one version into the other, like: Int : AS_Lmath <—> AS_Lpict. This mapping Int works like a lexicon from one language into another.
From a philosophy of science point of view one has to consider that the different kinds of actor stories have a meaning which is rooted in the intended processes assumed to be necessary for the realization of the different tasks. The processes as such are dynamic, but the stories as such are static. Thus a stakeholder (SH) or an AAI expert who wants to get some understanding of the intended processes has to rely on his internal brain simulations associated with the meaning of these stories. Because every actor has his own internal simulation, which cannot be perceived by the other actors, there is some probability that the simulations of the different actors differ. This can cause misunderstandings, errors, and frustrations. (Comment: this problem has been discussed in [DHW07].)
One remedy to minimize such errors is the construction of automata (AT) derived from the math mode AS_Lmath of the actor stories. Because the math mode represents a graph, one can derive from this version directly (and automatically), by a derivation operation Der, the description of an automaton which can completely simulate the actor story; thus one can assume Der(AS_Lmath) = AT_AS_Lmath.
But from the point of view of philosophy of science this derived automaton AT_AS_Lmath is still only a static text. This text describes the potential behavior of an automaton AT. Taking a real computer (COMP), one can feed this real computer with the description of the automaton AT_AS_Lmath and make the real computer behave like the described automaton. If we do this, then we have a real simulation (SIM) of the theoretical behavior of the theoretical automaton AT, realized by the real computer COMP. Thus we have SIM = COMP(AT_AS_Lmath). (Comment: these ideas have been discussed in [EDH11].)
Such a real simulation is dynamic and visible for everybody. All participating actors can see the same simulation, and if there is some deviation from the intention of the stakeholder, this becomes perceivable for everybody immediately.
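A minimal Python sketch of this chain Der(AS_Lmath) = AT_AS_Lmath and SIM = COMP(AT_AS_Lmath), under the assumption that the math mode is given as a simple adjacency structure; the toy story content and all function names are invented for illustration:

    # AS_Lmath as a graph: situations as nodes, change labels as edges.
    actor_story_graph = {
        "q_start": {"press_button": "q_open"},
        "q_open":  {"walk_through": "q_goal"},
        "q_goal":  {},
    }

    def derive_automaton(graph):
        """Der: read the graph directly as a transition function delta(q, a) -> q'."""
        return lambda state, action: graph[state].get(action)

    def run_simulation(delta, start, actions):
        """COMP(AT_AS_Lmath): execute the automaton step by step, visibly."""
        state = start
        for a in actions:
            nxt = delta(state, a)
            print(state, "--" + a + "-->", nxt)
            state = nxt
        return state

    delta = derive_automaton(actor_story_graph)
    run_simulation(delta, "q_start", ["press_button", "walk_through"])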
Actor Model
As mentioned above the actor story (AS) describes only the observable behavior of some actor, but not possible internal states (IS) which could be responsible for the observable behavior.
If necessary, it is possible to define an individual actor model for every actor; indeed one can define more than one model to explore the possibilities of different internal structures enabling a certain behavior.
The general pattern of actor models follows in this text the concept of input-output systems (IOSYS), which are in principle able to learn. What the term 'learning' designates concretely will be explained in later sections. The same holds for the terms 'intelligent' and 'intelligence'.
The basic assumptions about input-output systems used here read as follows:
Def: Input-Output System (IOSYS)
IOSYS(x) iff x = ⟨I, O, IS, φ⟩
φ : I × IS → IS × O
I := Input
O := Output
IS := Internal states
As in the case of the actor story (AS), the primary descriptions of actor models (AM) are static texts. To make the hidden meanings of these descriptions 'explicit', 'visible', one has again to convert the static texts into descriptions of automata, which can be fed into real computers which in turn simulate the behavior of these theoretical automata as a real process.
Combining the real simulation of an actor story with the real simulations of all the participating actors described in the actor models can show a dynamic, impressive process which is fully visible to all collaborating stakeholders and AAI experts.
Testing
Having all actor stories and actor models at hand, ideally implemented as real simulations, one has to test the interaction of the elaborated actors with real actors, which are intended to work within these explorative stories and models. This is done by actor tests (formerly: usability tests) where (i) real actors are confronted with real tasks and have to perform in the intended way, and (ii) real actors are interviewed with questionnaires about their subjective feelings during their task completion.
Every such test will yield some new insights on how to change the settings a bit to eventually gain some improvements. Repeating these cycles of designing, testing, and modifying can generate a finite set of test results T where possibly one subset is the 'best' compared to all the others. This can give some security that this design is probably the 'relatively best design' with regard to T.
[DHW07] G. Doeben-Henisch and M. Wagner. Validation within safety critical systems engineering from a computation semiotics point of view. Proceedings of the IEEE Africon2007 Conference, pages 1–7, 2007.
[EDH11] Louwrence Erasmus and Gerd Doeben-Henisch. A theory of the system engineering process. In ISEM 2011 International Conference. IEEE, 2011.