Update 4.July 2018 (Chapter 4 Actor Model; improving the terminology of environments with actors, actors as input-output systems, basic and real interface, a first typology of input-output systems…)
Update 20.July 2018 (Disentanglement of chapter ‘Simulation & Verification’ into two independent chapters; corrections in the chapter ‘Introduction’; corrections in chapter ‘AAI Analysis’; extracting ‘Simulation’ from chapter ‘Actor Story’ to new chapter ‘Simulation’; New chapter ‘Simulation’; Rewriting of chapter ‘Looking Forward’)
Update 22.July 2018 (Rewriting the beginning of the chapter ‘Actor Story (AS)’, not completed; converting chapter ‘AS+AM Summary’ to ‘AS and AM Philosophy’, not completed)
Update 23.July 2018 (Attaching a new chapter with a Case Study illustrating an actor story (AS). This case study is still unfinished. It is a case study of a real project!)
Update 8.August 2018 (Modifying chapter AS as Text, Comic, Graph; especially section about the textual mode and the pictorial mode; first sketch for a mapping from the textual mode into the pictorial mode)
Update 9.August 2018 (Modification of the section ‘Mathematical Actor Story (MAS)’ in chapter 4)
Update 11.August 2018 (Improving chapter 3 ‘Actor Story’; nearly complete rewriting of chapter 4 ‘AS as text, comic, graph’)
Update 13.August 2018 (I am still caught up in chapters 3+4. In chapter 3 the cognitive structure of the actors has been further enhanced; in chapter 4 a complete example of a mathematical actor story could now be attached.)
Update 14.August 2018 (minor corrections to chapters 4 + 5; change-statements define for each state individual combinatorial spaces (a little bit like a quantum state); whether and how these spaces will be concretized/realized depends completely on the participating actors)
Update 15.August 2018 (Canceled the appendix with the case study stub and replaced it with an overview for a supporting software tool which is needed for the real usage of this theory. At the moment it is open who will write the software.)
Update 2.October 2018 (Configuring the whole book now with 3 parts: I. Theory, II. Application, III. Software. Gerd has his focus on part I, Zeynep will focus on part II, and ‘somebody’ will focus on part III (in the worst case we will nevertheless have a minimal version :-)). For a first quick overview of everything read the ‘Preface’ and the ‘Introduction’.)
Update 4.November 2018 (Rewriting the Introduction (and some minor corrections in the Preface). The idea of the rewriting was to address all the topics which will be discussed in the book and to point out the logical connections between them. This introduces some wrong links in the following chapters, which are not yet updated. Some chapters are still completely missing. But improving the clarity of the focus and the logical inter-dependencies helps a lot in elaborating the missing texts. Another change concerns the wording of the title. Until now it has been difficult to find a title which exactly matches the content. The new proposal shows the focus ‘AAI’ but lists the keywords of the main topics within AAI analysis, because these topics are usually not necessarily associated with AAI.)
ACTOR-ACTOR INTERACTION [AAI]. An Actor Centered Approach to Problem Solving. Combining Engineering and Philosophy
by
GERD DOEBEN-HENISCH in cooperation with LOUWRENCE ERASMUS, ZEYNEP TUNCER
PRE-VIEW: NEW EXPANDED AAI THEORY, 23 January 2019: Outline of the new expanded AAI paradigm. Before the main text is rewritten with these ideas, the new advanced AAI theory will first be tested during the summer of 2019 in a lecture with student teams as well as in several workshops outside the Frankfurt University of Applied Sciences with members of different institutions.
1 History: From HCI to AAI
2 Different Views
3 Philosophy of the AAI-Expert
4 Problem (Document)
5 Check for Analysis
6 AAI-Analysis
6.1 Actor Story (AS)
6.1.1 Textual Actor Story (TAS)
6.1.2 Pictorial Actor Story (PAT)
6.1.3 Mathematical Actor Story (MAS)
6.1.4 Simulated Actor Story (SAS)
6.1.5 Task Induced Actor Requirements (TAR)
6.1.6 Actor Induced Actor Requirements (UAR)
6.1.7 Interface-Requirements and Interface-Design
6.2 Actor
6.2.1 Actor and Actor Story
6.2.2 Actor Model
6.2.3 Actor as Input-Output System
6.2.4 Learning Input-Output Systems
6.2.5 General AM
6.2.6 Sound Functions
6.2.7 Special AM
6.2.8 Hypothetical Model of a User – The GOMS Paradigm
6.2.9 Example: An Electronically Locked Door
6.2.10 A GOMS Model Example
6.2.11 Further Extensions
6.2.12 Design Principles; Interface Design
6.3 Simulation of Actor Models (AMs) within an Actor Story (AS)
6.4 Assistive Actor-Demonstrator
6.5 Approaching an Optimum Result
7 What Comes Next: The Real System
7.1 Logical Design, Implementation, Validation
7.2 Conceptual Gap In Systems Engineering?
8 The AASE-Paradigm
References
Abstract
This text is based on the paper “AAI – Actor-Actor Interaction. A Philosophy of Science View” from 3 Oct 2017 and on version 11 of the paper “AAI – Actor-Actor Interaction. An Example Template”, and it transforms these views into the new paradigm ‘Actor-Actor Systems Engineering’ (AASE), understood as a theory as well as a paradigm for an infinite set of applications. In analogy to the slogan ‘Object-Oriented Software Engineering (OO SWE)’ one can understand the new acronym AASE as a systems engineering approach where actor-actor interactions are the base concepts of the whole engineering process. Furthermore, it is a clear intention to view the topic AASE explicitly from the point of view of a theory (as understood in Philosophy of Science) as well as from the point of view of possible applications (as understood in systems engineering). Thus the classical term Human-Machine Interaction (HMI), and even the older Human-Computer Interaction (HCI), is now embedded within the new AASE approach. The same holds for the fuzzy discipline of Artificial Intelligence (AI) and the subset of AI called Machine Learning (ML). Although the AASE approach is still at its very beginning, one can already see how powerful this new conceptual framework is.
eJournal: uffmm.org, ISSN 2567-6458, 09 Oct 2017 – April 9, 2022, 13:30 h
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de
Remark April 2022
This post from Oct 2017 will be reviewed in the new conceptual framework of an Applied Empirical Theory [AET] with an additional Dynamic Format [DF]. For more details see HERE.
OVERVIEW
A short overview telling you (i) how we interface the intelligent machines (IM) part with the actor-actor interaction (AAI) part, (ii) which first working definition of intelligent machines (IM) is used in this text, and (iii) how intelligence is defined and how one can measure it.
IM WITHIN AAI
In this blog we see IM not in isolation, as a stand-alone endeavor, but as embedded in a discipline called actor-actor interaction (AAI) (later called DAAI := Distributed Actor-Actor Interaction). AAI investigates complex tasks and how different kinds of actors interact with technical systems in these contexts. As long as the participating systems are technical systems, one speaks of a system interface (SI): that part of a technical system which interacts with the human actor. In the case of biological systems (mostly humans, but possibly animals as well) one speaks of the user interface (UI). In this text we generalize both cases with the concept of an actor (biological as well as non-biological) which has some actor interface (ActI); this actor interface embraces all properties which are relevant for the interactions of the actor.
For the analysis of the behavior of actors in such task environments one can distinguish two important concepts: the actor story (AS), describing the context as an observable process, and the different actor models (AM). Actor models are special extensions of an actor story because an actor model describes the observable behavior of an actor as a behavior function (BF) together with a set of assumptions about possible internal states (IS) of the actor. These assumptions about possible internal states are either completely arbitrary or empirically motivated.
The embedding of IM within AAI can be realized through the concepts of an actor model (AM) and an actor story (AS). Whatever is important for something which is called an intelligent machine application (IMA) can be defined as an actor model within an actor story. This embedding of IM within AAI offers many advantages.
This has to be explained in some more detail.
An Intelligent Machine (IM) in an Actor Story
Let us assume that there exists a mathematical graph representation of an actor story, written as AS_{L_{math}}. Such a graph has nodes which represent situations. Formally these are sets of properties, possibly refined into subsets which represent the different kinds of actors embedded in this situation as well as the different kinds of non-actors.
Actors can be classified (as introduced above) as either biological actors (BA) or non-biological actors (NBA). Both kinds of actors can, in another reading, be subsumed under the general term of input-output systems (IO-SYS). An input-output system can be a learning or a non-learning system. Another basic property is that of being intelligent or non-intelligent. Being a learning system and being an intelligent system are usually strongly connected, but this need not be the case: being a learning system can be combined with being non-intelligent, and being intelligent can be combined with being non-learning. (cf. Figure 1)
While biological systems are always learning and intelligent, one can find non-biological systems of all four types: non-learning and non-intelligent, learning and non-intelligent, non-learning and intelligent, and learning and intelligent.
Learning System
To classify a system as a learning system requires the general ability of this system to change its behavior over time, such that there exists a time-span (t1,t2) after which the behavior as a response to certain critical stimuli has changed compared to the time before. [1] From this requirement it follows that a learning system is an input-output system with at least one internal state which can change. Thus we have the general assumption:
Def: Learning System (LS)

LS(x) iff x = <I, O, IS, φ>

φ : I × IS → IS × O

I := Input
O := Output
IS := Internal states

Some x is a learning system (LS) if it is a structure containing sets for input (I), output (O), and internal states (IS). These sets are connected by a behavior function φ which maps inputs and the current internal states to new internal states and outputs. The set of possible learning functions is infinite.
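To make this structure concrete, here is a minimal Python sketch; the class and method names are illustrative assumptions, not part of the theory. Learning shows up as a change of the internal state IS which alters the later response to the same stimulus:

```python
# Minimal sketch of a learning system x = <I, O, IS, phi>.
# All names are illustrative; the theory fixes only the structure.

from typing import Dict

class LearningSystem:
    def __init__(self) -> None:
        # IS: the internal state counts how often each input has been seen.
        self.internal_state: Dict[str, int] = {}

    def phi(self, inp: str) -> str:
        """Behavior function phi : I x IS -> IS x O.
        Updates the internal state and produces an output."""
        count = self.internal_state.get(inp, 0) + 1
        self.internal_state[inp] = count   # IS changes: the system 'learns'
        # O: the response to the same stimulus differs before and after learning.
        return "known" if count > 1 else "new"

ls = LearningSystem()
print(ls.phi("stimulus-A"))  # 'new'    (behavior at time t1)
print(ls.phi("stimulus-A"))  # 'known'  (changed behavior at time t2 > t1)
```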
Intelligent System
The terms ‘intelligent’ and ‘intelligence’ have not been standardized until now. This means that everybody uses them a little bit arbitrarily.
In this text we take the basic idea of a scientific usage of the term ‘intelligence’ from experimental psychology, which has developed clearly defined operational concepts since the end of the 19th century, concepts which have proved quite stable in their empirical applications. [2a,b,c]
The central idea of the psychological usage of the term ‘intelligence’ is to associate it with the observable behavior of those actors which shall be classified, according to defined methods of measurement.
In the case of experimental psychology the actors have been biological systems, mainly humans; in the first years of the research, school children of certain ages. Because nobody knew what ‘intelligence’ means ‘as such’, one agreed to accept the observable behavior of children in certain task environments as ‘manifestations’ of a ‘presupposed unknown intelligence’. Thus the ability of children to solve defined tasks in a certain defined manner became a norm for what is called ‘intelligence’. Solving the tasks within a certain time and with less than a certain amount of errors was used as a ‘baseline’, and all behavior deviating from the baseline was ‘better’ or ‘poorer’.
Thus the ‘content’ of the ‘meaning’ of the term ‘intelligence’ has been delegated to historical patterns of behavior which were common in a certain time-span in a certain geographical and cultural region.
While these behavior patterns can change over the course of time, the general method of measurement is invariant.
Since then, experimental psychology has modified and elaborated this first concept in several directions.
One direction is the modification of the kinds of tasks which are used for the tests. With regard to the cultural context one has modified the content, thereby trying to find kinds of tasks which seem to be ‘invariant’ with regard to the presupposed intelligence factor. This is an ongoing process.
The other direction is the focus on the actors as such. Because biological systems like humans develop their intelligence with age, one has tried to find ‘typical tasks for every age’. This too is an ongoing process.
This history of experimental psychology gives very interesting examples of how one can approach the problem of the usage and the measurement of some X which we call ‘intelligence’.
In the context of an AAI-approach we have not only biological systems, but also non-biological systems. Thus most of the elaborated parameters of psychology for human actors are not general enough.
One possible strategy to generalize the intelligence paradigm of experimental psychology could be to ‘free’ the selection of task sets from the narrow human cultures of the past and to require only ‘clearly defined task sets with defined interfaces and defined contexts’. All these task sets can be arranged either in one super-set or in a parameterized field of sets. The sum of all these sets then defines a space of possible behavior and, associated with this, a space of possible measurable intelligence.
A task then has to be given as an actor story according to the AAI paradigm. Such a specified actor story allows the formal definition of a complexity measure which can be used to measure the ‘amount of intelligence necessary to solve such a task’.
With such a more general and extendable approach to the measurement of observable intelligence one can compare all kinds of systems with each other. One can further show objectively where biological and non-biological systems differ in general, where they are similar, and to what extent they differ with regard to concrete circumstances.
Measuring Intelligence by Actor Stories
Presupposing actor stories (AS) (ideally formalized as mathematical graphs), one can define a first operational, general measurement of intelligence.
Def: Task-Intelligence of a task τ (TInt(τ))

Every defined task τ is represented by a graph g with one shortest path π_{min} from a start node to a goal node.
Every such shortest path π_{min} has a certain number of nodes: path-nodes(π_{min}) = ν.
To every task a maximal duration Δ_{max} is attached; all nodes which are solved within this maximal duration Δ_{max} are declared ‘solved’, all others ‘unsolved’.
The number of solved nodes ν_{solved} is then related to the total number of nodes ν. We take TInt(τ) = ν_{solved}/ν. It follows that TInt(τ) lies between 0 and 1: 0 ≤ TInt(τ) ≤ 1.
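Assuming the task graph of the math mode is given as a simple adjacency structure, TInt(τ) could be computed as in the following sketch; all function names and the example data are hypothetical:

```python
# Hedged sketch: TInt(tau) = nu_solved / nu for one task, given the task
# graph of the actor story in math mode. All names are illustrative.

from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search for one shortest path pi_min in the task graph."""
    queue, visited = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

def task_intelligence(graph, start, goal, solve_times, delta_max):
    """TInt(tau): fraction of nodes on pi_min solved within delta_max."""
    pi_min = shortest_path(graph, start, goal)
    if pi_min is None:
        raise ValueError("no path from start node to goal node")
    nu = len(pi_min)                         # total number of path nodes
    nu_solved = sum(1 for node in pi_min
                    if solve_times.get(node, float("inf")) <= delta_max)
    return nu_solved / nu                    # 0 <= TInt(tau) <= 1

# Example: shortest path S -> A -> G; node G is solved too late.
graph = {"S": ["A", "B"], "A": ["G"], "B": ["A"], "G": []}
solve_times = {"S": 1.0, "A": 3.0, "G": 9.0}
print(task_intelligence(graph, "S", "G", solve_times, delta_max=5.0))  # 2/3
```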
The usual case will require more than one task to be realized. Thus we introduce the concept of a task field (TF).
Def: Task-Field of type x (TF_{x})
Def: Task-Field Intelligence (TFInt)

A task field TF of type x includes a finite set of individual tasks, TF_{x} = {τ_{x.1}, τ_{x.2}, …, τ_{x.n}} with n ≥ 2. The sum of all individual task-intelligence values is normalized by the number of tasks, i.e. TFInt(TF_{x}) = (TInt(τ_{x.1}) + TInt(τ_{x.2}) + … + TInt(τ_{x.n}))/n; since n ≥ 2, the denominator is never 0. Thus the value of the intelligence of a task field of type x, TFInt(TF_{x}), is again in the domain [0,1].
Because the different tasks in a task field TF can be of different difficulty, it should be possible to introduce some weighting of the individual task-intelligence values; this does not change the general mechanism (see the sketch below).
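A small sketch of TFInt, including the optional weighting just mentioned; the weighting scheme shown is one possible choice, the text only requires that the result stays in [0,1]:

```python
# Hedged sketch: TFInt as an (optionally weighted) average of TInt values,
# so that the result stays in [0,1]. Names are illustrative.

def task_field_intelligence(tint_values, weights=None):
    """TFInt(TF_x) for a task field with n >= 2 tasks."""
    n = len(tint_values)
    if n < 2:
        raise ValueError("a task field requires n >= 2 tasks")
    if weights is None:
        weights = [1.0] * n                   # plain average
    total = sum(weights)
    return sum(w * t for w, t in zip(weights, tint_values)) / total

print(task_field_intelligence([0.5, 1.0, 0.75]))             # 0.75
print(task_field_intelligence([0.5, 1.0], weights=[2, 1]))   # ~0.667
```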
Def: Combined Task-Fields (TF)
In the face of the huge variety of possible task fields in this world it can make sense to introduce more general layers by grouping task fields of different types into larger combined fields, like TF_{x,…,z} = TF_{x} ∪ TF_{y} ∪ … ∪ TF_{z}. The task-field intelligence TFInt of such combined task fields is computed as before.
Def: Omega Task-Field at time t (TF_{ω}(t))
The most comprehensive assembly of such combinations shall here be called the Omega task-field at time t, TF_{ω}(t). It indicates the known maximum of intelligence measurements at that point in time.
Measurement Comments
With these assumptions the term intelligence is restricted to clearly defined domains: an individual task, a task field of type x, some grouped task fields, or the actual omega task-field. In every such domain the intelligence value lies in the realm of [0,1] or, written differently, between 0 and 100%.
Independent of the type of an actor (biological or not) one can measure the intelligence of such an actor within the same domains of defined tasks. As a result one can easily compare all known actors with regard to such defined task domains.
Because the acting actors can be quite different in their input-output capabilities, every actor has to organize some interface which enables it to work on the defined task. There are no special restrictions on the format of such an interface, but there is one requirement which has to be observed strictly: the interface as such is not allowed to do any kind of computation beyond providing the necessary input from the task domain and the necessary output to the domain. Only then can the different tests reveal differences between the different actors.
If the tests show differences between certain types of actors with regard to a certain task or task field, this is a chance to develop smart assistive interfaces which can help the actor in question to overcome its weakness compared to the other type of actor. Thus this kind of measuring intelligence can be a strong support for a better world in the future.
Another consequence of differing intelligence values can be to look at the inner structure of an actor with weaker values and to ask how one could improve its capabilities. This can be done e.g. by different kinds of training or by improving its system structures.
COMMENTS
[1] Sara J. Shettleworth, Biological Approaches to the Study of Learning, pp. 185–219, in: N.J. Mackintosh (Ed.), Animal Learning and Cognition, Academic Press, San Diego, New York, London et al., 1994.
[2a] Ernest R. Hilgard, Rita L. Atkinson, Richard C. Atkinson, Introduction to Psychology, 7th ed., Harcourt Brace Jovanovich, Inc., New York, San Diego, Chicago et al., 1979.
On the cover page of this blog you find a first general view of the subject matter of an integrated engineering approach for the future. Here we give a short description of the main idea of the analysis phase of systems engineering and of how this phase is realized within the actor-actor interaction paradigm as described in this text.
INTRODUCTION
As you can see in figure Nr.1, the Actor-Actor Interaction (AAI) paradigm as used in this text comprises the following main topics (comment: the more traditional formula is known as Human-Machine Interaction (HMI)):
Triggered by a problem document D_p from the problem phase (P) of the engineering process, the AAI experts have to analyze which potential requirements follow from this document, all the while communicating with the stakeholder to stay in touch with the stakeholder's hidden intentions.
The idea is to identify at least one task (T) with at least one goal state (G) which shall be reached after running the task.
A task is assumed to represent a sequence of states (at least a start state and a goal state) which can offer more than one option in every state, not excluding repetitions.
Every task presupposes some context (C) which gives the environment for the task.
The number of tasks and their length are in principle not limited, but there can be certain constraints (CS) which have to be fulfilled, required by the stakeholder or by some other important rules/laws. Such constraints will probably limit the number of tasks as well as their length.
Actor Story
Every task, as a sequence of states, can be viewed as a story which describes a process. A story is a text (TXT) which is static and hides the implicit meaning in the brains of the participating actors. Only if an actor has some (learned) understanding of the language used is the actor able to translate the perceptions of the process into an appropriate text, and vice versa the text into corresponding perceptions, or equivalently ‘thoughts’ representing the perceptions.
In this text it is assumed that a story describes only the observable behavior of the participating actors, not their possible internal states (IS). To describe the internal states (IS) it is further assumed that one writes a new text called an actor model (AM). The story proper is called an actor story (AS). Thus the actor story (AS) is the environment for the actor models (AM).
In this text three main modes of actor stories are distinguished:
An actor story written in some everyday language L_0, called AS_L0.
A translation of the everyday language L_0 into a mathematical language L_math which can represent graphs, called AS_Lmath.
A translation of the hidden meaning which resides in the brains of the AAI experts into a pictorial language L_pict (like a comic strip), called AS_Lpict.
To make the relationship between the graph version AS_Lmath and the pictorial version AS_Lpict visible, one needs an explicit mapping Int from one version into the other, like Int : AS_Lmath ↔ AS_Lpict. This mapping Int works like a lexicon from one language into another.
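Very loosely, such a lexicon-like mapping Int can be pictured as a bidirectional dictionary; the entries below are purely hypothetical illustrations, not part of any defined actor story:

```python
# Hedged sketch: Int as a 'lexicon' between the graph mode (AS_Lmath) and
# the pictorial mode (AS_Lpict) of an actor story. Entries are hypothetical.

INT = {
    "node:door_closed": "panel:door-shut.png",
    "node:door_open":   "panel:door-open.png",
    "edge:open_door":   "panel:hand-on-handle.png",
}
# Like a lexicon, the mapping can be read in both directions:
INT_INV = {pict: math for math, pict in INT.items()}
print(INT["node:door_open"], "<->", INT_INV["panel:door-open.png"])
```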
From a philosophy of science point of view one has to consider that the different kinds of actor stories have a meaning which is rooted in the intended processes assumed to be necessary for the realization of the different tasks. The processes as such are dynamic, but the stories as such are static. Thus a stakeholder (SH) or an AAI expert who wants to gain some understanding of the intended processes has to rely on the internal brain simulations associated with the meaning of these stories. Because every actor has its own internal simulation, which cannot be perceived by the other actors, there is some probability that the simulations of the different actors differ. This can cause misunderstandings, errors, and frustration. (Comment: this problem has been discussed in [DHW07].)
One remedy to minimize such errors is the construction of automata (AT) derived from the math mode AS_Lmath of the actor stories. Because the math mode represents a graph, one can derive from this version directly (and automatically) the description of an automaton which can completely simulate the actor story; with a derivation operator Der one can write Der(AS_Lmath) = AT_AS_Lmath.
But from the point of view of philosophy of science this derived automaton AT_AS_Lmath is still only a static text. This text describes the potential behavior of an automaton AT. Taking a real computer (COMP), one can feed this real computer with the description of the automaton AT_AS_Lmath and make the real computer behave like the described automaton. If we do this, then we have a real simulation (SIM) of the theoretical behavior of the theoretical automaton AT, realized by the real computer COMP. Thus we have SIM = COMP(AT_AS_Lmath). (Comment: these ideas have been discussed in [EDH11].)
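A minimal sketch of this chain, Der(AS_Lmath) = AT_AS_Lmath followed by SIM = COMP(AT_AS_Lmath), under the assumption that the math mode is given as a directed graph of situations; all Python names are illustrative:

```python
# Hedged sketch: deriving an automaton from the graph mode of an actor
# story and letting a real computer run it. Names are illustrative.

def derive_automaton(as_lmath):
    """Der: the graph mode already carries states and transitions, so the
    automaton description can be read off directly (and automatically)."""
    return {"states": set(as_lmath),
            "transitions": {s: list(succ) for s, succ in as_lmath.items()}}

def simulate(automaton, start, choose=lambda options: options[0]):
    """SIM = COMP(AT_AS_Lmath): run the automaton until no successor exists;
    'choose' stands in for the decisions of the participating actors."""
    state, trace = start, [start]
    while automaton["transitions"].get(state):
        state = choose(automaton["transitions"][state])
        trace.append(state)
    return trace

as_lmath = {"start": ["door_closed"], "door_closed": ["door_open"], "door_open": []}
at_as_lmath = derive_automaton(as_lmath)
print(simulate(at_as_lmath, "start"))  # ['start', 'door_closed', 'door_open']
```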
Such a real simulation is dynamic and visible for everybody. All participating actors can see the same simulation and if there is some deviation from the intention of the stakeholder then this can become perceivable for everybody immediately.
Actor Model
As mentioned above, the actor story (AS) describes only the observable behavior of an actor, not the possible internal states (IS) which could be responsible for the observable behavior.
If necessary, it is possible to define an individual actor model for every actor; indeed one can define more than one model, to explore how different internal structures could enable a certain behavior.
The general pattern of actor models in this text follows the concept of input-output systems (IOSYS), which are in principle able to learn. What the term ‘learning’ designates concretely will be explained in later sections. The same holds for the terms ‘intelligent’ and ‘intelligence’.
The basic assumptions about input-output systems used here read as follows:
Def: Input-Output System (IOSYS)

IOSYS(x) iff x = <I, O, IS, φ>

φ : I × IS → IS × O

I := Input
O := Output
IS := Internal states
As in the case of the actor story (AS), the primary descriptions of actor models (AM) are static texts. To make the hidden meanings of these descriptions ‘explicit’ and ‘visible’, one again has to convert the static texts into descriptions of automata, which can be fed into real computers which in turn simulate the behavior of these theoretical automata as a real process.
Combining the real simulation of an actor story with the real simulations of all the participating actors described in the actor models can yield a dynamic, impressive process which is fully visible to all collaborating stakeholders and AAI experts.
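The following sketch combines both ideas: the simulated actor story supplies the options in each situation, and an actor model, as an input-output system, decides which option is realized; everything here is an illustrative assumption, not the defined method of the text:

```python
# Hedged sketch: an actor model (AM) embedded as an input-output system
# in the simulated actor story (AS). All names are illustrative.

class SimpleActor:
    """A minimal actor model <I, O, IS, phi>: IS remembers visited
    situations; phi prefers situations not seen before."""
    def __init__(self):
        self.visited = set()                        # IS
    def phi(self, situation, options):
        self.visited.add(situation)                 # IS update
        fresh = [o for o in options if o not in self.visited]
        return (fresh or options)[0]                # O: next situation

def run_story_with_actor(transitions, start, actor):
    """The actor story supplies the options in every situation; the actor
    model decides which possible change is realized."""
    state, trace = start, [start]
    while transitions.get(state):
        state = actor.phi(state, transitions[state])
        trace.append(state)
    return trace

transitions = {"start": ["door_closed"],
               "door_closed": ["door_open", "door_closed"],
               "door_open": []}
print(run_story_with_actor(transitions, "start", SimpleActor()))
# ['start', 'door_closed', 'door_open']
```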
Testing
Having all actor stories and actor models at hand, ideally implemented as real simulations, one has to test the interaction of the elaborated actors with real actors which are intended to work within these explorative stories and models. This is done by actor tests (formerly: usability tests) where (i) real actors are confronted with real tasks and have to perform in the intended way, and (ii) real actors are interviewed with questionnaires about their subjective feelings during task completion.
Every such test will yield new insights about how to change the settings a bit to eventually gain some improvements. Repeating these cycles of designing, testing, and modifying can generate a finite set of test results T where possibly one subset is the ‘best’ compared to all the others. This gives some confidence that the corresponding design is probably the ‘relatively best design’ with regard to T.
[DHW07] G. Doeben-Henisch and M. Wagner. Validation within safety critical systems engineering from a computation semiotics point of view. Proceedings of the IEEE Africon2007 Conference, pages 1–7, 2007.
[EDH11] Louwrence Erasmus and Gerd Doeben-Henisch. A theory of the system engineering process. In ISEM 2011 International Conference. IEEE, 2011.