Change: May 20, 2019 (Stopping Circulating Acronyms :-))
Change: May 21, 2019 (Adding the Slavery-Empowerment topic)
Change: May 26, 2019 (Improving the general introduction of this first page)
This text describes the general procedure by which engineers turn a problem into a functioning solution. Usually known under the label of Systems Engineering (SE), the focus in this text is on the first phase of this process, where some experts try to analyse a given problem together with a first vision of a possible solution, in order to enable a complete solution. This analysis centers around the interaction between different kinds of executive actors (eA), which have to do the job, and different kinds of assistive actors (aA), which shall support the executive actors. Historically these interactions have been analyzed under headings like Human-Computer Interaction (HCI) or Human-Machine Interaction (HMI). It is due to the developments during the beginning of the 21st century that the author of this text recently introduced the wording Actor-Actor Interaction (AAI) to cope with the explosion of different kinds of actors on the executive as well as the assistive side. As a consequence, the nature of the interactions changed as well. These changes induced a general re-writing of the traditional HCI/HMI subject, which is not yet finished.
HISTORY OF THIS TEXT
After a previous post of the new AAI approach I started the first re-formulation of the general framework of the AAI theory, which later was replaced by a more advanced AAI version V2. But even this version became a change candidate and mutated into the Actor-Cognition Interaction (ACI) paradigm, which still was not the endpoint. Then new arguments emerged to speak rather of Augmented Collective Intelligence (ACI). Because even this view on the subject can change again, I stopped following the different aspects of the general Actor-Actor Interaction paradigm and decided to keep the general AAI paradigm as the main attractor, capable of several more specialized readings.
Change: June 18, 2019 (Returning to the PDF-approach: one coherent pdf-document, which will be updated as a whole… the network of posts is too confusing)
Whoever has followed the discussion in this blog remembers several different phases in the conceptual frameworks used here.
The first paradigm, called Human-Computer Interface (HCI), has been mentioned only for historical reasons. The next phase, Human-Machine Interaction (HMI), was the main paradigm at the beginning of my lecturing in 2005. Later, somewhere around 2011/2012, I switched to the paradigm Actor-Actor Interaction (AAI) because I tried to generalize over the different participating machines, robots, smart interfaces, humans as well as animals. This worked quite nicely, and for some time I thought that this was now the final formula. But reality is often different from our thinking. Many occasions showed up where the generalization beyond the human actor seemed to hide the real processes which are going on; especially, I got the impression that very important factors rooted in the special human actor became invisible, although they play a decisive role in many processes. Another punch against the AAI view came from application scenarios during the last year, when I started to deal with whole cities as actors. In the end I got the feeling that the more specialized expressions like Actor-Cognition Interaction (ACI) or Augmented Collective Intelligence (ACI) can indeed help to stress certain special properties better than the more abstract AAI acronym; but using structures like ACI within general theories and within complex computing environments, it became clear that the more abstract acronym AAI is in the end more versatile and simplifies the general structures. ACI became a special sub-case.
To understand this oscillation between AAI and ACI one has to look back into the history of Human Computer/Machine Interaction, and not only up to the end of World War II, but into the more extended evolutionary history of mankind on this planet.
It is a widespread opinion among researchers that the development of tools to help master material processes was one of the outstanding events which changed the path of evolution considerably. A next step was the development of tools to support human cognition, like scripture, numbers, mathematics, books, libraries etc. In this last case of cognitive tools, the material of the tools was not the primary subject of the processes; rather, it was the cognitive contents, structures, and even processes encoded by the material structures of the tools.
Only slowly did mankind understand how cognitive abilities and capabilities are rooted in the body, in the brain, and that the brain represents a rather complex biological machinery which enables a huge amount of cognitive functions, often interacting with each other; in the light of observable behavior, these cognitive functions show clear limits with regard to the amount of features which can be processed in some time interval, with regard to precision, with regard to working interconnections, and more. And therefore it has been understood that the different kinds of cognitive tools are very important to support human thinking and to reinforce it in some ways.
Only in the 20th century was mankind able to build a cognitive tool called the computer, which could show capabilities resembling some human cognitive capabilities and which even surpassed human capabilities in some limited areas. Since then these machines have developed a lot (not by themselves, but by the thinking and the engineering of humans!), and meanwhile the number and variety of capabilities where the computer seems to resemble a human person, or surpasses human capabilities, have extended in a way that it has become common parlance to talk about intelligent machines or smart devices.
While the original intention behind the development of computers was to improve the cognitive tools with the intent to support human beings, one can today get the impression as if the computer has turned into a goal of its own: the intelligent and then, as supposed, the super-intelligent computer appears now as the primary goal, and mankind appears as some old relic which has to be surpassed soon.
As will be shown later in this text, this vision of the computer surpassing mankind rests on some assumptions which are questionable.
What seems possible, and what seems to be a promising roadmap into the future, is a continuous step-wise enhancement of the biological structure of mankind, which absorbs the modern computing technology through new cognitive interfaces, which in turn presuppose new types of physical interfaces.
To give a precise definition of these new upcoming structures and functions is not yet possible, but to identify the actual driving factors as well as the exciting combinations of factors seems possible.
COGNITION EMBEDDED IN MATTER
The main idea is the shift of focus away from the physical grounding of the interaction between actors, looking instead more at the cognitive contents and processes which shall be mediated by the physical conditions. Clearly the analysis of the physical conditions, as well as the optimal design of these conditions, is still a challenge and a task; but without clear knowledge, manifested in a clear model of the intended cognitive contents and processes, one does not have enough knowledge for the design of the physical layout.
SOLVING A PROBLEM
Thus the starting point of an engineering process is a group of people (the stakeholders (SH)) who identify some problem (P) in their environment and who have some minimal idea of a possible solution (S) for this problem. This can be complemented by some non-functional requirements (NFRs) articulating more general properties which shall hold throughout the whole solution (e.g. 'being safe', 'being barrier-free', 'being real-time' etc.). If the description of the problem with a first intended solution, including the NFRs, contains at least one task (T) to be solved, minimal intended users (U) (here called executive actors (eA)), minimal intended assistive actors (aA) to assist the users in doing the task, as well as a description of the environment of the task, then the minimal ACI-Check can be passed and the ACI analysis process can be started.
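The minimal ACI-Check just described can be sketched in a few lines of Python. All names here (ProblemDescription, minimal_aci_check) are hypothetical illustrations, not part of any existing ACI software:

```python
from dataclasses import dataclass, field

@dataclass
class ProblemDescription:
    """Hypothetical container for a problem (P) with a first intended solution."""
    tasks: list = field(default_factory=list)             # tasks (T) to be solved
    executive_actors: list = field(default_factory=list)  # intended users (eA)
    assistive_actors: list = field(default_factory=list)  # intended assistive actors (aA)
    environment: str = ""                                 # description of the task environment
    nfrs: list = field(default_factory=list)              # non-functional requirements

def minimal_aci_check(p: ProblemDescription) -> bool:
    """The minimal ACI-Check: at least one task, one executive actor,
    one assistive actor, and a non-empty environment description."""
    return (len(p.tasks) >= 1
            and len(p.executive_actors) >= 1
            and len(p.assistive_actors) >= 1
            and bool(p.environment))
```

A description passing this check contains just enough structure for the analysis process to begin; the NFRs are carried along but not yet tested at this point.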
COGNITION AND AUGMENTED COLLECTIVE INTELLIGENCE
If we talk about cognition we usually think about cognitive processes in an individual person. But in the real world there is no cognition without an ongoing exchange between different individuals by communicative acts. Furthermore it has to be taken into account that the cognition of an individual person is in itself partitioned into two unequal parts: the unconscious part, which covers about 99% of all the processes in the body and in the brain, and the conscious part, which covers about 1%. For an individual person to think something, this person has to trigger his own unconsciousness with stimuli so that it responds with some messages from his hitherto unknown knowledge. Thus even an individual person alone has to organize a communication with his own unconsciousness to be able to gain some conscious knowledge about his own unconscious knowledge. And because no individual person has at a certain point of time a clear knowledge of his unconscious knowledge, the person does not even really know what to look for, if there is no event, no perception, no question and the like which triggers the person to interact with his unconscious knowledge (and experience) to get some messages from this unconscious machinery, which, as it seems, is working all the time.
On account of this logic of the individual's internal communication with his own cognition, an external communication with the world and with the manifested cognition of other persons appears as a possible enrichment: an interaction with the knowledge distributed across different persons. If, as in the following approach, the different knowledge responses are represented in a common symbolic representation, viewable (and hearable) by all participating persons, then a picture emerges of something which is generally richer, having more facets than a picture generated by an individual person alone. Furthermore, such a procedure can help all participants to synchronize their different knowledge fragments into a bigger picture and to use it further on as their own picture, which in turn can trigger even more aspects out of the distributed unconscious knowledge.
If one organizes this collective triggering of distributed unconscious knowledge within a communication process not only by static symbolic models but, beyond this, with dynamic rules for changes, which can be interactively simulated or even played with defined states of interest, then the effect of expanding the explicit and shared knowledge will be boosted even more.
Against this background it makes sense to turn the wording Actor-Cognition Interaction into the wording Augmented Collective Intelligence, where Intelligence is the component of dynamic cognition in a system (here a human person), Collective means that different individual persons are sharing their unconscious knowledge by communicative interactions, and Augmented can be interpreted as enhancing and extending this sharing of knowledge by using new tools of modeling, simulation and gaming, which expand and intensify the individual learning as well as the commonly shared opinions. For nearly all problems today this appears to be absolutely necessary.
ACI ANALYSIS PROCESS
Here it will be assumed that there exists a group of ACI experts who can supervise other actors (stakeholders, domain experts, ...) in a process to analyze the problem P with the explicit goal of finding a satisfying solution (S+).
For the whole ACI analysis process an appropriate ACI software should be available to support the ACI experts as well as all the other domain experts.
In this ACI analysis process one can distinguish two main phases: (1) Construct an actor story (AS) which describes all intended states and intended changes within the actor story. (2) Make several tests of the actor story to exploit its explanatory power.
ACTOR STORY (AS)
The actor story describes all possible states (S) of the tasks (T) to be realized in order to reach intended goal states (S+). A mapping from one state to a follow-up state is described by a change rule (X). Thus, having a start state (S0) and appropriate change rules, one can construct the follow-up states from the actual state (S*) with the aid of the change rules. Formally, this computation of the follow-up state (S') is done by a simulator function (σ), written as: σ: S* x X —> S.
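A minimal Python sketch of this simulator function σ, assuming states are represented as sets of facts and a change rule as a triple (condition, facts to remove, facts to add); the representation is an illustrative assumption, not prescribed by the AAI theory:

```python
def simulate(state, rules):
    """One step of the simulator function σ: S* x X —> S.
    A rule fires when its condition is contained in the current state;
    it then removes and adds facts, yielding the follow-up state S'."""
    for condition, remove, add in rules:
        if condition <= state:                    # rule is applicable in S*
            return (state - remove) | add         # follow-up state S'
    return state                                  # no rule applicable: state unchanged

def unfold(state, rules, steps):
    """Construct the sequence of follow-up states from a start state S0."""
    story = [state]
    for _ in range(steps):
        state = simulate(state, rules)
        story.append(state)
    return story
```

Repeated application of σ from S0 thus unfolds the actor story as a sequence of states.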
With the aid of an explicit actor story (AS) one can define the non-functional requirements (NFRs) in a way that makes it decidable whether an NFR is valid with regard to an actor story or not. In this case the test of validity can be done as an automated verification process (AVP). Part of this test paradigm is the so-called oracle function (OF), where one can pose a question to the system and the system will answer the question with regard to all theoretically possible states, without the necessity to run a (passive) simulation.
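One way to sketch this in Python, assuming an NFR is modeled as a predicate on a single state and a question as a predicate as well (an illustrative assumption, not a fixed part of the AVP definition):

```python
def verify_nfr(actor_story_states, nfr):
    """Automated verification process (AVP) sketch: an NFR, taken as a
    predicate on a state, is valid iff it holds in every state of the AS."""
    return all(nfr(state) for state in actor_story_states)

def oracle(actor_story_states, question):
    """Oracle function (OF) sketch: answer a question with regard to all
    theoretically possible states, returning the states that satisfy it."""
    return [s for s in actor_story_states if question(s)]
```

Because the actor story enumerates all intended states, both functions can decide their answer by inspection alone, without running a simulation.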
If the size of the group is large and it is important that all members of the group have sufficiently similar knowledge about the problem(s) in question (as is usually the case in a city with different kinds of citizens), then it can be very helpful to enable interactive simulations or even games, which allow a more direct experience of the possible states and changes. Furthermore, because the participants can act according to their individual reflections and goals, the process becomes highly uncertain and nearly unpredictable. Especially for these highly unpredictable processes, interactive simulations (and games) can help to improve a common understanding of the involved factors and their effects. The difference between a normal interactive simulation and a game is that a game has explicit win-states whereas an interactive simulation doesn't. Explicit win-states can improve learning a lot.
The other interesting question is whether an actor story AS with a certain idea for an assistive actor (aA) is usable for the executive actors. This requires explicit measurements of usability, which in turn require a clear norm of reference with which the behavior of an executive actor (eA) during a process can be compared. Usually the actor story as such is the norm of reference with which the observable behavior of the executing actors is compared. Thus for the measurement one needs real executive actors who represent the intended executive actors, and one needs a physical realization of the intended assistive actors, called a mock-up. A mock-up is not yet the final implementation of the intended assistive actor but a physical entity which can show all important physical properties of the intended assistive actor in a way which allows a real test run. While in the past it has been assumed to be sufficient to test a test person only once, it is here assumed that a test person has to be tested at least three times. This follows from the assumption that every executive (biological) actor is inherently a learning system. This implies that the test person will behave differently in different tests. The degree of change can be a hint of the easiness and the learnability of the assistive actor.
If an appropriate ACI software is available, then one can consider an actor story as a simple theory (ST) embracing a model (M) and a collection of rules (R) (ST(x) iff x = <M,R>), which can be used as a kind of building block, which in turn can be combined with other such building blocks, resulting in a complex network of simple theories. If these simple theories are stored in a publicly available database (like a library of theories), then one can build up over time a large knowledge base.
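The building-block character of simple theories can be sketched as follows; combining two theories by merging their models and concatenating their rules is one plausible reading, assumed here for illustration only:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SimpleTheory:
    """A simple theory ST = <M, R>: a model (here a set of facts)
    plus a tuple of change rules."""
    model: frozenset
    rules: tuple

    def combine(self, other):
        """Combine two building blocks into a larger theory: the merged
        model together with the union of both rule collections."""
        return SimpleTheory(self.model | other.model, self.rules + other.rules)
```

Stored in a shared library, such combinable blocks are what allows a network of simple theories to grow into a larger knowledge base.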
An overview of the enhanced AAI theory, version 2, can be found here. In this post we talk about the tenth chapter, dealing with Measuring Usability.
As has been delineated in the post "Usability and Usefulness", statements about the quality of the usability of some assistive actor are based on some kind of measurement: mapping some target (here the interactions of an executive actor with some assistive actor) onto some predefined norm (e.g. 'number of errors', 'time needed for completion', ...). These remarks are here embedded in a larger perspective following Dumas and Fox (2008).
Of the three main types of usability testing with regard to the position in the life-cycle of a system, we focus here primarily on usability testing as part of the analysis phase, where the developers want to get direct feedback on the concepts embedded in an actor story. Depending on this feedback, the actor story and its related models can become modified, and this can result in a modified exploratory mock-up for a new test. The challenge is not to be 'complete' in finding 'disturbing' factors during an interaction but to increase the probability of detecting possible disturbing factors by confronting the symbolically represented concepts of the actor story with a sample of real-world actors. Experiments point to a number of 5-10 test persons, which seems to be sufficient to detect the most severe disturbing factors of the concepts.
A good description of usability testing can be found in the book by Lauesen (2005), especially chapters 1 and 13. According to this one can infer the following basic schema for a usability test:
One needs 5 – 10 test persons whose input-output profile (AAR) comes close to the profile (TAR) required by the actor story.
One needs a mock-up of the assistive actor; this mock-up should correspond ‘sufficiently well’ with the input-output profile (TAR) required by the actor story. In the simplest case one has a ‘paper model’, whose sheets can be changed on demand.
One needs a facilitator who receives the test person, introduces the test person to the task (orally and/or by a short document (less than a page)), then accompanies the test without interacting further with the test person until the end of the test. The end is reached either by completing the task or by reaching the end of a predefined duration.
After the test person has finished the test, a debriefing takes place by interrogating the test person about his/her subjective feelings about the test. Because interviews are always very fuzzy and not very reliable, one should keep this interrogation simple, short, and tied to concrete points. One strategy could be to ask the test person first about the general feeling: was it 'very good', 'good', 'OK', 'undefined', 'not OK', 'bad', 'very bad' (+3 ... 0 ... -3)? Once the feeling is stated, one can ask back which kinds of circumstances caused these feelings.
During the test, at least two observers are observing the behavior of the test person. The observers use as their 'norm' the actor story, which tells what 'should happen in the ideal case'. If a test person deviates from the actor story, this is noted as a 'deviation of kind X', and this counts as an error. Because an actor story in the mathematical format represents a graph, it is simple to quantify the behavior of the test person with regard to how many nodes of a solution path have been positively passed. This gives a count of the percentage of how much has been done. Thus the observers can deliver data about at least the 'percentage of task completion', the 'number (and kind) of errors by deviations', and the 'processing time'. The advantage of having the actor story as a norm is that all observers will use the same 'observation categories'.
From the debriefing one gets data about the ‘good/ bad’ feelings on a scale, and some hints what could have caused the reported feelings.
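The observer metrics of the schema above can be sketched in Python. The representation of paths as lists of node names, and the rule that every observed node outside the ideal path counts as one deviation error, are simplifying assumptions for illustration:

```python
def usability_measures(ideal_path, observed_path, duration_s):
    """Observer metrics sketch: the actor story's solution path is the norm.
    Completion = share of ideal nodes positively passed in order;
    every observed node outside the ideal path counts as a deviation error."""
    passed = 0
    for node in observed_path:
        if passed < len(ideal_path) and node == ideal_path[passed]:
            passed += 1
    errors = sum(1 for node in observed_path if node not in ideal_path)
    return {
        "completion_percent": 100.0 * passed / len(ideal_path),
        "errors": errors,
        "processing_time_s": duration_s,
    }
```

Because both observers score against the same ideal path, their 'observation categories' coincide by construction.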
STANDARDS – CIF (Common Industry Format)
There are many standards around describing different aspects of usability testing. Although standards can help in practice, from the point of view of research standards are not only good: they can hinder creative alternative approaches. Nevertheless I myself look to standards to check for possible 'references'. One standard I use very often is the "Common Industry Format (CIF)" for usability reporting. It has been an ISO standard (ISO/IEC 25062:2006) since 2006. CIF describes a method for reporting the findings of usability tests that collect quantitative measurements of user performance. CIF does not describe how to carry out a usability test, but it does require that the test include measurements of the application's effectiveness and efficiency as well as a measure of the users' satisfaction. These are the three elements that define the concept of usability.
Applied to the AAI paradigm, these terms fit well.
Effectiveness in CIF targets the accuracy and completeness with which users achieve their goal. Because the actor story in AAI is represented as a graph, where the individual paths represent ways to approach a defined goal, one can measure the accuracy directly by comparing the 'observed path' in a test with the 'intended ideal path' in the actor story. In the same way one can compute the completeness by comparing the observed path with the intended ideal path of the actor story.
Efficiency in CIF covers the resources expended to achieve the goals. A simple and direct measure is the measuring of the time needed.
Users' satisfaction in CIF means 'freedom from discomfort' and 'positive attitudes towards the use of the product'. These are 'subjective feelings' which cannot be observed directly. Only 'indirect' measures are possible, based on interrogations (or interactions with certain tasks), which are inherently fuzzy and not very reliable. One possibility of how to interrogate is mentioned above.
Because the term usability in CIF is defined by the aforementioned terms of effectiveness, efficiency, and users' satisfaction, which in turn can be measured in many different ways, the meaning of 'usability' is still a bit vague.
DYNAMIC ACTORS – CHANGING CONTEXTS
With regard to the AAI paradigm one has further to mention that the possibility of adaptive, learning systems embedded in dynamic, changing environments calls for a new type of usability testing. Because learning actors change with every exercise, one should run a test several times to observe how the dynamic learning rates of an actor develop over time. In such a dynamic framework, a system would only be 'badly usable' if the learning curves of the actors cannot approach a certain threshold within a defined 'typical learning time'. And, moreover, there could be additional effects occurring only in long-term usage and observation, which cannot be measured in a single test.
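This dynamic criterion can be sketched as a small check; representing the learning curve as a list of per-run error counts and the typical learning time as a number of runs are illustrative assumptions:

```python
def badly_usable(error_curve, threshold, typical_learning_time):
    """Dynamic-usability sketch: a system counts as 'badly usable' only if,
    after the typical learning time (given as a number of runs), the actor's
    error count still has not come down to the threshold."""
    late_errors = error_curve[typical_learning_time:]
    return any(e > threshold for e in late_errors)
```

A steep but quickly flattening error curve thus does not count against the system, while persistently high errors after the learning phase do.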
Joseph S. Dumas and Jean E. Fox. Usability Testing: Current Practice and Future Directions. Chapter 57, pp. 1129-1149, in J.A. Jacko and A. Sears, editors, The Human-Computer Interaction Handbook. Fundamentals, Evolving Technologies, and Emerging Applications. 2nd edition, 2008.
S. Lauesen. User Interface Design. A Software Engineering Perspective. Pearson - Addison Wesley, London et al., 2005.
This text has to be reviewed again on account of the new aspect of gaming as discussed in the post Engineering and Society.
An overview of the enhanced AAI theory, version 2, can be found here. In this post we talk about the sixth chapter, dealing with usability and usefulness.
USABILITY AND USEFULNESS
In the AAI paradigm the concept of usability is seen as a sub-topic of the broader concept of usefulness. Furthermore, usefulness as well as usability are understood as measurements comparing some target with some presupposed norm.
Example: If someone wants to buy a product A whose price fits well with the available budget, and this product A shows only an average usability, then product A is probably 'more useful' for the buyer than another product B which does not fit the budget, although it has a better usability. A conflict can arise if the weaker usability of product A causes, during the usage of product A, 'bad effects' for the user, which in turn produce additional negative costs which raise the original 'nice price' to a degree where product A finally becomes 'more costly' than product B.
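The arithmetic of this example can be made explicit; all numbers below are hypothetical and chosen only to illustrate how a weaker usability can eat up a 'nice price':

```python
def total_cost(price, usage_error_cost_per_run, runs):
    """Total cost over the usage period: the purchase price plus the
    costs caused by usability-related 'bad effects' during each run."""
    return price + usage_error_cost_per_run * runs

# Product A: cheap, but its average usability causes higher costs per run.
cost_a = total_cost(price=100, usage_error_cost_per_run=3.0, runs=50)
# Product B: more expensive, but its better usability causes fewer costs per run.
cost_b = total_cost(price=200, usage_error_cost_per_run=0.5, runs=50)
```

With these assumed numbers product A ends up more costly than product B over 50 runs, exactly the conflict described above.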
Therefore the concept of usefulness is defined independently of the concept of usability and depends completely on the person or company searching for the solution of a problem. The concept of usability depends directly on the real structure of an actor, a biological or a non-biological one. Thus, independently of the definition of the actual usefulness, the given structure of an actor implies certain capabilities with regard to input, output as well as internal processing. Therefore, if an X seems highly useful for someone, and getting X needs a certain actor story to be realized with certain actors, then it can matter whether this process includes a 'good usability' for the participating actors or not.
In the AAI paradigm both concepts, usefulness as well as usability, are analyzed to provide a chance to check the contributions of both concepts over some predefined duration of usage. This allows the analysis of the sustainability of the wanted usefulness with usability as a parameter. There can be even more parameters included in the evaluation of the actor story to enhance the scope of sustainability. Depending on the definition of the concept of resilience, one can interpret the concept of sustainability used in this AAI paradigm as compatible with the resilience concept too.
To speak about 'usefulness', 'usability', 'sustainability' (or 'resilience') requires some kind of scale of values with an ordering relation R allowing one to state, for some values x, y, whether R(x,y), R(y,x), or EQUAL(x,y) holds. The values used in the scale have to be generated by some defined process P which is understood as a measurement process M which basically compares some target X with some predefined norm N and gives as a result a pair (v,N), a number v associated with the applied norm N. Written: M : X x N —> V x N.
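A minimal Python sketch of this measurement function M and the ordering relation R; representing a norm as a (name, reference value) pair and the measured value as the ratio target/reference are assumptions made only for this illustration:

```python
def measure(target, norm):
    """Measurement sketch M : X x N —> V x N: compare a target with a norm
    and return the value paired with the norm it was measured against."""
    name, reference = norm
    return (target / reference, norm)

def R(vx, vy):
    """Ordering relation R(x, y) on measured values; values are only
    comparable when they were measured against the same norm."""
    (x, nx), (y, ny) = vx, vy
    assert nx == ny, "values must share the same norm N"
    return x <= y
```

Keeping the norm attached to each value makes the comparison transparent: the result (v,N) always says against which N the number v was obtained.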
A measurement procedure M must be transparent and repeatable in the sense that repeated application of the measurement procedure M will generate the same results as before. Associated with the measurement procedure there can be many additional parameters like 'location', 'time', 'temperature', 'humidity', 'used technologies', etc.
Because there exist targets X which are not static, it can be a problem when and how often one has to measure these targets to get a reliable value. And this problem becomes even worse if the target includes adaptive systems which are changing constantly, as in the case of biological systems.
All biological systems have some degree of learnability. Thus if a human actor is acting as part of an actor story, the human actor will learn every time he works through the process. Thus making errors during his first run of the process does not imply that he will repeat these errors the next time. Usually one can observe a learning curve associated with n-many runs which shows, mostly, a decrease in errors, a decrease in processing time, and, in general, a change of all parameters which can be measured. Thus a certain actor story can receive a good usability value after a defined number of usages. But there are other possible subjective parameters like satisfaction, being excited, being interested and the like which can change in the opposite direction, because becoming well adapted to the process can be boring, which in turn can lead to less concentration with many different negative consequences.
An overview of the enhanced AAI theory, version 2, can be found here. In this post we talk about the second chapter, where you have to define the context of the problem which should be analyzed.
DEFINING THE CONTEXT OF PROBLEM P
A defined problem P identifies at least one property associated with a configuration which has a lower value x than a value y inferred from an accepted standard E.
The problem P is always embedded in some environment ENV which interacts with it.
To approach an improved configuration S, measured by some standard E, starting with a problem P, one needs a process characterized by a set of necessary states Q which are connected by necessary changes X.
Such a process can be described by an actor story AS.
All properties which belong to the whole actor story and therefore have to be satisfied by every state q of the actor story are called non-functional process requirements (NFPRs). If required properties are associated with only one state, but hold for that whole state, then these requirements are called non-functional state requirements (NFSRs).
An actor story can include many different sequences, where every sequence is called a path PTH. A finite set of paths can represent a task T which has to be fulfilled. Within the environment of the defined problem P it must be possible to identify at least one task T to be realized from some start state to some goal state. The realization of a task T is assumed to be 'driven' by input-output systems which are called actors A.
Additionally it must be possible to identify at least one executing actor A_exec doing a task and at least one assisting actor A_ass supporting the executing actor in fulfilling the task.
A state q represents all needed actors as part of the associated environment ENV. Therefore a state q can be analyzed as a network of elements interacting with each other. But this is only one possible structure for an analysis, among others.
For the analysis of a possible solution one can distinguish at least two overall strategies:
Top-down: There exists a group of experts EXPs who will analyze a possible solution, test it, and then propose it as a solution for others.
Bottom-up: There exists a group of experts EXPs too, but additionally there exists a group of customers CTMs who will be guided by the experts to use their own experience to find a possible solution.
The mayor of a city has identified as a problem the relationship between the actual population number POP, the amount of actually available living space LSP0, and the amount of living space LSPr recommended by some standard E. The population of his city is steadily interacting with populations in the environment: citizens are moving out into the environment (MIGR-) and citizens from the environment are arriving (MIGR+). The population, the city as well as the environment can be characterized by a set of parameters <P1, ..., Pn>, called a configuration, which represents a certain state q at a certain point of time t. To convert the actual configuration, called a start state q0, into a new configuration S, called a goal state q+, with better values, requires the application of a defined set of changes Xs which transform the start state q0 stepwise through a sequence of states qi which finally ends up in the desired goal state q+. A description of all these states necessary for the conversion of the start state q0 into the goal state q+ is called here an actor story AS. Because a democratically elected mayor of the city wants to be 'liked' by his citizens, he will require that this conversion process should end up in a goal state which is 'not harmful' for his citizens, which should support a 'secure' and 'safe' environment, 'good transportation' and things like that. This illustrates non-functional state requirements (NFSRs). Because the mayor also wants not too much trouble during the conversion process, he will also require some limits for the whole conversion process, that is, for the whole actor story. This illustrates non-functional process requirements (NFPRs). To realize the intended conversion process the mayor needs several executing actors which are doing the job and several other assistive actors helping the executing actors.
To be able to use the available time and resources 'effectively', the executing actors need defined tasks which have to be realized to come from one state to the next. Often more than one sequence of states is possible, either alternatively or in parallel. A certain state at a certain point of time t can be viewed as a network where all participating actors are in many ways connected with each other, interacting in several ways and thereby influencing each other. This realizes different kinds of communications with different kinds of contents, allows the exchange of material, and can imply a change of the environment. Until today the mayors of cities have used as their preferred strategy for realizing conversion processes selected small teams of experts doing their job in a top-down manner, leaving the citizens more or less untouched, at least without a serious participation in the whole process. From now on it is possible and desirable to twist the strategy from top-down to bottom-up. This implies that the selected experts enable a broad communication with potentially all citizens who are touched by a conversion, including the knowledge, experience, skills, visions etc. of these citizens, by applying new methods possible in the new digital age.
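The city example can be sketched as a tiny state-conversion in Python. The configuration is a dict of parameters <P1, ..., Pn>; the concrete parameter names and all numbers are hypothetical and serve only to illustrate the conversion q0 to q+:

```python
def convert(state, changes):
    """City-example sketch: apply the set of changes Xs step by step to the
    start configuration q0, yielding the sequence of states q0 ... q+."""
    states = [dict(state)]
    for change in changes:                 # each change maps one state to the next
        state = change(dict(state))
        states.append(state)
    return states

# Hypothetical start state q0: 30 m2 of living space per citizen.
q0 = {"population": 100_000, "living_space_m2": 3_000_000}

# Two hypothetical changes, each adding a building phase of 500,000 m2.
changes = [
    lambda q: {**q, "living_space_m2": q["living_space_m2"] + 500_000},
    lambda q: {**q, "living_space_m2": q["living_space_m2"] + 500_000},
]

story = convert(q0, changes)   # the actor story: q0, q1, and the goal state q+
```

An NFSR ('not harmful for the citizens') would then be a predicate checked in the goal state q+, while an NFPR would be checked across every state of the list.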