DIGITAL SLAVERY OR DIGITAL EMPOWERMENT?

eJournal: uffmm.org,
ISSN 2567-6458, 21.May 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Change: 26.May 2019 (Extending from two to three selected authors)

CONTEXT

This text is part of the larger text dealing with the Actor-Actor Interaction (AAI)  paradigm.

OBJECTIVE

In the following text the focus is on the global environment of the AAI approach: the cooperation and/or competition between societies and the strong impact of the new digital technologies. Numerous articles and books deal with these phenomena. For the present focus I would like to mention three books in particular: (i) from the point of view of a technology driver, the book by Eric Schmidt and Jared Cohen (2013) (both from Google) is an impressive illustration of what will be possible in the near future; (ii) from the point of view of a technology-aware historian, the book by Yuval Noah Harari (2018) can deepen these impressions and points to the increasingly difficult role of mankind itself; finally (iii) from the point of view of a society-thriller author, Marc Elsberg (2019) shows, within a quite realistic scenario, how a global lack of understanding can turn countries worldwide into a disaster which seems entirely unnecessary.

The many different aspects of the views of the first two authors I condense into a single confrontation: Digital Slavery vs. Digital Empowerment.

DIGITAL SLAVERY
Digital slavery is the currently leading trend in nations worldwide, induced by a digital technology which can be used for this purpose, although this is only one of several options.

Stepping back from the stream of events in everyday life and looking at the more general structure working behind and implicit in all these events, one can recognize an incredible collecting behavior by only a few services behind the scenes. While the individual user is mostly separated from all the others, equipped with limited individual knowledge, experiences, skills, and preferences, mostly unconscious ones, his or her behavior is stored in central cloud spaces, where at first it appears to be nothing more than single, individual data. People asked about their data usually do not bother much about questions of security. An often heard argument in this context is that they have nothing to hide; they are only normal persons.

What they do not see, and what they cannot see because it is completely hidden from outsiders, is the fact that there exist hidden algorithms which can synthesize all these individual data, extracting different kinds of patterns, reconstructing timelines, adding diverse connotations, and computing dynamics that point into a possible future. The hidden owners (the 'dark lords') of these storage spaces and algorithms can build up, from the individual data of normal users, overall pictures of many hundreds of millions of users, companies, offices, communal institutions etc., which enable very specific plans and actions to exploit economic, political and military opportunities. With this knowledge they can destroy nearly any kind of company at will, and they can introduce new companies before anybody else has the faintest idea that there is a new possibility. This pattern concentrates capital in an ever smaller number of owners and turns more and more people into an anonymous mass of the poor.

The individual user does not know about all this. In this not-knowing the user mutates from a free democratic citizen into something that is formally perhaps still free but materially completely manipulated. This is not limited to the normal citizen; it holds for managers, mayors and most kinds of politicians too. Traditional societies are sucked dry and turned more and more into zombie states.

Is there no chance to overcome this destructive overall scenario?

DIGITAL EMPOWERMENT
Digital empowerment is an alternative approach which uses digital technologies to empower people, groups, and whole nations without the hidden 'dark lords'.

There are alternatives to the current digital slavery paradigm. These alternatives try to help the individual user, citizen, manager, mayor etc. to bridge his or her isolation by supporting a new kind of team-based modeling of the common reality, stored in public storage spaces that are reachable by everybody around the clock, every day of the year. Here too one can add algorithms which support the contributing users with simulations, playing modes, oracle computations, the merging of different models into one model, and much more. Such an approach frees the individual from his or her individual enclosure: sharing creative ideas, searching together for better solutions, and using modeling techniques, simulation techniques, and several kinds of machine learning (ML) and artificial intelligence (AI) to improve the construction of possible futures far beyond what individual capacities alone could achieve.

This alternative approach allows truly open knowledge and truly informed democracies. This is not slavery under dark lords but common empowerment by free people.

Anyone who has already read some of the texts related to the AAI paradigm will know that the AAI paradigm covers exactly this empowering view of a modern open democratic society.

COOPERATIVE SOCIETIES

At first glance this idea of a digitally empowered society may look like an empty procedure: everybody is enabled to communicate and think with others, but what about the daily economy which organizes the stream of goods, services, and resources? The main mode of interaction at the beginning of the 21st century seems to demonstrate the inability of today's open liberal societies to really fight inequalities. The political system appears to be too weak to enable efficient structures.

It has been known for years, based on mathematical models, that a truly cooperative society is much more stable than any other kind of system, and even much more productive. These insights are not at work worldwide. The reason is that the financial and political systems follow different models in their heads, models which until now have opposed any kind of change. Several cultural and emotional factors stand against different views, different practices, different procedures. Here improved communication and planning capabilities can be of help.

REFERENCES
  • Marc Elsberg. Gier. Wie weit würdest Du gehen? Blanvalet Publisher (part of Random House), Munich, 2019.
  • Yuval Noah Harari. 21 Lessons for the 21st Century. Spiegel & Grau / Penguin Random House, New York, 2018.
  • Eric Schmidt and Jared Cohen. The New Digital Age. Reshaping the Future of People, Nations and Business. John Murray, London (UK), 1st edition, 2013. URL: https://www.google.com/search?client=ubuntu&channel=fs&q=eric+schmidt+the+new+digital+age+pdf&ie=utf-8&oe=utf-8

DAAI V4 FRONTPAGE

eJournal: uffmm.org,
ISSN 2567-6458, 12.May – 18.Jan 2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

HISTORY OF THIS PAGE

See end of this page.

CONTEXT

This Theory of Engineering section is part of the uffmm science blog.

HISTORY OF THE (D)AAI-TEXT

See below

ACTUAL VERSION

DISTRIBUTED ACTOR ACTOR INTERACTION [DAAI]. Version 15.06, From  Dec 13, 2019 until Jan 18, 2020

aaicourse-15-06-07(PDF, Chapter 8 new (but not yet completed))

aaicourse-15-06-05(PDF, Chapter 7 new)

aaicourse-15-06-04(PDF, Chapter 6 modified)

aaicourse-15-06-03(PDF, Chapter 5 modified)

aaicourse-15-06-02(PDF, Chapter 4 modified)

aaicourse-15-06-01(PDF, Chapter 1 modified)

aaicourse-15-06 (PDF, chapters 1-6)

aaicourse-15-05-2 (PDF, chapters 1-6; chapter 6 only as a first stub)

DISTRIBUTED ACTOR ACTOR INTERACTION [DAAI]. Version 15.05.1, Dec 2, 2019:

aaicourse-15-05-1(PDF, chapters 1-5; minor corrections)

aaicourse-15-05 (PDF, chapters 1-5 of the new version 15.05)

Changes: Extension of title, extension of preface!, extension of chapter 4, new: chapter 5 MAS, extension of bibliography and indices.

HISTORY OF UPDATES

ACTOR ACTOR INTERACTION [AAI]. Version: June 17, 2019 – V.7: aaicourse-17june2019-incomplete

Change: June 19, 2019 (Update  to version 8; chapter 5 has been rewritten completely).

ACTOR ACTOR INTERACTION [AAI]. Version: June 19, 2019 – V.8: aaicourse-june 19-2019-v8-incomplete

Change: June 19, 2019 (Update to version 8.1; minor corrections in chapter 5)

ACTOR ACTOR INTERACTION [AAI]. Version: June 19, 2019 – V.8.1: aaicourse-june19-2019-v8.1-incomplete

Change: June 23, 2019 (Update to version 9; adding chapter 6 (Dynamic AS) and chapter 7 (Example of dynamic AS with two actors))

ACTOR ACTOR INTERACTION [AAI]. Version: June 23, 2019 – V.9: aaicourse-June-23-2019-V9-incomplete

Change: June 25, 2019 (Update to version 9.1; minor corrections in chapters 1+2)

ACTOR ACTOR INTERACTION [AAI]. Version: June 25, 2019 – V.9.1: aaicourse-June25-2019-V9-1-incomplete

Change: June 29, 2019 (Update to version 10; rewriting of chapter 4 Actor Story on account of changes in the chapters 5-7)

ACTOR ACTOR INTERACTION [AAI]. Version: June 29, 2019 – V.10: aaicourse-June-29-2019-V10-incomplete

Change: June 30, 2019 (Update to version 11; completing chapter 3 Problem Definition)

ACTOR ACTOR INTERACTION [AAI]. Version: June 30, 2019 – V.11: aaicourse-June30-2019-V11-incomplete

Change: June 30, 2019 (Update to version 12; new chapter 5 for normative actor stories (NAS))

ACTOR ACTOR INTERACTION [AAI]. Version: June 30, 2019 – V.12: aaicourse-June30-2019-V12-incomplete

Change: June 30, 2019 (Update to version 13; extending chapter 9 with the section about usability testing)

ACTOR ACTOR INTERACTION [AAI]. Version: June 30, 2019 – V.13: aaicourse-June30-2019-V13-incomplete

Change: July 8, 2019 (Update to version 13.1; some more references to chapter 4; formatting the bibliography alphabetically)

ACTOR ACTOR INTERACTION [AAI]. Version: July 8, 2019 – V.13.1: aaicourse-July8-2019-V13.1-incomplete

Change: July 15, 2019 (Update to version 13.3; in chapter 9 Testing an AS, extending the description of usability testing with more concrete details of the test procedure)

ACTOR ACTOR INTERACTION [AAI]. Version: July 15, 2019 – V.13.3: aaicourse-13-3

Change: Aug 7, 2019 (Only some minor changes in Chapt. 1 Introduction, pp. 15ff, but these changes make clear that the scope of an AAI analysis can go far beyond the normal analysis. An AAI analysis without explicit actor models (AMs) corresponds to the analysis phase of a systems engineering process (SEP), but an AAI analysis including explicit actor models will also cover 50-90% of the (logical) design phase. How much exactly could only be answered if an elaborated formal SEP theory with quantifications existed, but no such theory exists. The quantification here is an estimate.)

ACTOR ACTOR INTERACTION [AAI]. Version: Aug 7, 2019 – V.14: aaicourse-14

ACTOR ACTOR INTERACTION [AAI]. Version 15, Nov 9, 2019:

aaicourse-15(PDF, 1st chapter of the new version 15)

ACTOR ACTOR INTERACTION [AAI]. Version 15.01, Nov 11, 2019:

aaicourse-15-01 (PDF, 1st chapter of the new version 15.01)

ACTOR ACTOR INTERACTION [AAI]. Version 15.02, Nov 11, 2019:

aaicourse-15-02 (PDF, 1st chapter of the new version 15.02)

ACTOR ACTOR INTERACTION [AAI]. Version 15.03, Nov 13, 2019:

aaicourse-15-03 (PDF, 1st chapter of the new version 15.03)

ACTOR ACTOR INTERACTION [AAI]. Version 15.04, Nov 19, 2019:

(update of chapter 3, new created chapter 4)

aaicourse-15-04 (PDF, chapters 1-4 of the new version 15.04)

HISTORY OF CHANGES OF THIS PAGE

Change: May 20, 2019 (Stopping Circulating Acronyms :-))

Change: May 21,  2019 (Adding the Slavery-Empowerment topic)

Change: May 26, 2019 (Improving the general introduction of this first page)

HISTORY OF AAI-TEXT

After a previous post of the new AAI approach I started a first re-formulation of the general framework of the AAI theory, which was later replaced by a more advanced AAI version V2. But even this version became a candidate for change and mutated into the Actor-Cognition Interaction (ACI) paradigm, which still was not the endpoint. Then new arguments emerged for speaking instead of Augmented Collective Intelligence (ACI). Because even this view of the subject can change again, I stopped following the different aspects of the general Actor-Actor Interaction paradigm separately and decided to keep the general AAI paradigm as the main attractor, capable of several more specialized readings.

ACI – TWO DIFFERENT READINGS

eJournal: uffmm.org
ISSN 2567-6458, 11.-12.May 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de
Change: May-17, 2019 (Some Corrections, ACI associations)
Change: May-20, 2019 (Reframing ACI with AAI)
CONTEXT

This text is part of the larger text dealing with the Actor-Actor Interaction (AAI)  paradigm.

HCI – HMI – AAI ==> ACI ?

Anyone who has followed the discussion in this blog will remember several different phases in the conceptual frameworks used here.

The first paradigm, called Human-Computer Interface (HCI), is mentioned only for historical reasons. The next phase, Human-Machine Interaction (HMI), was the main paradigm at the beginning of my lecturing in 2005. Later, around 2011/2012, I switched to the paradigm Actor-Actor Interaction (AAI) because I tried to generalize over the different participating machines, robots, smart interfaces, humans as well as animals. This worked quite nicely, and for some time I thought that this was the final formula. But reality is often different from our thinking. Many occasions showed up where the generalization beyond the human actor seemed to hide the real processes which are going on; especially, I got the impression that very important factors rooted in the special human actor became invisible although they play a decisive role in many processes. Another blow against the AAI view came from application scenarios during the last year, when I started to deal with whole cities as actors. In the end I got the feeling that the more specialized expressions like Actor-Cognition Interaction (ACI) or Augmented Collective Intelligence (ACI) can indeed help to stress certain special properties better than the more abstract AAI acronym, but when using structures like ACI within general theories and within complex computing environments it became clear that the more abstract acronym AAI is in the end more versatile and simplifies the general structures. ACI became a special sub-case.

HISTORY

To understand this oscillation between AAI and ACI one has to look back into the history of Human Computer/Machine Interaction, not only until the end of World War II, but into the more extended evolutionary history of mankind on this planet.

It is a widespread opinion among researchers that the development of tools to help master material processes was one of the outstanding events which changed the path of evolution considerably. A next step was the development of tools to support human cognition, like scripture, numbers, mathematics, books, libraries etc. In the case of these cognitive tools the material of the tools was not the primary subject of the processes but rather the cognitive contents, structures, and even processes encoded by the material structures of the tools.

Only slowly did mankind understand how cognitive abilities and capabilities are rooted in the body, in the brain, and that the brain represents a rather complex biological machinery which enables a huge number of cognitive functions, often interacting with each other. In the light of observable behavior these cognitive functions show clear limits with regard to the number of features which can be processed in a given time interval, with regard to precision, with regard to working interconnections, and more. It has therefore been understood that the different kinds of cognitive tools are very important to support human thinking and to reinforce it in some ways.

Only in the 20th century was mankind able to build a cognitive tool called the computer, which showed capabilities resembling some human cognitive capabilities and even surpassing human capabilities in some limited areas. Since then these machines have developed a lot (not by themselves, but by the thinking and engineering of humans!), and meanwhile the number and variety of capabilities where the computer seems to resemble a human person or surpass human capabilities has grown to such an extent that it has become commonplace to talk about intelligent machines or smart devices.

While the original intention behind the development of computers was to improve cognitive tools in order to support human beings, one can today get the impression that the computer has turned into a goal of its own: the intelligent and then, as supposed, the super-intelligent computer appears now as the primary goal, and mankind appears as some old relic which will soon have to be surpassed.

As will be shown later in this text, this vision of the computer surpassing mankind rests on assumptions which are at least questionable.

What seems possible, and what seems to be a promising roadmap into the future, is a continuous, step-wise enhancement of the biological structure of mankind which absorbs modern computing technology through new cognitive interfaces, which in turn presuppose new types of physical interfaces.

It is not yet possible to give a precise definition of these upcoming structures and functions, but it seems possible to identify the actual driving factors as well as the exciting combinations of factors.

COGNITION EMBEDDED IN MATTER
Actor-Cognition Interaction (ACI) / cognition within the Actor-Actor Interaction (AAI) paradigm: a simple outline of the whole paradigm

The main idea is to shift the focus away from the physical grounding of the interaction between actors and to look instead more at the cognitive contents and processes which shall be mediated by the physical conditions. Clearly the analysis of the physical conditions as well as their optimal design is still a challenge and a task, but without clear knowledge, manifested in a clear model, about the intended cognitive contents and processes one does not have enough knowledge for the design of the physical layout.

SOLVING A PROBLEM

Thus the starting point of an engineering process is a group of people (the stakeholders (SH)) who identify some problem (P) in their environment and have some minimal idea of a possible solution (S) for this problem. This can be complemented by some non-functional requirements (NFRs) articulating more general properties which shall hold throughout the whole solution (e.g. 'being safe', 'being barrier-free', 'being real-time' etc.). If the description of the problem with a first intended solution, including the NFRs, contains at least one task (T) to be solved, at least one intended user (U) (here called executive actor (eA)), at least one intended assistive actor (aA) to assist the user in doing the task, as well as a description of the environment of the task, then the minimal ACI check can be passed and the ACI analysis process can be started.
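To make this minimal ACI check concrete, here is a small sketch in Python; all field names and the example values are invented for illustration and are not part of any existing ACI software:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ProblemStatement:
        """Hypothetical container for the items required by the minimal ACI check."""
        problem: str                                          # problem P
        solution_idea: str                                    # first vision of a solution S
        nfrs: List[str] = field(default_factory=list)         # non-functional requirements
        tasks: List[str] = field(default_factory=list)        # tasks T
        executive_actors: List[str] = field(default_factory=list)  # intended users (eA)
        assistive_actors: List[str] = field(default_factory=list)  # assisting actors (aA)
        environment: str = ""                                 # description of the task environment

    def passes_aci_check(p: ProblemStatement) -> bool:
        """The analysis can start only if at least one task, one executive actor,
        one assistive actor and an environment description are given."""
        return bool(p.tasks and p.executive_actors and p.assistive_actors and p.environment)

    # Usage sketch (invented example)
    ps = ProblemStatement(
        problem="Long waiting times at the citizen office",
        solution_idea="Online appointment assistant",
        nfrs=["being safe", "being barrier-free"],
        tasks=["book an appointment"],
        executive_actors=["citizen"],
        assistive_actors=["web assistant"],
        environment="municipal website",
    )
    print(passes_aci_check(ps))  # True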

COGNITION AND AUGMENTED COLLECTIVE INTELLIGENCE

If we talk about cognition, then we usually think of cognitive processes in an individual person. But in the real world there is no cognition without an ongoing exchange between different individuals through communicative acts. Furthermore it has to be taken into account that the cognition of an individual person is itself partitioned into two unequal parts: the unconscious part, which covers about 99% of all the processes in the body and in the brain, and the conscious part, which covers about 1%. For an individual person to think of something at all, this person has to trigger his or her own unconscious with stimuli so that it responds with messages from previously unknown knowledge. Thus even an individual person alone has to organize a communication with his or her own unconscious to gain some conscious knowledge about this unconscious knowledge. And because no individual person has, at a certain point in time, a clear knowledge of his or her unconscious knowledge, the person does not even really know what to look for, if there is no event, no perception, no question and the like which triggers the person to interact with this unconscious knowledge (and experience) to get some messages from this unconscious machinery, which, as it seems, is working all the time.

Given this logic of the individual's internal communication with his or her own cognition, an external communication with the world and with the manifested cognition of other persons appears as a possible enrichment through interaction with the knowledge distributed over different persons. If, as assumed in the following approach, the different knowledge responses are represented in a common symbolic representation viewable (and hearable) by all participating persons, then a picture of something emerges which is generally richer and has more facets than a picture generated by an individual person alone. Furthermore, such a procedure can help all participants to synchronize their different knowledge fragments into a bigger picture and use it further on as their own picture, which in turn can trigger even more aspects out of the distributed unconscious knowledge.

If one organizes this collective triggering of distributed unconscious knowledge within a communication process not only with static symbolic models but also with dynamic rules for changes, which can be interactively simulated or even played with defined states of interest, then the effect of expanding the explicit and shared knowledge will be boosted even more.

Against this background it makes sense to turn the wording Actor-Cognition Interaction into Augmented Collective Intelligence, where Intelligence is the component of dynamic cognition in a system (here a human person), Collective means that different individual persons share their unconscious knowledge through communicative interactions, and Augmented means that this sharing of knowledge is enhanced and extended by using new tools of modeling, simulation and gaming, which expand and intensify individual learning as well as the commonly shared views. For nearly all problems today this appears to be absolutely necessary.

ACI ANALYSIS PROCESS

Here it will be assumed that there exists a group of ACI experts who can supervise other actors (stakeholders, domain experts, …) in a process of analyzing the problem P with the explicit goal of finding a satisfying solution (S+).

For the whole ACI analysis process an appropriate ACI software should be available to support the ACI experts as well as all the other domain experts.

In this ACI analysis process one can distinguish two main phases: (1) construct an actor story (AS) which describes all intended states and intended changes; (2) run several tests of the actor story to exploit its explanatory power.

ACTOR STORY (AS)

The actor story describes all possible states (S) of the tasks (T) to be realized in order to reach the intended goal states (S+). A mapping from one state to a follow-up state is described by a change rule (X). Thus, given a start state (S0) and appropriate change rules, one can construct the follow-up states from the actual state (S*) with the aid of the change rules. Formally this computation of the follow-up state (S') is done by a simulator function (σ), written as: σ: S* × X → S.
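A minimal sketch in Python may illustrate this, with states represented as sets of facts and change rules as triples of a condition, facts to remove, and facts to add; this encoding is an assumption made here for illustration, not the notation of the course text:

    from typing import FrozenSet, List, Set, Tuple

    State = FrozenSet[str]                                          # a state is a set of facts
    Rule = Tuple[FrozenSet[str], FrozenSet[str], FrozenSet[str]]    # (condition, remove, add)

    def sigma(state: State, rules: List[Rule]) -> Set[State]:
        """Simulator function: compute all follow-up states S' reachable from the
        current state S* by applying each change rule X whose condition holds."""
        successors: Set[State] = set()
        for condition, remove, add in rules:
            if condition <= state:                       # the rule is applicable
                successors.add((state - remove) | add)   # apply the change
        return successors

    # Usage sketch: one actor, one door (invented example)
    s0: State = frozenset({"actor_at_door", "door_closed"})
    open_door: Rule = (frozenset({"actor_at_door", "door_closed"}),
                       frozenset({"door_closed"}),
                       frozenset({"door_open"}))
    print(sigma(s0, [open_door]))
    # {frozenset({'actor_at_door', 'door_open'})}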

SEVERAL TESTS

With the aid of an explicit actor story (AS) one can define the non-functional requirements (NFRs) in such a way that it becomes decidable whether an NFR is valid with regard to an actor story or not. In this case the validity test can be done as an automated verification process (AVP). Part of this test paradigm is the so-called oracle function (OF), where one can pose a question to the system and the system answers the question with regard to all theoretically possible states, without the necessity of running a (passive) simulation.
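Continuing the simulator sketch above (and reusing its State, Rule, sigma, s0 and open_door definitions), an automated verification process and an oracle function could, in the simplest case, be realized by enumerating all reachable states; this is only a sketch under the assumption of a finite state space:

    from collections import deque
    from typing import Callable, List, Set

    # State, Rule, sigma, s0 and open_door are reused from the simulator sketch above.

    def reachable_states(s0: State, rules: List[Rule]) -> Set[State]:
        """Enumerate every state reachable from s0 via the change rules (breadth-first)."""
        seen, frontier = {s0}, deque([s0])
        while frontier:
            current = frontier.popleft()
            for nxt in sigma(current, rules):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
        return seen

    def verify_nfr(s0: State, rules: List[Rule], nfr: Callable[[State], bool]) -> bool:
        """Automated verification: the NFR is valid iff it holds in all reachable states."""
        return all(nfr(s) for s in reachable_states(s0, rules))

    def oracle(s0: State, rules: List[Rule], question: Callable[[State], bool]) -> bool:
        """Oracle function: answer whether any theoretically possible state satisfies
        the question, without stepping through a passive simulation by hand."""
        return any(question(s) for s in reachable_states(s0, rules))

    # Toy NFR: 'the door always has a defined state'
    print(verify_nfr(s0, [open_door], lambda s: "door_open" in s or "door_closed" in s))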

If the size of the group is large and it is important that all members of the group have sufficiently similar knowledge about the problem(s) in question (as is usually the case in a city with different kinds of citizens), then it can be very helpful to enable interactive simulations or even games, which allow a more direct experience of the possible states and changes. Furthermore, because the participants can act according to their individual reflections and goals, the process becomes highly uncertain and nearly unpredictable. Especially for these highly unpredictable processes, interactive simulations (and games) can help to improve a common understanding of the involved factors and their effects. The difference between a normal interactive simulation and a game is that a game has explicit win-states whereas the interactive simulation does not. Explicit win-states can improve learning a lot.

The other interesting question is whether an actor story (AS) with a certain idea for an assistive actor (aA) is usable for the executive actors. This requires explicit measurements of usability, which in turn require a clear norm of reference with which the behavior of an executive actor (eA) during a process can be compared. Usually the actor story as such is the norm of reference with which the observable behavior of the executing actors is compared. Thus for the measurement one needs real executive actors who represent the intended executive actors, and one needs a physical realization of the intended assistive actor, called a mock-up. A mock-up is not yet the final implementation of the intended assistive actor but a physical entity which can show all important physical properties of the intended assistive actor in a way which allows a real test run. While in the past it was assumed to be sufficient to test a test person only once, here it is assumed that a test person has to be tested at least three times. This follows from the assumption that every executive (biological) actor is inherently a learning system, which implies that the test person will behave differently in different tests. The degree of change can be a hint about the easiness and learnability of the assistive actor.
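As an illustration of how the degree of change over at least three test runs could be quantified, here is a small sketch; the position-wise comparison against the actor story as norm of reference is a deliberate simplification, and all step names are invented:

    from typing import List, Sequence

    def deviations(observed: Sequence[str], norm: Sequence[str]) -> int:
        """Count steps where the observed behavior of the executive actor differs
        from the actor story used as norm of reference (simple position-wise check)."""
        longer = max(len(observed), len(norm))
        return sum(1 for i in range(longer)
                   if i >= len(observed) or i >= len(norm) or observed[i] != norm[i])

    def learnability(runs: List[Sequence[str]], norm: Sequence[str]) -> List[int]:
        """Deviation counts for repeated runs of the same test person; a decreasing
        sequence hints at an easily learnable assistive actor (mock-up)."""
        return [deviations(run, norm) for run in runs]

    norm = ["open_app", "select_date", "confirm", "logout"]
    runs = [["open_app", "select_date", "select_date", "confirm", "logout"],  # run 1
            ["open_app", "select_date", "confirm", "logout"],                 # run 2
            ["open_app", "select_date", "confirm", "logout"]]                 # run 3
    print(learnability(runs, norm))  # [3, 0, 0] -> quickly learned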

COLLECTIVE MEMORY

If an appropriate ACI software is available, then one can consider an actor story as a simple theory (ST) embracing a model (M) and a collection of rules (R) (ST(x) iff x = <M,R>), which can be used as a kind of building block that in turn can be combined with other such building blocks, resulting in a complex network of simple theories. If these simple theories are stored in a publicly available database (like a library of theories), then a community can build up over time a large knowledge base of its own.
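A simple theory ST = <M,R> and the combination of such building blocks could be sketched like this; the names and the naive union-based combination rule are illustrative assumptions only:

    from dataclasses import dataclass
    from typing import Dict, FrozenSet, Tuple

    Rule = Tuple[FrozenSet[str], FrozenSet[str], FrozenSet[str]]  # (condition, remove, add)

    @dataclass(frozen=True)
    class SimpleTheory:
        """ST = <M, R>: a model M (a set of facts) plus a collection of change rules R."""
        name: str
        model: FrozenSet[str]
        rules: Tuple[Rule, ...]

    def combine(a: SimpleTheory, b: SimpleTheory, name: str) -> SimpleTheory:
        """Combine two building blocks by uniting their models and rule collections."""
        return SimpleTheory(name, a.model | b.model, a.rules + b.rules)

    # A 'library of theories' could then simply be a dict acting as public storage.
    library: Dict[str, SimpleTheory] = {}
    traffic = SimpleTheory("traffic", frozenset({"bus_line_exists"}), ())
    housing = SimpleTheory("housing", frozenset({"apartments_planned"}), ())
    library["city_plan"] = combine(traffic, housing, "city_plan")
    print(library["city_plan"].model)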


ENGINEERING AND SOCIETY: The Role of Preferences

eJournal: uffmm.org,
ISSN 2567-6458, 4.May 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

FINAL HYPOTHESIS

This suggests that a symbiosis between creative humans and computing algorithms is an attractive pairing. For this we have to re-invent our official learning processes in schools and universities to train the next generation of humans in a more inspired and creative usage of algorithms in game-like learning processes.

CONTEXT

The overall context is given by the description of the Actor-Actor Interaction (AAI) paradigm as a whole. In this text the special relationship between engineering and the surrounding society is the focus. And within this very broad and rich relationship the main interest lies in the ethical dimension, here understood as those preferences of a society which are supported more than others. It is assumed that such preferences, manifesting themselves in real actions within a space of many other options, point to hidden values which guide the decisions of the members of a society. Thus values are hypothetical constructs based on observable actions within a cognitively assumed space of possible alternatives. These cognitively represented possibilities are usually given only as a mixture of explicitly stated symbolic statements and different unconscious factors which influence the decisions that cause the observable actions.

These assumptions do not yet represent a common opinion and have not been condensed into a single theoretical text. Nevertheless I am using them here because they help to shed some light on the rather complex process of finding a real solution to a stated problem, a process which is rooted in the cognitive space of the participants of the engineering process. Working with these assumptions in concrete development processes can support a further clarification of all these concepts.

ENGINEERING AND SOCIETY

DUAL: REAL AND COGNITIVE

The relationship between an engineering process and the preferences of a society

As assumed in the AAI paradigm, the engineering process is the process which connects the event of stating a problem, combined with a first vision of a solution, with a final concrete working solution.

The main characteristic of such an engineering process is its dual character: a continuous interaction between the cognitive space of all participants of the process and real-world objects, actions, and processes. The real world as such is a loose collection of real things, to some extent connected by regularities inherent in natural things, but the vision of possible states, possible different connections, possible new processes is bound to the cognitive space of biological actors, especially to humans as exemplars of the species homo sapiens.

Thus it is a major factor of training, learning, and education in general to see how the real world can be mapped into some cognitive structures, how the cognitive structures can be transformed by cognitive operations into new structures and how these new cognitive structures can be re-mapped into the real world of bodies.

Within the cognitive dimension there exist nearly infinite sets of possible alternatives, all indicating possible states of a world whose feasibility is more or less convincing. Limited by time and resources, it is usually not possible to explore all these cognitively tapped spaces to see whether and how they work, what possible side effects they have, and so on.

PREFERENCES

Partly by nature, partly by past experience, biological systems like homo sapiens have developed cultural procedures to induce preferences about how possible options are selected, which one should be chosen, under which circumstances, and under which further constraints. In some situations these preferences can be helpful; in others they can hide possibilities which afterwards turn out to be very valuable.

Thus every engineering process which starts a transformation from some given cognitive point of view to a new cognitive point of view, with a subsequent translation into some real thing, shares its cognitive space with the possible preferences of the surrounding society.

It is an open question whether the engineers, as the experts, have an experimental, creative attitude that lets them explore the possible cognitive spaces without dogmatic constraints in order to find new solutions which can improve life. If one assumes that there exist no absolute preferences, on account of the substantially limited knowledge of mankind at every point in time, and infers from this fact the necessity to extend the current knowledge further in order to master an open, unknown future, then the engineers will try to explore seriously all possibilities without constraints, to extend the power of engineering deeper into the heart of the known as well as the unknown universe.

EXPLORING COGNITIVE POSSIBILITIES

At the start one has only a rough description of the problem and a rough vision of a wanted solution, which gives some direction for the search for an optimal solution. This direction also represents a kind of preference regarding the desired outcome of the process.

On account of the inherent duality of human thinking and communication, which embraces the cognitive space as well as the realm of real things, both connected by complex mappings realized by the brain, which operates almost completely unconsciously, a long process of concrete real and cognitive actions is necessary to materialize cognitive realities within a communication process. The main modes of materialization are the usage of symbolic languages, paintings (diagrams), physical models, algorithms for computation and simulation, and especially gaming (in several different modes).

As everybody knows, these communication processes are not simple; they can be a source of confusion, and the coordination of different brains with different cognitive spaces as well as different layouts of unconscious factors is a difficult and very demanding endeavor.

The communication mode of gaming is of special interest here because it is one of the oldest and most natural modes of learning, yet in the official education processes in schools and universities (and in companies) it was until recently not part of the official curricula. But it is the only mode where one can exercise the dimension of preferences explicitly, in combination with an exploring process and, if one wants, with the explicit social dimension of having more than one brain involved.

In roughly the last 50-100 years the term project has gained more and more acceptance, and indeed the organization of projects resembles a game, but it is usually handled as a hierarchical, constraint-driven process where creativity and concurrent development (= gaming) is not a main topic. Even if companies allow concurrent development teams, these teams are cognitively separated and the implicit cognitive structures are black boxes which cannot be evaluated as such.

In the AAI paradigm presupposed here, the open creative space has a high priority in order to increase the chance for innovation. Innovation is the most valuable property in the face of an unknown future!

While the open space for real creativity has to be exercised in all the mentioned modes of communication, the final gaming mode is of special importance. To enable a gaming process one has to define explicit win-lose states. This objectifies values and preferences that were previously hidden in the cognitive space. Such an objectification makes things transparent, enables more rationality, and allows the explicit testing of these defined win-lose states as feasible or not. Only tested hypotheses represent tested empirical knowledge. And because in a gaming mode whole groups or even all members of a social network can participate in a learning process about the functioning and possible outcome of a proposed solution, everybody can be included. This implies a common sharing of experience and knowledge which greatly simplifies the communication and therefore the coordination of the different brains with their unconscious.

TESTING AND EVALUATION

Testing a proposed solution is another expression for measuring the solution. Measuring is understood here as a basic comparison between the target to be measured (here the proposed solution) and a previously agreed norm which shall be used as the point of reference for the comparison.

But what can such a previously agreed norm be?

Some aspects can be mentioned here:

  1. First of all there is the proposed solution as such, which is here a proposal for a possible assistive actor in an assumed environment for some intended executive actors who have to fulfill some job (task).
  2. Part of this proposed solution are given constraints and non-functional requirements.
  3. Part of this proposed solution are some preferences as win-lose states which have to be reached.
  4. Another factor, difficult to define, are the executive actors if they are biological systems: biological systems with their basic built-in ability to act freely, to be learning systems, all of this associated with a large unconscious realm that cannot be fully defined.

Given the explicit preferences, constrained by many assumptions, one can only test whether the invited test persons, understood as possible instances of the intended executive actors, are able to fulfill the defined task(s) within some predefined amount of time, within an allowed threshold of errors, with an expected percentage of solved sub-tasks, and with a sufficient subjective satisfaction with the whole process.
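Such a test criterion could be written down, for example, as in the following sketch; all threshold values and field names are purely illustrative and would have to be agreed beforehand as part of the norm of reference:

    from dataclasses import dataclass

    @dataclass
    class TestRun:
        """Raw measurements from one test person solving the defined task(s)."""
        duration_s: float        # time needed
        errors: int              # number of errors made
        subtasks_solved: int     # solved sub-tasks
        subtasks_total: int
        satisfaction: float      # subjective rating, e.g. 1.0 (worst) .. 5.0 (best)

    def fulfills_criteria(run: TestRun,
                          max_duration_s: float = 300.0,
                          max_errors: int = 2,
                          min_solved_ratio: float = 0.9,
                          min_satisfaction: float = 3.5) -> bool:
        """Check one run against the previously agreed thresholds (all values illustrative)."""
        return (run.duration_s <= max_duration_s
                and run.errors <= max_errors
                and run.subtasks_solved / run.subtasks_total >= min_solved_ratio
                and run.satisfaction >= min_satisfaction)

    print(fulfills_criteria(TestRun(240.0, 1, 9, 10, 4.2)))  # True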

But because biological executive actors are learning systems, they will behave differently in repeated tests; they can furthermore change their motivations and their interests, they can change their emotional commitment, and because of their built-in basic freedom to act there can be no 100% certainty that they will act at time t as they have acted before.

Thus for all kinds of jobs where the process is more or less fixed and nothing new will happen, the participation of biological executive actors is questionable. It seems (hypothesis) that biological executive actors are better placed in jobs where there is some minimal rate of curiosity, innovation, and creativity combined with learning.

If this hypothesis is empirically sound (as it seems to be), then all jobs where human persons are involved should have more the character of games than of something else.

It is an interesting side note that current research in robotics under the label of developmental robotics is struggling with the problem of how to make robots learn continuously by following interesting preferences. Given a preference, an algorithm can often, under certain circumstances, work better than a human person at finding an optimal solution, but lacking such a preference the algorithm is lost. And at present there is not the faintest idea how algorithms should acquire the kind of preferences which are interesting and important for an unknown future.

On the contrary, humans are known to be creative, innovative, and able to detect new preferences, but they have only limited capacities to explore these creative findings through to some telling endpoint.

This suggests that a symbiosis between creative humans and computing algorithms is an attractive pairing. For this we have to re-invent our official learning processes in schools and universities to train the next generation of humans in a more inspired and creative usage of algorithms in game-like learning processes.


SIMULATION AND GAMING

eJournal: uffmm.org,
ISSN 2567-6458, 3.May 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

Within the AAI paradigm the following steps will usually be distinguished:

  1. A given problem and a wanted solution.
  2. An assumed context and intended executing and assisting actors.
  3. Assumed non-functional requirements (NFRs).
  4. An actor story (AS) describing at least one task including all the functional requirements.
  5. A usability test, often enhanced with passive or interactive simulations.
  6. An evaluation of the test.
  7. Some repeated improvements.

With these elements one can analyze and design the behavior surface of an assistive actor which approaches the requirements of the stakeholders.

SIMULATION AND GAMING

Comparing these elements with a (computer) game, one notices immediately that a game characteristically allows one to win or to lose. The possible win-states or lose-states stop a game. Often the winning state additionally includes some measure of how 'strongly' or how 'big' someone has won or lost a game.

Thus in a game one has, besides the rules of the game R which determine what one is allowed to do, some set of value labels V which mark some property, some object, or some state as associated with some value v, optionally associated further with numbers which quantify the value between a maximum and a minimum.
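A minimal sketch of this structure, with a rule set R, value labels V, and a quantification clipped between a minimum and a maximum; all names and numbers here are invented for illustration:

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Game:
        """A game as a rule set R plus value labels V with optional quantification."""
        rules: Dict[str, str]                                     # R: what one is allowed to do
        values: Dict[str, float] = field(default_factory=dict)    # V: state/object -> value v
        min_value: float = 0.0
        max_value: float = 100.0

        def score(self, reached_states: List[str]) -> float:
            """Accumulate the values of all reached states, clipped to [min, max]."""
            total = sum(self.values.get(s, 0.0) for s in reached_states)
            return max(self.min_value, min(self.max_value, total))

        def has_won(self, reached_states: List[str], win_state: str = "goal") -> bool:
            """A win-state ends the game; the score additionally tells how 'big' the win is."""
            return win_state in reached_states

    g = Game(rules={"move": "one field per turn"},
             values={"checkpoint": 10.0, "goal": 50.0})
    print(g.has_won(["checkpoint", "goal"]), g.score(["checkpoint", "goal"]))  # True 60.0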

In most board games you will reach an end state where you are the winner or the loser independently of any value. In other games one plays as often as necessary to reach some accumulated value which gives a measure of who is better than someone else.

When doing AAI analysis as part of engineering, it is usually sufficient to develop an assistive actor with a behavior surface which satisfies all requirements and some basic needs of the executive actors (the classical users).

But this newly developed product (the assistive actor for certain tasks) will be part of a social situation with different human actors. In these social situations there are often more requirements, demands, emotions around than only the original  design criteria for the technical product.

For some people the aesthetic properties of a technical product can be important, or some other cultural code which, when associated with the technical product, makes it precious or not.

Furthermore there can be whole processes within which a product is used or not, and this too can make it precious or not.

COLLECTIVE INTELLIGENCE AND AUTOPOIETIC GAMES

In the case of simulations one has from the beginning a special context, given by the experience and the knowledge of the executive actors. In some cases this knowledge is assumed to be stable or closed. In that case there is no need to think of the assistive actor as a tool which not only has to support the fulfillment of a defined task but additionally has to support the further development of the knowledge and experience of the executive actor. But there are social situations, in a city, in an institution, in learning in general, where the assumed knowledge and experience is neither complete nor stable. On the contrary, in these situations there is a strong need to develop the assumed knowledge further and to do this as a joint effort to improve the collective intelligence collectively.

If one sees the models and rules underlying a simulation as a kind of representation of the assumed collective knowledge, then a simulation can help to visualize this knowledge, make it an experience, and explore its different consequences. And insofar as the executive actors write the rules of change themselves, they understand the simulation and they can change the rules to understand better what can improve the process and the possible goal states. This kind of collective development of models and rules, as well as their testing, can be called autopoietic because the executing actors have two roles: (i) following some rules (which they have defined themselves), they explore what will happen when one adheres to these rules; (ii) changing the rules, they change the possible outcomes.

This procedure establishes some kind of collective learning within an autopoietic process.

If one enriches this setting with explicit goal states, states of assumed advantages, then one can look at this collective learning as a serious pattern of collective learning by autopoietic games.

For many contexts, like cities, educational institutions, and even companies, this kind of collective learning by autopoietic games can be a very productive way to develop the collective intelligence of many people while at the same time gaining knowledge and having some exciting fun.
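The two roles of an autopoietic game, (i) adhering to self-defined rules and (ii) changing these rules, can be sketched as a small loop; the scenario and all numbers below are invented for illustration:

    from typing import Callable, Dict, List

    State = Dict[str, float]
    Rule = Callable[[State], State]

    def simulate(state: State, rules: List[Rule], steps: int) -> State:
        """Role (i): apply the rules the participants have defined themselves
        and observe what happens when everyone adheres to them."""
        for _ in range(steps):
            for rule in rules:
                state = rule(state)
        return state

    def autopoietic_round(state: State, rules: List[Rule],
                          goal: Callable[[State], bool],
                          revise: Callable[[List[Rule]], List[Rule]],
                          steps: int = 10) -> List[Rule]:
        """Role (ii): if the simulated outcome misses the agreed goal state,
        the group changes its own rules for the next round."""
        outcome = simulate(dict(state), rules, steps)
        return rules if goal(outcome) else revise(rules)

    # Toy example: a citizen group models green space per inhabitant.
    def grow(s: State) -> State:
        return {**s, "green_m2": s["green_m2"] + s["new_parks"]}

    def double_effort(s: State) -> State:
        return {**s, "new_parks": s["new_parks"] * 2}

    def goal(s: State) -> bool:
        return s["green_m2"] >= 12.0

    def revise(rules: List[Rule]) -> List[Rule]:
        # the group decides to add a new rule: double the planning effort
        return rules + [double_effort]

    rules: List[Rule] = [grow]
    rules = autopoietic_round({"green_m2": 6.0, "new_parks": 0.5}, rules, goal, revise)
    print(len(rules))  # 2 -> the rule set itself was changed by the participants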

Autopoietic gaming as support for collective learning processes