DIGITAL SLAVERY OR DIGITAL EMPOWERMENT?

eJournal: uffmm.org,
ISSN 2567-6458, 21.May 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Change: 26.May 2019 (Extending from two to three selected authors)

CONTEXT

This text is part of the larger text dealing with the Actor-Actor Interaction (AAI)  paradigm.

OBJECTIVE

In the following text the focus is on the global environment for the AAI approach: the cooperation and/or competition between societies and the strong impact induced by the new digital technologies. Numerous articles and books deal with these phenomena. For the present focus I would like to mention three books in particular: (i) from the point of view of a technology driver, the book by Eric Schmidt and Jared Cohen (2013), both from Google, is an impressive illustration of what will be possible in the near future; (ii) from the point of view of a technology-aware historian, the book by Yuval Noah Harari (2018) can deepen these impressions and points to the increasingly difficult role of mankind itself; finally (iii) from the point of view of a thriller author, Marc Elsberg (2019) shows within a quite realistic scenario how a global lack of understanding can turn countries worldwide into a disaster which seems unnecessary.

The many different aspects in the views of the first two authors I condense here into one confrontation only: Digital Slavery vs. Digital Empowerment.

DIGITAL SLAVERY
Digital slavery is the currently leading trend in nations worldwide, induced by a digital technology which can be used for this purpose; but this is only one of several options.

Stepping back from the stream of events in everyday life and looking at the more general structure working behind and implicit in all these events, one can recognize an incredible collecting behavior by only a few services behind the scenes. While the individual user is mostly separated from all the others, equipped with limited individual knowledge, experiences, skills, and preferences, mostly unconsciously, his or her behavior is stored in central cloud spaces, where these single, individual data gain a much bigger importance. People asked about their data usually do not bother too much about questions of security. An often heard argument in this context says that they have nothing to hide; they are only normal persons.

What they do not see, and what they cannot see because it is completely hidden from outsiders, is the fact that there exist hidden algorithms which can synthesize all these individual data: extracting different kinds of patterns, reconstructing timelines, adding diverse connotations, and computing dynamics pointing into a possible future. The hidden owners (the 'dark lords') of these storage spaces and algorithms can build up, from the individual data of normal users, overall pictures of many hundreds of millions of users, companies, offices, communal institutions etc., which enable very specific plans and actions to exploit economic, political and military opportunities. With this knowledge they can destroy nearly any kind of company at will, and they can introduce new companies before anybody else has the faintest idea that there is a new possibility. This pattern concentrates capital in a decreasing number of owners and turns more and more people into an anonymous mass of the poor.

The individual user does not know about all this. In this not-knowing the user mutates from a free democratic citizen into a formally perhaps still free, but materially completely manipulated, subject. This is not limited to the normal citizen; it holds for managers, mayors and most kinds of politicians too. Traditional societies are sucked dry and turned more and more into zombie states.

Is there no chance to overcome this destructive overall scenario?

DIGITAL EMPOWERMENT
Digital empowerment is an alternative approach which uses digital technologies to empower people, groups, and whole nations without the hidden 'dark lords'.

There are alternatives to the present digital slavery paradigm. These alternatives try to help the individual user, citizen, manager, mayor etc. to bridge his or her isolation by supporting a new kind of team-based modeling of the common reality, stored in public storage spaces which are reachable by all, around the clock, every day of the year. Here too one can add algorithms which support the contributing users with simulations, playing modes, oracle computations, the combination of different models into one model, and much more. Such an approach frees the individual from his or her individual enclosure, allows sharing creative ideas, searching together for better solutions, and using modeling techniques, simulation techniques, and several kinds of machine learning (ML) and artificial intelligence (AI) to improve the construction of possible futures far beyond individual capacities alone.

This alternative approach allows real open knowledge and  real informed democracies. This is not slavery by dark lords but common empowerment by free people.

Anyone who has already read some of the texts related to the AAI paradigm will know that the AAI paradigm covers exactly this empowering view of a modern open democratic society.

COOPERATIVE SOCIETIES

At first glance this idea of a digitally empowered society may look like an empty procedure: everybody is enabled to communicate and think with others, but what about the daily economy which organizes the stream of goods, services, and resources? The main mode of interaction at the beginning of the 21st century seems to demonstrate the inability of today's open liberal societies to really fight inequalities. The political system appears to be too weak to enable efficient structures.

It has been known for years, based on mathematical models, that a truly cooperative society is much more stable than any other kind of system, and even much more productive. These insights are not at work worldwide. The reason is that the financial and political systems follow models in their heads which are different and which until now oppose any kind of change. Several cultural and emotional factors stand against different views, different practices, different procedures. Here improved communication and planning capabilities can be of help.

REFERENCES
  • Marc Elsberg. Gier. Wie weit würdest Du gehen? Blanvalet (part of Random House), Munich, 2019.
  • Yuval Noah Harari. 21 Lessons for the 21st Century. Spiegel & Grau, Penguin Random House, New York, 2018.
  • Eric Schmidt and Jared Cohen. The New Digital Age. Reshaping the Future of People, Nations and Business. John Murray, London (UK), 1st edition, 2013. URL: https://www.google.com/search?client=ubuntu&channel=fs&q=eric+schmidt+the+new+digital+age+pdf&ie=utf-8&oe=utf-8

AAI V3 FRONTPAGE

eJournal: uffmm.org,
ISSN 2567-6458, 12.May 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Change: 20.May 2019 (Stopping Circulating Acronyms :-))

Change: 21.May 2019 (Adding the Slavery-Empowerment topic)

Change: 26.May 2019 (Improving the general introduction of this first page)

OBJECTIVE

This text describes the general procedure by which engineers turn a problem into a functioning solution. Usually known under the label Systems Engineering (SE), the focus in this text is on the first phase of this process, where some experts try to analyze a given problem together with a first vision of a possible solution in order to enable a complete solution. This analysis centers around the interaction between different kinds of executive actors (eA) which have to do the job and different kinds of assistive actors (aA) which shall support the executive actors. Historically these interactions have been analyzed under headings like Human-Computer Interaction (HCI) or Human-Machine Interaction (HMI). It is due to the developments at the beginning of the 21st century that the author of this text recently introduced the wording Actor-Actor Interaction (AAI) to cope with the explosion of different kinds of actors on the executive as well as the assistive side. As a consequence the nature of the interactions changed as well. These changes induced a general re-writing of the traditional HCI/HMI subject which is not yet finished.

HISTORY OF THIS TEXT

After a previous post on the new AAI approach I started a first re-formulation of the general framework of the AAI theory, which later was replaced by a more advanced AAI version V2. But even this version became a change candidate and mutated into the Actor-Cognition Interaction (ACI) paradigm, which still was not the endpoint. Then new arguments emerged to speak rather of Augmented Collective Intelligence (ACI). Because even this view of the subject can change again, I stopped following the different aspects with ever new acronyms and decided to keep the general AAI paradigm as the main attractor, capable of several more specialized readings.

STRUCTURE OF THIS TEXT

For the whole online project the basic idea is still to use one main post for the overview of all topics and then, for every topic, an individual post with possibly more detailed extensions. This generates a tree-like structure with the root post at level 0; following the links from there you reach the posts of level 1, then level 2, and so forth. The posts of levels 0 and 1 will be highly informal; the posts of level 2 and higher will increasingly become more specialized and associated with references to the scientific literature. This block is inspired by many hundreds of scientific papers and books. I will start to give an explicit list of references as soon as the main structure has become fixed.

OUTLINE

The following posts present some considerations which shall illuminate the main idea of the new AAI paradigm. This online version is not the final text of the book, but these posts are important to support the development of the book. Two other important sources for the official book are given by two different lectures: one lecture deals regularly with the topic Human-Machine Interaction (HMI), and the other deals regularly with the new topic of empowering whole cities with a new approach to communication as part of a new approach to communal planning. This communal case represents a new application of the AAI paradigm to communal planning.

  1. Because the new acronym 'AAI' is not yet well known, here are some explanations of what it means and how it is related to the older, better known acronyms 'HCI' and 'HMI': Some Considerations to the AAI Variants
  2. In more traditional approaches to engineering the dimension of society is usually not mentioned. But today, where technology, especially digital technology, has penetrated nearly all aspects of daily life and where engineers become more and more aware of society as the main source of all resources as well as the main source of norms to follow, the topic of society should be an essential part of engineering thinking. At the beginning of the 21st century the main paradigm of the digitalized society has taken a shape which is highly dangerous from the point of view of democratic societies. The wonderful new possibilities of the internet, of cloud computing and more are organized in a way which can turn open societies into digital prisons. It seems that the AAI paradigm is exactly that paradigm which can be of help in this situation: Digital Slavery or Digital Empowerment?

ACI – TWO DIFFERENT READINGS

eJournal: uffmm.org
ISSN 2567-6458, 11.-12.May 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de
Change: May-17, 2019 (Some Corrections, ACI associations)
Change: May-20, 2019 (Reframing ACI with AAI)
CONTEXT

This text is part of the larger text dealing with the Actor-Actor Interaction (AAI)  paradigm.

HCI – HMI – AAI ==> ACI ?

Anyone who has followed the discussion in this blog remembers several different phases in the conceptual frameworks used here.

The first paradigm, called Human-Computer Interface (HCI), has been mentioned only for historical reasons. The next phase, Human-Machine Interaction (HMI), was the main paradigm at the beginning of my lecturing in 2005. Later, around 2011/2012, I switched to the paradigm Actor-Actor Interaction (AAI) because I tried to generalize over the different participating machines, robots, smart interfaces, humans as well as animals. This worked quite nicely, and for some time I thought that this was the final formula. But reality is often different from our thinking. Many occasions showed up where the generalization beyond the human actor seemed to hide the real processes which are going on; especially, I got the impression that very important factors rooted in the specifically human actor became invisible although they play a decisive role in many processes. Another punch against the AAI view came from application scenarios during the last year, when I started to deal with whole cities as actors. In the end I got the feeling that the more specialized expressions like Actor-Cognition Interaction (ACI) or Augmented Collective Intelligence (ACI) can indeed help to stress certain special properties better than the more abstract AAI acronym, but when using structures like ACI within general theories and within complex computing environments it became clear that the more abstract acronym AAI is in the end more versatile and simplifies the general structures. ACI became a special sub-case.

HISTORY

To understand this oscillation between AAI and ACI one has to look back into the history of human-computer/machine interaction, and not only to the end of World War II, but into the more extended evolutionary history of mankind on this planet.

It is a widespread opinion among researchers that the development of tools to help master material processes was one of the outstanding events which changed the path of evolution considerably. A next step was the development of tools to support human cognition, like script, numbers, mathematics, books, libraries etc. In the case of these cognitive tools the material of the tools was not the primary subject of the processes, but rather the cognitive contents, structures, and even processes encoded by the material structures of the tools.

Only slowly did mankind understand how cognitive abilities and capabilities are rooted in the body, in the brain, and that the brain represents a rather complex biological machinery which enables a huge number of cognitive functions, often interacting with each other; in the light of observable behavior these cognitive functions show clear limits with regard to the number of features which can be processed in a given time interval, with regard to precision, with regard to working interconnections, and more. Therefore it has been understood that the different kinds of cognitive tools are very important to support human thinking and to reinforce it in some ways.

Only in the 20th century was mankind able to build a cognitive tool called the computer, which could show capabilities resembling some human cognitive capabilities and which even surpassed human capabilities in some limited areas. Since then these machines have developed a lot (not by themselves, but through the thinking and the engineering of humans!), and meanwhile the number and variety of capabilities where the computer seems to resemble a human person or surpasses human capabilities have extended in such a way that it has become common parlance to talk about intelligent machines or smart devices.

While the original intention behind the development of computers was to improve cognitive tools with the intent of supporting human beings, one can today get the impression as if the computer has turned into a goal of its own: the intelligent and then, as supposed, the super-intelligent computer now appears as the primary goal, and mankind appears as some old relic which has to be surpassed soon.

As will be shown later in this text, this vision of the computer surpassing mankind rests on assumptions which are at least questionable.

What seems possible and what seems to be a promising roadmap into the future is a continuous step-wise enhancement of the biological structure of mankind which absorbs the modern computing technology by new cognitive interfaces which in turn presuppose new types of physical interfaces.

To give a precise definition of these new upcoming structures and functions is not yet possible, but to identify the actual driving factors as well as the exciting combinations of factors seems possible.

COGNITION EMBEDDED IN MATTER
Cognition within the Actor-Actor Interaction (AAI) paradigm (formerly Actor-Cognition Interaction, ACI): a simple outline of the whole paradigm

The main idea is to shift the focus away from the physical grounding of the interaction between actors and to look instead more at the cognitive contents and processes, which shall be mediated by the physical conditions. Clearly the analysis of the physical conditions as well as the optimal design of these physical conditions is still a challenge and a task, but without clear knowledge, manifested in a clear model of the intended cognitive contents and processes, one does not have enough knowledge for the design of the physical layout.

SOLVING A PROBLEM

Thus the starting point of an engineering process is a group of people (the stakeholders (SH)) who identify some problem (P) in their environment and who have some minimal idea of a possible solution (S) for this problem. This can be complemented by some non-functional requirements (NFRs) articulating more general properties which shall hold throughout the whole solution (e.g. 'being safe', 'being barrier-free', 'being real-time' etc.). If the description of the problem with a first intended solution, including the NFRs, contains at least one task (T) to be solved, minimal intended users (U) (here called executive actors (eA)), minimal intended assistive actors (aA) to assist the users in doing the task, as well as a description of the environment of the task, then the minimal ACI check can be passed and the ACI analysis process can be started.
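
To make this minimal ACI check a bit more tangible, here is a small sketch in python3. All names (ProblemDescription, aci_check, the example values) are hypothetical illustrations of the idea, not part of an official ACI software.

# Sketch of the minimal 'ACI check' described above: a problem description
# must contain at least one task, intended executive actors (eA), intended
# assistive actors (aA), and a description of the environment.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProblemDescription:
    problem: str                                   # problem P
    vision: str                                    # first vision of a solution S
    nfrs: List[str] = field(default_factory=list)  # non-functional requirements
    tasks: List[str] = field(default_factory=list) # tasks T
    executive_actors: List[str] = field(default_factory=list)  # eA (intended users)
    assistive_actors: List[str] = field(default_factory=list)  # aA
    environment: str = ""                          # environment of the task

def aci_check(pd: ProblemDescription) -> bool:
    """True if the minimal conditions for starting the ACI analysis hold."""
    return (len(pd.tasks) >= 1
            and len(pd.executive_actors) >= 1
            and len(pd.assistive_actors) >= 1
            and pd.environment != "")

pd = ProblemDescription(
    problem="long waiting times at the communal office",
    vision="an online appointment assistant",
    nfrs=["being safe", "being barrier-free"],
    tasks=["book an appointment"],
    executive_actors=["citizen"],
    assistive_actors=["appointment assistant"],
    environment="communal office with online access")
print("ACI check passed:", aci_check(pd))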

COGNITION AND AUGMENTED COLLECTIVE INTELLIGENCE

When we talk about cognition we usually think of cognitive processes in an individual person. But in the real world there is no cognition without an ongoing exchange between different individuals through communicative acts. Furthermore it has to be taken into account that the cognition of an individual person is itself partitioned into two unequal parts: the unconscious part, which covers about 99% of all processes in the body and in the brain, and the conscious part, which covers about 1%. For an individual person to think something at all, this person has to trigger his or her own unconscious by stimuli so that it responds with messages from previously unknown knowledge. Thus even an individual person alone has to organize a communication with his or her own unconscious in order to gain some conscious knowledge of this unconscious knowledge. And because no individual person has at any point in time a clear knowledge of his or her unconscious knowledge, the person does not even really know what to look for, as long as there is no event, no perception, no question or the like which triggers the person to interact with this unconscious knowledge (and experience) and to receive some messages from this unconscious machinery, which, as it seems, is working all the time.

Given this logic of the individual's internal communication with his or her own cognition, an external communication with the world and with the manifested cognition of other persons appears as a possible enrichment through interaction with the knowledge distributed over different persons. If, as in the following approach, the different knowledge responses are represented in a common symbolic representation, viewable (and hearable) by all participating persons, then a possible picture of something emerges which is generally richer, with more facets, than a picture generated by an individual person alone. Furthermore such a procedure can help all participants to synchronize their different knowledge fragments into a bigger picture and to use it further on as their own picture, which in turn can trigger even more aspects out of the distributed unconscious knowledge.

If one organizes this collective triggering of distributed unconscious knowledge within a communication process not only with static symbolic models but, beyond this, with dynamic rules for changes, which can be interactively simulated or even played with defined states of interest, then the effect of expanding the explicit and shared knowledge will be boosted even more.

Against this background it makes sense to turn the wording Actor-Cognition Interaction into the wording Augmented Collective Intelligence, where Intelligence is the component of dynamic cognition in a system (here a human person), Collective means that different individual persons are sharing their unconscious knowledge through communicative interactions, and Augmented means that this sharing of knowledge is enhanced and extended by new tools for modeling, simulation and gaming, which expand and intensify individual learning as well as the commonly shared views. For nearly all problems today this appears to be absolutely necessary.

ACI ANALYSIS PROCESS

Here it is assumed that there exists a group of ACI experts which can guide other actors (stakeholders, domain experts, …) through a process of analyzing the problem P with the explicit goal of finding a satisfying solution (S+).

For the whole ACI analysis process an appropriate ACI software should be available to support the ACI experts as well as all the other domain experts.

In this ACI analysis process one can distinguish two main phases: (1) construct an actor story (AS) which describes all intended states and intended changes, and (2) run several tests of the actor story to exploit its explanatory power.

ACTOR STORY (AS)

The actor story describes all possible states (S) of the tasks (T) which have to be realized to reach intended goal states (S+). A mapping from one state to a follow-up state is described by a change rule (X). Thus, having a start state (S0) and appropriate change rules, one can construct the follow-up states from the actual state (S*) with the aid of the change rules. Formally this computation of a follow-up state (S') is done by a simulator function (σ), written as σ: S* × X → S.
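
As a rough illustration, such a simulator function can be sketched in python3, assuming that a state is represented as a set of facts and that a change rule removes and adds facts when its condition holds. All names are hypothetical; this is only a sketch of the idea, not an official tool.

# Sketch of an actor-story simulator: states are sets of facts, a change rule X
# fires when its condition is contained in the current state and then removes
# and adds facts; sigma computes the follow-up state S'.
from dataclasses import dataclass
from typing import FrozenSet

State = FrozenSet[str]

@dataclass(frozen=True)
class ChangeRule:
    condition: State   # facts which must hold in the current state
    remove: State      # facts removed by the rule
    add: State         # facts added by the rule

def sigma(state: State, rule: ChangeRule) -> State:
    """Simulator function sigma: S x X -> S."""
    if rule.condition <= state:                  # rule applicable
        return (state - rule.remove) | rule.add
    return state                                 # not applicable: state unchanged

s0: State = frozenset({"citizen at home", "no appointment"})
x = ChangeRule(condition=frozenset({"no appointment"}),
               remove=frozenset({"no appointment"}),
               add=frozenset({"appointment booked"}))
print(sorted(sigma(s0, x)))   # ['appointment booked', 'citizen at home']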

SEVERAL TESTS

With the aid of an explicit actor story (AS) one can define the non-functional requirements (NFRs) in such a way that it becomes decidable whether an NFR is valid with regard to an actor story or not. In this case the test of validity can be done as an automated verification process (AVP). Part of this test paradigm is the so-called oracle function (OF), by which one can pose a question to the system and the system will answer it with regard to all theoretically possible states, without the need to run a (passive) simulation.
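
Reusing the hypothetical state and rule representation from the simulator sketch above, an automated verification of an NFR could enumerate all states reachable from a start state and test a predicate on each of them. This is only an illustrative sketch of the idea of an automated verification process, not an actual ACI software.

# Sketch of an automated verification / oracle-like check: enumerate all states
# reachable via the change rules and test an NFR predicate on every state.
# (State, ChangeRule, sigma, s0 and x are taken from the simulator sketch above.)
from typing import Callable, Iterable, List, Set

def reachable_states(start: State, rules: List[ChangeRule]) -> Set[State]:
    seen: Set[State] = {start}
    frontier: List[State] = [start]
    while frontier:
        current = frontier.pop()
        for rule in rules:
            nxt = sigma(current, rule)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def verify_nfr(start: State, rules: List[ChangeRule],
               nfr: Callable[[State], bool]) -> bool:
    """True if the NFR predicate holds in every reachable state."""
    return all(nfr(s) for s in reachable_states(start, rules))

def nfr_safe(s: State) -> bool:      # example NFR: no state contains "data leaked"
    return "data leaked" not in s

print("NFR valid:", verify_nfr(s0, [x], nfr_safe))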

If the group is large and it is important that all members of the group have sufficiently similar knowledge about the problem(s) in question (as is usually the case in a city with different kinds of citizens), then it can be very helpful to enable interactive simulations or even games, which allow a more direct experience of the possible states and changes. Furthermore, because the participants can act according to their individual reflections and goals, the process becomes highly uncertain and nearly unpredictable. Especially for these highly unpredictable processes interactive simulations (and games) can help to improve a common understanding of the involved factors and their effects. The difference between a normal interactive simulation and a game is that a game has explicit win states whereas an interactive simulation does not. Explicit win states can improve learning a lot.

The other interesting question is whether an actor story (AS) with a certain idea for an assistive actor (aA) is usable for the executive actors. This requires explicit measurements of usability, which in turn require a clear norm of reference with which the behavior of an executive actor (eA) during a process can be compared. Usually the actor story as such is the norm of reference against which the observable behavior of the executing actors is compared. Thus for the measurement one needs real executive actors which represent the intended executive actors, and one needs a physical realization of the intended assistive actor, called a mock-up. A mock-up is not yet the final implementation of the intended assistive actor, but a physical entity which shows all important physical properties of the intended assistive actor in a way which allows a real test run. While in the past it was assumed to be sufficient to test a test person only once, here it is assumed that a test person has to be tested at least three times. This follows from the assumption that every executive (biological) actor is inherently a learning system, which implies that the test person will behave differently in different tests. The degree of change can be a hint at the ease of use and the learnability of the assistive actor.
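
A minimal sketch of such a repeated measurement, assuming that each test run is summarized by two hypothetical measures (number of deviations from the actor story and the time needed), could look like this:

# Sketch of a repeated usability measurement: a test person is observed in at
# least three runs; the actor story is the norm of reference, and the decrease
# of deviations over the runs hints at the learnability of the assistive actor.
from dataclasses import dataclass
from typing import List

@dataclass
class TestRun:
    deviations: int    # steps deviating from the actor story
    seconds: float     # time needed to complete the task

def learnability_trend(runs: List[TestRun]) -> float:
    """Average reduction of deviations per run (positive = learning effect)."""
    if len(runs) < 2:
        return 0.0
    diffs = [runs[i].deviations - runs[i + 1].deviations for i in range(len(runs) - 1)]
    return sum(diffs) / len(diffs)

runs = [TestRun(deviations=5, seconds=210.0),   # first run
        TestRun(deviations=2, seconds=150.0),   # second run
        TestRun(deviations=1, seconds=120.0)]   # third run
print("Average improvement per run:", learnability_trend(runs))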

COLLECTIVE MEMORY

If an appropriate ACI software is available, then one can consider an actor story as a simple theory (ST) embracing a model (M) and a collection of rules (R) (ST(x) iff x = <M,R>), which can be used as a kind of building block which in turn can be combined with other such building blocks, resulting in a complex network of simple theories. If these simple theories are stored in a publicly available database (like a library of theories), then one can build up over time a large knowledge base of one's own.
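
A simple theory and the combination of two such building blocks could be sketched like this (a hypothetical representation only, reusing the set-of-facts idea and the ChangeRule type from the simulator sketch above):

# Sketch of a 'simple theory' ST = <M,R>: a model M (a set of facts) plus a
# collection of change rules R; two theories can be combined into a larger
# building block by uniting their models and rules.
from dataclasses import dataclass
from typing import FrozenSet, Tuple

@dataclass(frozen=True)
class SimpleTheory:
    model: FrozenSet[str]               # M: facts of the model
    rules: Tuple["ChangeRule", ...]     # R: change rules (see simulator sketch)

def combine(st1: SimpleTheory, st2: SimpleTheory) -> SimpleTheory:
    """Combine two simple theories into one larger building block."""
    return SimpleTheory(model=st1.model | st2.model,
                        rules=st1.rules + st2.rules)

traffic = SimpleTheory(frozenset({"bus line exists"}), ())
housing = SimpleTheory(frozenset({"flats are scarce"}), ())
city_model = combine(traffic, housing)   # a new building block for a shared library
print(sorted(city_model.model))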

ENGINEERING AND SOCIETY: The Role of Preferences

eJournal: uffmm.org,
ISSN 2567-6458, 4.May 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

FINAL HYPOTHESIS

This suggests that a symbiosis between creative humans and computing algorithms is an attractive pairing. For this we have to re-invent our official learning processes in schools and universities to train the next generation of humans in a more inspired and creative usage of algorithms within game-like learning processes.

CONTEXT

The overall context is given by the description of the Actor-Actor Interaction (AAI) paradigm as a whole. In this text the special relationship between engineering and the surrounding society is in focus. And within this very broad and rich relationship the main interest lies in the ethical dimension, here understood as those preferences of a society which are more strongly supported than others. It is assumed that such preferences, manifesting themselves in real actions within a space of many other options, point to hidden values which guide the decisions of the members of a society. Thus values are hypothetical constructs based on observable actions within a cognitively assumed space of possible alternatives. These cognitively represented possibilities are usually given only as a mixture of explicitly stated symbolic statements and different unconscious factors which influence the decisions that cause the observable actions.

These assumptions do not represent a common opinion today and are not condensed in some single theoretical text. Nevertheless I am using them here because they help to shed some light on the rather complex process of finding a real solution to a stated problem which is rooted in the cognitive space of the participants of the engineering process. Working with these assumptions in concrete development processes can support a further clarification of all these concepts.

ENGINEERING AND SOCIETY

DUAL: REAL AND COGNITIVE

The relationship between an engineering process and the preferences of a society

As assumed in the AAI paradigm, the engineering process is that process which connects the event of stating a problem, combined with a first vision of a solution, with a final concrete working solution.

The main characteristic of such an engineering process is its dual character: a continuous interaction between the cognitive space of all participants of the process and real-world objects, actions, and processes. The real world as such is a loose collection of real things, to some extent connected by regularities inherent in natural things; but visions of possible states, possible different connections, possible new processes are bound to the cognitive space of biological actors, especially to humans as exemplars of the species Homo sapiens.

Thus it is a major factor of training, learning, and education in general to see how the real world can be mapped into some cognitive structures, how the cognitive structures can be transformed by cognitive operations into new structures and how these new cognitive structures can be re-mapped into the real world of bodies.

Within the cognitive dimension there exist nearly infinite sets of possible alternatives, all of which indicate possible states of a world, whose feasibility is more or less convincing. Limited by time and resources it is usually not possible to explore all these cognitively tapped spaces as to whether and how they work, what possible side effects they have, etc.

PREFERENCES

Partly by nature, partly by past experience, biological systems like Homo sapiens have developed cultural procedures to induce preferences about how possible options are selected, which one should be selected, under which circumstances, and with even more constraints. In some situations these preferences can be helpful; in others they can hide possibilities which afterwards can be re-detected as being very valuable.

Thus every engineering process which starts a transformation process from one cognitively given point of view to a new cognitive point of view, with a subsequent translation into some real thing, shares its cognitive space with the possible preferences of the cognitive space of the surrounding society.

It is an open question whether or not the engineers as the experts have an experimental, creative attitude to explore, without dogmatic constraints, the possible cognitive spaces in order to find new solutions which can improve life. If one assumes that there exist no absolute preferences, on account of the substantially limited knowledge of mankind at every point of time, and infers from this fact the necessity to extend the actual knowledge further to enable the mastering of an open, unknown future, then the engineers will try to explore seriously all possibilities without constraints, to extend the power of engineering deeper into the heart of the known as well as the unknown universe.

EXPLORING COGNITIVE POSSIBILITIES

At the start one has only a rough description of the problem and a rough vision of a wanted solution which gives some direction for the search for an optimal solution. This direction also represents a kind of preference for what is wanted as the outcome of the process.

On account of the inherent duality of human thinking and communication, embracing the cognitive space as well as the realm of real things, both connected by complex mappings realized by the brain which operates nearly completely unconsciously, a long process of concrete real and cognitive actions is necessary to materialize cognitive realities within a communication process. The main modes of materialization are the usage of symbolic languages, paintings (diagrams), physical models, algorithms for computation and simulations, and especially gaming (in several different modes).

As everybody can know, these communication processes are not simple, can be a source of confusion, and the coordination of different brains with different cognitive spaces as well as different layouts of unconscious factors is a difficult and very demanding endeavor.

The communication mode of gaming is of special interest here because it is one of the oldest and most natural modes of learning, but in the official education processes of schools and universities (and in companies) it was until recently not part of the official curricula. Yet it is the only mode in which one can exercise the dimension of preferences explicitly, in combination with an exploring process and, if one wants, with the explicit social dimension of having more than one brain involved.

In the last 50–100 years the term project has gained more and more acceptance, and indeed the organization of projects resembles a game, but it is usually handled as a hierarchical, constraints-driven process where creativity and concurrent development (= gaming) is not a main topic. Even if companies allow concurrent development teams, these teams are cognitively separated and the implicit cognitive structures are black boxes which cannot be evaluated as such.

In the presupposed AAI paradigm here the open creative space has a high priority to increase the chance for innovation. Innovation is the most valuable property in face of an unknown future!

While the open space for real creativity has to be exercised in all the mentioned modes of communication, the final gaming mode is of special importance. To enable a gaming process one has to define explicit win-lose states. This objectifies values and preferences which were hidden in the cognitive space before. Such an objectification makes things transparent, enables more rationality and allows the explicit testing of these defined win-lose states as feasible or not. Only tested hypotheses represent tested empirical knowledge. And because in a gaming mode whole groups or even all members of a social network can participate in a learning process about the functioning and possible outcome of a presented solution, everybody can be included. This implies a common sharing of experience and knowledge which simplifies the communication, and therefore the coordination of the different brains with their unconscious parts, a lot.

TESTING AND EVALUATION

Testing a proposed solution is another expression for measuring the solution. Measuring is understood here as a basic comparison between the target to be measured (here the proposed solution) and a previously agreed norm which shall be used as the point of reference for the comparison.

But what can such a previously agreed norm be?

Some aspects can be mentioned here:

  1. First of all there is the proposed solution as such, which is here a proposal for a possible assistive actor in an assumed environment for some intended executive actors which have to fulfill some job (task).
  2. Part of this proposed solution are given constraints and non-functional requirements.
  3. Part of this proposed solution are some preferences as win-lose states which have to be reached.
  4. Another factor, difficult to define, are the executive actors if they are biological systems: biological systems with their basic built-in ability to act freely, to be learning systems, and with an associated, indefinably large unconscious realm.

Given the explicit preferences, constrained by many assumptions, one can only test whether the invited test persons, understood as possible instances of the intended executive actors, are able to fulfill the defined task(s) within some predefined amount of time, within an allowed threshold of errors, with an expected percentage of solved sub-tasks, together with a sufficient subjective satisfaction with the whole process.
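
Such a test criterion can be written down as a simple check against previously agreed thresholds; the concrete numbers used below are hypothetical placeholders, not normative values.

# Sketch of a pass/fail evaluation of one test run against agreed thresholds
# (time, number of errors, share of solved sub-tasks, subjective satisfaction).
def test_passed(seconds: float, errors: int, solved_ratio: float,
                satisfaction: float,
                max_seconds: float = 300.0, max_errors: int = 3,
                min_solved: float = 0.8, min_satisfaction: float = 0.7) -> bool:
    return (seconds <= max_seconds
            and errors <= max_errors
            and solved_ratio >= min_solved
            and satisfaction >= min_satisfaction)

print(test_passed(seconds=240.0, errors=2, solved_ratio=0.9, satisfaction=0.8))  # True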

But because biological executive actors are learning systems, they will behave differently in repeated tests; furthermore they can change their motivations and their interests, they can change their emotional commitment, and because of their built-in basic freedom to act there can be no 100% probability that they will act at time t as they have acted all the time before.

Thus for all kinds of jobs where the process is more or less fixed, where nothing new will happen, the participation of biological executive actors in such a process is questionable. It seems (hypothesis) that biological executive actors are better placed in jobs with some minimal rate of curiosity, innovation, and creativity combined with learning.

If this hypothesis is empirically sound (as it seems), then all jobs where human persons are involved should have more the character of games than of something else.

It is an interesting side note that current research in robotics under the label of developmental robotics is struck by the problem of how one can make robots learn continuously by following interesting preferences. Given a preference, an algorithm can often work better than a human person, under certain circumstances, to find an optimal solution; but lacking such a preference the algorithm is lost. And actually there exists not the faintest idea how algorithms should acquire the kind of preferences which are interesting and important for an unknown future.

On the contrary, humans are known to be creative, innovative, able to detect new preferences etc., but they have only limited capacities to explore these creative findings up to some telling endpoint.

This suggests that a symbiosis between creative humans and computing algorithms is an attractive pairing. For this we have to re-invent our official learning processes in schools and universities to train the next generation of humans in a more inspired and creative usage of algorithms within game-like learning processes.


SIMULATION AND GAMING

eJournal: uffmm.org,
ISSN 2567-6458, 3.May 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

Within the AAI paradigm the following steps will usually be distinguished:

  1. A given problem and a wanted solution.
  2. An assumed context and intended executing and assisting actors.
  3. Assumed non-functional requirements (NFRs).
  4. An actor story (AS) describing at least one task including all the functional requirements.
  5. A usability test, often enhanced with passive or interactive simulations.
  6. An evaluation of the test.
  7. Some repeated improvements.

With these elements one can analyze and design the behavior surface of an assistive actor which can meet the requirements of the stakeholders.

SIMULATION AND GAMING

Comparing these elements with a (computer) game, one can detect immediately that a game characteristically allows one to win or to lose. The possible win states or lose states stop a game. Often the winning state additionally includes some measure of how 'strongly' or by how 'much' someone has won or lost a game.

Thus in a game one has, besides the rules of the game R which determine what is allowed in the game, some set of value labels V which mark some property, some object, some state as associated with some value v, optionally together with numbers to quantify the value between a maximum and a minimum.
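
A compact sketch of these ingredients in python3, with purely hypothetical names and values, could look like this:

# Sketch of the ingredients of a game on top of a simulation: rules R (what is
# allowed), value labels V (which facts carry which value), and explicit win
# states which stop the game.
from typing import Dict, FrozenSet, List, Set

GameState = FrozenSet[str]

rules_R: Set[str] = {"move", "trade", "build"}    # allowed kinds of actions
value_labels_V: Dict[str, int] = {                # facts associated with values
    "park built": 3,
    "traffic jam": -2,
}
win_states: List[GameState] = [frozenset({"park built", "budget balanced"})]

def score(state: GameState) -> int:
    """Sum of the value labels present in a state."""
    return sum(v for fact, v in value_labels_V.items() if fact in state)

def is_win(state: GameState) -> bool:
    return any(win <= state for win in win_states)

s: GameState = frozenset({"park built", "budget balanced", "traffic jam"})
print(score(s), is_win(s))   # 1 True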

In most board games you reach an end state where you are the winner or the loser, independent of any value. In other games one plays as often as necessary to reach some accumulated value which gives a measure of who is better than someone else.

Doing AAI analysis as part of engineering it is usually sufficient to develop an assistive actor with a behavior surface  which satisfies all requirements and some basic needs of the executive actors (the classical users).

But this newly developed product (the assistive actor for certain tasks) will be part of a social situation with different human actors. In these social situations there are often more requirements, demands, emotions around than only the original  design criteria for the technical product.

For some people the aesthetic properties of a technical product can be important, or some other cultural code which, being associated with the technical product, makes it precious or not.

Furthermore there can be whole processes within which a product can be used or not, making it precious or not.

COLLECTIVE INTELLIGENCE AND AUTOPOIETIC GAMES

In the case of simulations one has from the beginning a special context given by the experience and the knowledge of the executive actors. In some cases this knowledge is assumed to be stable or closed; then there is no need to think of the assistive actor as a tool which not only supports the fulfilling of a defined task but additionally supports the further development of the knowledge and experience of the executive actor. But there are social situations (in a city, in an institution, in learning in general) where the assumed knowledge and experience is neither complete nor stable. On the contrary, in these situations there is a strong need to develop the assumed knowledge further and to do this as a joint effort to improve the collective intelligence collectively.

If one sees the models and rules underlying a simulation as a kind of representation of the assumed collective knowledge, then a simulation can help to visualize this knowledge, make it an experience, and explore its different consequences. And insofar as the executive actors write the rules of change themselves, they understand the simulation and they can change the rules to understand better what can improve the process and the possible goal states. This kind of collective development of models and rules, together with their testing, can be called autopoietic because the executing actors have two roles: (i) following some rules (which they have defined themselves) they explore what happens when one adheres to these rules; (ii) changing the rules, they change the possible outcomes.
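
The two roles can be sketched as a small loop in which the participants alternately run the current rules and then revise them. This is only a hypothetical sketch of the idea; rules are modeled here simply as functions from state to state.

# Sketch of an autopoietic game loop: (i) run a simulation with the current
# change rules, (ii) let the participants revise the rules, then run again.
from typing import Callable, Dict, List

SimState = Dict[str, float]
Rule = Callable[[SimState], SimState]

def simulate(state: SimState, rules: List[Rule], steps: int) -> SimState:
    for _ in range(steps):
        for rule in rules:
            state = rule(state)
    return state

def more_cars(s: SimState) -> SimState:           # rule written by the participants
    return {**s, "traffic": s["traffic"] + 10}

start: SimState = {"traffic": 100.0}
print(simulate(dict(start), [more_cars], steps=3))             # traffic grows

def build_tram(s: SimState) -> SimState:          # revised rule after seeing the outcome
    return {**s, "traffic": s["traffic"] - 15}

print(simulate(dict(start), [more_cars, build_tram], steps=3)) # traffic shrinks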

This procedure establishes some kind of collective learning within an autopoietic process.

If one enriches this setting with explicit goal states, states of assumed advantages, then one can look at this collective learning as a serious pattern of collective learning by autopoietic games.

For many contexts like cities, educational institutions, and even companies, this kind of collective learning by autopoietic games can be a very productive way to develop the collective intelligence of many people while at the same time gaining knowledge and having some exciting fun.

Autopoietic gaming as support for collective learning processes


THE BIG PICTURE: HCI – HMI – AAI in History – Engineering – Society – Philosophy

eJournal: uffmm.org,
ISSN 2567-6458, 20.April 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

A first draft version …

CONTEXT

The context for this text is the whole block dedicated to the AAI (Actor-Actor Interaction)  paradigm. The aim of this text is to give the big picture of all dimensions and components of this subject as it shows up during April 2019.

The first dimension introduced is the historical dimension, because it allows a first orientation in the course of events which led to the present situation. It starts with the early days of real computers in the thirties and forties of the 20th century.

The second dimension is the engineering dimension, which describes the special view within which we look at the overall topic of interactions between human persons and computers (or machines, or technology, or society). We are interested in how to transform a given problem into a valuable solution in a methodologically sound way, called engineering.

The third dimension is society as a whole, because engineering always happens as a process within a society. Society provides the resources which can be used and provides the preferences (values) determining what is understood as 'valuable', as 'good'.

The fourth dimension is philosophy, as that kind of thinking which takes everything into account which can be thought; within thinking, philosophy clarifies the conditions of thinking and the possible tools of thinking, and it has to clarify when some symbolic expression becomes true.

HISTORY

In history we are looking back in the course of events. And this looking back is in a first step guided by the  concepts of HCI (Human-Computer Interface) and  HMI (Human-Machine Interaction).

It is an interesting phenomenon how the original focus on the interface between human persons and the early computers shifted to the more general picture of interaction, because the computer as a machine developed rapidly on account of the rapid development of the enabling hardware (HW) and the enabling software (SW).

Within the general framework of hardware and software the so-called artificial intelligence (AI) first developed as a sub-topic of its own. During the last 10–20 years it became productive in a way that it now seems to become a normal part of every kind of software; software and smart software seem to be interchangeable. Thus the new wording of augmented or collective intelligence is emerging, intending to bridge the possible gap between humans with their human intelligence and machine intelligence. There is some motivation from the side of society not to allow the impression that the smart (intelligent) machines will some day replace humans. Instead one propagates the vision of a new collective shape of intelligence where human and machine intelligence allow a symbiosis in which each side gives its best and receives a maximum in a win-win situation.

What is revealing about the present situation is the fact that the mainstream is always talking about intelligence but not seriously about learning! Intelligence is by its roots a static concept representing certain capabilities at a certain point of time, while learning is the more general dynamic concept that a system can change its behavior depending on actual external stimuli as well as internal states. And such a change includes real changes of some of its internal states. Intelligence does not communicate this dynamics! The most demanding aspect of learning is the need for preferences. Without preferences learning is impossible. Today machine learning is a very weak example of learning because the question of preferences is not a real topic there. One assumes that some reward is available, but one does not really investigate this topic. The rare research trying to do this job states that there is not the faintest idea around how a general continuous learning could happen. Human society is of no help for this problem as long as human societies have a clash of many, often opposite, values and no commonly accepted view of how to improve this situation.

ENGINEERING

Engineering is the art and the science of transforming a given problem into a valuable and working solution. What is valuable is decided by the surrounding, enabling society, and this judgment can change over time. Whether some solution is judged to be working can also change over time, but the criteria used for this judgment are more stable because they adhere to the concrete capabilities of technical solutions.

While engineering was and is always a kind of an art and needs aspects like creativity, innovation, intuition etc., it is also, as far as possible, a procedure driven by defined methods for doing things, and these methods are as far as possible backed up by scientific theories. The real engineer therefore synthesizes art, technology and science in a unique way which cannot be completely learned in school.

In the past as well as in the present, engineering has to happen in teams of many people, often many thousands or even more, who coordinate their brains by communication, which enables in the individual brains some kind of understanding, of emerging world pictures, which in turn guide the perception, the decisions, and the concrete behavior of everybody. And these cognitive processes are embedded, in every individual team member, in mixtures of desires, emotions, and motivations, which can support the cognitive processes or obstruct them. Therefore an optimal result can only be reached if the communication serves all necessary cognitive processes and the interactions between the team members enable the necessary constructive desires, emotions, and motivations.

If an engineering process is carried out by a small group of dedicated experts, usually triggered by the given problem of an individual stakeholder, this can work well for many situations. It has the flavor of a so-called top-down approach. If the engineering deals with states of affairs where different kinds of people, e.g. the citizens of some town, are affected by the results of such a process, the restriction to a small group of experts can become highly counterproductive. In those cases of widespread interest it seems promising to include representatives of all the involved persons in the executing team in order to take up their experiences and their kinds of preferences. This has to be done in a way which is understandable and appreciative, showing esteem for the others. This manner of extending the team of usual experts by situative experts can be termed a bottom-up approach. In this usage the term bottom-up is not the opposite of top-down but reflects the extent to which members of a society are included insofar as they are affected by the results of a process.

SOCIETY

Societies in the past and the present occur in a great variety of value systems, organizational structures, systems of power etc. Engineering processes within a society depend completely on the available resources of the society and on its value systems.

The population dynamics, the needs and wishes of the people, the real territories, the climate, housing, traffic, and many other things constantly produce demands which have to be solved if life shall be possible and continue in the course of time.

The self-understanding and the self-management of societies is crucial for their ability to use engineering to improve life. This requires communication and education to a sufficient extent, and appropriate public rules of management; otherwise the necessary understanding and the freedom to act for using engineering in the right way are lacking.

PHILOSOPHY

Without communication no common constructive process can happen. Communication happens according to many implicit rules, compressed in the formula: who can speak when, how, about what, with whom, etc. Communication enables cognitive processes of, for instance, understanding, explanation, and lines of argument. Especially important for survival is the ability to make true descriptions and the ability to decide whether a statement is true or not. Without this basic ability communication will break down, coordination will break down, life will break down.

The basic discipline which clarifies the rules and conditions of true communication, and of cognition in general, is called philosophy. All the more modern empirical disciplines are specializations of the general scope of philosophy, and it is philosophy which integrates all the special disciplines into one coherent framework (this is the ideal; actually we are far from this ideal).

Thus describing the process of engineering, driven by different kinds of actors which coordinate themselves by communication, is primarily a task of philosophy with all its sub-disciplines.

Thus some of the topics of Philosophy are language, text, theory, verification of a  theory, functions within theories as algorithms, computation in general, inferences of true statements from given theories, and the like.

In this text I apply philosophy as far as necessary. Especially, I introduce a new process model extending the classical systems engineering approach by including the driving actors explicitly in the formal representation of the process. Learning machines are included as standard tools to improve human thinking and communication. You can name this Augmented Social Learning Systems (ASLS). Compared to the wording Augmented Intelligence (AI) (as used for instance by IBM marketing) the ASLS concept stresses that the primary point of reference are the biological systems which created and create machine intelligence as a new tool to enhance biological intelligence as part of biological learning systems. Compared to the wording Collective Intelligence (CI) (as propagated by MIT, especially by Thomas W. Malone and colleagues) the spirit of the CI concept seems to be similar, but perhaps only weakly so.

Example python3: popShow0 – simple file-reader

eJournal: uffmm.org,
ISSN 2567-6458, 6.April 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This is a possible 5th step in the overall topic 'Co-Learning python3'. After downloading WinPython and activating the integrated editor 'spyder' (see here), one can edit another simple program dealing with population dynamics in a most simple way (see the source code below under the title 'SOURCE CODE for popShow0.py'). This program is a continuation of the program pop0e.py, which has been described here.

COMMENTS

In this post I comment only on the changes between the current program and the previous version.

IMPORT

In this program the following two libraries are used:

import numpy as np # Lib for math
from tkinter.filedialog import askopenfilename

numpy is known from previous programs, while tkinter is used here to enable a file dialog for finding a certain file while browsing through the different folders.

The program opens such a window for browsing:

print('A window is asking you for a filename\n')
infilename = askopenfilename()

The variable 'infilename' is a string variable which in this case stores the name (path) of the selected file. The content of this file looks like this:

# br=0.03,dr=0.019
# X-Column, Y-Column
1.000000000000000000e+00   7.466964000000000000e+06
2.000000000000000000e+00   7.549100604000000283e+06
3.000000000000000000e+00   7.632140710644000210e+06
4.000000000000000000e+00   7.716094258461084217e+06
5.000000000000000000e+00   7.800971295304155909e+06
6.000000000000000000e+00   7.886781979552501813e+06
7.000000000000000000e+00   7.973536581327578984e+06
8.000000000000000000e+00   8.061245483722181991e+06
9.000000000000000000e+00   8.149919184043126181e+06
1.000000000000000000e+01    8.239568295067600906e+06
1.100000000000000000e+01    8.330203546313344501e+06
1.200000000000000000e+01    8.421835785322790965e+06
1.300000000000000000e+01    8.514475978961341083e+06
1.400000000000000000e+01    8.608135214729916304e+06
1.500000000000000000e+01    8.702824702091945335e+06
1.600000000000000000e+01    8.798555773814957589e+06

This content will then be read and split into two columns by the following lines:

data = np.loadtxt(infilename)
x = data[:, 0]
y = data[:, 1]

The leading header lines (starting with '#') are automatically skipped and the main content is stored in two columns. These formatted two-column data are then printed with:

for i in range(len(x)):
    print('Year %d = Citizens. %9.0f \n' % (x[i], y[i]))

For each of the values to print, x[i] and y[i], there are formatting options telling that x[i] represents a year, understood as an integer, and that y[i] represents the population number (Citizens), understood as a floating-point number with no digits after the decimal point.

The output then looks like this (data are from the UN for 2016; the simulation computes a possible future from this):

Year 1 = Citizens. 7466964

Year 2 = Citizens. 7549101

Year 3 = Citizens. 7632141

Year 4 = Citizens. 7716094

Year 5 = Citizens. 7800971

Year 6 = Citizens. 7886782

Year 7 = Citizens. 7973537

Year 8 = Citizens. 8061245

Year 9 = Citizens. 8149919

Year 10 = Citizens. 8239568

Year 11 = Citizens. 8330204

Year 12 = Citizens. 8421836

Year 13 = Citizens. 8514476

Year 14 = Citizens. 8608135

Year 15 = Citizens. 8702825

Year 16 = Citizens. 8798556

What is missing here is the information about the ‘real years’ as 1 = 2016 etc.

SOURCE CODE for popShow0.py

popShow0.py as popShow0.pdf
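
Putting the snippets above together, a complete minimal version of such a file reader could look like the following sketch. It is assembled from the code fragments shown in this post and is not necessarily identical to the original popShow0.py.

#!/usr/bin/env python3
# Minimal sketch of the file reader described above (assembled from the
# snippets in this post, not necessarily identical to popShow0.py).

import numpy as np                               # lib for math
from tkinter.filedialog import askopenfilename   # file-open dialog

print('A window is asking you for a filename\n')
infilename = askopenfilename()                   # path of the selected data file

data = np.loadtxt(infilename)                    # header lines starting with '#' are skipped
x = data[:, 0]                                   # first column: year index
y = data[:, 1]                                   # second column: population number

for i in range(len(x)):
    print('Year %d = Citizens. %9.0f \n' % (x[i], y[i]))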

Example python3: pop0e – simple population program

eJournal: uffmm.org,
ISSN 2567-6458, 4-6.April 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This is a possible 4th step in the overall topic 'Co-Learning python3'. After downloading WinPython and activating the integrated editor 'spyder' (see here), one can edit another simple program dealing with population dynamics in a most simple way (see the source code below under the title 'SOURCE CODE'). This program is a continuation of the program pop0d.py, which has been described here.

COMMENTS

In this post I comment only on the changes between the current program and the previous version.

IMPORTS

In the new version one more library is used for the handling of time stamps:

import time # Lib for time

STORING DATA IN A FILE

The only extension in the new version of the small program is a few lines enabling the storage of the data from the simulation in a file.

data = np.column_stack((x,pop))

This line formats the plot values, the x values and the population values (the y axis), as two columns side by side, ready to be written into a file.
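For a quick illustration (the numbers are hypothetical), np.column_stack stacks two one-dimensional sequences as columns of one two-dimensional array:

import numpy as np
x = [1, 2, 3]          # hypothetical x values
pop = [100, 103, 106]  # hypothetical population values
data = np.column_stack((x, pop))
# data is now a 3x2 array: [[1, 100], [2, 103], [3, 106]]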

What comes next is the construction of a file name which includes the current time as well as the values of the br and the dr variables:

ts = time.gmtime()
t = time.strftime("%c", ts) # format the time data as a readable string
t = t.replace(' ','-')
t = t.replace(':','-')
header = 'br='+str(br)+','+'dr='+str(dr)+'\n'+'X-Column, Y-Column'
fname = 'hxyPTL'+t+'br='+str(br)+'-'+'dr='+str(dr)

After this construction a file with a descriptive name is generated which contains the plotting data:

np.savetxt(fname+'.txt', data, header=header)

SOURCE CODE

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Thu 4-6 April Konga (Sweden) 2019

@author: gerd doeben-henisch
Email: gerd@doeben-henisch.de
"""

##################################
# pop0e()
###################################
#
# IDEA
#
# Simple program to compute the increase/ decrease of a population with
# the parameters population number (p), birth-rate (br) and death-rate (dr).
# In this version an extension with the following features:
# – a loop to repeat the computation for n-many cycles
# – a storage of the data in an array
# – an additional automatic storage of the plotting data in a file
# with the actual time in the file-name
# – a plot of the stored data for n-many cycles
# – the overall change of the population in %

#########################################################
# IMPORTS
# As part of the distribution of Winpython there are already many
# libraries pre-installed, which can be activated by the import command.
# Other libraries outside of the distribution have to be downloaded
# with the pip command
# Lib for plotting

import matplotlib.pyplot as plt # Lib for plotting
import numpy as np # Lib for math
import time # Lib for time

###########################################################
# DEFINITION

def pop0(p,br,dr):
    p = p + (p*br) - (p*dr)
    return p

###################################
# INPUT OF DATA

p = int(input('Population number ? '))

br = float(input('Birthrate in % ? '))
br = br/100

dr = float(input('Deathrate in % ? '))
dr = dr/100

n = int(input('How many cycles ? '))

baseYear = int(input('What is your Base Year ? '))

#############################################
# GLOBAL VARIABLES

pop = [] # storage for the pop-numbers for plotting
pop.append(p)

#################################################
# COMPUTE

for i in range(n):
    p = pop0(p,br,dr)
    pop.append(p)

##################################################
# SHOW RESULTS

for i in range(n+1):
    print('Year %5d = Citizens. %8d \n' % (baseYear+i, pop[i]))

x = np.linspace(1,len(pop),len(pop))

plt.plot(x, pop, 'bo')
plt.show()

#####################################################
# STORE VALUES ON DISK
#
# For this see the online article
# https://www.pythonforthelab.com/blog/introduction-to-storing-data-in-files/
#
# Saves the plot data automatically in a file with a header and two columns

data = np.column_stack((x,pop))
ts = time.gmtime()
t = time.strftime("%c", ts) # format the time data as a readable string
t = t.replace(' ','-')
t = t.replace(':','-')
header = 'br='+str(br)+','+'dr='+str(dr)+'\n'+'X-Column, Y-Column'
fname = 'hxyPTL'+t+'br='+str(br)+'-'+'dr='+str(dr)
np.savetxt(fname+'.txt', data, header=header)

###########################################
# Compute Change of POP

n1=pop[0]
n2=pop[len(pop)-1]
Increase=(n2-n1)/(n1/100)

print("From Year %5d, until Year %5d, a change of %2.2f percent \n" % (baseYear, baseYear+n, Increase))

plt.close()

####################################################
# REAL DATA
#
# UN Demographic Yearbook 2017
# https://unstats.un.org/unsd/demographic-social/products/dyb/dybsets/2017.pdf
#
# Basic Tables UN
# https://unstats.un.org/unsd/demographic-social/products/vitstats/seratab1.pdf
#
# UN public tables
# http://data.un.org/Explorer.aspx?d=POP
#
# UN Rate of population change
# http://data.un.org/Data.aspx?d=PopDiv&f=variableID%3a47
# https://www.un.org/en/development/desa/population/index.asp
'''
Population number ? 6958169

Birthrate in % ? 1.9

Deathrate in % ? 0.77

How many cycles ? 15

What is your Base Year ? 2010
Year 2010 = Citizens. 6958169

Year 2011 = Citizens. 7036796

Year 2012 = Citizens. 7116312

Year 2013 = Citizens. 7196726

Year 2014 = Citizens. 7278049

Year 2015 = Citizens. 7360291

Year 2016 = Citizens. 7443462

Year 2017 = Citizens. 7527573

Year 2018 = Citizens. 7612635

Year 2019 = Citizens. 7698658

Year 2020 = Citizens. 7785653

Year 2021 = Citizens. 7873630

Year 2022 = Citizens. 7962602

Year 2023 = Citizens. 8052580

Year 2024 = Citizens. 8143574

Year 2025 = Citizens. 8235596

From Year 2010, until Year 2025, a change of 18.36 percent

Real UN data for 2010 – 2015
2010 2011 2012 2013 2014 2015
6 958 169   7 043 009   7 128 177   7 213 426   7 298 453   7 383 009

########################################################
# STORING DATA
#
# https://www.pythonforthelab.com/blog/introduction-to-storing-data-in-files/
#
# Example of saved file:
file name:
hxyPTLSun-Apr–7-08-55-46-2019br=0.019-dr=0.0077.txt

# br=0.019,dr=0.0077
# X-Column, Y-Column
1.000000000000000000e+00 6.958169000000000000e+06
2.000000000000000000e+00 7.036796309700000100e+06
3.000000000000000000e+00 7.116312107999609783e+06
4.000000000000000000e+00 7.196726434820005670e+06
5.000000000000000000e+00 7.278049443533471785e+06
6.000000000000000000e+00 7.360291402245399542e+06
7.000000000000000000e+00 7.443462695090772584e+06
8.000000000000000000e+00 7.527573823545298539e+06
9.000000000000000000e+00 7.612635407751359977e+06
1.000000000000000000e+01 7.698658187858950347e+06
1.100000000000000000e+01 7.785653025381756946e+06
1.200000000000000000e+01 7.873630904568570666e+06
1.300000000000000000e+01 7.962602933790194802e+06
1.400000000000000000e+01 8.052580346942024305e+06
1.500000000000000000e+01 8.143574504862469621e+06
1.600000000000000000e+01 8.235596896767416038e+06
'''


Example python3: pop0d – simple population program

eJournal: uffmm.org,
ISSN 2567-6458, 2-4.April 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This is a possible 3rd step in the overall topic ‘Co-Learning python3′. After downloading WinPython and activating the integrated editor ‘spyder’ (see here),  one can edit another simple program dealing with population dynamics in a most simple way (see the source code below under the title ‘EXAMPLE: pop0d.py’). This program is a continuation of the program pop0.py, which has been described here.

COMMENTS

In this post I comment only on the changes between the current program and the previous version.

IMPORTS

In the new version two libraries are used:

import matplotlib.pyplot as plt # Lib for plotting
import numpy as np # Lib for math

This extends the set of possible functions by functions for plotting and some more math.

MORE INPUT DATA

In this version one can enter a base year, thus allowing a direct relation to real year numbers in the history or the future. In the example run at the end of the program I am using the official population numbers of the UN for the world population in the years 2016 and 2017, which will hypothetically be forecasted for 15 years.

MORE GLOBAL VARIABLES

As explained for the previous version, there are local variables restricted in their meaning to a certain function and global variables outside a function. In this version a data structure called pop is introduced to store information in a sequential order. In this case the first population number given in the variable p is stored at the first position by the append operation which is part of the data structure pop.

pop = [] # storage for the pop-numbers for plotting
pop.append(p)

Later a data structure x is needed as a sequence of consecutive numbers starting with 1, ending with the number of entries in the pop data structure, and with as many positions as pop has entries. The number of elements in pop can be computed by applying the len() operator to pop.

x = np.linspace(1,len(pop),len(pop))
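For example, if pop has four entries, np.linspace(1, len(pop), len(pop)) evaluates to the array [1., 2., 3., 4.]: four evenly spaced numbers starting with 1 and ending with 4.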

EXTENDED COMPUTATION

The computation has been extended a little by the new line appending the current value of p to the pop storage:

for i in range(n):
    p = pop0(p,br,dr)
    pop.append(p)

Thus the pop data structure stores every new population value of p in the sequential order of its occurrence. Moreover, a loop has been realized here with the for statement. The variable i runs through a sequence of values provided by the range() function. This function builds a sequence of numbers from 0 to n-1 and therefore repeats the call of pop0() n times.

But the most important point here is that the value of the global variable p is handed over to the function pop0(), and the value of the local variable p inside the pop0() function is handed back to the outside of the function by the return statement. Outside, the global variable p is waiting and receives the new value by the =-operation. In the next call of pop0(), the function receives as new input the global variable p with this new value.
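A stripped-down illustration of this handing over (the numbers are hypothetical):

def pop0(p, br, dr):
    p = p + (p*br) - (p*dr)   # local p inside the function
    return p                  # hand the local value back to the caller

p = 1000                      # global p
p = pop0(p, 0.02, 0.01)       # global p receives the returned value: 1010.0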

SHOWING THE RESULTS

There are now three different data show actions: (i) the list of all years with their numbers, (ii) a graphical plot of data points, and (iii) a print-out of the increase of the population comparing the final result with the base year.

LIST OF ALL YEARS

for i in range(n+1):
    print('Year %5d = Citizens. %8d \n' % (baseYear+i, pop[i]))

Again the for statement is in action, ranging through the number of cycles given by the variable n. The population number in the different years is fetched from the pop data structure by indexing the individual elements of pop with the bracket notation []. pop[i] represents the i-th element of pop.

PLOTTING THE POPULATION VALUES

x = np.linspace(1,len(pop),len(pop))
plt.plot(x, pop, 'bo')
plt.show()

The plot function plot of the library plt needs an array of numbers for the x-axis, given here by the x data structure, an array of numbers for the y-axis, given here by the pop data structure, and optionally some parameter for the format of the plot symbols, here 'bo' for blue circle markers. After all values are prepared, the command plt.show() makes the plotted data visible.

Plot with the UN data for the possible growth of the world population

OVERALL PERCENTAGE OF CHANGE

n1=pop[0]
n2=pop[len(pop)-1]
Increase=(n2-n1)/(n1/100)

print("From Year %5d, until Year %5d, a change of %2.2f percent \n" % (baseYear, baseYear+n, Increase))

Taking the population of the base year from pop[0] and the population of the last year from pop[len(pop)-1], one can compute the difference as an increase translated into a percentage.
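For instance, with n1 = 6 958 169 and n2 = 8 235 596 (the values from the example run of pop0e above), Increase = (8 235 596 - 6 958 169)/(6 958 169/100), which is approximately 18.36 percent.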

REAL DATA

Although the population program is still very simple, it is useful to apply it to real numbers from the real world. One example is the official data of the United Nations (UN), which has been collecting worldwide data since 1948.

But, as a first surprise, although the UN provides lots of data from all the countries worldwide, it does not systematically provide a birthrate (br) or a deathrate (dr). Thus I have compiled br and dr by inferring them from the absolute population numbers of 2016 and 2017, combined with the fertility rate per 1000 people in the year 2013 averaged over all countries. This gives an estimate of 3% for the birthrate br. Using this from 2016 to 2017 gives an 'overshoot' compared to the real numbers of 2017. From this I inferred the deathrate dr, which is clearly a very weak inference. Nevertheless it works for 2016 to 2017 and gives a first simple example for the upcoming years.
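A minimal sketch of how such an inference could be computed (the concrete population numbers below are assumptions for illustration only; the real values have to be taken from the cited UN tables):

pop2016 = 7466964.0   # assumed absolute world population for 2016
pop2017 = 7550262.0   # assumed absolute world population for 2017
br = 0.03             # birthrate estimated from the fertility rate per 1000 people

growth = (pop2017 - pop2016) / pop2016   # observed relative growth from 2016 to 2017
dr = br - growth                         # inferred deathrate (a very weak inference)
print('br = %5.4f, dr = %5.4f' % (br, dr))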

SOURCE CODE OF pop0d.py

pop0d.py as pop0d.pdf

Example python3: pop0 – simple population program

eJournal: uffmm.org,
ISSN 2567-6458, 2-3.April 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This is a possible 2nd step in the overall topic 'Co-Learning python3'. After downloading WinPython and activating the integrated editor 'spyder' (see here), one can edit a first simple program dealing with population dynamics in a most simple way (see the source code below under the title 'EXAMPLE: pop0.py').

BASIC PROGRAMMING ELEMENTS

SOURCE FILE

The source code is stored under the name 'pop0.py'. It is a 'stand-alone' program not making use of any kind of library except the built-in functions of python3. External libraries can be included with the import command.

FUNCTION DEFINITION

If one wants to use the built-in functions in some new way, one can do this by telling the computer the keyword 'def', which states that the following text defines a new function.

A function always has a name and some input arguments between round brackets, followed by a colon marking the beginning of the function body. After the colon follows a list of built-in commands or some already defined functions. With defined functions one can make life much easier: instead of repeating all the commands of the function body again and again, one can limit the writing to the call of the function name with its input arguments.

In the example file we have the following function definition:

def pop0(p,br,dr):
       p=p+(p*br)-(p*dr)
       return p

The name is pop0, the input arguments are (p,br,dr), the used built-in operations are +, -, and *, together with the =-sign and the return statement, and the new composition is p=p+(p*br)-(p*dr). This new composition combines known operations with new variable names to compute a certain mathematical mapping. The result of this simple mapping is stored in the variable 'p' and it is delivered to the outside of the function pop0 by the return statement.

NECESSARY INPUT VALUES

To run the program one has to call the defined new function. But because the called function will need some values for the input variables one has first to enable the user to interact with the program by some input commands.

The input commands inform the user which kind of information is asked for, and the answers of the user will be stored in the variables p, br, and dr. The input values can also be 'cast' into different value types like int (for integer) and float (for floating point).

Here is the protocol of a possible input:

Number of citizens in the start year? 1000

Birthrate %? 0.82

Deathrate in %? 0.92

CALLING A DEFINED FUNCTION

After these preparations one can call the defined new function with the statement pnew = pop0(p,br,dr). Because the input variables have values received from the user, the new function can start its mapping and can compute the follow-up value for the population.

SHOW RESULTS

To show the user this new value explicitly on the screen one has to use the print function, the counterpart to the input function: print('New population number:\n',int(pnew)). The print function prints the new value for the population number on the screen. If you look closer at the print function you can detect some inherent structure: print() is the main structure with the function name 'print' and the brackets () as the placeholder for possible input arguments. In the used example the input arguments have two 'parts': ('…',v). The '…'-part allows some text which will be shown to the user, in our case New population number: followed by a line break caused by the symbols \n. The v-part allows the names of variables whose values can be printed. In the used example we have the expression int(pnew). pnew is the name of a variable with a value delivered by the pop0() function through its return p statement. (Attention: the return statement works without ()-brackets around its argument; the variable 'p' is its argument.) The value delivered by 'return p' is a floating point value. But because we have only 'whole citizens' we cast the non-integer part of the float value away by making the variable 'pnew' an argument of the function int(). The int() function translates a floating point value into an integer value. This is the reason that we do not see '999.0' but '999':

New population number:
999
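Because the full source file is only linked as a PDF below, here is a minimal reconstruction of what pop0.py could look like according to the description above (the exact prompt texts and the division by 100 are assumptions inferred from the example protocol; this is a sketch, not the original file):

#!/usr/bin/env python3
# pop0.py -- minimal reconstruction, not the original source

def pop0(p, br, dr):
    p = p + (p*br) - (p*dr)
    return p

p = int(input('Number of citizens in the start year? '))
br = float(input('Birthrate %? ')) / 100     # assumption: percent converted to a fraction
dr = float(input('Deathrate in %? ')) / 100  # assumption: percent converted to a fraction

pnew = pop0(p, br, dr)
print('New population number:\n', int(pnew))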

REMARK: Global and Local Variables

This simple example tells already something about the difference between global and local values. If one enters the variable names ‘p’ and ‘pnew’ in the python console of the spyder editor then one can see the following:

p
Out[5]: 1000

pnew
Out[6]: 999.0

After the function call to pop0() the original value of 'p' is unchanged, and the new value in 'pnew' is different. That means the new value of 'p' internal to the function and the old value of 'p' external to the function are separated. The name of a variable has therefore to be distinguished with regard to the actual context: the same name in different contexts (inside a function definition or outside) represents different memory spaces. The variable names inside a function definition are called local variables and the variable names external to a function definition are called global variables.

A continuation of this post you can find here.

SOURCE CODE: EXAMPLE: pop0.py

pop0.py as pop0.pdf

Co-Learning with python 3


eJournal: uffmm.org, ISSN 2567-6458, 1-6.April 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

The context of this small initiative is the Actor-Actor Interaction (AAI) paradigm described in this blog. For this AAI paradigm one will need an assisting software to manage real problems with real people. In principle this software can be realized with every kind of programming language. Which one will be used for the planned overall software service will be decided by those groups which will do the final job. But for the development of the ideas, for an open learning process, we will use here the programming language python 3. To get a first understanding of the main programming languages and the special advantages and disadvantages of python 3 compared to python 2 and to other important languages, see the short overview in apenwarr (2019-03-18) or chapters 1-2 of the excellent book by Lutz mentioned below.

VISION

There are many gifted young people around who could produce wonderful programs if they could find a 'start': how to begin, where to go from here… Sometimes these are friends doing other things together, like playing computer games 🙂. Why not start programming real programs by themselves? Why not start some exciting project of their own together?

If they want to do it, they need some first steps to enter the scene. This is the idea behind this vision: getting young people to take first steps, help each other, document the process, improve, make nice things…

CONTRIBUTING POSTS

Here is a list of posts which can support the Co-Learning Process:

More posts will follow in a random order, depending on the questions which arise or the ideas someone wants to test.

REFERENCES

There is a huge amount of Python books, articles and online resources. I will mention here only some of the books and resources I have used. The following list is therefore neither complete nor closed. I will occasionally add some titles.

  • Mark Lutz, Learning Python, 5th ed., Sebastopol (CA): O'Reilly, 2013
  • To get the sources: https://www.python.org/
  • Documentation: https://docs.python.org/3/
  • Python Software Foundation (PSF): https://www.python.org/psf-landing/
  • Python Community: https://www.python.org/community/


AAI-THEORY V2 – BLUEPRINT: Bottom-up

eJournal: uffmm.org,
ISSN 2567-6458, 27.February 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: 28.February 2019 (Several corrections)

CONTEXT

An overview of the enhanced AAI theory version 2 can be found here. In this post we talk about the special topic of how to proceed in a bottom-up approach.

BOTTOM-UP: THE GENERAL BLUEPRINT
Figure 1: Outline of the process how to generate an AS with a bottom-up approach

As the introductory figure shows, it is assumed here that there is a collection of citizens and experts who offer their individual knowledge, experiences, and skills to 'put them on the table', challenged by a given problem P.

This knowledge is in the beginning not structured. The first step in the direction of an actor story (AS) is to analyze the different contributions in a way which shows distinguishable elements with properties and relations. Such a set of first 'objects' and 'relations' characterizes a set of facts which define a 'situation' or a 'state' as a collection of 'facts'. Such a situation/ state can also be understood as a first simple 'model' as a response to a given problem. A model is as such 'static'; it describes what 'is' at a certain point of 'time'.

In a next step the group has to identify possible 'changes' which can be associated with at least one fact. There can be many possible changes, which possibly need different durations to come into effect. These effects can happen as 'exclusive alternatives' or in 'parallel'. Applying the possible changes to a situation generates 'successors' of the actual situation. A sequence of situations generated by applied changes is usually called a 'simulation'.
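A minimal sketch of this idea in python3 (the state, the change rule, and the variable names are purely illustrative assumptions):

state = {'persons': 10, 'water': 50}       # a 'situation' as a small collection of facts

def change_drink(s):                       # one possible 'change' applied to a situation
    s2 = dict(s)
    s2['water'] = s2['water'] - s2['persons']
    return s2

history = [state]                          # a 'simulation' as a sequence of situations
for cycle in range(3):
    history.append(change_drink(history[-1]))

for i, s in enumerate(history):
    print('cycle', i, s)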

If one allows real actors to interact with a simulation by associating a real actor with one of the actors 'inside the simulation', one turns the simulation into an 'interactive simulation', which represents basically a 'computer game' (short: 'egame').

One can use interactive simulations e.g. to (i) learn about the dynamics of a model, to (ii) test the assumptions of a model, to (iii) test the knowledge and skills of the real actors.

Making new experiences with a  simulation allows a continuous improvement of the model and its change rules.

Additionally one can include more citizens and experts into this process and one can use available knowledge from databases and libraries.

EPISTEMOLOGY OF CONCEPTS
Fig.2: Epistemology of concepts used in an AAI Analysis process

As outlined in the preceding section about the blueprint of a bottom-up process there will be a heavy usage of concepts to describe states of affairs.

The literature about this topic in philosophy as well as in many scientific disciplines is overwhelming, and therefore this small text can only be a 'pointer' into a complex topic. Nevertheless I will use exactly this pointer to explore this topic further.

While the literature is mainly dealing with more or less specific partial models, I am trying here to point out a very general framework which fits a more general philosophical, especially epistemological, view as well as gives respect to many results of scientific disciplines.

The main dimensions here are (i) the outside external empirical world, which connects via sensors to (ii) the internal body, especially the brain, which works largely 'unconscious', and then (iii) the 'conscious' part of the brain.

The most important relationship between the 'conscious' and the 'unconscious' part of the brain is the ability of the unconscious brain to transform incoming concrete sense experiences automatically into more 'abstract' structures, which have at least three sub-dimensions: (i) different concrete material, (ii) a sub-set of extracted common properties, (iii) different sets of occurring contexts associated with the different subsets. This enables the brain to extract only a 'few' abstract structures (= abstract concepts) to deal with 'many' concrete events. Thus the abstract concept 'chair' can cover many different concrete chairs which have only a few properties in common. Additionally the chairs can occur in different 'contexts' associating them with different 'relations' which can specify possible different 'usages' of the concept 'chair'.

Thus, if the actor perceives something which 'matches' some 'known' concept, then the actor is not only conscious of the empirical concrete phenomenon but also simultaneously of the abstract concept, which will automatically be activated. 'Immediately' the actor 'knows' that this empirical something is e.g. a 'chair'. Put concretely: this concrete something matches an abstract concept 'chair' which as such can cover many other concrete things too, each of which can be partially different from the others.

From this follows an interesting side effect: while an actor can easily decide whether a concrete something is there ("it is the case, that" = "it is true") or not ("it is not the case, that" = "it is not true" = "it is false"), an actor cannot directly decide whether an abstract concept like 'chair' as such is 'true' in the sense that the concept 'as a whole' corresponds to concrete empirical occurrences. This follows from the fact that an abstract concept like 'chair' can match a nearly infinite set of possible concrete somethings which are called 'possible instances' of the abstract concept. But a human actor can directly 'check' only a 'few' concrete somethings. Therefore the usage of abstract concepts like 'chair', 'house', 'bottle' etc. inherently implies an 'open set' of 'possible' concrete exemplars, and therefore the usage of such concepts is necessarily 'hypothetical'. Because we can check the real extensions of these abstract concepts in everyday life only as long as there is the 'freedom' to do such checks, we lose the 'truth' of our concepts, and thereby the basis for a realistic cooperation, if this 'freedom of checking' is not possible.

If some incoming perception is 'not yet known', because nothing given in the unconsciousness does 'match', it is in a basic sense 'new' and the brain will automatically generate a 'new concept'.

THE DIMENSION OF MEANING

In Figure 2 one can find two other components: the 'language expressions' and the 'meaning relation' which maps concepts onto these language expressions.

Language expressions inside the brain correspond to a diversity of visual, auditory, tactile or other empirical event sequences, which are in use for communicative acts.

These language expressions are usually not ‘isolated structures’ but are embedded in relations which map the expression structures to conceptual structures including  the different substantiations of the abstract concepts and the associated contexts. By these relations the expressions are attached to the conceptual structures which are called the ‘meaning‘ of the expressions and vice versa the expressions are called the ‘language articulation’ of the meaning structures.

As far as conceptual structures are related via meaning relations to language expressions, a perception can automatically cause the 'activation' of the associated language expressions, which in turn can be uttered in some way. But conceptual structures can exist (especially with children) without an available meaning relation.

When language expressions are used within a communicative act, their usage can activate in all participants of the communication the 'learned' concepts as their intended meanings. Having the meaning activated in someone's 'consciousness', it is a real phenomenon for that actor. But from the occurrence of concepts alone it does not automatically follow that a concept is 'backed up' by some 'real matter' in the external world. Someone can utter that it is raining, in the hearer of this utterance the intended concepts can become activated, but in the outside external world no rain is happening. In this case one has to state that the utterance of the language expression "Look, it's raining" has no counterpart in the real world; therefore we call the utterance in this case 'false' or 'not true'.

THE DIMENSION OF TIME
Fig.3: The dimension of time based on past experience and combinatoric thinking

The preceding figure 2 of the conceptual space is not yet complete. There is another important dimension based on the ability of the unconscious brain to 'store' certain structures in a 'temporal order', which enables an actor (under certain conditions!) to decide whether a certain structure X occurred in the consciousness 'before' or 'after' or 'at the same time' as another structure Y.

Evidently the unconscious brain is able to do exactly this: (i) it can arrange the different structures under certain conditions in a 'temporal order'; (ii) it can detect 'differences' between temporally succeeding structures; (iii) it can conceptualize these changes as 'change concepts' ('rules of change'), and it can classify different kinds of change like 'deterministic', 'non-deterministic' with different kinds of probabilities, as well as 'arbitrary' as in the case of 'free learning systems'. Free learning systems are able to behave in a 'deterministic-like manner', but they can also change their patterns on account of internal learning and decision processes in nearly any direction.

Based on memories of conceptual structures and derived change concepts (rules of change) the unconscious brain is able to generate different kinds of 'possible configurations', whose quality depends on the degree of dependencies within the 'generating criteria': (i) no special restrictions; (ii) empirical restrictions; (iii) empirical restrictions for 'upcoming states' (if all drinkable water were consumed, one could not plan any further with drinkable water).


AAI THEORY V2 – A Philosophical Framework

eJournal: uffmm.org,
ISSN 2567-6458, 22.February 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: 23.February 2019 (continued the text)

Last change: 24.February 2019 (extended the text)

CONTEXT

In the overview of the AAI paradigm version 2 you can find this section  dealing with the philosophical perspective of the AAI paradigm. Enjoy reading (or not, then send a comment :-)).

THE DAILY LIFE PERSPECTIVE

The perspective of Philosophy is rooted in the everyday life perspective. With our body we occur in a space with other bodies and objects; different features and properties are associated with the objects, as well as different kinds of relations and changes from one state to another.

From the empirical sciences we have learned to see more details of the everyday life with regard to detailed structures of matter and biological life, with regard to the long history of the actual world, with regard to many interesting dynamics within the objects, within biological systems, as part of earth, the solar system and much more.

A certain aspect of the empirical view of the world is the fact that some biological systems called 'homo sapiens', which emerged only some 300,000 years ago in Africa, show a special property usually called 'consciousness' combined with the ability to 'communicate by symbolic languages'.

Figure 1: General setting of the homo sapiens species (simplified)

As we know today the consciousness is associated with the brain, which in turn is embedded in the body, which  is further embedded in an environment.

Thus those 'things' about which we are 'conscious' are not 'directly' the objects and events of the surrounding real world but the 'constructions of the brain' based on actual external and internal sensor inputs as well as already collected 'knowledge'. To qualify the 'conscious things' as 'different' from the assumed 'real things' 'out there', it is common to speak of these brain-generated virtual things either as 'qualia' or, more often, as 'phenomena', which are different from the assumed possible real things somewhere 'out there'.

PHILOSOPHY AS FIRST PERSON VIEW

‘Philosophy’ has many facets.  One enters the scene if we are taking the insight into the general virtual character of our primary knowledge to be the primary and irreducible perspective of knowledge.  Every other more special kind of knowledge is necessarily a subspace of this primary phenomenological knowledge.

There is already from the beginning a fundamental distinction possible in the realm of conscious phenomena (PH): there are phenomena which can be 'generated' by the consciousness 'itself', mostly called 'by will', and those which are occurring and disappearing without a direct influence of the consciousness, which are in a certain basic sense 'given' and 'independent', which are appearing and disappearing on 'their own'. It is common to call these independent phenomena 'empirical phenomena', which represent a true subset of all phenomena: PH_emp ⊂ PH. Attention: these 'empirical phenomena' are still 'phenomena', virtual entities generated by the brain inside the brain, not directly controllable 'by will'.

There is a further basic distinction which differentiates the empirical phenomena into those PH_emp_bdy which are controlled by some processes in the body (being tired, being hungry, having pain, …) and those PH_emp_ext which are controlled by objects and events in the environment beyond the body (light, sounds, temperature, surfaces of objects, …). Both subsets of empirical phenomena are disjoint: PH_emp_bdy ∩ PH_emp_ext = ∅. Because phenomena usually occur associated with typical other phenomena, there are 'clusters'/ 'patterns' of phenomena which 'represent' possible events or states.

Modern empirical science has ‘refined’ the concept of an empirical phenomenon by introducing  ‘standard objects’ which can be used to ‘compare’ some empirical phenomenon with such an empirical standard object. Thus even when the perception of two different observers possibly differs somehow with regard to a certain empirical phenomenon, the additional comparison with an ’empirical standard object’ which is the ‘same’ for both observers, enhances the quality, improves the precision of the perception of the empirical phenomena.

From these considerations we can derive the following informal definitions:

  1. Something is ‘empirical‘ if it is the ‘real counterpart’ of a phenomenon which can be observed by other persons in my environment too.
  2. Something is ‘standardized empirical‘ if it is empirical and can additionally be associated with a before introduced empirical standard object.
  3. Something is ‘weak empirical‘ if it is the ‘real counterpart’ of a phenomenon which can potentially be observed by other persons in my body as causally correlated with the phenomenon.
  4. Something is ‘cognitive‘ if it is the counterpart of a phenomenon which is not empirical in one of the meanings (1) – (3).

It is a common task within philosophy to analyze the space of the phenomena with regard to its structure as well as to its dynamics. Until today there exists no completely accepted theory for this subject. This indicates that it seems to be a 'hard' task.

BRIDGING THE GAP BETWEEN BRAINS

As one can see in figure 1 a brain in a body is completely disconnected from the brain in another body. There is a real, deep ‘gap’ which has to be overcome if the two brains want to ‘coordinate’ their ‘planned actions’.

Luckily the emergence of homo sapiens with the new extended property of ‘consciousness’ was accompanied by another exciting property, the ability to ‘talk’. This ability enabled the creation of symbolic languages which can help two disconnected brains to have some exchange.

But ‘language’ does not consist of sounds or a ‘sequence of sounds’ only; the special power of a language is the further property that sequences of sounds can be associated with ‘something else’ which serves as the ‘meaning’ of these sounds. Thus we can use sounds to ‘talk about’ other things like objects, events, properties etc.

The single brain 'knows' about the relationship between some sounds and 'something else' because the brain is able to 'generate relations' between brain structures for sounds and brain structures for something else. These relations are some real connections in the brain. Therefore sounds can be related to 'something else', and certain objects, events etc. can become related to certain sounds. But these 'meaning relations' can only 'bridge the gap' to another brain if both brains are using the same 'mapping', the same 'encoding'. This is only possible if the two brains with their bodies share a real-world situation RW_S where the perceptions of both brains are associated with the same parts of the real world between both bodies. If this is the case, the perceptions P(RW_S) can become somehow 'synchronized' by the shared part of the real world, which in turn is transformed into brain structures, P(RW_S) —> B_S, which represent in the brain the stimulating aspects of the real world. These brain structures B_S can then be associated with some sound structures B_A, written as a relation MEANING(B_S, B_A). Such a relation realizes an encoding which can be used for communication. Communication uses sound sequences exchanged between brains via the body and the air of an environment as 'expressions' which can be recognized as part of a learned encoding, which enables the receiving brain to identify a possible meaning candidate.

DIFFERENT MODES TO EXPRESS MEANING

Following the evolution of communication one can distinguish four important modes of expressing meaning, which will be used in this AAI paradigm.

VISUAL ENCODING

A direct way to express the internal meaning structures of a brain is to use a 'visual code' which represents, by some kind of drawing, the visual shapes of objects in space and some attributes of these shapes, which are common to all people who can 'see'. Thus a picture and then a sequence of pictures like a comic or a story board can communicate simple ideas of situations, participating objects, persons and animals, showing changes in the arrangement of the shapes in the space.

Figure 2: Pictorial expressions representing aspects of the visual and the auditory sense modes

Even with a simple visual code one can generate many sequences of situations which all together can 'tell a story'. The basic elements are a presupposed 'space' with possible 'objects' in this space with different positions, sizes, relations and properties. One can even enhance these visual shapes with written expressions of a spoken language. The sequence of the pictures additionally represents some 'temporal order'. 'Changes' can be encoded by 'differences' between consecutive pictures.

FROM SPOKEN TO WRITTEN LANGUAGE EXPRESSIONS

Later in the evolution of language, much later, homo sapiens learned to translate the spoken language L_s into a written format L_w using signs for parts of words or even whole words. The possible meaning of these written expressions was no longer directly 'visible'. The meaning was now only available for those people who had learned how these written expressions are associated with intended meanings encoded in the heads of all language participants. Thus only hearing or reading a language expression would tell the reader either 'nothing' or some 'possible meanings' or a 'definite meaning'.

Figure 3: A written textual version in parallel to a pictorial version

If one has only the written expressions then one has to 'know' with which 'meaning in the brain' the expressions have to be associated. And what is very special with the written expressions compared to the pictorial expressions is the fact that the elements of the pictorial expressions are always very 'concrete' visual objects, while the written expressions are 'general' expressions allowing many different concrete interpretations. Thus the expression 'person' can be associated with many thousands of different concrete objects; the same holds for the expressions 'road', 'moving', 'before' and so on. Thus the written expressions are like 'manufacturing instructions' to search for possible meanings and configure these meanings into a 'reasonable' complex matter. And because written expressions are in general rather 'abstract'/ 'general', allowing numerous possible concrete realizations, they are very 'economic': they use minimal expressions to build many complex meanings. Nevertheless the daily experience with spoken and written expressions shows that they are continuously candidates for false interpretations.

FORMAL MATHEMATICAL WRITTEN EXPRESSIONS

Besides the written expressions of everyday languages one can observe later in the history of written languages the steady development of a specialized version called ‘formal languages’ L_f with many different domains of application. Here I am  focusing   on the formal written languages which are used in mathematics as well as some pictorial elements to ‘visualize’  the intended ‘meaning’ of these formal mathematical expressions.

Fig. 4: Properties of an acyclic directed graph with nodes (vertices) and edges (directed edges = arrows)

One prominent concept in mathematics is the concept of a ‘graph’. In  the basic version there are only some ‘nodes’ (also called vertices) and some ‘edges’ connecting the nodes.  Formally one can represent these edges as ‘pairs of nodes’. If N represents the set of nodes then N x N represents the set of all pairs of these nodes.

In a more specialized version the edges are 'directed' (like a 'one-way road') and can also be 'looped back' to a node occurring 'earlier' in the graph. If such back-looping arrows occur, the graph is called a 'cyclic graph'.
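A small sketch of this basic concept in python3 (the node names are arbitrary assumptions):

N = {'n1', 'n2', 'n3'}                # the set of nodes (vertices)
E = {('n1', 'n2'), ('n2', 'n3')}      # directed edges as pairs of nodes, a subset of N x N

E_cyclic = E | {('n3', 'n1')}         # adding a back-looping arrow makes the graph cyclic
print(('n3', 'n1') in E_cyclic)       # True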

Fig.5: Directed cyclic graph extended to represent 'states of affairs'

If one wants to use such a graph to describe some 'states of affairs' with their possible 'changes', one can 'interpret' a 'node' as a state of affairs and an arrow as a change which turns one state of affairs S into a new one S' which is minimally different from the old one.

As a state of affairs I understand here a 'situation' embedded in some 'context' presupposing some common 'space'. The possible 'changes' represented by arrows presuppose some dimension of 'time'. Thus if a node n' follows a node n, indicated by an arrow, then the state of affairs represented by the node n' is to be interpreted as following the state of affairs represented by the node n 'later' with regard to the presupposed time T, or n < n' with '<' as a symbol for a temporal ordering relation.

Fig.6: Example of a state of affairs with a 2-dimensional space configured as a grid with a black and a white token

The space can be any kind of a space. If one assumes as an example a 2-dimensional space configured as a grid –as shown in figure 6 — with two tokens at certain positions one can introduce a language to describe the ‘facts’ which constitute the state of affairs. In this example one needs ‘names for objects’, ‘properties of objects’ as well as ‘relations between objects’. A possible finite set of facts for situation 1 could be the following:

  1. TOKEN(T1), BLACK(T1), POSITION(T1,1,1)
  2. TOKEN(T2), WHITE(T2), POSITION(T2,2,1)
  3. NEIGHBOR(T1,T2)
  4. CELL(C1), POSITION(1,2), FREE(C1)

'T1', 'T2', as well as 'C1' are names of objects, 'TOKEN', 'BLACK' etc. are names of properties, and 'NEIGHBOR' is a relation between objects. This results in the equation:

S1 = {TOKEN(T1), BLACK(T1), POSITION(T1,1,1), TOKEN(T2), WHITE(T2), POSITION(T2,2,1), NEIGHBOR(T1,T2), CELL(C1), POSITION(1,2), FREE(C1)}

These facts describe the situation S1. If it is important to describe possible objects 'external to the situation' as important factors which can cause some changes, then one can describe these objects as a set of facts in a separate 'context'. In this example this could be two players which can move the black and white tokens and thereby cause a change of the situation. What is the situation and what belongs to a context is somewhat arbitrary. If one describes the agriculture of some region one usually would not count the planets and the atmosphere as part of this region, but one knows that e.g. the sun can severely influence the situation in combination with the atmosphere.

Fig.7: Change of a state of affairs given as a state which will be enhanced by a new object

Let us stay with a state of affairs with only a situation without a context. The state of affairs is     a ‘state’. In the example shown in figure 6 I assume a ‘change’ caused by the insertion of a new black token at position (2,2). Written in the language of facts L_fact we get:

  1. TOKEN(T3), BLACK(T3), POSITION(2,2), NEIGHBOR(T3,T2)

Thus the new state S2 is generated out of the old state S1 by unifying S1 with the set of new facts: S2 = S1 ∪ {TOKEN(T3), BLACK(T3), POSITION(2,2), NEIGHBOR(T3,T2)}. All the other facts of S1 are still 'valid'. In a more general manner one can introduce a change-expression with the following format:

<S1, S2, add(S1,{TOKEN(T3), BLACK(T3), POSITION(2,2), NEIGHBOR(T3,T2)})>

This can be read as follows: The follow-up state S2 is generated out of the state S1 by adding to the state S1 the set of facts { … }.

This layout of a change expression can also be used if some facts have to be modified or removed from a state. If for some reason the white token should be removed from the situation, one could write:

<S1, S2, subtract(S1,{TOKEN(T2), WHITE(T2), POSITION(2,1)})>

Another notation for this is S2 = S1 – {TOKEN(T2), WHITE(T2), POSITION(2,1)}.

The resulting state S2 would then look like:

S2 = {TOKEN(T1), BLACK(T1), POSITION(T1,1,1), CELL(C1), POSITION(1,2), FREE(C1)}

And a combination of subtraction of facts and addition of facts would read as follows:

<S1, S2, subtract(S1,{TOKEN(T2), WHITE(T2), POSITION(2,1)}), add(S1,{TOKEN(T3), BLACK(T3), POSITION(2,2)})>

This would result in the final state S2:

S2 = {TOKEN(T1), BLACK(T1), POSITION(T1,1,1), CELL(C1), POSITION(1,2), FREE(C1),TOKEN(T3), BLACK(T3), POSITION(2,2)}
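A minimal sketch of these change operations in python3, representing facts simply as strings in a set (purely illustrative; this is not a full fact language L_fact):

S1 = {'TOKEN(T1)', 'BLACK(T1)', 'POSITION(T1,1,1)',
      'TOKEN(T2)', 'WHITE(T2)', 'POSITION(T2,2,1)',
      'NEIGHBOR(T1,T2)', 'CELL(C1)', 'POSITION(1,2)', 'FREE(C1)'}

subtract_facts = {'TOKEN(T2)', 'WHITE(T2)', 'POSITION(T2,2,1)'}
add_facts      = {'TOKEN(T3)', 'BLACK(T3)', 'POSITION(2,2)'}

S2 = (S1 - subtract_facts) | add_facts   # subtraction and addition of facts in one change
print(S2)                                # note: 'NEIGHBOR(T1,T2)' would have to be re-computed, see below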

These simple examples demonstrate another fact: while facts about objects and their properties are independent of each other, relational facts depend on the state of their object facts. The relation of neighborhood, e.g., depends on the participating neighbors. If, as in the example above, the object token T2 disappears, then the relation 'NEIGHBOR(T1,T2)' no longer holds. This points to a hierarchy of dependencies with the 'basic facts' at the 'root' of a situation and all the other facts 'above' the basic facts, or 'higher', depending on the basic facts. Thus 'higher order' facts should be added only for the actual state and have to be 're-computed' anew for every follow-up state.

If one would specify a context for state S1 saying that there are two players and one allows for each player actions like ‘move’, ‘insert’ or ‘delete’ then one could make the change from state S1 to state S2 more precise. Assuming the following facts for the context:

  1. PLAYER(PB1), PLAYER(PW1), HAS-THE-TURN(PB1)

In that case one could enhance the change statement in the following way:

<S1, S2, PB1,insert(TOKEN(T3,2,2)),add(S1,{TOKEN(T3), BLACK(T3), POSITION(2,2)})>

This would read as follows: given state S1 the player PB1 inserts a  black token at position (2,2); this yields a new state S2.

With or without a specified context, but with regard to a set of possible change statements, it can be the case (and it is the usual case) that there is more than one option for what can be changed. Some of the main types of changes are the following ones:

  1. RANDOM
  2. NOT RANDOM, which can be specified as follows:
    1. With PROBABILITIES (classical, quantum probability, …)
    2. DETERMINISTIC

Furthermore, if the causing object is an actor which can adapt structurally or even learn locally, then this actor can appear in some time period like a deterministic system, in different collected time periods as an 'oscillating system' with different behavior, or even as a random system with changing probabilities. This makes the forecast of systems with adaptive and/or learning actors rather difficult.

Another aspect results from the fact that there can be states either with one actor which can cause more than one action in parallel, or a state with multiple actors which can act simultaneously. In both cases the resulting total change may have to be 'filtered' through some additional rules telling what is 'possible' in a state and what not. Thus if, in the example of figure 6, both players want to insert a token at position (2,2) simultaneously, then either the rules of the game would forbid such a simultaneous action or, like in a computer game, simultaneous actions are allowed but the 'geometry of a 2-dimensional space' would not allow two different tokens to be at the same position.

Another aspect of change is the dimension of time. If the time dimension is not explicitly specified, then a change from some state S_i to a state S_j only marks the follow-up state S_j as later. There is no specific 'metric' of time. If instead a certain 'clock' is specified, then all changes have to be aligned with this 'overall clock'. Then one can specify at what 'point of time t' the change will begin and at what point of time t' the change will end. If there is more than one change specified, then these different changes can have different timings.

THIRD PERSON VIEW

Up until now the point of view describing a state and the possible changes of states has been the so-called 3rd-person view: what a person can perceive if it is part of a situation and is looking into the situation. It is explicitly assumed that such a person can perceive only the 'surface' of objects, including all kinds of actors. Thus if the driver of a car steers his car in a certain direction, then the 'observing person' can see what happens, but cannot 'look into' the driver to see 'why' he is steering this way or 'what he is planning next'.

A 3rd-person view is assumed to be the ‘normal mode of observation’ and it is the normal mode of empirical science.

Nevertheless there are situations where one wants to 'understand' a bit more of 'what is going on in a system'. Thus a biologist can be interested in understanding what mechanisms 'inside a plant' are responsible for the growth of a plant or for some kinds of plant dysfunctions. There are similar cases for understanding the behavior of animals and men. For instance, it is an interesting question what kinds of 'processes' are available in an animal to 'navigate' in the environment across distances. Even if the biologist can look 'into the body', even 'into the brain', the cells as such do not tell a sufficient story. One has to understand the 'functions' which are enabled by the billions of cells; these functions are complex relations associated with certain 'structures' and certain 'signals'. For this it is necessary to construct an explicit formal (mathematical) model/ theory representing all the necessary signals and relations which can be used to 'explain' the observable behavior and which 'explains' the behavior of the billions of cells enabling such a behavior.

In a simpler, 'relaxed' kind of modeling one would not take into account the properties and behavior of the 'real cells' but would limit the scope to building a formal model which suffices to explain the observable behavior.

This kind of approach to set up models of possible ‘internal’ (as such hidden) processes of an actor can extend the 3rd-person view substantially. These models are called in this text ‘actor models (AM)’.

HIDDEN WORLD PROCESSES

In this text all reported 3rd-person observations are called ‘actor story’, independent whether they are done in a pictorial or a textual mode.

As has been pointed out such actor stories are somewhat ‘limited’ in what they can describe.

It is possible to extend such an actor story (AS)  by several actor models (AM).

An actor story defines the situations in which an actor can occur. This  includes all kinds of stimuli which can trigger the possible senses of the actor as well as all kinds of actions an actor can apply to a situation.

The actor model of such an actor has to enable the actor to handle all these assumed stimuli as well as all these actions in the expected way.

While the actor story can be checked as to whether it describes a process in an empirically 'sound' way, the actor models are either 'purely theoretical' but 'behaviorally sound', or they are also empirically sound with regard to the body of a biological or a technological system.

A serious challenge is the occurrence of adaptive and/or locally learning systems. While the actor story is a finite description of possible states and changes, adaptive and/or locally learning systems can change their behavior while 'living' in the actor story. These changes in the behavior cannot completely be 'foreseen'!

COGNITIVE EXPERT PROCESSES

According to the preceding considerations a homo sapiens as a biological system has besides many properties at least a consciousness and the ability to talk and by this to communicate with symbolic languages.

Looking to basic modes of an actor story (AS) one can infer some basic concepts inherently present in the communication.

Without having an explicit model of the internal processes in a homo sapiens system one can infer some basic properties from the communicative acts:

  1. Speaker and hearer presuppose a space within which objects with properties can occur.
  2. Changes can happen which presuppose some timely ordering.
  3. There is a distinction between concrete things and abstract concepts which correspond to many concrete things.
  4. There is an implicit hierarchy of concepts starting with concrete objects at the 'root level', given as occurrences in a concrete situation. Other concepts of 'higher levels' refer to concepts of lower levels.
  5. There are different kinds of relations between objects on different conceptual levels.
  6. The usage of language expressions presupposes structures which can be associated with the expressions as their ‘meanings’. The mapping between expressions and their meaning has to be learned by each actor separately, but in cooperation with all the other actors, with which the actor wants to share his meanings.
  7. It is assumed that all the processes which enable the generation of concepts, concept hierarchies, relations, meaning relations etc. are unconscious! In the consciousness one can use parts of the unconscious structures and processes only under strictly limited conditions.
  8. To 'learn' dedicated matters and to be 'critical' about the quality of what one is learning requires some discipline, some learning methods, and a 'learning-friendly' environment. There is no guaranteed method of success.
  9. There are lots of unconscious processes which can influence understanding, learning, planning, decisions etc. and which until today are not yet sufficiently cleared up.


ACTOR-ACTOR INTERACTION ANALYSIS – A rough Outline of the Blueprint

eJournal: uffmm.org,
ISSN 2567-6458, 13.February 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last corrections: 14.February 2019 (added some more keywords; added emphases for central words)

Change: 5.May 2019 (adding the aspect of simulation and gaming; extending the view of the driving actors)

CONTEXT

An overview of the enhanced AAI theory version 2 can be found here. In this post we talk about the blueprint of the whole AAI analysis process. Here I leave out the topic of actor models (AM); the aspect of simulation and gaming is mentioned only briefly. For these topics see other posts.

THE AAI ANALYSIS BLUEPRINT

Blueprint of the whole AAI analysis process including the epistemological assumptions. Not shown here is the whole topic of actor models (AM) as well as simulation.

The Actor-Actor Interaction (AAI) analysis is understood here as part of an embracing systems engineering process (SEP), which starts with the statement of a problem (P) which includes a vision (V) of an improved alternative situation. It then has to be analyzed what such a new improved situation S+ looks like and how one can realize certain tasks (T) in an improved way.

DRIVING ACTORS

The driving actors for such an AAI analysis are at least one stakeholder (STH) who communicates a problem P and an envisioned solution (ES) to an expert (EXPaai) with sufficient AAI experience. This expert will take the lead in the process of transforming the problem and the envisioned solution into a working solution (WS).

In the classical industrial case the stakeholder can be a group of managers from some company, and the expert is typically represented by a whole team of experts from different disciplines, with the AAI perspective as the leading perspective.

In another case, which I will call here the communal case (e.g. a whole city), the stakeholder as well as the experts are members of the communal entity. As in the cases mentioned before there is some commonly accepted problem P combined with a first envisioned solution ES, which shall be analyzed: What is needed to make it work? Can it work at all? What are the costs? Many other questions can arise. The challenge of including all relevant experience and knowledge from all participants is at the center of the communication, and transforming this available knowledge into some working solution which satisfies all stated requirements of all participants is a central condition for the success of the project.

EPISTEMOLOGY

It has to be taken into account that the driving actors are able to do this job because they have in their bodies brains (BRs) which in turn include some consciousness (CNS). The processes and states beyond consciousness are here called 'unconscious', and the set of all these unconscious processes is called 'the Unconsciousness' (UCNS).

For more details on the cognitive processes see the post on the philosophical framework as well as the post on the bottom-up process. Both posts shall be integrated into one coherent view in the future.

SEMIOTIC SUBSYSTEM

An important set of substructures of the unconsciousness are those which enable symbolic language systems with so-called expressions (L) on one side and so-called non-expressions (~L) on the other. Embedded in a meaning relation (MNR) the set of non-expressions ~L functions as the meaning (MEAN) of the expressions L, written as a mapping MNR: L <—> ~L. Depending on the involved sensors, the expressions L can occur either as acoustic events L_spk, as written visual patterns L_txt, as pictorial visual patterns L_pict, or even in other formats which will not be discussed here. The non-expressions can occur in every format which the brain can handle.
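As a rough illustration only, the following Python sketch treats the meaning relation MNR as a pair of mappings between expressions L and simple stand-ins for non-expressions ~L; all names and data in it are hypothetical and not part of the AAI texts:

    # Minimal sketch of a meaning relation MNR: L <--> ~L (all names are assumptions).
    # Expressions L are kept as strings; non-expressions ~L are simple stand-ins
    # for internal (cognitive) structures which a real brain would encode differently.
    expression_to_meaning = {
        "red cup":      {"kind": "object", "color": "red", "category": "cup"},
        "cup on table": {"kind": "relation", "relation": "on", "args": ("cup", "table")},
    }

    # The inverse direction: from an internal structure back to the learned expressions.
    meaning_to_expressions = {}
    for expr, meaning in expression_to_meaning.items():
        meaning_to_expressions.setdefault(tuple(sorted(meaning.items())), []).append(expr)

    def understand(expression):
        """Map an expression L to its learned meaning ~L, if any."""
        return expression_to_meaning.get(expression)

    def verbalize(meaning):
        """Map an internal structure ~L back to the learned expressions L."""
        return meaning_to_expressions.get(tuple(sorted(meaning.items())), [])

    print(understand("red cup"))
    print(verbalize({"kind": "object", "color": "red", "category": "cup"}))

Each actor would have to build up such a mapping individually through learning, in cooperation with the other actors with which he wants to share his meanings.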

While written (symbolic) expressions L are associated with the intended meaning only through encoded mappings in the brain, the spoken expressions L_spk as well as the pictorial ones L_pict can show some similarities with the intended meaning. Within acoustic expressions one can 'imitate' some sounds which are part of a meaning; pictorial expressions can 'imitate' the visual experience of the intended meaning to an even higher degree, but clearly not for every kind of meaning.

DEFINING THE MAIN POINT OF REFERENCE

Because the space of possible problems and visions is nearly infinitely large, one has to define for a certain process the problem of the actual process together with the vision of a 'better state of affairs'. This is realized by a description of the problem in a problem document D_p as well as in a vision statement D_v. Because usually a vision does not come without a given context, one has to add all the constraints (C) which have to be taken into account for the possible solution. Examples of constraints are 'non-functional requirements' (NFRs) like 'safety', 'real time', or 'without barriers' (for handicapped people). Part of the non-functional requirements are also definitions of win-lose states as part of a game.
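As a minimal sketch, assuming freely chosen field names and example contents, this point of reference could be held together in one small data structure:

    from dataclasses import dataclass, field

    # Sketch only: field names and contents are assumptions, not a fixed AAI format.
    @dataclass
    class PointOfReference:
        problem_document: str       # D_p: description of the problem
        vision_statement: str       # D_v: description of the 'better state of affairs'
        constraints: list = field(default_factory=list)  # C, incl. non-functional requirements

    ref = PointOfReference(
        problem_document="Waiting times at the communal service desk are too long.",
        vision_statement="Every citizen is served within 15 minutes.",
        constraints=["without barriers", "real time", "win state: served within 15 minutes"],
    )
    print(ref.vision_statement, ref.constraints)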

AAI ANALYSIS – BASIC PROCEDURE

If the AAI check has been successful and there is at least one task T to be done in an assumed environment ENV, and there is at least one executing actor A_exec for this task as well as at least one assisting actor A_ass, then the AAI analysis can start.

ACTOR STORY (AS)

The main task is to elaborate a complete description of a process which includes a start state S* and a goal state S+, where the participating executive actors A_exec can reach the goal state S+ by doing some actions. While the imagined process p_v is a virtual (= cognitive/ mental) model of an intended real process p_e, this virtual model p_v can only be communicated by symbolic expressions L embedded in a meaning relation. Thus the elaboration/ construction of the intended process will be realized by using appropriate expressions L embedded in a meaning relation. This can be understood as a basic mapping of sensor-based perceptions of the supposed real world into some abstract virtual structures automatically (unconsciously) computed by the brain. A special kind of this mapping is the case of measurement.

In this text especially three types of symbolic expressions L will be used: (i) pictorial expressions L_pict, (ii) textual expressions of a natural language L_txt, and (iii) textual expressions of a mathematical language L_math. The meaning part of these symbolic expressions as well as the expressions themselves will be called here an actor story (AS), with the different modes pictorial AS (PAS), textual AS (TAS), and mathematical AS (MAS).

The basic elements of an actor story (AS) are states which represent sets of facts. A fact is an expression of some defined language L which can be decided to be true in a real situation or not (the past and the future are special cases for such truth clarifications). Some facts can be identified as actors which can act on their own. The transformation from one state to a follow-up state has to be described with sets of change rules. The combination of states and change rules mathematically defines a directed graph (G).
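A minimal sketch of this construction, with all facts and rules invented purely for illustration, shows how states as sets of facts together with change rules induce such a directed graph G:

    # Sketch only: states are sets of facts, change rules are (condition, add, remove)
    # triples, and forward exploration from the start state yields the directed graph G.
    start_state = frozenset({"actor_at_door", "door_closed"})

    change_rules = [
        ({"actor_at_door", "door_closed"}, {"door_open"}, {"door_closed"}),       # open the door
        ({"actor_at_door", "door_open"}, {"actor_in_room"}, {"actor_at_door"}),   # enter the room
    ]

    def successors(state):
        """All follow-up states reachable by one applicable change rule."""
        for condition, add, remove in change_rules:
            if condition <= state:                      # rule fires if its condition facts hold
                yield frozenset((state - remove) | add)

    # Build G = (V, E) by exploring all states reachable from the start state.
    vertices, edges, frontier = {start_state}, set(), [start_state]
    while frontier:
        state = frontier.pop()
        for follow_up in successors(state):
            edges.add((state, follow_up))
            if follow_up not in vertices:
                vertices.add(follow_up)
                frontier.append(follow_up)

    print(len(vertices), "states,", len(edges), "transitions")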

Based on such a graph it is possible to derive an automaton (A) which can be used as a simulator. A simulator allows simulations. A concrete simulation takes a start state S0 as the actual state S* and computes, with the aid of the change rules, one follow-up state S1. This follow-up state then becomes the new actual state S*. Thus the simulation constitutes a continuous process which in general can be infinite. To make the simulation finite one has to define some stop criteria (C*). A simulation can run passively, without any interruption, or interactively. The interactive mode allows different external actors to select certain real values for the available variables of the actual state.
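Continuing the same idea in a self-contained way, a simulator is then little more than a loop which applies change rules until a stop criterion C* holds; again all facts and numbers are assumptions chosen only for illustration:

    # Sketch of a simulation run: start state, repeated application of change rules,
    # and a stop criterion C*; all facts are invented for illustration.
    change_rules = [
        ({"ticket_drawn"}, {"waiting"}, {"ticket_drawn"}),        # draw a ticket, then wait
        ({"waiting"}, {"being_served"}, {"waiting"}),             # called to the desk
        ({"being_served"}, {"served"}, {"being_served"}),         # service finished
    ]

    def step(state):
        """Apply the first applicable change rule; None means no rule fires any more."""
        for condition, add, remove in change_rules:
            if condition <= state:
                return frozenset((state - remove) | add)
        return None

    def simulate(start, stop_criterion, max_steps=100):
        state, trace = frozenset(start), [frozenset(start)]
        for _ in range(max_steps):              # keeps the run finite even without C*
            if stop_criterion(state):           # stop criterion C* reached
                break
            follow_up = step(state)
            if follow_up is None:
                break
            state = follow_up
            trace.append(state)
        return trace

    for state in simulate({"ticket_drawn"}, stop_criterion=lambda s: "served" in s):
        print(sorted(state))

An interactive mode would simply ask an external actor to choose among the applicable rules or to set variable values before each step, instead of always taking the first applicable rule.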

If certain win-lose states have been defined in the problem definition, then one can turn an interactive simulation into a game where the external actors can try to manipulate the process in such a way as to reach one of the defined win-states. As soon as someone (which can be a team) has reached a win-state, the responsible actor (or team) has won. Such games can be repeated to allow an accumulation of wins (or losses).

Gaming allows a far better experience of the advantages or disadvantages of some actor story than a mere simulation does. Therefore the probability of detecting relevant aspects of an actor story with its given constraints is quite high in gaming, which increases the probability of improving the whole concept.

Based on an actor story with a simulator it is possible to increase the cognitive power of exploring the future even more.  There exists the possibility to define an oracle algorithm as well as different kinds of intelligent algorithms to support the human actor further. This has to be described in other posts.

TAR AND AAR

If the actor story is completed (in a certain version v_i) then one can extract from the story the input-output profiles of every participating actor. This list represents the task-induced actor requirements (TAR). If one is looking for concrete real persons to do the job of an executing actor, the TAR can be used as a benchmark for assessing candidates for this job. The profiles of the real persons are called here actor-actor induced requirements (AAR), that is, the real profile compared with the ideal profile of the TAR. If the 'distance' between AAR and TAR is above some threshold, then the candidate either has to be rejected or one can offer some training to improve his AAR; the other option is to change the conditions of the TAR in such a way that the TAR comes closer to the AARs.

The TAR is valid for the executive actors as well as for the assisting actors A_ass.
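A hedged sketch of how such a comparison could look, assuming purely invented profile dimensions and a simple mean absolute deviation as the 'distance':

    # Sketch only: TAR and AAR as numeric skill profiles on a 0..1 scale (assumed names).
    TAR = {"reading_speed": 0.8, "domain_knowledge": 0.9, "reaction_time": 0.7}

    def distance(tar, aar):
        """Mean absolute deviation of the real profile (AAR) from the ideal profile (TAR)."""
        return sum(abs(tar[key] - aar.get(key, 0.0)) for key in tar) / len(tar)

    def assess(aar, threshold=0.2):
        d = distance(TAR, aar)
        if d <= threshold:
            return "accept (distance %.2f)" % d
        return "reject, offer training, or relax the TAR (distance %.2f)" % d

    print(assess({"reading_speed": 0.7, "domain_knowledge": 0.85, "reaction_time": 0.75}))
    print(assess({"reading_speed": 0.3, "domain_knowledge": 0.4, "reaction_time": 0.5}))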

CONSTRAINTS CHECK

If the actor story has reached a certain completion in some version v_i, one has to check whether the different constraints which accompany the vision document are satisfied by the story: AS_vi |- C.

Such an evaluation is only possible if the constraints can be interpreted with regard to the actor story AS in version v_i in such a way that the constraints can be decided.

For many constraints it can happen that they cannot, or cannot completely, be decided on the level of the actor story but only in a later phase of the systems engineering process, when the actor story is implemented in software and hardware.
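As a rough sketch, with all constraint names and story attributes assumed for illustration, such a check AS_vi |- C can be thought of as evaluating a set of predicates over the actor story, where some predicates may remain undecidable at this level:

    # Sketch only: constraints C as predicates over an actor story version AS_vi.
    # A predicate may also return None when the constraint can only be decided
    # later in the systems engineering process (e.g. after implementation).
    actor_story_v1 = {
        "barrier_free_paths": True,
        "max_response_time_ms": None,    # unknown before implementation
    }

    constraints = {
        "without barriers": lambda AS: AS["barrier_free_paths"],
        "real time": lambda AS: (None if AS["max_response_time_ms"] is None
                                 else AS["max_response_time_ms"] <= 100),
    }

    for name, check in constraints.items():
        result = check(actor_story_v1)
        if result is None:
            print(name, ": not decidable on the level of the actor story")
        else:
            print(name, ":", "satisfied" if result else "violated")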

MEASURING OF USABILITY

Using the actor story as a benchmark one can test the quality of the usability of the whole process by doing usability tests.


ADVANCED AAI-THEORY – V2 – COLLECTED REFERENCES

eJournal: uffmm.org
ISSN 2567-6458, 6.February 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

An overview of the enhanced AAI theory, version 2, can be found here. In this post you will find the unified references from the different posts.

REFERENCES

  • ISO/IEC 25062:2006(E)
  • Joseph S. Dumas and Jean E. Fox. Usability testing: Current practice and future directions. Chapter 57, pp. 1129-1149, in J.A. Jacko and A. Sears, editors, The Human-Computer Interaction Handbook. Fundamentals, Evolving Technologies, and Emerging Applications. 2nd edition, 2008
  • S. Lauesen. User Interface Design. A Software Engineering Perspective. Pearson - Addison Wesley, London et al., 2005