THE BIG PICTURE: HCI – HMI – AAI in History – Engineering – Society – Philosophy

eJournal: uffmm.org,
ISSN 2567-6458, 20.April 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

A first draft version …

CONTEXT

The context for this text is the whole block dedicated to the AAI (Actor-Actor Interaction) paradigm. The aim of this text is to give the big picture of all dimensions and components of this subject as it presents itself in April 2019.

The first dimension introduced is the historical dimension, because this allows a first orientation in the course of events which led to the current situation. It starts with the early days of real computers in the 1930s and 1940s.

The second dimension is the engineering dimension, which describes the special view from which we look at the overall topic of interactions between human persons and computers (or machines, or technology, or society). We are interested in how to transform a given problem into a valuable solution in a methodologically sound way, which is called engineering.

The third dimension is the whole of society, because engineering always happens as a process within a society. Society provides the resources which can be used and supplies the preferences (values) determining what is understood as ‘valuable’, as ‘good’.

The fourth dimension is Philosophy, understood as that kind of thinking which takes into account everything that can be thought. Within thinking, Philosophy clarifies the conditions of thinking and the possible tools of thinking, and it has to clarify when a symbolic expression becomes true.

HISTORY

In history we are looking back at the course of events. This looking back is, in a first step, guided by the concepts of HCI (Human-Computer Interface) and HMI (Human-Machine Interaction).

It is an interesting phenomenon how the original focus on the interface between human persons and the early computers shifted to the more general picture of interaction, because the computer as a machine developed rapidly on account of the rapid development of the enabling hardware (HW) and the enabling software (SW).

Within the general framework of hardware and software the so-called artificial intelligence (AI) first developed as a sub-topic of its own. During the last 10 to 20 years it has become productive in a way that it now seems to be a normal part of every kind of software. Software and smart software seem to be interchangeable. Thus the new wording of augmented or collective intelligence is emerging, intended to bridge the possible gap between humans with their human intelligence and machine intelligence. There is some motivation from the side of society not to allow the impression that the smart (intelligent) machines will some day replace the humans. Instead one propagates the vision of a new collective shape of intelligence in which human and machine intelligence allow a symbiosis where each side gives its best and receives a maximum in a win-win situation.

What is revealing about the actual situation is the fact that the mainstream is always talking about intelligence but not seriously about learning! Intelligence is by its roots a static concept, representing certain capabilities at a certain point in time, while learning is the more general dynamic concept that a system can change its behavior depending on actual external stimuli as well as on internal states. And such a change includes real changes of some of its internal states. Intelligence does not communicate this dynamics! The most demanding aspect of learning is the need for preferences. Without preferences learning is impossible. Today machine learning is a very weak example of learning, because the question of preferences is not a real topic there. One assumes that some reward is available, but one does not really investigate this topic. The rare research trying to do this job states that there is not the faintest idea around how a general continuous learning could happen. Human society is of no help for this problem, since human societies show a clash of many, often opposite, values, and they have no commonly accepted view of how to improve this situation.

ENGINEERING

Engineering is the art and the science of transforming a given problem into a valuable and working solution. What counts as valuable is decided by the surrounding, enabling society, and this judgment can change over the course of time. Whether some solution is judged to be working can also change over time, but the criteria used for this judgment are more stable because they adhere to the concrete capabilities of technical solutions.

While engineering always was and is a kind of art, and needs aspects like creativity, innovation, intuition, etc., it is also, as far as possible, a procedure driven by defined methods of how to do things, and these methods are as far as possible backed up by scientific theories. The real engineer therefore synthesizes art, technology, and science in a unique way which cannot be completely learned in school.

In the past as well as in the present, engineering has to happen in teams of many people, often many thousands or even more, who coordinate their brains by communication. This communication enables in the individual brains some kind of understanding, of emerging world pictures, which in turn guide the perception, the decisions, and the concrete behavior of everybody. And these cognitive processes are embedded, in every individual team member, in mixtures of desires, emotions, and motivations, which can support the cognitive processes or obstruct them. Therefore an optimal result can only be reached if the communication serves all necessary cognitive processes and the interactions between the team members enable the necessary constructive desires, emotions, and motivations.

If an engineering process is done by a small group of dedicated experts, usually triggered by the given problem of an individual stakeholder, this can work well for many situations. It has the flavor of a so-called top-down approach. If the engineering deals with states of affairs where different kinds of people, citizens of some town, etc. are affected by the results of such a process, the restriction to a small group of experts can become highly counterproductive. In those cases of widespread interest it seems promising to include representatives of all the persons involved in the executing team, in order to recognize their experiences and their kinds of preferences. This has to be done in a way which is understandable and appreciative, showing esteem for the others. This manner of extending the team of usual experts by situative experts can be termed a bottom-up approach. In this usage the term bottom-up is not the opposite of top-down but reflects the extent to which members of a society are included insofar as they are affected by the results of a process.

SOCIETY

Societies in the past and in the present occur in a great variety of value systems, organizational structures, systems of power, etc. Engineering processes within a society depend completely on the available resources of that society and on its value systems.

The population dynamics, the needs and wishes of the people, the real territories, the climate, housing, traffic, and many other things constantly produce demands to be solved if life is to be possible and to continue over the course of time.

The self-understanding and the self-management of societies are crucial for their ability to use engineering to improve life. This requires communication and education to a sufficient extent and appropriate public rules of management; otherwise the necessary understanding and the freedom to act are lacking for using engineering in the right way.

PHILOSOPHY

Without communication no common constructive process can happen. Communication happens according to many implicit rules, compressed in the formula: who can speak, when, how, about what, with whom, etc. Communication enables cognitive processes such as understanding, explanations, and lines of argument. Especially important for survival is the ability to make true descriptions and the ability to decide whether a statement is true or not. Without this basic ability communication will break down, coordination will break down, life will break down.

The basic discipline which clarifies the rules and conditions of true communication, and of cognition in general, is called Philosophy. All the modern empirical disciplines are specializations of the general scope of Philosophy, and it is Philosophy which integrates all the special disciplines into one coherent framework (this is the ideal; actually we are far from this ideal).

Thus to describe the process of engineering, driven by different kinds of actors which coordinate themselves by communication, is primarily the task of Philosophy with all its sub-disciplines.

Thus some of the topics of Philosophy are language, text, theory, verification of a theory, functions within theories as algorithms, computation in general, inferences of true statements from given theories, and the like.

In this text I apply Philosophy as far as necessary. Especially I am introducing a new process model extending the classical systems engineering approach by including the driving actors explicitly in the formal representation of the process. Learning machines are included as standard tools to improve human thinking and communication. One can name this Augmented Social Learning Systems (ASLS). Compared to the wording Augmented Intelligence (AI) (as used, for instance, by IBM marketing), the ASLS concept stresses that the primary point of reference are the biological systems which created and create machine intelligence as a new tool to enhance biological intelligence as part of biological learning systems. Compared to the wording Collective Intelligence (CI) (as propagated by the MIT, especially by Thomas W. Malone and colleagues), the spirit of the CI concept seems to be similar, but perhaps only weakly so.

AAI-THEORY V2 – BLUEPRINT: Bottom-up

eJournal: uffmm.org,
ISSN 2567-6458, 27.February 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: 28.February 2019 (Several corrections)

CONTEXT

An overview of the enhanced AAI theory version 2 can be found here. In this post we talk about the special topic of how to proceed in a bottom-up approach.

BOTTOM-UP: THE GENERAL BLUEPRINT
Figure 1: Outline of the process of how to generate an AS with a bottom-up approach

As the introductory figure shows, it is assumed here that there is a collection of citizens and experts who offer their individual knowledge, experiences, and skills, ‘putting them on the table’ when challenged by a given problem P.

In the beginning this knowledge is not structured. The first step in the direction of an actor story (AS) is to analyze the different contributions in a way which shows distinguishable elements with properties and relations. Such a set of first ‘objects’ and ‘relations’ characterizes a set of facts which define a ‘situation’ or a ‘state’ as a collection of ‘facts’. Such a situation/state can also be understood as a first simple ‘model’ in response to a given problem. A model as such is ‘static’; it describes what ‘is’ at a certain point in ‘time’.

In a next step the group has to identify possible ‘changes’ which can be associated with at least one fact. There can be many possible changes, which may need different durations to come into effect. These effects can happen as ‘exclusive alternatives’ or in ‘parallel’. Applying the possible changes to a situation generates ‘successors’ to the actual situation. A sequence of situations generated by applied changes is usually called a ‘simulation’.
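
To make this more concrete, here is a minimal Python sketch of the idea. All names (Fact, Situation, Change, apply_change, simulate) are hypothetical illustrations, not part of the AAI texts: a situation is modeled as a set of facts, a change as a rule that removes and adds facts under a condition, and a simulation as the sequence of successor situations generated by applying the changes.

```python
# Minimal illustrative sketch (all names are hypothetical): a situation is a set of
# facts, a change is a rule that removes and adds facts, and a simulation is the
# sequence of successor situations generated by applying the changes.
from typing import Callable, FrozenSet, List, Tuple

Fact = str
Situation = FrozenSet[Fact]
# A change: (applicability condition, facts to add, facts to remove)
Change = Tuple[Callable[[Situation], bool], FrozenSet[Fact], FrozenSet[Fact]]

def apply_change(s: Situation, change: Change) -> Situation:
    """Generate the successor of situation s if the change is applicable."""
    condition, additions, removals = change
    if not condition(s):
        return s  # change not applicable, situation stays unchanged
    return (s - removals) | additions

def simulate(start: Situation, changes: List[Change], steps: int) -> List[Situation]:
    """A simulation: the sequence of situations generated by applied changes."""
    history = [start]
    current = start
    for _ in range(steps):
        for change in changes:
            current = apply_change(current, change)
        history.append(current)
    return history

# Toy example for a given problem P about a meeting room
start: Situation = frozenset({"room_free", "lights_off"})
changes: List[Change] = [
    (lambda s: "room_free" in s, frozenset({"room_booked"}), frozenset({"room_free"})),
    (lambda s: "room_booked" in s, frozenset({"lights_on"}), frozenset({"lights_off"})),
]
for i, situation in enumerate(simulate(start, changes, steps=2)):
    print(i, sorted(situation))
```

In this toy run the change rules are applied in a fixed order; modeling ‘exclusive alternatives’ or truly ‘parallel’ changes would require a slightly richer scheduling of the rules.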

If one allows real actors to interact with a simulation by associating a real actor with one of the actors ‘inside the simulation’, one turns the simulation into an ‘interactive simulation’, which is basically a ‘computer game’ (short: ‘egame’).
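
Building on the sketch above (and reusing its Situation, Change, and apply_change definitions), an interactive simulation can be hinted at by letting a real actor choose which change is applied in a step; this is again only an illustrative sketch, not the author's implementation.

```python
def interactive_step(current: Situation, changes: List[Change]) -> Situation:
    """One step of an interactive simulation: a real actor selects the change to apply."""
    for i, (_, additions, removals) in enumerate(changes):
        print(f"[{i}] add {sorted(additions)}, remove {sorted(removals)}")
    choice = int(input("Which change should your actor apply? "))
    return apply_change(current, changes[choice])
```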

One can use interactive simulations, for example, to (i) learn about the dynamics of a model, (ii) test the assumptions of a model, and (iii) test the knowledge and skills of the real actors.

Gaining new experiences with a simulation allows a continuous improvement of the model and its change rules.

Additionally one can include more citizens and experts in this process, and one can use available knowledge from databases and libraries.

EPISTEMOLOGY OF CONCEPTS
Fig.2: Epistemology of concepts used in an AAI analysis process

As outlined in the preceding section about the blueprint of a bottom-up process, there will be a heavy usage of concepts to describe states of affairs.

The literature about this topic, in philosophy as well as in many scientific disciplines, is overwhelming, and therefore this small text here can only be a ‘pointer’ into a complex topic. Nevertheless I will use exactly this pointer to explore the topic further.

While the literature mainly deals with more or less specific partial models, I am trying here to point out a very general framework which fits a more general philosophical, especially epistemological, view and which also respects many results of the scientific disciplines.

The main dimensions here are (i) the outside external empirical world, which connects via sensors to (ii) the internal body, especially the brain, which works largely ‘unconsciously’, and (iii) the ‘conscious’ part of the brain.

The most important relationship between the ‘conscious’ and the ‘unconscious’ part of the brain is the ability of the unconscious brain to automatically transform incoming concrete sense experiences into more ‘abstract’ structures, which have at least three sub-dimensions: (i) the different concrete material, (ii) a sub-set of extracted common properties, (iii) different sets of occurring contexts associated with the different subsets. This enables the brain to use only a ‘few’ abstract structures (= abstract concepts) to deal with ‘many’ concrete events. Thus the abstract concept ‘chair’ can cover many different concrete chairs which have only a few properties in common. Additionally the chairs can occur in different ‘contexts’, associating them with different ‘relations’ which can specify different possible ‘usages’ of the concept ‘chair’.
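
As an illustration only (the class AbstractConcept and its method matches are invented names, not taken from the AAI texts), such an abstract structure could be modeled roughly as a set of common properties together with a set of associated contexts:

```python
from dataclasses import dataclass, field
from typing import Dict, Set

@dataclass
class AbstractConcept:
    """A rough model of an abstract concept: shared properties plus contexts of use."""
    name: str
    common_properties: Set[str]  # the extracted sub-set of common properties
    contexts: Dict[str, Set[str]] = field(default_factory=dict)  # context -> typical usages

    def matches(self, concrete_properties: Set[str]) -> bool:
        """A concrete perceived something matches if it shows all common properties."""
        return self.common_properties <= concrete_properties

chair = AbstractConcept(
    name="chair",
    common_properties={"sittable", "has_seat"},
    contexts={"kitchen": {"sitting while eating"}, "office": {"sitting while working"}},
)
print(chair.matches({"sittable", "has_seat", "four_legs", "wooden"}))  # True
print(chair.matches({"has_seat", "rollable"}))                         # False
```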

Thus, if the actor perceives something which ‘matches’ some ‘known’ concept, then the actor is not only conscious of the empirical concrete phenomenon but also, simultaneously, of the abstract concept, which will automatically be activated. ‘Immediately’ the actor ‘knows’ that this empirical something is, e.g., a ‘chair’. Concretely: this concrete something matches an abstract concept ‘chair’, which as such can cover many other concrete things too, and these concrete somethings can be partially different from one another.

From this follows an interesting side effect: while an actor can easily decide whether a concrete something is there (“it is the case, that” = “it is true”) or not (“it is not the case, that” = “it is not true” = “it is false”), an actor cannot directly decide whether an abstract concept like ‘chair’ as such is ‘true’ in the sense that the concept ‘as a whole’ corresponds to concrete empirical occurrences. This follows from the fact that an abstract concept like ‘chair’ can match a nearly infinite set of possible concrete somethings, which are called ‘possible instances’ of the abstract concept. But a human actor can directly ‘check’ only a ‘few’ concrete somethings. Therefore the usage of abstract concepts like ‘chair’, ‘house’, ‘bottle’ etc. inherently implies an ‘open set’ of ‘possible’ concrete exemplars, and the usage of such concepts is therefore necessarily a ‘hypothetical’ usage. Because we can check the real extensions of these abstract concepts in everyday life only ‘in principle’, and only as long as there is the ‘freedom’ to do such checks, we lose the ‘truth’ of our concepts, and thereby the basis for realistic cooperation, if this ‘freedom of checking’ is not possible.

If some incoming perception is ‘not yet known’, because nothing given in the unconscious ‘matches’, it is in a basic sense ‘new’ and the brain will automatically generate a ‘new concept’.

THE DIMENSION OF MEANING

In Figure 2 one can find two other components: the ‘language expressions’ and the ‘meaning relation’ which maps concepts onto language expressions.

Language expressions inside the brain correspond to a diversity of visual, auditory, tactile or other empirical event sequences, which are in use for communicative acts.

These language expressions are usually not ‘isolated structures’ but are embedded in relations which map the expression structures to conceptual structures, including the different substantiations of the abstract concepts and the associated contexts. By these relations the expressions are attached to the conceptual structures, which are called the ‘meaning’ of the expressions; vice versa, the expressions are called the ‘language articulation’ of the meaning structures.

As far as conceptual structures are related via meaning relations to language expressions, a perception can automatically cause the ‘activation’ of the associated language expressions, which in turn can be uttered in some way. But conceptual structures can exist (especially with children) without an available meaning relation.

When language expressions are used within a communicative act, their usage can activate in all participants of the communication the ‘learned’ concepts as their intended meanings. Having the meaning activated in someone's ‘consciousness’, it is a real phenomenon for that actor. But from the occurrence of concepts alone it does not automatically follow that a concept is ‘backed up’ by some ‘real matter’ in the external world. Someone can utter that it is raining; in the hearer of this utterance the intended concepts can become activated, but in the outside external world no rain is happening. In this case one has to state that the utterance of the language expression “Look, it's raining” has no counterpart in the real world, and therefore we call the utterance in this case ‘false’ or ‘not true’.
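
A minimal sketch of this point, again with purely invented names (meaning_relation, is_true_here): an expression is mapped via the meaning relation to a concept, and an utterance counts as ‘true’ only if that concept is backed up by a fact in the external world.

```python
from typing import Dict, Set

# Hypothetical meaning relation: language expression -> abstract concept
meaning_relation: Dict[str, str] = {
    "Look, it's raining": "rain",
    "The room is free": "room_free",
}

def is_true_here(utterance: str, external_world: Set[str]) -> bool:
    """An utterance is 'true' only if its meaning is backed up by the external world."""
    concept = meaning_relation.get(utterance)
    if concept is None:
        return False  # no learned meaning relation available for this expression
    return concept in external_world

world: Set[str] = {"sunshine", "room_free"}
print(is_true_here("Look, it's raining", world))  # False: no rain in the external world
print(is_true_here("The room is free", world))    # True
```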

THE DIMENSION OF TIME
Fig.3: The dimension of time based on past experience and combinatoric thinking

The preceding Figure 2 of the conceptual space is not yet complete. There is another important dimension based on the ability of the unconscious brain to ‘store’ certain structures in a ‘temporal order’, which enables an actor (under certain conditions!) to decide whether a certain structure X occurred in consciousness ‘before’, ‘after’, or ‘at the same time’ as another structure Y.

Evidently the unconscious brain is able to do exactly this: (i) it can arrange the different structures, under certain conditions, in a ‘temporal order’; (ii) it can detect ‘differences’ between temporally succeeding structures; (iii) it can conceptualize these changes as ‘change concepts’ (‘rules of change’), and it can classify different kinds of change as ‘deterministic’, ‘non-deterministic’ with different kinds of probabilities, or ‘arbitrary’ as in the case of ‘free learning systems’. Free learning systems are able to behave in a deterministic-like manner, but they can also change their patterns, on account of internal learning and decision processes, in nearly any direction.

Based on memories of conceptual structures and derived change concepts (rules of change), the unconscious brain is able to generate different kinds of ‘possible configurations’, whose quality depends on the degree of dependencies within the ‘generating criteria’: (i) no special restrictions; (ii) empirical restrictions; (iii) empirical restrictions for ‘upcoming states’ (if all drinkable water were consumed, one could not plan any further with drinkable water).
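
The following sketch is purely illustrative (generate_configurations and keeps_water are invented names): starting from a remembered state and a set of change rules, it enumerates possible configurations and filters them by an empirical restriction on upcoming states, echoing the drinkable-water example above.

```python
from itertools import combinations
from typing import Callable, FrozenSet, List, Set, Tuple

State = FrozenSet[str]
Rule = Tuple[FrozenSet[str], FrozenSet[str]]  # (facts to add, facts to remove)

def generate_configurations(
    state: State,
    rules: List[Rule],
    restriction: Callable[[State], bool] = lambda s: True,
) -> Set[State]:
    """Apply every subset of the change rules to the state and keep only those
    resulting configurations which satisfy the given restriction."""
    results: Set[State] = set()
    for r in range(len(rules) + 1):
        for subset in combinations(rules, r):
            candidate = state
            for additions, removals in subset:
                candidate = (candidate - removals) | additions
            if restriction(candidate):
                results.add(candidate)
    return results

state: State = frozenset({"drinkable_water", "field_dry"})
rules: List[Rule] = [
    (frozenset({"field_irrigated"}), frozenset({"field_dry", "drinkable_water"})),
    (frozenset({"well_drilled"}), frozenset()),
]
# Empirical restriction for upcoming states: never plan away all drinkable water
keeps_water = lambda s: "drinkable_water" in s or "well_drilled" in s
for config in generate_configurations(state, rules, restriction=keeps_water):
    print(sorted(config))
```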