
Pierre Lévy : Collective Intelligence – Chapter 1 – Introduction

eJournal: uffmm.org, ISSN 2567-6458, 17.March 2022 – 22.March 2022, 8:40
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

SCOPE

In the uffmm review section, papers and books are discussed from the point of view of the oksimo paradigm. [1] In the following text the author discusses some aspects of the book “Collective Intelligence: Mankind’s Emerging World in Cyberspace” by Pierre Lévy (translated by Robert Bononno), 1997 (French: 1994).[2]

PREVIEW

Before starting a more complete review, a notice in advance.

Only recently did I start reading this book by Pierre Lévy, after working intensively for more than four years on the problem of an open knowledge space for everybody as a genuine part of the cyberspace. I have approached the problem from several disciplines, culminating in a new theory concept which additionally has a direct manifestation in a new kind of software. While I am now testing version 2 of this software, having in parallel worked through several papers of the early, the middle, and the late Karl Popper [3], I discovered this book of Lévy [*] and was completely impressed by its preface. His view of mankind and cyberspace is intellectually deep and a real piece of art. I had the feeling that this text could serve, without compromise, as a direct preview of our software paradigm, although I had not known about him before.

Wanting to know more about him, I found some more interesting books, but especially also his blog intlekt – metadata [4], where he develops his vision of a new language for a new ‘collective intelligence’ to be practiced in the cyberspace. While his ideas about ‘collective intelligence’ associated with the ‘cyberspace’ are fascinating, it appears to me that his ideas about a new language are strongly embedded in ‘classical’ concepts of language, semiotics, and computing, concepts which, in my view, are not sufficient for a new language enabling collective intelligence.

Thus it promises to be an exciting read, with continuous reflections on the conditions for ‘collective intelligence’ and the ‘role of language’ within it.

Chapter 1: Introduction

The Position of Lévy

The following description of the position Lévy takes in his first chapter is clearly an ‘interpretation’ from the ‘viewpoint’ of this writer at this time. This is more or less ‘inevitable’. [5]

A good starting point for the project of ‘understanding the book’ seems to be the historical outline which Lévy gives on pages 5-10. Starting with the appearance of homo sapiens, he characterizes different periods of time by the different cultural patterns triggered by homo sapiens. In the last period, which is still lasting, knowledge takes radically new ‘forms’; one central feature is the appearance of the ‘cyberspace’.

Primarily the cyberspace is ‘machine-based’: some material structure, enhanced with a certain type of dynamics enabled by algorithms working in the machine. But as part of the cultural life of homo sapiens, the cyberspace is also a cultural reality, increasingly interacting directly with individuals, groups, institutions, companies, industry, nature, and more. And in this space of interactions homo sapiens does not encounter technical entities alone, but also effects, events, and artifacts produced by other homo sapiens companions.

Lévy calls this a “re-creation of the social bond based on reciprocal apprenticeship, shared skills, imagination, and collective intelligence.” (p.10) And he adds as a supplement that “collective intelligence is not a purely cognitive object.” (p.10)

Looking into the future Lévy assumes two main axes: “The renewal of the social bond through our relation to knowledge and collective intelligence itself.” (p.11)

Important seems to be that ‘knowledge’ is not confined to ‘facts alone’; it ‘lives’ in the reciprocal interactions of human actors, and thereby knowledge is a dynamic process. (cf. p.11) Humans as part of such knowledge processes receive their ‘identities’ from this flow. (cf. p.12) One consequence of this is that “… the other remains enigmatic, becomes a desirable being in every respect.”(p.12) With some further comment: “No one knows everything, everyone knows something, all knowledge resides in humanity. There is no transcendent store of knowledge and knowledge is simply the sum of what we know.”(p.13f)

‘Collective intelligence’ dwells close to dynamic knowledge: “The basis and goal of collective intelligence is the mutual recognition and enrichment of individuals rather than the cult of fetishized or hypostatized communities.”(p.13) Thus Lévy can state that collective intelligence “is born with culture and grows with it.”(p.16) And he makes it more concrete with a direct embedding in a community: “In an intelligent community the specific objective is to permanently negotiate the order of things, language, the role of the individual, the identification and definition of objects, the reinterpretation of memory. Nothing is fixed.”(p.17)

These different aspects are accumulating in the vision of “a new humanism that incorporates and enlarges the scope of self knowledge into a form of group knowledge and collective thought. … [the] process of collective intelligence [is] leading to the creation of a distinct sense of community.”(p.17)

One side effect of such a new humanism could be “new forms of democracy, better suited to the complexity of contemporary problems…”.(p.18)

FIRST COMMENTS

At this point I will give only a few comments, reserving more general and final thoughts until the end of reading the whole text.

Shortened Timeline – Wrong Picture

The timeline which Lévy uses is helpful, but this timeline is ‘incomplete’. What is missing is the whole time ‘before’ the advent of homo sapiens within the biological evolution. And this ‘absence’ obscures the understanding of one of the most important concepts, if not ‘the’ most important, of all life, including homo sapiens and its cultural process.

This central concept is today called ‘sustainable development’. It points to a ‘dynamic structure’ which is capable of ‘adapting to an ever changing environment’. From the very beginning, life on planet earth has only been possible on account of this fundamental capability, starting with the first cells and kept strongly alive through all 3.5 billion (3.5 × 10^9) years of the following fascinating developments.

This capability to ‘adapt to an ever changing environment’ implies the ability to change the ‘working structure’, the body, in a way that the structure can respond in new ways if the environment changes. Such a change has two sides: (i) the real ‘production’ of the working structures of a living system, and (ii) the ‘knowledge’ which is necessary to ‘inform’ the processes of formation and keep an organism ‘in action’. And these basic mechanisms have additionally (iii) to be ‘distributed in a whole population’, whose sheer number gives enough redundancy to compensate for ‘wrong proposals’.

Knowing this, the appearance of the homo sapiens life form manifests a qualitative shift in the structure of adaptation so far: surely prepared by several million years, the body of homo sapiens, with an unusual brain, enabled new forms of ‘understanding the world’ in close connection with new forms of ‘communication’ and ‘cooperation’. With homo sapiens, brains became capable of talking, mediated by their body and the surrounding body world, with other brains hidden in other bodies, in a way which enabled the sharing of ‘meaning’ rooted in the body world as well as in one’s own body. This capability created through communication a ‘network of distributed knowledge’, encoded in the shared meaning of individual meaning functions. This distributed knowledge exists only as long as communication with the shared meanings ‘works’. If the shared meaning weakens or breaks down, this distributed knowledge is ‘gone’.

Thus a homo sapiens population does not have to wait another generation until new varieties of body structures can show up and compete with the changing environment. A homo sapiens population has the capability to perceive the environment, and itself, in a way that additionally allows new forms of ‘transformations of the perceptions’: ‘cognitive varieties of perceived environments’ can be ‘internally produced’, ‘communicated’, and used for ‘sequences of coordinated actions’ which can change the environment and homo sapiens itself.

The cultural history then shows, as Lévy outlined briefly on pages 5-10, that the homo sapiens population (distributed over many competing smaller sub-populations) ‘invented’ more and more ‘behavior patterns’, ‘social rules’, and a rich ‘diversity of tools’ to improve communication and to improve the representation and processing of knowledge, which in turn enabled even more complex ‘sequences of coordinated actions’.

Sustainability & Collective Intelligence

Although until today there are no commonly accepted definitions of ‘intelligence’ and ‘knowledge’ available [6], it makes sense to locate ‘knowledge’ and ‘intelligence’ in this ‘communication based space of mutually coordinated actions’. And this embedding implies thinking about knowledge and intelligence as a property of a population, which ‘collectively’ is learning, understanding, planning, and modifying its environment as well as itself.

And having this distributed capability a population has all the basics to enable a ‘sustainable development’.

Therefore the capability for sustainable development is an emergent capability, based on the processes enabled by distributed knowledge, which in turn is enabled by collective intelligence.

Having sketched this out, all the wonderful statements of Lévy seem to be ‘true’ in the sense that they describe a dynamic reality which is provided by biological life as such.

A truly Open Space with Real Boundaries

Looking from the outside onto this biological mystery of sustainable processes based on collective intelligence using distributed knowledge one can identify incredible spaces of possible continuations. In principle these spaces are ‘open spaces’.

Looking to the details of this machinery — because we are ‘part of it’ — we know by historical and everyday experience that these processes can fail every minute, even every second.

To ‘improve’ a given situation one needs (i) not only a criterion which enables a judgment about something to be classified as being ‘not good’ (e.g. the given situation), one needs further (ii) some ‘minimal vision’ of a ‘different situation’, which can be classified by a criterion as being ‘better’. And, finally, one needs (iii) a minimal ‘knowledge’ about possible ‘actions’ which can change the given situation in successive steps to transform it into the envisioned ‘new better situation’ functioning as a ‘goal’.

Looking around and looking back, everybody surely has experiences from everyday life showing that these three tasks are far from trivial. To judge something to be ‘not good’ or ‘not good enough’ presupposes a minimum of ‘knowledge’ which should be sufficiently evenly ‘distributed’ over the ‘brains of all participants’. Without sufficient agreement no common judgment will be possible. At the time of this writing it seems that there is plenty of knowledge around, but it is not working as a coherent knowledge space accepted by all participants. Knowledge battles against knowledge. The same holds for tasks (ii) and (iii).

There are many reasons why it is not working. While especially the ‘big challenges’ are of a ‘global nature’ and follow a certain time schedule, there is not much time available to ‘synchronize’ the necessary knowledge between all. Mankind has until now supported predominantly the sheer amount of knowledge and ‘individual specialized solutions’, but has missed the challenge to develop at the same time new and better ‘common processes’ of ‘shared knowledge’. The invention of the computer, networks of computers, and then the multi-faceted cyberspace is a great and important achievement, but it is not really helpful as long as the cyberspace has not become a ‘genuinely human-like’ tool for ‘distributed human knowledge’ and ‘distributed collective human-machine intelligence’.

Truth

One of the most important challenges for all kinds of knowledge is the ability to enable a ‘knowledge inspired view’ of the environment, including the actor, which is ‘in agreement with the reality of the environment’; otherwise the actions will not be able to support life in the long run. [7] Such an ‘agreement’ is a challenge, especially if the ‘real processes’ are ‘complex’, ‘distributed’, and happen over ‘large time frames’. As all human societies today demonstrate, this fundamental ability to use ’empirically valid knowledge’ is partially well developed, but in many other cases it seems nearly non-existent. There is a strong, inborn, tendency of human persons to think that the ‘pictures in their heads’ ‘automatically’ represent knowledge that is in agreement with the real world. It isn’t so. Thus ‘dreams’ are ruling the everyday world of societies. And the proportion of brains with such ‘dreams’ seems to grow. In a certain sense this is a kind of ‘illness’: invisible, but strongly effective and highly infectious. Science alone does not seem to be a sufficient remedy, but it is a substantial condition for one.

COMMENTS

[*] The decisive hint for this book came from Athene Sorokowsky, who is a member of my research group.

[1] Gerd Doeben-Henisch,The general idea of the oksimo paradigm: https://www.uffmm.org/2022/01/24/newsletter/, January 2022

[2] Pierre Lévy in wkp-en: https://en.wikipedia.org/wiki/Pierre_L%C3%A9vy

[3] Karl Popper in wkp-en: https://en.wikipedia.org/wiki/Karl_Popper. One of the papers I have written commenting on Popper can be found HERE.

[4] Pierre Lévy, intlekt – metadata, see: https://intlekt.io/blog/

[5] Whoever wants to know what Lévy ‘really’ wrote has to go back to Lévy’s text directly. … then the reader will read Lévy’s text with ‘his own point of view’ … indeed, even then the reader will not know with certainty whether he really understood Lévy ‘right’. … reading a text is always a ‘dialogue’ …

[6] Not in Philosophy, not in the so-called ‘Humanities’, not in the Social Sciences, not in the Empirical Sciences, and not in Computer Science!

[7] The ‘long run’ can be very short if you misjudge a situation in traffic, or a medical doctor makes a mistake, or a nuclear reactor has the wrong sensors, or …

Continuation

See HERE.

NEWSLETTER

eJournal: uffmm.org
ISSN 2567-6458, 24. January 2022
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This post is part of the uffmm science blog.

INTENTION

This is the place for short summaries of topics about which the author is writing in his German blogs: cognitiveagent.org (philosophy, science) and oksimo.org (a new paradigm of how people can together turn everything into a simulation by only using their everyday language).

NEWSLETTER January 24, 2022

Software-Paradigm

Since the beginning of the development of the oksimo software I have been urged to distinguish between the oksimo software and the oksimo paradigm. The oksimo software is software which appears to the user as an interface in a web browser, able to do some work; the oksimo paradigm stands for the whole ‘action space’ which becomes possible for human actors using the oksimo software. We know from daily practice that the ‘software’ in use until now is important and big; the software is somehow the ‘store of knowledge’ coded in some language. In the context of the oksimo paradigm the software is ‘small’ and ‘unimportant’. The only contribution of the software is to support human actors in talking about the world in their everyday language in a way that these talks are automatically turned into simulations as well as full-fledged theories. That’s it. The computer as such does not understand anything. This is a new kind of ‘collective man-machine intelligence’.

Concrete Simulations

Because the main experience while communicating the ideas of this new software paradigm is that people do not understand it (especially the computer science people have problems, locked in by their ‘usual understanding’ of computers), we stopped ‘advertising’ and focus on first practical examples. This year we spent time setting up a real simulation of a real county in Germany named ‘Main-Kinzig-Kreis (MKK)’. (Perhaps the same will be done in South Africa with the Gauteng Province, there mainly in the Tshwane District.)

Clearly, by the end of the year 2022 these models will only cover some main aspects of the county, including the related towns and cities, but it will be a real model and can be further developed in the upcoming years.

Because these models are completely WWW-conformant and reachable through the ‘ordinary World Wide Web’, everybody can read the results, can try the simulation mode on their own, and can add their own version as an HTML page. One can also unify different models by ‘only pressing a button’. The main intention is that distributed people can work together as ‘a group’ to share their ideas, visions, and experiences.

Software Roadmap

Although we had in the beginning a kind of roadmap for what we wanted to have finished by some time, things went differently: because this whole paradigm is radically new, we had in the beginning a basic idea, but not a complete understanding of everything. And thus it happened that we stepwise learned better what it really is. It became ‘simpler’ and at the same time ‘more powerful’. From a theoretical point of view it now looks as if it can do nearly everything which humans want from a ‘software for a sustainable future’.

A nice point just now was the understanding of how we can use radically everyday language and at the same time all of mathematics. If one understands what mathematics is and how it works in our thinking, then it becomes very simple.

Meta-Thinking

This whole oksimo (reloaded) software project only became possible because for many years a truly multi-disciplinary thinking was alive, relating different disciplines in a truly trans-disciplinary (= meta-theoretical = philosophical) fashion. What we observe today is a steady growth of ever more ‘special disciplines’ but a growing lack of ‘integration’, of meta-thinking. Nowhere do we have really working trans-disciplinary programs; there do not even exist ideas or concepts of how to do it.

Sustainability

The United Nations series of conferences from 1992 until 2015 brought to the front that the course of life on planet earth is facing a deepening crisis, because the human race has meanwhile occupied three quarters of the usable areas of the planet and has changed whole bio-systems and many important resources. Climate change as such is not the problem; but because the human population, and a working biosphere, is highly sensitive to climate change, it is a growing experience of humans that the conditions of the planet are becoming ‘pressing’. Because these problems work on a global scale they cannot be solved by single nations alone. The time of ‘nations’ seems to be ‘over’. Either we are ‘one mankind’ or we will lose.

To understand ‘sustainability’ one has to look at the biological evolution with the eyes of many disciplines. Besides biology (with many additional disciplines) it seems to me that ecology is highly important, theoretical ecology in particular!

As part of the biosphere we humans as biological systems have introduced culture, technology, and society into the game of life. As part of technology we have also introduced machines called ‘computers’, embedded in networks of ‘everything’. All this can be a very valuable set of tools to master the different kinds of futures, including the whole biosphere. But this can only happen if the human race learns a bit more about what it means to live in a truly sustainable fashion. This begins with the kind of ‘thinking and sharing of ideas’. We are, it seems to me, far from such ‘sustainable thinking’. The minds are very ‘closed boxes’.

Spirituality

In this uffmm blog I have never written about spirituality, nor in the oksimo.org blog, but I have written several posts in my philosophy blog (about 20-30, or even more), and elsewhere.

Most people associate the word ‘spirituality’ with strange, esoteric things, with religions. This reflects the course of history, in which different kinds of religions and partially strange movements used this term as ‘their’ term. But this need not be so.

Spirituality is a genuine property of all biological life, which in turn is an ‘outcome’ of the whole universe. The ‘spiritual’ is not owned by special persons; it belongs to every human person as a part of them. If one understands ‘life’ in its full reality, it is ‘the’ most important event in the whole universe. To understand this one must use everything we know today from the empirical sciences, but clearly more, because the empirical sciences still lack a true meta-science. The ‘old philosophy’ has not ‘grown with’ the sciences; both are still ‘highly separated’.


OKSIMO MEETS POPPER. The Generalized Oksimo Theory Paradigm

eJournal: uffmm.org
ISSN 2567-6458, 5.April – 5.April  2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last changes: Small corrections, April 8, 2021

CONTEXT

This text is part of a philosophy of science  analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive  posts dedicated to the HMI-Analysis for  this software.

THE GENERALIZED OKSIMO THEORY PARADIGM

Figure: Overview of the Generalized Oksimo Paradigm

In the preceding sections it has been shown that the oksimo paradigm fits in principle within the theory paradigm as discussed by Popper. This is possible because some of the concepts used by Popper have been re-interpreted by re-analyzing the functioning of the symbolic dimension. All the requirements of Popper could be shown to work, now even in an extended way.

SUSTAINABLE FUTURE

To describe the oksimo paradigm it is not necessary to mention, as a wider context, the general perspective of sustainability as described by the United Nations [UN].[1] But if one understands the oksimo paradigm more deeply, and one knows that among the 17 sustainable development goals [SDGs] the fourth goal [SDG4] is understood by the UN as the central key for the development of all the other SDGs [2], then one can understand this as an invitation to think about the kind of knowledge which could be the ‘kernel technology’ for sustainability. A ‘technology’ is not simply ‘knowledge’; it is a process which enables the participants, here assumed to be human actors with built-in meaning functions, to share their experience of the world as well as their hopes, wishes, and dreams, to be realized in a reachable future. To be ‘sustainable’ these visions have to be realized in a fashion which keeps the whole of biological life alive on earth as well as in the whole universe. Biological life is the highest known value with which the universe is gifted.

Knowledge as a kernel technology for a sustainable future of the whole of biological life has to be a process in which all biological life-forms, headed by the human actors, contribute their experience and capabilities to find those possible future states (visions, goals, …) which can really enable a sustainable future.

THE SYMBOLIC DIMENSION

To enable different isolated brains in different bodies to ‘cooperate’ and thereby ‘coordinate’ their experience and their behavior, the most effective way known is ‘symbolic communication’: using expressions of some ordinary language whose ‘meaning’ has been learned by every member of the population from birth onward. Human actors (classified as the life-form ‘homo sapiens’) have the most elaborate known language capability, being able to associate all kinds of experience with expressions of an ordinary language. These ‘mappings’ between expressions and general experience take place ‘inside the brain’, and they are highly ‘adaptive’: they can change over time and they are mostly ‘synchronized’ with the mappings taking place in other brains. Such a mapping is here called a ‘meaning function’ [μ].

DIFFERENT KINDS OF EXPRESSIONS

The different scientific disciplines today have developed many different views and models of how to describe the symbolic dimension, its ‘parts’, and its functioning. Here we assume only three different kinds of expressions, which can be analyzed further in nearly infinitely many details.

True Concrete Expressions [S_A]

The ‘everyday case’ occurs when human actors share a real actual situation and use their symbolic expressions to ‘talk about’ the shared situation, telling each other what is given according to their understanding, using their built-in meaning function μ. With regard to the shared knowledge and language these human actors can decide whether an expression E used in the description matches the observed situation or not. If the expression matches, it is classified as a ‘true expression’. Otherwise it is either undefined or possibly ‘false’, if it directly ‘contradicts’ the situation. The set of all expressions assumed to be true in an actually given situation S is named here S_A. Let us look at an example: Peter says “it is raining”, and Jenny says “it is not raining”. If all would agree that it is raining, then Peter’s expression is classified as ‘true’ and Jenny’s expression as ‘false’. If different views exist in the group, then it is not clear what is true, false, or undefined in this group! This problem belongs to the pragmatic dimension of communication, where human actors have to find a way to clarify their views of the world. The right view of the situation depends on the different individual views located in the individual brains, and these views can be wrong. There exists no automatic procedure to get a ‘true’ vision of the real world.
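This true/false/undefined classification can be sketched in a few lines of Python. This is only a minimal illustration; the names `classify`, `s_a`, and `negations` are invented for this sketch and are not part of the actual oksimo software:

```python
# Sketch: classify an expression against a shared actual situation S_A.
# S_A is modeled as a set of expression strings; 'negations' maps an
# expression to the observed fact it would directly contradict.

def classify(expression, s_a, negations):
    """Return 'true', 'false', or 'undefined' for an expression
    relative to the observed situation S_A."""
    if expression in s_a:
        return "true"
    if negations.get(expression) in s_a:   # direct contradiction
        return "false"
    return "undefined"

s_a = {"it is raining"}
negations = {"it is not raining": "it is raining"}

print(classify("it is raining", s_a, negations))      # true  (Peter)
print(classify("it is not raining", s_a, negations))  # false (Jenny)
print(classify("the sun shines", s_a, negations))     # undefined
```

Note that the sketch presupposes what the text calls the pragmatic dimension: the group must already have agreed on the contents of S_A before any classification is possible.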

General Assumptions [S_U]

It is typical for human actors that they collect knowledge about the world, including general assumptions like “Birds can fly”, “Ice melts in the sun”, “In certain cases the covid19 virus can bring people to death”, etc. These expressions are usually understood as ‘general’ rules because they do not describe a concrete single case but speak of many possible cases. Such a general rule can be used within a logical deduction, as demonstrated by classical Greek logic: ‘IF it is true that “Birds can fly” AND we have the fact “R2D2 is a bird” THEN we can deduce the fact “R2D2 can fly”’. The expression “R2D2 can fly” claims to be true. Whether this is ‘really’ the case has to be shown in a real situation, either now or at some point in the future. The set of all assumed general assumptions is named here S_U.
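The deduction pattern above can be sketched as a tiny Python function. The rule is hand-written for this one general assumption; `apply_bird_rule` is an invented name and the string matching is only illustrative, not a general inference engine:

```python
# Sketch of the deduction: IF "Birds can fly" is among the assumptions
# AND "<name> is a bird" is a fact, THEN derive "<name> can fly".

def apply_bird_rule(facts):
    """Return the facts enlarged by everything the bird rule derives."""
    derived = set(facts)
    if "Birds can fly" in facts:
        for fact in facts:
            if fact.endswith(" is a bird"):
                name = fact[: -len(" is a bird")]
                derived.add(f"{name} can fly")
    return derived

s_u = {"Birds can fly"}                 # general assumptions S_U
facts = s_u | {"R2D2 is a bird"}        # plus a concrete fact
print("R2D2 can fly" in apply_bird_rule(facts))  # True
```

As the text stresses, the derived expression only *claims* truth; whether R2D2 really flies must be checked against a real situation.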

Possible Future States [S_V]

By experience and some ‘creative’ thinking, human actors can imagine concrete situations which are not yet actually given but which are assumed to be ‘possible’; such a possibility can be interpreted as some ‘future’ situation. If a real situation is reached which includes the envisioned state, then one can say that the vision has become ‘true’. Otherwise the envisioned state is ‘undefined’: perhaps it can still become true, perhaps not. In human culture there have existed many visions for hundreds or even thousands of years, and people still ‘believe’ that they will become ‘true’ some day. The set of all expressions related to a vision is named here S_V.

REALIZING FUTURE [X, ⊢]

If the set of expressions S_V related to a ‘vision’ (accompanied by many emotions, desires, and details of all kinds) is not empty, then it is possible to look for those ‘actions’ which with the highest ‘probability’ π can ‘change’ a given situation S_A in such a way that the new situation S’ becomes more and more similar to the envisioned situation S_V. Thus a given goal (= vision) can inspire a ‘construction process’ which is typical for all kinds of engineering and creative thinking. Within the oksimo paradigm, the general format of an expression describing a change is assumed as follows:

  1. With regard to a given situation S
  2. Check whether a certain set of expressions COND is a subset of the expressions of S
  3. If this is the case then with probability π:
  4. Remove all expressions of the set Eminus from S,
  5. Add all expressions of the set Eplus to S
  6. and update (compute) all parameters of the set Model

In a short format:

S’π = S – Eminus + Eplus & MODEL(S)
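Assuming a situation S is represented as a set of expressions, the six steps above can be sketched as follows. This is a hypothetical illustration: `apply_rule` and its parameter names are invented, and the Model update of step 6 is omitted for brevity:

```python
# Sketch of one oksimo-style change rule: S'_π = S - Eminus + Eplus.
# A situation is a set of expression strings; the rule fires with
# probability π when COND is a subset of S.

import random

def apply_rule(s, cond, eminus, eplus, pi=1.0):
    """Apply a single change rule to situation S and return S'."""
    if not cond.issubset(s):       # step 2: COND ⊆ S ?
        return s
    if random.random() > pi:       # step 3: fire only with probability π
        return s
    return (s - eminus) | eplus    # steps 4 and 5 (step 6, Model, omitted)

s = {"it is raining", "the street is dry"}
s_next = apply_rule(s,
                    cond={"it is raining"},
                    eminus={"the street is dry"},
                    eplus={"the street is wet"})
print(s_next == {"it is raining", "the street is wet"})  # True
```

With π = 1 the rule is deterministic; smaller values of π make the transition probabilistic, as in the rule format above.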

All change rules together constitute the set X. In the general theory paradigm the change rules X represent the inference rules, which together with a general ‘inference concept’ ⊢ constitute the ‘logic’ of the theory. This enables the following general logical relation:

{S_U, S_A} ⊢ <S_A, S1, S2, …, Sn>

with the continuous evaluation |S_V ⊆ Si| > θ, i.e. the degree to which the vision S_V is contained in the state Si exceeds a threshold θ. During the whole construction it is possible to evaluate for each individual state whether, and to what degree, the expressions of the vision state S_V are part of the actual state Si.

Such a logical deduction concept is called a ‘simulation’ by using a ‘simulator’ to repeat the individual deductions.
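Such a simulation run, together with the continuous vision evaluation, can be sketched like this. All names are illustrative; rules are kept deterministic (π = 1) and the Model parameters are again omitted:

```python
# Sketch of a simulator: repeatedly apply change rules to a start state
# S_A, yielding the sequence <S_A, S1, ..., Sn> together with the
# fraction of the vision S_V already contained in each state.

def simulate(s_a, rules, s_v, steps=10):
    """Yield (state, vision_coverage) for each simulation step."""
    state = set(s_a)
    for _ in range(steps + 1):
        coverage = len(s_v & state) / len(s_v)   # degree of S_V ⊆ S_i
        yield set(state), coverage
        for cond, eminus, eplus in rules:        # deterministic rules
            if cond <= state:
                state = (state - eminus) | eplus

s_a = {"seed planted"}
s_v = {"flower blooms"}
rules = [({"seed planted"}, {"seed planted"}, {"plant grows"}),
         ({"plant grows"}, {"plant grows"}, {"flower blooms"})]

for state, cov in simulate(s_a, rules, s_v, steps=2):
    print(state, cov)   # coverage rises from 0.0 to 1.0
```

The threshold test |S_V ⊆ Si| > θ from above corresponds here to checking `coverage > theta` at each yielded step.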

POSSIBLE EXTENSIONS

The above outlined oksimo theory paradigm can easily be extended by some more features:

  1. AUTONOMOUS ACTORS: The change rules X so far are ‘static’ rules. But we know from everyday life that there are many dynamic sources around which can cause change, especially biological and non-biological actors. Every such actor can be understood as an input-output system with an adaptive ‘behavior function’ φ. Such behavior cannot be modeled by ‘static’ rules alone. Therefore one can either define theoretical models of such ‘autonomous’ actors with their behavior and enlarge the set of change rules X with ‘autonomous change rules’ Xa, Xa ⊆ X, or, as the other variant, include in real time ‘living autonomous’ actors as ‘players’, having the role of an ‘autonomous’ rule and being enabled to act according to their ‘will’.
  2. MACHINE INTELLIGENCE: Running a simulation will always give only ‘one path’ P through the space of possible states. Usually there are many more paths which can lead to a goal state S_V, and the accompanying parameters from Model can differ: more or less energy consumption, more or less financial loss, more or less time needed, etc. To improve the knowledge about the ‘good candidates’ in the possible state space one can introduce general machine intelligence algorithms to evaluate the state space and make proposals.
  3. REAL-TIME PARAMETERS: The parameters of Model can be connected online with real measurements in near real-time. This would allow using the collected knowledge to ‘monitor’ real processes in the world and, based on the collected knowledge, to recommend actions in reaction to certain states.
COMMENTS

[1] The 2030 Agenda for Sustainable Development, adopted by all United Nations Member States in 2015, provides a shared blueprint for peace and prosperity for people and the planet, now and into the future. At its heart are the 17 Sustainable Development Goals (SDGs), which are an urgent call for action by all countries – developed and developing – in a global partnership. They recognize that ending poverty and other deprivations must go hand-in-hand with strategies that improve health and education, reduce inequality, and spur economic growth – all while tackling climate change and working to preserve our oceans and forests. See PDF: https://sdgs.un.org/sites/default/files/publication/21252030%20Agenda%20for%20Sustainable%20Development%20web.pdf

[2] UN, SDG4, PDF, argumentation why SDG4 is fundamental for all other SDGs: https://sdgs.un.org/sites/default/files/publications/2275sdbeginswitheducation.pdf


OKSIMO MEETS POPPER. The Oksimo Theory Paradigm

eJournal: uffmm.org
ISSN 2567-6458, 2.April – 2.April  2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text is part of a philosophy of science analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive posts dedicated to the HMI analysis for this software.

THE OKSIMO THEORY PARADIGM

The Oksimo Theory Paradigm
Figure 1: The Oksimo Theory Paradigm

The following text is a short illustration of how the general theory concept, as extracted from the texts of Popper, can be applied to the oksimo simulation software concept.

The starting point is the meta-theoretical schema as follows:

MT=<S, A[μ], E, L, AX, ⊢, ET, E+, E-, true, false, contradiction, inconsistent>

In the oksimo case we also have a given empirical context S, a non-empty set of human actors A[μ] with a built-in meaning function for the expressions E of some language L, some axioms AX as a subset of the expressions E, an inference concept ⊢, and all the other concepts.

The human actors A[μ] can write some documents with the expressions E of language L. In one document S_U they can write down some universal facts which they believe to be true (e.g. ‘Birds can fly’). In another document S_E they can write down some empirical facts from the given situation S, like ‘There is something named James. James is a bird’. And somehow they wish that James should be able to fly, thus they write down a vision text S_V with ‘James can fly’.

The interesting question is whether it is possible to generate a situation S_E.i in the future, which includes the fact ‘James can fly’.

With the knowledge already given they can build the change rule: IF it is valid that {Birds can fly. James is a bird} THEN with probability π = 1 add the expression Eplus = {‘James can fly’} to the actual situation S_E.i, with Eminus = {}. This rule is then an element of the set of change rules X.

The simulator applies the change rules X according to the schema S’ = S – Eminus + Eplus.

Because we have S = S_U + S_E we get

S’ = {Birds can fly. Something is named James. James is a bird.} – Eminus + Eplus

S’ = {Birds can fly. Something is named James. James is a bird.} – {} + {James can fly}

S’ = {Birds can fly. Something is named James. James is a bird. James can fly}

With regard to the vision which is used for evaluation one can state additionally:

|{James can fly} ⊆ {Birds can fly. Something is named James. James is a bird. James can fly}|= 1 ≥ 1

Thus the goal has been reached with score 1, meaning 100%.
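The derivation above can be reproduced with a few lines of Python, treating states as sets of expressions. This is a minimal sketch for illustration; the variable names and the rule format are assumptions, not the actual oksimo implementation:

```python
# Minimal sketch of the simulation step S' = S - Eminus + Eplus,
# using the James example. States are sets of expressions.

S_U = {"Birds can fly."}                                  # universal facts
S_E = {"Something is named James.", "James is a bird."}   # empirical facts
S_V = {"James can fly."}                                  # vision / goal

S = S_U | S_E                                             # S = S_U + S_E

# Change rule: IF {Birds can fly., James is a bird.} holds in S
# THEN add Eplus and remove Eminus (here empty).
rule = {
    "condition": {"Birds can fly.", "James is a bird."},
    "Eplus": {"James can fly."},
    "Eminus": set(),
}

if rule["condition"] <= S:                                # condition satisfied?
    S_next = (S - rule["Eminus"]) | rule["Eplus"]
else:
    S_next = S

# Evaluation: fraction of vision expressions contained in S_next.
score = len(S_V & S_next) / len(S_V)
print(score)  # 1.0 -> the goal has been reached with 100%
```

The subset test `rule["condition"] <= S` plays the role of the IF-part of the change rule, and the score computation mirrors the cardinality comparison above.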

THE ROLE OF MEANING

What makes a certain difference between classical concepts of an empirical theory and the oksimo paradigm is the role of meaning in the oksimo paradigm. While the classical empirical theory concept uses formal (mathematical) languages for its descriptions, with the associated (nearly unsolvable) problem of how to relate these concepts to the intended empirical world, the oksimo paradigm assumes the opposite: the starting point is always ordinary language as the basic language, which on demand can be extended by special expressions (e.g. set-theoretical expressions, numbers, etc.).

Furthermore, in the oksimo paradigm it is assumed that the human actors with their built-in meaning function are nearly always able to decide whether an expression e of the used expressions E of the ordinary language L matches certain properties of the given situation S. Thus the human actors are those who have the authority to decide, by their meaning, whether some expression is actually true or not.

The same holds for possible goals (visions) and possible inference rules (= change rules). Whether some consequence Y shall happen if some condition X is satisfied by a given actual situation S can only be decided by the human actors. There is no other knowledge available than that which is in the heads of the human actors. [1] This knowledge can be narrow, it can even be wrong, but human actors can only decide with the knowledge that is available to them.

If they are using change rules (= inference rules) based on their knowledge and derive some follow-up situation as a theorem, then it can happen that there exists no empirical situation S which matches the theorem. This would be an undefined truth case. If the theorem t were a contradiction to the given situation S, then it would be clear that the theory is inconsistent and therefore something seems to be wrong. Another case could be that the theorem t matches a situation. This would confirm the belief in the theory.

COMMENTS

[1] Well-known knowledge tools have long been libraries and, more recently, databases. The expressions stored there can only be of use (i) if a human actor knows about them and (ii) knows how to use them. As the amount of stored expressions increases, the portion of expressions that can be cognitively processed by human actors decreases. This decrease in the usable portion can serve as a measure of negative complexity, which indicates a growing deterioration of the human knowledge space. The idea that certain kinds of algorithms can analyze these growing amounts of expressions instead of the human actors themselves is only constructive if the human actor can use the results of these computations within his knowledge space. For general reasons this possibility is very small, and with increasing negative complexity it is declining.


HMI ANALYSIS, Part 4: Tool based Actor Story Development with Testing and Gaming

Integrating Engineering and the Human Factor (info@uffmm.org)
eJournal uffmm.org ISSN 2567-6458, March 3-4, 2021,
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: March 4, 2021, 07:49h (Minor corrections; relating to the UN SDGs)

HISTORY

As described in the uffmm eJournal  the wider context of this software project is an integrated  engineering theory called Distributed Actor-Actor Interaction [DAAI] further extended to the Collective Man-Machine Intelligence [CM:MI] paradigm.  This document is part of the Case Studies section.

HMI ANALYSIS, Part 4: Tool based Actor Story Development with Testing and Gaming

Context

This text is preceded by the following texts:

INFO GRAPH

Overview of the different scenarios possible for the development, simulation, testing, and gaming of actor stories using the oksimo software tool

Introduction

In the preceding post it has been explained how one can frame an actor story [AS] as a theory in the format of an evaluated theory Tε with algorithmic intelligence: Tε,α = <M,∑,ε,α>.

In the following text it will be explained which kinds of scenarios it will be possible to elaborate, simulate, test, and game with an actor story theory by using the oksimo software tool.

UNIVERSAL TEAM

The classical distinction between certain types of managers, special experts, and the rest of the world is given up here in favor of a stronger generalization: everybody is a potential expert with regard to a future which nobody knows. This is emphasized by the fact that everybody can use his or her usual mother tongue, a normal language, any language. Nothing more is needed.

BASIC MODELS (S, X)

As minimal elements for all possible applications it is assumed here that the experts define at least a given situation (state) [S] and a set of change rules [X].

The given state S is either (i) taken as it is or (ii) taken as a state which should be improved. In both cases the initial state S is called the start state [S0].

The change rules X describe possible changes which transform a given state S into a changed successor state S’.

A pair of S and X as (S,X) is called a basic model M(S,X). One can define as many models as one wants.

A DIRECTION BY A VISION V

A vision [V] can describe a possible state SV in an assumed future. If such a state SV is given, then this state becomes a goal state SGoal. In this case we assume V ≠ 0. If no explicit goal is given, then we assume V = 0.

DEVELOPMENT BY GOALS

If a vision is given (V ≠ 0), then the vision can be used to induce a direction which can/shall be approached by creating a set X which enables the generation of a sequence of states, with the start state S0 as first state, followed by successor states Si, until the goal state SGoal has been reached or at least the goal state is a subset of the reached state: SGoal ⊆ Sn.
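The goal-directed generation of a state sequence S0, S1, …, Sn with SGoal ⊆ Sn can be sketched as follows. The rule format (condition, Eplus, Eminus) and the toy rules are assumptions for illustration only, not the oksimo implementation:

```python
# Hedged sketch of goal-directed development: apply change rules X
# repeatedly from the start state S0 until SGoal is a subset of the
# reached state, or no rule changes anything any more.

def step(state, rules):
    """One simulation step: apply every rule whose condition holds."""
    new_state = set(state)
    for cond, e_plus, e_minus in rules:
        if cond <= state:                       # condition satisfied?
            new_state = (new_state - e_minus) | e_plus
    return new_state

def run_to_goal(s0, rules, s_goal, max_steps=100):
    state = set(s0)
    for i in range(max_steps):
        if s_goal <= state:
            return state, i                     # goal reached after i steps
        nxt = step(state, rules)
        if nxt == state:                        # fixpoint: no rule fires
            break
        state = nxt
    return state, None                          # goal not reached

# Toy example rules (illustrative content only):
rules = [
    ({"seed planted"}, {"plant grows"}, set()),
    ({"plant grows"}, {"harvest possible"}, set()),
]
final, steps = run_to_goal({"seed planted"}, rules, {"harvest possible"})
print(final, steps)
```

The fixpoint check makes the loop also usable for the steady-state case V = 0, where it simply runs until nothing changes.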

It is possible to use many basic models M(S,X) in parallel and for each model Mi one can define a different goal Vi (the typical situation in a pluralistic society).

Thus there can be many basic theories T(M,V) in parallel.

STEADY STATES (V = 0)

If no explicit visions are defined (V = 0) then every direction of change is allowed. A basic steady state theory T(M,V) with V = 0 can be written as T(M,0). Whether such a case can be of interest is not clear at the moment.

BASIC INTERACTION PATTERNS

The following interaction modes are assumed as typical cases:

  1. N-1: Within an online session an interactive webpage with the oksimo software is active and the whole group can interact with the oksimo software tool.
  2. N-N-1: N-many participants can individually log in to the interactive oksimo website and, once logged in, collaborate within the oksimo software on one project.
  3. N-N-N: N-many participants can individually log in to the interactive oksimo website, where everybody can run his or her own process or collaborate in various ways.

The default case is case (1). The exact dates for the availability of modes (2)-(3) depend on how fast the roadmap can be realized.

BASIC APPLICATIONS
  1. Exploring Simulation-Based Development [ESBD] (V ≠ 0): If the main goal is to find a path from a given state today S (Now) to an envisioned state V in the future, then one has to collect appropriate change rules X to approach the final goal state SGoal better and better. Activating the simulator ∑ at will during the search and construction phase can be of great help, especially if the documents (S, X, V) become more and more complex.
  2. Embedded Simulation-Based Testing [ESBT] (V ≠ 0): If a basic actor story theory T(M,V) is given with a goal (V ≠ 0), then it is of great help if the simulation is run in interactive mode, where the simulator does not apply the change rules by itself but asks the different logged-in users which rule they want to apply and how. These tests show not only which kinds of errors occur but can also show, over n-many repetitions, to which degree a user can learn to behave task-conform. If the tests do not show the expected outcomes, this can point to possible deficiencies of the software as well as to specialties of the user.
  3. Embedded Simulation-Based Gaming [ESBG] (V ≠ 0): The case of gaming is partially different from the case of testing. Although it is assumed here too that at least one vision (goal) is given, it is additionally assumed that there exists a competition between different players or different teams. Different from testing, in gaming there exists, according to the goal(s), the role of a winner: the player/team which reaches a defined goal state before the other players/teams has won. As a side effect of gaming one can also evaluate the playing environment and give some feedback to the developers.
ALGORITHMIC INTELLIGENCE
  1. Case ESBD, T(S,X,V,∑,ε,α): Because a normal simulation with the simulator always produces only one path from the start state to the goal state, it is desirable to have an algorithm α which can run on demand as many times as wanted, searching for all possible paths and at the same time looking for those derivations where the goal state satisfies, with ε, certain special requirements. Thus the application of α to a given model M with the vision V would generate the set SV* of all those final states which satisfy the special requirements.
  2. Case ESBG, T(S,X,V,∑,ε,α): The case of gaming allows at least three kinds of interesting applications for algorithmic intelligence: (i) introduce non-biological players with learning capabilities which can act simultaneously with the biological players; (ii) introduce non-biological players with learning capabilities which have to learn how to support, assist, and train biological players; this second case addresses the challenging task of developing algorithmic tutors for several kinds of learning tasks; (iii) another variant of case (ii) is to enable the development of a personal algorithmic assistant who works with only one person on a long-term basis.

The kinds of algorithmic intelligence in (2)(i)-(iii) are different from the algorithmic intelligence α mentioned in (1).
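The path-enumerating algorithm α of case (1) can be hinted at with a simple breadth-first search over the state space. Note that pruning already visited states means this sketch enumerates paths to distinct states only; the names and the rule format are assumptions, not the oksimo implementation:

```python
# Sketch of alpha: instead of one simulation path, search the state
# space breadth-first and collect every path whose final state
# satisfies the vision (here: s_goal as a subset test).

from collections import deque

def all_goal_paths(s0, rules, s_goal, max_depth=10):
    start = frozenset(s0)
    queue = deque([(start, [start])])
    seen = {start}                      # prune already visited states
    goal_paths = []
    while queue:
        state, path = queue.popleft()
        if s_goal <= state:
            goal_paths.append(path)     # this path reaches the goal
            continue
        if len(path) > max_depth:
            continue
        for cond, e_plus, e_minus in rules:
            if cond <= state:           # each firing rule opens a branch
                nxt = frozenset((state - e_minus) | e_plus)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [nxt]))
    return goal_paths

# Toy rules giving two direct routes A->B->goal and A->C->goal:
rules = [
    ({"A"}, {"B"}, set()),
    ({"A"}, {"C"}, set()),
    ({"B"}, {"goal"}, set()),
    ({"C"}, {"goal"}, set()),
]
paths = all_goal_paths({"A"}, rules, {"goal"})
print(len(paths))  # 3 distinct goal-reaching paths in this toy example
```

A real α would additionally score each final state with ε (energy, cost, time, …) to single out the ‘good candidates’; that evaluation step is omitted here.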

TYPES OF ACTORS

As the default standard case of an actor it is assumed that there are biological actors, usually human persons, which will not be analyzed with respect to their inner structure [IS]. While the behavior of every system (and therefore of any biological system too) can be described with a behavior function φ: I × IS → IS × O (if one has all the necessary knowledge), in the default case of biological systems no behavior function φ is specified: φ = 0. During interactive simulations biological systems act by themselves.

If non-biological actors are used (e.g. automata with a certain machine program, i.e. an algorithm), then one can use these only if one has a fully specified behavior function φ. From this it follows that a change rule which is associated with a non-biological actor has in its Eplus and Eminus parts not a concrete expression but a variable, which will be computed during the simulation by the non-biological actor depending on its input and its behavior function φ: φ(input)IS = (Eplus, Eminus)IS.
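A minimal sketch of such a rule with variable Eplus/Eminus parts, assuming a toy thermostat as the non-biological actor (all names are illustrative, not the oksimo API):

```python
# A change rule bound to a non-biological actor carries no fixed
# Eplus/Eminus; the actor's fully specified behavior function phi
# computes them during the simulation: phi(input) = (Eplus, Eminus).

def thermostat_phi(observed):
    """A fully specified behavior function for a machine actor."""
    if "temperature low" in observed:
        return {"heating on"}, {"heating off"}   # (Eplus, Eminus)
    return {"heating off"}, {"heating on"}

def apply_actor_rule(state, phi):
    e_plus, e_minus = phi(state)                 # computed, not fixed
    return (state - e_minus) | e_plus

s = {"temperature low", "heating off"}
s = apply_actor_rule(s, thermostat_phi)
print(s)  # now contains 'heating on' and no longer 'heating off'
```

In contrast to the static rules of the earlier posts, the same rule object here produces different (Eplus, Eminus) pairs depending on the actor's input.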

FINAL COMMENT

Everybody who has read parts (1)-(4) now has a general knowledge of the motivation for developing the oksimo software tool, namely to support humankind in communicating and thinking better about possible futures, and a first understanding (hopefully :-)) of how this tool can work. Reading the UN Sustainable Development Goals [SDGs] [1] you will learn that SDG4 (Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all) is fundamental to all other SDGs. The oksimo software tool is one tool intended to help reach these goals.

REFERENCES

[1] The 2030 Agenda for Sustainable Development, adopted by all United Nations Member States in 2015, provides a shared blueprint for peace and prosperity for people and the planet, now and into the future. At its heart are the 17 Sustainable Development Goals (SDGs), which are an urgent call for action by all countries – developed and developing – in a global partnership. They recognize that ending poverty and other deprivations must go hand-in-hand with strategies that improve health and education, reduce inequality, and spur economic growth – all while tackling climate change and working to preserve our oceans and forests. See PDF: https://sdgs.un.org/sites/default/files/publication/21252030%20Agenda%20for%20Sustainable%20Development%20web.pdf

[2] UN, SDG4, PDF, argumentation why SDG4 is fundamental for all other SDGs: https://sdgs.un.org/sites/default/files/publications/2275sdbeginswitheducation.pdf


HMI Analysis for the CM:MI paradigm. Part 2. Problem and Vision

Integrating Engineering and the Human Factor (info@uffmm.org)
eJournal uffmm.org ISSN 2567-6458, February 27-March 16, 2021,
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: March 16, 2021 (minor corrections)

HISTORY

As described in the uffmm eJournal  the wider context of this software project is an integrated  engineering theory called Distributed Actor-Actor Interaction [DAAI] further extended to the Collective Man-Machine Intelligence [CM:MI] paradigm.  This document is part of the Case Studies section.

HMI ANALYSIS, Part 2: Problem & Vision

Context

This text is preceded by the following texts:

Introduction

Before one starts the HMI analysis, some stakeholders (in our case the users are stakeholders as well as users in one role) have to present some given situation, classifiable as a ‘problem’, to depart from, and a vision as the envisioned goal to be realized.

Here we give a short description of the problem for the CM:MI paradigm and the vision, what should be gained.

Problem: Mankind on the Planet Earth

In this project mankind on the planet earth is understood as the primary problem. ‘Mankind’ is seen here as the life form called homo sapiens. Based on the findings of biological evolution one can state that homo sapiens has, besides many other wonderful capabilities, at least two extraordinary capabilities:

Outside to Inside

The whole body with the brain is able to convert continuously body-external events into internal, neural events. And the brain inside the body receives many events inside the body as external events too. Thus in the brain we can observe a mix of body-external (outside 1) and body-internal events (outside 2), realized as a set of billions of highly interrelated neural processes. Most of these neural processes are unconscious; a small part is conscious. Nevertheless these unconscious and conscious events are neurally interrelated. This overall conversion from outside 1 and outside 2 into neural processes can be seen as a mapping. As we know today from biology, psychology, and the brain sciences, this mapping is not a 1-1 mapping. The brain does all the time a kind of filtering, mostly unconscious, sorting out only those events which are judged by the brain to be important. Furthermore the brain is time-slicing all its sensory inputs and storing these time-slices (called ‘memories’), whereby these time-slices again are not 1-1 copies. The storing of time-slices is a complex (unconscious) process with many kinds of operations like structuring, associating, abstracting, evaluating, and more. From this one can deduce that the content of an individual brain and the surrounding reality of the own body, as well as the world outside the own body, can be highly different. All kinds of perceived and stored neural events which are or can become conscious are here called conscious cognitive substrates or cognitive objects.

Inside to Outside (to Inside)

Generally it is known that homo sapiens can produce with its body events which have some impact on the world outside the body. One kind of such events is the production of all kinds of movements, including gestures, running, grasping with hands, painting, and writing, as well as sounds by the voice. Of special interest here are forms of communication between different humans, and more specifically those communications enabled by the spoken sounds of a language as well as the written signs of a language. Spoken sounds as well as written signs are here called expressions associated with a known language. Expressions as such have no meaning (a non-speaker of a language L can hear or see expressions of the language L but will never understand anything). But as everyday experience shows, nearly every child starts very soon to learn which kinds of expressions belong to a language and with what kinds of shared experiences they can be associated. This learning is related to many complex neural processes which map expressions internally onto conscious and unconscious cognitive objects (including expressions!). This mapping builds up an internal meaning function from expressions into cognitive objects and vice versa. Because expressions have a dual face (being internal neural structures as well as body-outside events, by conversion from the inside to the body-outside), it is possible that a homo sapiens can transmit its internal encoding of cognitive objects into expressions from the inside to the outside, and thereby another homo sapiens can perceive the produced outside expression and map this outside expression onto an internal expression.
As far as the meaning function of the receiving homo sapiens is sufficiently similar to the meaning function of the sending homo sapiens, there exists some probability that the receiving homo sapiens can activate from its memory cognitive objects which have some similarity with those of the sending homo sapiens.

Although we know today of different kinds of animals having some form of language, there is no species known which is, with regard to language, comparable to homo sapiens. This explains to a large extent why the homo sapiens population was able to cooperate in a way which not only can include many persons but can also stretch through long periods of time and can include highly complex cognitive objects and associated behavior.

Negative Complexity

In 2006 I introduced the term negative complexity in my writings to describe the fact that in the world surrounding an individual person there is an amount of language-encoded meaning available which is beyond the capacity of an individual brain to process. Thus, whatever kind of experience or knowledge is accumulated in libraries and databases, if the negative complexity grows higher and higher, this knowledge can no longer help individual persons, whole groups, or whole populations to make constructive use of it. What happens is that the intended well-structured ‘sound’ of knowledge is turned into a noisy environment which crashes all kinds of intended structures into nothing, or into badly deformed somethings.

Entangled Humans

From quantum mechanics we know the idea of entangled states. But we need not dig into quantum mechanics to find other phenomena which manifest entangled states. Look around in your everyday world. There exist many occasions where a human person is acting in a situation, but the bodily separateness is a fake. While sitting before a laptop in a room, the person is communicating within an online session with other persons. And depending on the social role, the membership in some social institution, and being part of some project, this person will talk, perceive, feel, decide, etc. with regard to the known rules of these social environments, which are represented as cognitive objects in its brain. Thus by knowledge, by cognition, the individual person is in its situation completely entangled with other persons who know these roles and rules and thereby follow these rules in their behavior too. Sitting with the body in a certain physical location somewhere on the planet does not matter in this moment. The primary reality is the cognitive space in the brains of the participating persons.

If you continue looking around in your everyday world you will probably detect that the everyday world is full of different kinds of cognitively induced entangled states of persons. These internalized structures function like protocols, like scripts, like rules in a game, telling everybody what is expected of him/her/x, and to the extent that people adhere to such internalized protocols, daily life has some structure, has some stability, and enables the planning of behavior where cooperation between different persons is necessary. In a cognitively enabled entangled state the individual person becomes a member of something greater, becoming a super person. Entangled persons can do things which usually are not possible as long as one works as a pure individual person.[1]

Entangled Humans and Negative Complexity

Although entangled human persons can in principle enable more complex events, structures, processes, engineering, and cultural work than single persons, human entanglement is still limited by brain capacities as well as by the limits of normal communication. Increasing the amount of meaning-relevant artifacts or increasing the velocity of communication events makes things even worse. There are objective limits to human processing, which can run into negative complexity.

Future is not Waiting

The term ‘future‘ is cognitively empty: there exists nowhere an object which can be called ‘future’. What we have is some local actual presence (the Now), which the body turns into internal representations of some kind (becoming the Past), but something like a future does not exist anywhere. Our knowledge about the future is radically zero.

Nevertheless, because our bodies are part of a physical world (planet, solar system, …) and our entangled scientific work has identified some regularities of this physical world, these regularities can be used for some predictions of what could happen, with some probability, as assumed states where our clocks show a different time stamp. But because there are many processes running in parallel, composed of billions of parameters which can be tuned in many directions, a really good forecast is not simple and depends on many presuppositions.

Since the appearance of homo sapiens some hundred thousand years ago in Africa, homo sapiens has become a game changer which makes all computations nearly impossible. Not at the beginning of its appearance, but in the course of time homo sapiens enlarged its numbers, improved its skills in more and more areas, and meanwhile we know that homo sapiens has indeed started to crash more and more of the conditions of its own life. And principled thinking points out that homo sapiens could even crash more than only planet earth. Every exemplar of homo sapiens has a built-in freedom which allows it at any time to decide to behave in a different way (although in everyday life we mostly follow some protocols). And this built-in freedom is guided by actual knowledge, by emotions, and by available resources. The same child can become a great musician, a great mathematician, a philosopher, a great political leader, an engineer, … but if the child is given no resources, deprived of important social contexts, or given the wrong knowledge, it cannot manifest its freedom in full richness. As a human population we need the best out of all children.

Because  the processing of the planet, the solar system etc.  is going on, we are in need of good forecasts of possible futures, beyond our classical concepts of sharing knowledge. This is where our vision enters.

VISION: DEVELOPING TOGETHER POSSIBLE FUTURES

To find possible and reliable shapes of possible futures we have to exploit all experiences, all knowledge, all ideas, all kinds of creativity, by using maximal diversity. Because present knowledge can be false (as history tells us), we should not rule out all those ideas which seem too crazy at first glance. Real innovations are always different from what we are used to at the time. Thus the following text is a first rough outline of the vision:

  1. Find a format
  2. which allows any kind of people
  3. for any kind of given problem
  4. with at least one vision of a possible improvement
  5. together
  6. to search for and find a path leading from the given problem (Now) to the envisioned improved state (future).
  7. For all needed communication any kind of everyday language should be enough.
  8. As needed, this everyday language should be extendable with special expressions.
  9. These considerations about possible paths into the wanted envisioned future state should be continuously supported by appropriate automatic simulations of such a path.
  10. These simulations should include automatic evaluations based on the given envisioned state.
  11. As far as possible, adaptive algorithms should be available to support the search for and identification of the best cases (referenced by the visions) within human planning.

REFERENCES or COMMENTS

[1] One of the most common entangled states in daily life is the usage of normal language! A normal language L works only because the rules of usage of this language L are shared by all speaker-hearers of this language, and these rules are explicit cognitive structures (not necessarily conscious; mostly unconscious!).

Continuation

Yes, it will happen 🙂 Here.


komega-v08a. First complete version with simulation

Journal: uffmm.org,
ISSN 2567-6458, Sept-16, 2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email:gerd@doeben-henisch.de

ABSTRACT

A first minimal version of a simulator is working. Another solution, in version komega-v07a, was too bad and has been thrown into the ‘trash’, and now we have komega-v08a. You can get the real idea in nucleus form. Many things have to be improved and will be improved (not before the end of October :-)). An important improvement was the inclusion of set-theoretical data structures and operators.

SOURCE CODE AS PDF

komega-v08a

KOMEGA REQUIREMENTS No.4, Version 5. Basic Application Scenario

ISSN 2567-6458, 13.September  2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

As described in the uffmm eJournal the wider context of this software project is a generative theory of cultural anthropology [GCA], which is an extension of the engineering theory called Distributed Actor-Actor Interaction [DAAI]. In the Case Studies section of the uffmm eJournal there is also a section about Python co-learning (mainly dealing with Python programming) and a section about a web server with Dragon. This document will be part of the Case Studies section.

PDF DOCUMENT

requirements-no4-v5-13Sept2020

The only change from the preceding version is a short description of the first simple simulator cycle which will be used in the parallel Python program.