Time: Oct 18, 2023 — Oct 24, 2023. Author: Gerd Doeben-Henisch. Email: gerd@doeben-henisch.de
CONTEXT
This post is part of the uffmm science blog. It is a translation from the German source: https://www.cognitiveagent.org/2023/10/18/schmerz-ersetzt-nicht-die-wahrheit/. For the translation I have used chatGPT4 and deepl.com. Because the word ‘Hamas’ occurs in the text, chatGPT did not translate a long paragraph containing this word. The algorithm is thus somehow ‘biased’ by a certain kind of training. This is really bad, because the following text offers some reflections about a situation in which someone ‘hates’ others. Hatred is one of our biggest ‘diseases’ today.
Preface
The Hamas terrorist attack on Israeli citizens on October 7, 2023, has shaken the world. For years, terrorist acts have been shaking our world. In front of our eyes, a state has been attempting, since 2022 (actually since 2014), to brutally eradicate the entire Ukrainian population. Similar events have been and are taking place in many other regions of the world…
… Pain does not replace the truth [0]…
Truth is not automatic. Making truth available requires significantly more effort than remaining in a state of partial truth.
The probability that a person knows the truth or seeks the truth is smaller than the probability of remaining in a state of partial truth or outright falsehood.
Whether falsehood or truth predominates in a democracy depends on how a democracy shapes the process of truth-finding and the communication of truth. There is no automatic path to truth.
In a dictatorship, the likelihood of truth being available is extremely dependent on those who exercise centralized power. Absolute power, however, has already fundamentally broken with the truth (which does not exclude the possibility that this power can have significant effects).
The course of human history on planet Earth thus far has shown that there is evidently no simple, quick path that uniformly leads all people to a state of happiness. This must have to do with humans themselves—with us.
The interest in seeking truth, in cultivating truth, in a collective process of truth, has never been strong enough to overcome the everyday exclusions, falsehoods, hostilities, atrocities…
One’s own pain is terrible, but it does not help us to move forward…
Who even wants a future for all of us?????
[0] There is an overview article by the author from 2018, in which he presents 15 major texts from the blog “Philosophie Jetzt” ( “Philosophy Now”) ( “INFORMAL COSMOLOGY. Part 3a. Evolution – Truth – Society. Synopsis of previous contributions to truth in this blog” ( https://www.cognitiveagent.org/2018/03/20/informelle-kosmologie-teil-3a-evolution-wahrheit-gesellschaft-synopse-der-bisherigen-beitraege-zur-wahrheit-in-diesem-blog/ )), in which the matter of truth is considered from many points of view. In the 5 years since, society’s treatment of truth has continued to deteriorate dramatically.
Hate cancels the truth
Truth is related to knowledge. However, in humans, knowledge most often is subservient to emotions. Whatever we may know or wish to know, when our emotions are against it, we tend to suppress that knowledge.
One form of emotion is hatred. The destructive impact of hatred has accompanied human history like a shadow, leaving a trail of devastation everywhere it goes: in the hater themselves and in their surroundings.
The event of the inhumane attack on October 7, 2023 in Israel, claimed by Hamas, is unthinkable without hatred.
If one traces the history of Hamas since its founding in 1987 [1,2], one can see that hatred was laid down as an essential element already at its founding. This hatred is joined by the element of a religious interpretation which calls itself Islamic, but which represents a special, very radicalized and at the same time fundamentalist form of Islam.
The history of the state of Israel is complex, and the history of Judaism is no less so. The fact that today’s Judaism also contains strong components that are clearly fundamentalist and to which hatred is not alien leads, together with many other factors, at the core to a constellation of fundamentalist antagonisms on both sides that do not in themselves reveal any approaches to a solution. The many other people in Israel and Palestine ‘around them’ are part of these ‘fundamentalist force fields’, which simply evaporate humanity and truth in their vicinity. The trail of blood makes this reality visible.
Both Judaism and Islam have produced wonderful things, but what does all this mean in the face of a burning hatred that pushes everything aside, that sees only itself?
[1] Jeffrey Herf, Sie machen den Hass zum Weltbild, FAZ 20.Okt. 23, S.11 (Abriss der Geschichte der Hamas und ihr Weltbild, als Teil der größeren Geschichte) (Translation:They make hatred their worldview, FAZ Oct. 20, 23, p.11 (outlining the history of Hamas and its worldview, as part of the larger story)).
[2] Joachim Krause, Die Quellen des Arabischen Antisemitismus, FAZ, 23.10.2023,p.8 (This text “The Sources of Arab Anti-Semitism” complements the account by Jeffrey Herf. According to Krause, Arab anti-Semitism has been widely disseminated in the Arab world since the 1920s/ 30s via the Muslim Brotherhood, founded in 1928).
A society in decline
When truth diminishes and hatred grows (and, indirectly, trust evaporates), a society is in free fall. There is no remedy for this; the use of force cannot heal it, only worsen it.
The mere fact that we believe that lack of truth, dwindling trust, and above all manifest hatred can only be eradicated through violence shows how seriously we take these phenomena and, at the same time, how helpless we feel in the face of these attitudes.
In a world whose survival is linked to the availability of truth and trust, it is a piercing alarm signal to observe how difficult it is for us as humans to deal with the absence of truth and to confront hatred.
Is Hatred Incurable?
When we observe how tenaciously hatred persists in humanity, how unimaginably cruel actions driven by hatred can be, and how helpless we humans seem in the face of hatred, one might wonder if hatred is ultimately not a kind of disease—one that threatens the hater themselves and, particularly, those who are hated with severe harm, ultimately death.
With typical diseases, we have learned to search for remedies that can free us from the illness. But what about a disease like hatred? What helps here? Does anything help? Must we, like in earlier times with people afflicted by deadly diseases (like the plague), isolate, lock away, or send away those who are consumed by hatred to some no man’s land? … but everyone knows that this isn’t feasible… What is feasible? What can combat hatred?
After approximately 300,000 years of Homo sapiens on this planet, we seem strangely helpless in the face of the disease of hatred.
What’s even worse is that there are other people who see in every hater a potential tool to redirect that hatred toward goals they want to damage or destroy, using suitable manipulation. Thus, hatred does not disappear; on the contrary, it feels justified, and new injustices fuel the emergence of new hatred… the disease continues to spread.
One of the greatest events in the entire known universe—the emergence of mysterious life on this planet Earth—has a vulnerable point where this life appears strangely weak and helpless. Throughout history, humans have demonstrated their capability for actions that endure for many generations, that enable more people to live fulfilling lives, but in the face of hatred, they appear oddly helpless… and the one consumed by hatred is left incapacitated, incapable of anything else… plummeting into their dark inner abyss…
Instead of hatred, we need (minimally and in outline):
Water: To sustain human life, along with the infrastructure to provide it, and individuals to maintain that infrastructure. These individuals also require everything they need for their own lives to fulfill this task.
Food: To sustain human life, along with the infrastructure for its production, storage, processing, transportation, distribution, and provision. Individuals are needed to oversee this infrastructure, and they, too, require everything they need for their own lives to fulfill this task.
Shelter: To provide a living environment, including the infrastructure for its creation, provisioning, maintenance, and distribution. Individuals are needed to manage this provision, and they, too, require everything they need for their own lives to fulfill this task.
Energy: For heating, cooling, daily activities, and life itself, along with the infrastructure for its generation, provisioning, maintenance, and distribution. Individuals are needed to oversee this infrastructure, and they, too, require everything they need for their own lives to fulfill this task.
Authorization and Participation: To access water, food, shelter, and energy. This requires an infrastructure of agreements, and individuals to manage these agreements. These individuals also require everything they need for their own lives to fulfill this task.
Education: To be capable of undertaking and successfully completing tasks in real life. This necessitates individuals with enough experience and knowledge to offer and conduct such education. These individuals also require everything they need for their own lives to fulfill this task.
Medical Care: To help with injuries, accidents, and illnesses. This requires individuals with sufficient experience and knowledge to offer and provide medical care, as well as the necessary facilities and equipment. These individuals also require everything they need for their own lives to fulfill this task.
Communication Facilities: So that everyone can receive helpful information needed to navigate their world effectively. This requires suitable infrastructure and individuals with enough experience and knowledge to provide such information. These individuals also require everything they need for their own lives to fulfill this task.
Transportation Facilities: So that people and goods can reach the places they need to go. This necessitates suitable infrastructure and individuals with enough experience and knowledge to offer such infrastructure. These individuals also require everything they need for their own lives to fulfill this task.
Decision Structures: To mediate the diverse needs and necessary services in a way that ensures most people have access to what they need for their daily lives. This requires suitable infrastructure and individuals with enough experience and knowledge to offer such infrastructure. These individuals also require everything they need for their own lives to fulfill this task.
Law Enforcement: To ensure disruptions and damage to the infrastructure necessary for daily life are resolved without creating new disruptions. This requires suitable infrastructure and individuals with enough experience and knowledge to offer such services. These individuals also require everything they need for their own lives to fulfill this task.
Sufficient Land: To provide enough space for all these requirements, along with suitable soil (for water, food, shelter, transportation, storage, production, etc.).
Suitable Climate
A functioning ecosystem.
A capable scientific community to explore and understand the world.
Suitable technology to accomplish everyday tasks and support scientific endeavors.
Knowledge in the minds of people to understand daily events and make responsible decisions.
Goal orientations (preferences, values, etc.) in the minds of people to make informed decisions.
Ample time and peace to allow these processes to occur and produce results.
Strong and lasting relationships with other population groups pursuing the same goals.
Sufficient commonality among all population groups on Earth to address their shared needs where they are affected.
A sustained positive and constructive competition for those goal orientations that make life possible and viable for as many people on this planet (in this solar system, in this galaxy, etc.) as possible.
The freedom present within the experiential world, included within every living being, especially within humans, should be given as much room as possible, as it is this freedom that can overcome false ideas from the past in the face of a constantly changing world, enabling us to potentially thrive in the world of the future.
Attention: This text has been translated from a German source using the software deepL for roughly 97–99% of the text. The diagrams of the German version have been left out.
CONTEXT
This text represents the outline of a talk given at the conference “AI – Text and Validity. How do AI text generators change scientific discourse?” (August 25/26, 2023, TU Darmstadt). [1] A publication of all lectures is planned by the publisher Walter de Gruyter by the end of 2023/beginning of 2024. This publication will be announced here then.
Start of the Lecture
Dear audience,
This conference entitled “AI – Text and Validity. How do AI text generators change scientific discourses?” is centrally devoted to scientific discourses and the possible influence of AI text generators on these. However, the hot core ultimately remains the phenomenon of text itself, its validity.
At this conference, many of the different views that are possible on this topic are presented.
TRANSDISCIPLINARY
My contribution to the topic tries to define the role of the so-called AI text generators by embedding the properties of ‘AI text generators’ in a ‘structural conceptual framework’ within a ‘transdisciplinary view’. This helps to highlight the specifics of scientific discourses. This can then lead further to better ‘criteria for an extended assessment’ of AI text generators in their role for scientific discourses.
An additional aspect is the question of the structure of ‘collective intelligence’ using humans as an example, and how this can possibly unite with an ‘artificial intelligence’ in the context of scientific discourses.
‘Transdisciplinary’ in this context means to span a ‘meta-level’ from which it should be possible to describe today’s ‘diversity of text productions’ in a way that is expressive enough to distinguish ‘AI-based’ text production from ‘human’ text production.
HUMAN TEXT GENERATION
The formulation ‘scientific discourse’ is a special case of the more general concept ‘human text generation’.
This change of perspective is meta-theoretically necessary, since at first sight it is not the ‘text as such’ that decides about ‘validity and non-validity’, but the ‘actors’ who ‘produce and understand texts’. And with the occurrence of ‘different kinds of actors’ – here ‘humans’, there ‘machines’ – one cannot avoid addressing exactly those differences – if there are any – that play a weighty role in the ‘validity of texts’.
TEXT CAPABLE MACHINES
With the distinction between two different kinds of actors – here ‘humans’, there ‘machines’ – a first ‘fundamental asymmetry’ immediately strikes the eye: so-called ‘AI text generators’ are entities that have been ‘invented’ and ‘built’ by humans, it is furthermore humans who ‘use’ them, and the essential material used by so-called AI generators are again ‘texts’ that are considered a ‘human cultural property’.
In the case of so-called ‘AI-text-generators’, we shall first state only this much, that we are dealing with ‘machines’, which have ‘input’ and ‘output’, plus a minimal ‘learning ability’, and whose input and output can process ‘text-like objects’.
BIOLOGICAL — NON-BIOLOGICAL
On the meta-level, then, we assume, on the one hand, actors which are minimally ‘text-capable machines’ – completely human products – and, on the other hand, actors we call ‘humans’. Humans, as a ‘homo-sapiens population’, belong to the set of ‘biological systems’, while ‘text-capable machines’ belong to the set of ‘non-biological systems’.
BLANK INTELLIGENCE TERM
The transformation of the term ‘AI text generator’ into the term ‘text capable machine’ undertaken here is intended to additionally illustrate that the widespread use of the term ‘AI’ for ‘artificial intelligence’ is rather misleading. So far there exists no general concept of ‘intelligence’ in any scientific discipline that can be applied and accepted beyond individual disciplines. There is no real justification for the almost inflationary use of the term AI today other than that the term has been so drained of meaning that it can be used anytime, anywhere, without saying anything wrong. Something that has no meaning can be neither ‘true’ nor ‘false’.
PREREQUISITES FOR TEXT GENERATION
If now the homo-sapiens population is identified as the original actor for ‘text generation’ and ‘text comprehension’, it shall now first be examined which are ‘those special characteristics’ that enable a homo-sapiens population to generate and comprehend texts and to ‘use them successfully in the everyday life process’.
VALIDITY
A connecting point for the investigation of the special characteristics of a homo-sapiens text generation and a text understanding is the term ‘validity’, which occurs in the conference topic.
In the primary arena of biological life, in everyday processes, in everyday life, the ‘validity’ of a text has to do with ‘being correct’, being ‘applicable’. If a text is not planned from the beginning with a ‘fictional character’, but with a ‘reference to everyday events’, which everyone can ‘check’ in the context of his ‘perception of the world’, then ‘validity in everyday life’ has to do with the fact that the ‘correctness of a text’ can be checked. If the ‘statement of a text’ is ‘applicable’ in everyday life, if it is ‘correct’, then one also says that this statement is ‘valid’, one grants it ‘validity’, one also calls it ‘true’. Against this background, one might be inclined to continue and say: ‘If’ the statement of a text ‘does not apply’, then it has ‘no validity’; simplified to the formulation that the statement is ‘not true’ or simply ‘false’.
In ‘real everyday life’, however, the world is rarely ‘black’ and ‘white’: it is not uncommon that we are confronted with texts to which we are inclined to ascribe ‘a possible validity’ because of their ‘learned meaning’, although it may not be at all clear whether there is – or will be – a situation in everyday life in which the statement of the text actually applies. In such a case, the validity would then be ‘indeterminate’; the statement would be ‘neither true nor false’.
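To make this three-valued notion of ‘validity’ a little more concrete, here is a minimal sketch in Python; the function name, the example statements, and the idea of representing an actor’s checked everyday situations as a simple dictionary are illustrative assumptions of this sketch, not part of the original text. A statement that can be checked against an available everyday situation is ‘true’ or ‘false’; a statement for which no such situation is (yet) available stays ‘undetermined’.

```python
from typing import Optional

def validity(statement: str, situations: dict[str, bool]) -> Optional[bool]:
    """Return True/False if the statement can be checked against a known
    everyday situation, or None if its validity is currently undetermined."""
    return situations.get(statement)  # None: no matching everyday situation available

# Hypothetical everyday observations of one actor
observed = {
    "it is raining in front of my house": True,
    "the shop at the corner is open": False,
}

print(validity("it is raining in front of my house", observed))     # True  -> valid
print(validity("the shop at the corner is open", observed))         # False -> not valid
print(validity("the new tram line will open next year", observed))  # None  -> undetermined
```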
ASYMMETRY: APPLICABLE – NOT APPLICABLE
One can recognize a certain asymmetry here: The ‘applicability’ of a statement, its actual validity, is comparatively clear. The ‘not being applicable’, i.e. a ‘merely possible’ validity, on the other hand, is difficult to decide.
With this phenomenon of the ‘current non-decidability’ of a statement we touch both the problem of the ‘meaning’ of a statement — how far is at all clear what is meant? — as well as the problem of the ‘unfinishedness of our everyday life’, better known as ‘future’: whether a ‘current present’ continues as such, whether exactly like this, or whether completely different, depends on how we understand and estimate ‘future’ in general; what some take for granted as a possible future, can be simply ‘nonsense’ for others.
MEANING
This tension between ‘currently decidable’ and ‘currently not yet decidable’ additionally clarifies an ‘autonomous’ aspect of the phenomenon of meaning: if a certain knowledge has been formed in the brain and has been made usable as ‘meaning’ for a ‘language system’, then this ‘associated’ meaning gains its own ‘reality’ for the scope of knowledge: it is not the ‘reality beyond the brain’, but the ‘reality of one’s own thinking’, whereby this reality of thinking ‘seen from outside’ has something like ‘being virtual’.
If one wants to talk about this ‘special reality of meaning’ in the context of the ‘whole system’, then one has to resort to far-reaching assumptions in order to be able to install a ‘conceptual framework’ on the meta-level which is able to sufficiently describe the structure and function of meaning. For this, the following components are minimally assumed (‘knowledge’, ‘language’ as well as ‘meaning relation’):
KNOWLEDGE: There is the totality of ‘knowledge’ that ‘builds up’ in the homo-sapiens actor in the course of time in the brain: both due to continuous interactions of the ‘brain’ with the ‘environment of the body’, as well as due to interactions ‘with the body itself’, as well as due to interactions ‘of the brain with itself’.
LANGUAGE: To be distinguished from knowledge is the dynamic system of ‘potential means of expression’, here simplistically called ‘language’, which can unfold over time in interaction with ‘knowledge’.
MEANING RELATIONSHIP: Finally, there is the dynamic ‘meaning relation’, an interaction mechanism that can link any knowledge elements to any language means of expression at any time.
Each of these mentioned components ‘knowledge’, ‘language’ as well as ‘meaning relation’ is extremely complex; no less complex is their interaction.
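As a purely structural illustration of these three components and their interaction, the following minimal Python sketch may help; all concrete names (Actor, learn, associate) are hypothetical and only meant to show that the meaning relation is a dynamic mapping which can link any means of expression to any knowledge elements at any time.

```python
from dataclasses import dataclass, field

@dataclass
class Actor:
    """Minimal structural sketch of one homo-sapiens actor (hypothetical names)."""
    knowledge: set[str] = field(default_factory=set)            # knowledge built up over time
    language: set[str] = field(default_factory=set)             # potential means of expression
    meaning: dict[str, set[str]] = field(default_factory=dict)  # expression -> knowledge elements

    def learn(self, knowledge_element: str) -> None:
        """A new knowledge element is built up (via environment, body, or thinking)."""
        self.knowledge.add(knowledge_element)

    def associate(self, expression: str, knowledge_element: str) -> None:
        """The meaning relation: dynamically link an expression to a knowledge element."""
        self.language.add(expression)
        self.meaning.setdefault(expression, set()).add(knowledge_element)

a = Actor()
a.learn("memory of yesterday's rain")
a.associate("rain", "memory of yesterday's rain")
print(a.meaning["rain"])  # {"memory of yesterday's rain"}
```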
FUTURE AND EMOTIONS
In addition to the phenomenon of meaning, it also became apparent that the decision about applicability depends on an ‘available everyday situation’ in which a current correspondence can be ‘concretely shown’ or not.
If, in addition to a ‘conceivable meaning’ in the mind, we do not currently have any everyday situation that sufficiently corresponds to this meaning in the mind, then there are always two possibilities: We can give the ‘status of a possible future’ to this imagined construct despite the lack of reality reference, or not.
If we decide to assign the status of a possible future to a ‘meaning in the head’, then two requirements usually arise: (i) Can it be made sufficiently plausible, in the light of the available knowledge, that the ‘imagined possible situation’ can be ‘transformed into a new real situation’ in the ‘foreseeable future’, starting from the current real situation? And (ii) are there ‘sustainable reasons’ why one should ‘want and affirm’ this possible future?
The first requirement calls for a powerful ‘science’ that sheds light on whether it can work at all. The second demand goes beyond this and brings the seemingly ‘irrational’ aspect of ’emotionality’ into play under the garb of ‘sustainability’: it is not simply about ‘knowledge as such’, it is also not only about a ‘so-called sustainable knowledge’ that is supposed to contribute to supporting the survival of life on planet Earth — and beyond –, it is rather also about ‘finding something good, affirming something, and then also wanting to decide it’. These last aspects are so far rather located beyond ‘rationality’; they are assigned to the diffuse area of ’emotions’; which is strange, since any form of ‘usual rationality’ is exactly based on these ’emotions’.[2]
SCIENTIFIC DISCOURSE AND EVERYDAY SITUATIONS
In the context of ‘rationality’ and ’emotionality’ just indicated, it is not uninteresting that in the conference topic ‘scientific discourse’ is thematized as a point of reference to clarify the status of text-capable machines.
The question is to what extent a ‘scientific discourse’ can serve at all as a reference point for a successful text.
For this purpose it can help to be aware of the fact that life on this planet earth takes place at every moment in an inconceivably large amount of ‘everyday situations’, which all take place simultaneously. Each ‘everyday situation’ represents a ‘present’ for the actors. And in the heads of the actors there is an individually different knowledge about how a present ‘can change’ or will change in a possible future.
This ‘knowledge in the heads’ of the actors involved can generally be ‘transformed into texts’ which in different ways ‘linguistically represent’ some of the aspects of everyday life.
The crucial point is that it is not enough for everyone to produce a text ‘for himself’ alone, quite ‘individually’, but that everyone must produce a ‘common text’ together ‘with everyone else’ who is also affected by the everyday situation. A ‘collective’ performance is required.
Nor is it a question of ‘any’ text, but one that is such that it allows for the ‘generation of possible continuations in the future’, that is, what is traditionally expected of a ‘scientific text’.
From the extensive discussion — since the times of Aristotle — of what ‘scientific’ should mean, what a ‘theory’ is, what an ’empirical theory’ should be, I sketch what I call here the ‘minimal concept of an empirical theory’.
The starting point is a ‘group of people’ (the ‘authors’) who want to create a ‘common text’.
This text is supposed to have the property that it allows ‘justifiable predictions’ for possible ‘future situations’, to which then ‘sometime’ in the future a ‘validity can be assigned’.
The authors are able to agree on a ‘starting situation’ which they transform by means of a ‘common language’ into a ‘source text’ [A].
It is agreed that this initial text may contain only ‘such linguistic expressions’ which can be shown to be ‘true’ ‘in the initial situation’.
In another text, the authors compile a set of ‘rules of change’ [V] that put into words ‘forms of change’ for a given situation.
Also in this case it is considered as agreed that only ‘such rules of change’ may be written down, of which all authors know that they have proved to be ‘true’ in ‘preceding everyday situations’.
The text with the rules of change V is on a ‘meta-level’ compared to the text A about the initial situation, which is on an ‘object-level’ relative to the text V.
The ‘interaction’ between the text V with the change rules and the text A with the initial situation is described in a separate ‘application text’ [F]: Here it is described when and how one may apply a change rule (in V) to a source text A and how this changes the ‘source text A’ to a ‘subsequent text A*’.
The application text F is thus on a next higher meta-level relative to the two texts A and V and controls how the application of a change rule changes the source text A.
The moment a new subsequent text A* exists, the subsequent text A* becomes the new initial text A.
If the new initial text A is such that a change rule from V can be applied again, then the generation of a new subsequent text A* is repeated.
This ‘repeatability’ of the application can lead to the generation of many subsequent texts <A*1, …, A*n>.
A series of many subsequent texts <A*1, …, A*n> is usually called a ‘simulation’.
Depending on the nature of the source text A and the nature of the change rules in V, it may be that possible simulations ‘can go quite differently’. The set of possible scientific simulations thus represents ‘future’ not as a single, definite course, but as an ‘arbitrarily large set of possible courses’.
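As an illustration only, the interplay of a source text A, change rules V, and an application procedure F described above can be sketched in a few lines of Python; representing a ‘text’ as a set of expressions and a ‘rule of change’ as a condition/remove/add triple are assumptions of this sketch, not prescriptions of the minimal theory concept itself.

```python
# A source text A is represented here, for illustration, as a set of expressions
# that are assumed to be 'true' in the initial situation.
A0 = {"field is dry", "seed is available"}

# Change rules V: each rule states which expressions must hold (condition),
# and which expressions are removed/added when the rule is applied.
V = [
    {"if": {"field is dry"}, "remove": {"field is dry"}, "add": {"field is watered"}},
    {"if": {"field is watered", "seed is available"},
     "remove": {"seed is available"}, "add": {"seed is sown"}},
]

def apply_rule(A, rule):
    """Application procedure F: apply one change rule to a source text A,
    producing the subsequent text A*; return None if the rule is not applicable."""
    if rule["if"] <= A:
        return (A - rule["remove"]) | rule["add"]
    return None

def simulate(A, rules, max_steps=10):
    """Repeatedly apply applicable rules; the sequence <A*1, ..., A*n> is one simulation."""
    states = [A]
    for _ in range(max_steps):
        next_state = None
        for rule in rules:
            candidate = apply_rule(states[-1], rule)
            if candidate is not None and candidate != states[-1]:
                next_state = candidate
                break
        if next_state is None:
            break
        states.append(next_state)
    return states

for step, state in enumerate(simulate(A0, V)):
    print(step, sorted(state))
```

Already in this toy example, a different ordering of the rules, or rules with overlapping conditions, would produce different sequences of subsequent texts, which is exactly the point: the set of possible simulations represents many possible courses of the future, not a single fixed one.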
The factors on which different courses depend are manifold. One factor is the authors themselves. Every author, with his corporeality, is after all himself completely part of that very empirical world which is to be described in a scientific theory. And, as is well known, any human actor can change his mind at any moment. He can literally, in the next moment, do exactly the opposite of what he thought before. And thus the world is already no longer the same as previously assumed in the scientific description.
Even this simple example shows that the emotionality of ‘finding good, wanting, and deciding’ lies ahead of the rationality of scientific theories. This continues in the so-called ‘sustainability discussion’.
SUSTAINABLE EMPIRICAL THEORY
With the ‘minimal concept of an empirical theory (ET)’ just introduced, a ‘minimal concept of a sustainable empirical theory (NET)’ can also be introduced directly.
While an empirical theory can span an arbitrarily large space of grounded simulations that make visible the space of many possible futures, everyday actors are left with the question of what, out of all this, they want to have as ‘their future’. In the present we experience a situation in which mankind gives the impression that it is willing to destroy, ever more lastingly, the life beyond the human population, with the expected effect of ‘self-destruction’.
However, this self-destruction effect, which can be predicted in outline, is only one variant in the space of possible futures. Empirical science can indicate it in outline. To distinguish this variant before others, to accept it as ‘good’, to ‘want’ it, to ‘decide’ for this variant, lies in that so far hardly explored area of emotionality as root of all rationality.[2]
If everyday actors have decided in favor of a certain rationally illuminated variant of a possible future, then they can evaluate at any time, with a suitable ‘evaluation procedure (EVAL)’, how many ‘percent (%) of the properties of the target state Z’ have been achieved so far, provided that the favored target state has been transformed into a suitable text Z.
In other words, the moment we have transformed everyday scenarios into a rationally tangible state via suitable texts, things take on a certain clarity and thereby become — in a sense — simple. That we make such transformations and on which aspects of a real or possible state we then focus is, however, antecedent to text-based rationality as an emotional dimension.[2]
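Under the same illustrative assumption that current situations and target states are represented as sets of expressions, a minimal sketch of such an evaluation procedure EVAL could simply count how many properties of the target text Z are already present in the current text A; the concrete example values are hypothetical.

```python
def EVAL(current, target):
    """Percentage of the target text Z's properties realized in the current text A."""
    if not target:
        return 100.0
    return 100.0 * len(current & target) / len(target)

Z = {"field is watered", "seed is sown", "harvest is stored"}
A = {"field is watered", "seed is sown"}
print(EVAL(A, Z))  # ~66.7 percent of the target state reached
```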
MAN-MACHINE
After these preliminary considerations, the final question is whether and how the main question of this conference, “How do AI text generators change scientific discourse?” can be answered in any way?
My previous remarks have attempted to show what it means for humans to collectively generate texts that meet the criteria of scientific discourse and that also satisfy the requirements of empirical or even sustainable empirical theories.
In doing so, it becomes apparent that both in the generation of a collective scientific text and in its application in everyday life, a close interrelation with the shared experiential world as well as with the dynamic knowledge and meaning components in each actor plays a role.
The aspect of ‘validity’ is part of a dynamic world reference whose assessment as ‘true’ is constantly in flux; while one actor may tend to say “Yes, can be true”, another actor may just tend to the opposite. While some may tend to favor possible future option X, others may prefer future option Y. Rational arguments are absent; emotions speak. While one group has just decided to ‘believe’ and ‘implement’ plan Z, the others turn away, reject plan Z, and do something completely different.
This unsteady, uncertain character of future-interpretation and future-action has accompanied the Homo sapiens population from the very beginning. The poorly understood emotional complex constantly accompanies everyday life like a shadow.
Where and how can ‘text-enabled machines’ make a constructive contribution in this situation?
Assuming that there is a source text A, a change text V and an instruction F, today’s algorithms could calculate all possible simulations faster than humans could.
Assuming that there is also a target text Z, today’s algorithms could also compute an evaluation of the relationship between a current situation as A and the target text Z.
In other words: if an empirical or a sustainable-empirical theory would be formulated with its necessary texts, then a present algorithm could automatically compute all possible simulations and the degree of target fulfillment faster than any human alone.
But what about (i) the elaboration of a theory, or (ii) the pre-rational decision for a certain empirical or even sustainable-empirical theory?
A clear answer to both questions seems hardly possible to me at the present time, since we humans still understand too little how we ourselves collectively form, select, check, compare and also reject theories in everyday life.
My working hypothesis on the subject is that we will very much need machines capable of learning in order to fulfill the task of developing useful sustainable empirical theories for our common everyday life in the future. But when this will happen in reality, and to what extent, seems largely unclear to me at this point in time.[2]
COMMENTS
[1] https://zevedi.de/en/topics/ki-text-2/
[2] Talking about ’emotions’ in the sense of ‘factors in us’ that move us to go from the state ‘before the text’ to the state ‘written text’ hints at very many aspects. In a small exploratory text “State Change from Non-Writing to Writing. Working with chatGPT4 in parallel” ( https://www.uffmm.org/2023/08/28/state-change-from-non-writing-to-writing-working-with-chatgpt4-in-parallel/ ) the author has tried to address some of these aspects. While writing, it becomes clear that very many ‘individually subjective’ aspects play a role here, which of course do not appear ‘isolated’, but always flash up a reference to concrete contexts, which are linked to the topic. Nevertheless, it is not the ‘objective context’ that forms the core statement, but the ‘individually subjective’ component that appears in the process of ‘putting into words’. This individual subjective component is tentatively used here as a criterion for ‘authentic texts’ in comparison to ‘automated texts’ like those that can be generated by all kinds of bots. In order to make this difference more tangible, the author decided to create an ‘automated text’ with the same topic at the same time as the quoted authentic text. For this purpose he used chatGPT4 from openAI. This is the beginning of a philosophical-literary experiment, perhaps to make the possible difference more visible in this way. For purely theoretical reasons, it is clear that chatGPT4 can never generate ‘authentic texts’ in origin, unless it uses as a template an authentic text that it can modify. But then this is a clear ‘fake document’. To prevent such an abuse, the author writes the authentic text first and then asks chatGPT4 to write something about the given topic without chatGPT4 knowing the authentic text, because it has not yet found its way into the database of chatGPT4 via the Internet.
The whole text shows a dynamic, which induces many changes. Difficult to plan ‘in advance’.
Perhaps, some time, it will look like a ‘book’, at least ‘for a moment’.
I have started a ‘book project’ in parallel. This was motivated by the need to provide potential users of our new oksimo.R software with a coherent explanation of how the oksimo.R software, when used, generates an empirical theory in the format of a screenplay. The primary source of the book is in German and will be translated step by step here in the uffmm.blog.
INTRODUCTION
In a rather foundational paper about an idea how one can generalize ‘systems engineering’ [*1] to the art of ‘theory engineering’ [1], a new conceptual framework has been outlined for a ‘sustainable applied empirical theory (SAET)’. Part of this new framework has been the idea that the classical recourse to groups of special experts (mostly ‘engineers’ in engineering) is too restrictive in the light of the new requirement of being sustainable: sustainability is primarily based on ‘diversity’ combined with the ‘ability to predict’ from this diversity probable future states which keep life alive. The aspect of diversity induces the challenge to see every citizen as a ‘natural expert’, because nobody can know in advance, and from some non-existing absolute point of truth, which knowledge is really important. History shows that the ‘mainstream’ is usually to a large degree ‘biased’ [*1b].
With this assumption, that every citizen is a ‘natural expert’, science turns into a ‘general science’ where all citizens are ‘natural members’ of science. I will call this more general concept of science ‘sustainable citizen science (SCS)’ or ‘Citizen Science 2.0 (CS2)’. The important point here is that a sustainable citizen science is not necessarily an ‘arbitrary’ process. While the requirement of ‘diversity’ relates to possible contents, to possible ideas, to possible experiments, and the like, it follows from the other requirement of ‘predictability’, of being able to make some useful ‘forecasts’, that the given knowledge has to be in a format which allows, in a transparent way, the construction of some consequences which ‘derive’ from the ‘given’ knowledge and enable some ‘new’ knowledge. This ability of forecasting has often been understood as the business of ‘logic’, providing an ‘inference concept’ given by ‘rules of deduction’ and a ‘practical pattern (on the meta level)’ which defines how these rules have to be applied to satisfy the inference concept. But, looking at real life, at everyday life or at modern engineering and economy, one can learn that ‘forecasting’ is a complex process including much more than only cognitive structures nicely fitting into some formulas. For this more realistic forecasting concept we will use here the wording ‘common logic’, and for the cognitive adventure where common logic is applied we will use the wording ‘common science’. ‘Common science’ is structurally not different from ‘usual science’, but it has a substantially wider scope and is using the whole of mankind as ‘experts’.
The following chapters/ sections try to illustrate this common science view by visiting different special views which all are only ‘parts of a whole’, a whole which we can ‘feel’ in every moment, but which we can not yet completely grasp with our theoretical concepts.
CONTENT
Language (Main message: “The ordinary language is the ‘meta language’ to every special language. This can be used as a ‘hint’ to something really great: the mystery of the ‘self-creating’ power of the ordinary language which for most people is unknown although it happens every moment.”)
Concrete Abstract Statements (Main message: “… you will probably detect, that nearly all words of a language are ‘abstract words’ activating ‘abstract meanings’. …If you cannot provide … ‘concrete situations’ the intended meaning of your abstract words will stay ‘unclear’: they can mean ‘nothing or all’, depending from the decoding of the hearer.”)
True False Undefined (Main message: “… it reveals that ’empirical (observational) evidence’ is not necessarily an automatism: it presupposes appropriate meaning spaces embedded in sets of preferences, which are ‘observation friendly’.”)
Beyond Now (Main message: “With the aid of … sequences revealing possible changes the NOW is turned into a ‘moment’ embedded in a ‘process’, which is becoming the more important reality. The NOW is something, but the PROCESS is more.“)
Playing with the Future (Main message: “In this sense ‘language’ seems to be the master tool for every brain to mediate its dynamic meaning structures with symbolic fix points (= words, expressions) which as such do not change, but whose meaning is ‘free to change’ in any direction. And this built-in ‘dynamics’ represents an ‘internal potential’ for uncountably many possible states, which could perhaps become ‘true’ in some ‘future state’. Thus ‘future’ can begin in these potentials, and thinking is the ‘playground’ for possible futures. (but see [18])”)
Forecasting – Prediction: What? (This chapter explains the cognitive machinery behind forecasting/ predictions, how groups of human actors can elaborate shared descriptions, and how it is possible to start with sequences of singularities to build up a growing picture of the empirical world, which appears as a radically infinite and indeterministic space.)
!!! From here all the following chapters have to be re-written !!!
Boolean Logic (Explains what boolean logic is, how it enables the working of programmable machines, but that it is of nearly no help for the ‘heart’ of forecasting.)
/* Often people argue against the usage of the wikipedia encyclopedia as not ‘scientific’ because the ‘content’ of an entry in this encyclopedia can ‘change’. This presupposes the ‘classical view’ of scientific texts as ‘stable’, which presupposes further that such a ‘stable text’ describes some ‘stable subject matter’. But this view of ‘steadiness’ as the major property of ‘true descriptions’ is in no correspondence with real scientific texts! The reality of empirical science — even in some special disciplines like ‘physics’ — is ‘change’. Looking at Aristotle’s view of nature, at Galileo Galilei, at Newton, at Einstein and many others, you will not find a ‘single steady picture’ of nature and science, and physics is only a very simple strand of science compared to the life sciences and many others. Thus wikipedia is a real scientific encyclopedia, giving you the breadth of world knowledge with all its strengths and limits at once. For another, more general argument, see In Favour for Wikipedia */
[*1] Meaning operator ‘…’ : In this text (and in nearly all other texts of this author) the ‘inverted comma’ is used quite heavily. In everyday language this is not common. In some special languages (theory of formal languages or in programming languages or in meta-logic) the inverted comma is used in some special way. In this text, which is primarily a philosophical text, the inverted comma sign is used as a ‘meta-language operator’ to raise the awareness of the reader that the ‘meaning’ of the word enclosed in the inverted commas is ‘text specific’: in everyday language usage the speaker uses a word and assumes tacitly that his ‘intended meaning’ will be understood by the hearer of his utterance ‘as it is’. And the speaker will adhere to his assumption until some hearer signals that her understanding is different. That such a difference is signaled is quite normal, because the ‘meaning’ which is associated with a language expression can be diverse, and a decision which one of these multiple possible meanings is the ‘intended one’ in a certain context is often a bit ‘arbitrary’. Thus, it can be — but need not be — a meta-language strategy to signal to the hearer (or here: the reader) that a certain expression in a communication is ‘intended’ with a special meaning which perhaps is not the commonly assumed one. Nevertheless, because the ‘common meaning’ is no ‘clear and sharp subject’, a ‘meaning operator’ with the inverted commas also does not have a very sharp meaning. But in the ‘game of language’ it is more than nothing 🙂
[*1b] That the mainstream ‘is biased’ is not an accident, not a ‘strange state’, not a ‘failure’; it is the ‘normal state’, based on the deeper structure of how human actors are ‘built’ and ‘genetically’ and ‘culturally’ ‘programmed’. Thus the challenge to ‘survive’ as part of the ‘whole biosphere’ is not a ‘partial task’ of solving a single problem, but of solving, in some sense, the problem of how to ‘shape the whole biosphere’ in a way which enables life in the universe for the time beyond that point where the sun turns into a ‘red giant’, whereby life will be impossible on the planet earth (some billion years ahead)[22]. A remarkable text supporting this ‘complex view of sustainability’ can be found in Clark and Harvey, summarized at the end of the text. [23]
[*2] The meaning of the expression ‘normal’ is comparable to a wicked problem. In a certain sense we act in our everyday world ‘as if there exists some standard’ for what is assumed to be ‘normal’. Look for instance to houses, buildings: to a certain degree parts of a house have a ‘standard format’ assuming ‘normal people’. The whole traffic system, most parts of our ‘daily life’ are following certain ‘standards’ making ‘planning’ possible. But there exists a certain percentage of human persons which are ‘different’ compared to these introduced standards. We say that they have a ‘handicap’ compared to this assumed ‘standard’, but this so-called ‘standard’ is neither 100% true nor is the ‘given real world’ in its properties a ‘100% subject’. We have learned that ‘properties of the real world’ are distributed in a rather ‘statistical manner’ with different probabilities of occurrences. To ‘find our way’ in these varying occurrences we try to ‘mark’ the main occurrences as ‘normal’ to enable a basic structure for expectations and planning. Thus, if in this text the expression ‘normal’ is used it refers to the ‘most common occurrences’.
[*3] Thus we have here a ‘threefold structure’ embracing ‘perception events, memory events, and expression events’. Perception events represent ‘concrete events’; memory events represent all kinds of abstract events but they all have a ‘handle’ which maps to subsets of concrete events; expression events are parts of an abstract language system, which as such is dynamically mapped onto the abstract events. The main source for our knowledge about perceptions, memory and expressions is experimental psychology enhanced by many other disciplines.
[*4] Characterizing language expressions by meaning – the fate of any grammar: the sentence ” … ‘words’ (= expressions) of a language which can activate such abstract meanings are understood as ‘abstract words’, ‘general words’, ‘category words’ or the like.” is pointing to a deep property of every ordinary language, which represents the real power of language but at the same time its great weakness too: expressions as such have no meaning. Hundreds, thousands, millions of words arranged in ‘texts’ or ‘documents’ can show some ‘statistical patterns’, and as such these patterns can give some hint which expressions occur ‘how often’ and in ‘which combinations’, but they can never give a clue to the associated meaning(s). For more than three thousand years humans have tried to describe ordinary language in a more systematic way called ‘grammar’. Due to this radical gap between ‘expressions’ as ‘observable empirical facts’ and ‘meaning constructs’ hidden inside the brain, it was all the time a difficult job to ‘classify’ expressions as representing a certain ‘type’ of expression like ‘nouns’, ‘predicates’, ‘adjectives’, ‘defining article’ and the like. Without regressing to the assumed associated meaning such a classification is not possible. On account of the fuzziness of every meaning, ‘sharp definitions’ of such ‘word classes’ were never and are still not possible. One of the last big — perhaps the biggest ever — projects of a complete systematic grammar of a language was the grammar project of the ‘Akademie der Wissenschaften der DDR’ (‘Academy of Sciences of the GDR’) from 1981 with the title “Grundzüge einer Deutschen Grammatik” (“Basic features of a German grammar”). A huge team of scientists worked together using many modern methods. But in the preface you can read that many important properties of the language are still not sufficiently well describable and explainable. See: Karl Erich Heidolph, Walter Flämig, Wolfgang Motsch et al.: Grundzüge einer deutschen Grammatik. Akademie, Berlin 1981, 1028 Seiten.
[*5] Differing opinions about a given situation, manifested in uttered expressions, are a very common phenomenon in everyday communication. In some sense this is ‘natural’, can happen, and it should be no substantial problem to ‘solve the riddle of being different’. But as one can experience, the ability of people to resolve the occurrence of different opinions is often quite weak. Culture as a whole suffers from this.
[1] Gerd Doeben-Henisch, 2022, From SYSTEMS Engineering to THEORYEngineering, see: https://www.uffmm.org/2022/05/26/from-systems-engineering-to-theory-engineering/ (Remark: At the time of citation this post was not yet finished, because there are other posts ‘corresponding’ with that post, which are too not finished. Knowledge is a dynamic network of interwoven views …).
[1d] ‘usual science’ is the game of science without having a sustainable format like in citizen science 2.0.
[2] Science, see e.g. wkp-en: https://en.wikipedia.org/wiki/Science
Citation = “In modern science, the term “theory” refers to scientific theories, a well-confirmed type of explanation of nature, made in a way consistent with the scientific method, and fulfilling the criteria required by modern science. Such theories are described in such a way that scientific tests should be able to provide empirical support for it, or empirical contradiction (“falsify“) of it. Scientific theories are the most reliable, rigorous, and comprehensive form of scientific knowledge,[1] in contrast to more common uses of the word “theory” that imply that something is unproven or speculative (which in formal terms is better characterized by the word hypothesis).[2] Scientific theories are distinguished from hypotheses, which are individual empirically testable conjectures, and from scientific laws, which are descriptive accounts of the way nature behaves under certain conditions.”
[2b] History of science in wkp-en: https://en.wikipedia.org/wiki/History_of_science#Scientific_Revolution_and_birth_of_New_Science
[3] Theory, see wkp-en: https://en.wikipedia.org/wiki/Theory#:~:text=A%20theory%20is%20a%20rational,or%20no%20discipline%20at%20all.
Citation = “A theory is a rational type of abstract thinking about a phenomenon, or the results of such thinking. The process of contemplative and rational thinking is often associated with such processes as observational study or research. Theories may be scientific, belong to a non-scientific discipline, or no discipline at all. Depending on the context, a theory’s assertions might, for example, include generalized explanations of how nature works. The word has its roots in ancient Greek, but in modern use it has taken on several related meanings.”
Citation = “In modern science, the term “theory” refers to scientific theories, a well-confirmed type of explanation of nature, made in a way consistent with the scientific method, and fulfilling the criteria required by modern science. Such theories are described in such a way that scientific tests should be able to provide empirical support for it, or empirical contradiction (“falsify“) of it. Scientific theories are the most reliable, rigorous, and comprehensive form of scientific knowledge,[1] in contrast to more common uses of the word “theory” that imply that something is unproven or speculative (which in formal terms is better characterized by the word hypothesis).[2] Scientific theories are distinguished from hypotheses, which are individual empirically testable conjectures, and from scientific laws, which are descriptive accounts of the way nature behaves under certain conditions.”
[4b] Empiricism in wkp-en: https://en.wikipedia.org/wiki/Empiricism
[4c] Scientific method in wkp-en: https://en.wikipedia.org/wiki/Scientific_method
Citation =”The scientific method is an empirical method of acquiring knowledge that has characterized the development of science since at least the 17th century (with notable practitioners in previous centuries). It involves careful observation, applying rigorous skepticism about what is observed, given that cognitive assumptions can distort how one interprets the observation. It involves formulating hypotheses, via induction, based on such observations; experimental and measurement-based statistical testing of deductions drawn from the hypotheses; and refinement (or elimination) of the hypotheses based on the experimental findings. These are principles of the scientific method, as distinguished from a definitive series of steps applicable to all scientific enterprises.[1][2][3] [4c]
and
Citation = “The purpose of an experiment is to determine whether observations[A][a][b] agree with or conflict with the expectations deduced from a hypothesis.[6]: Book I, [6.54] pp.372, 408 [b] Experiments can take place anywhere from a garage to a remote mountaintop to CERN’s Large Hadron Collider. There are difficulties in a formulaic statement of method, however. Though the scientific method is often presented as a fixed sequence of steps, it represents rather a set of general principles.[7] Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always in the same order.[8][9]”
[5] Gerd Doeben-Henisch, “Is Mathematics a Fake? No! Discussing N.Bourbaki, Theory of Sets (1968) – Introduction”, 2022, https://www.uffmm.org/2022/06/06/n-bourbaki-theory-of-sets-1968-introduction/
[6] Logic, see wkp-en: https://en.wikipedia.org/wiki/Logic
[7] W. C. Kneale, The Development of Logic, Oxford University Press (1962)
[8] Set theory, in wkp-en: https://en.wikipedia.org/wiki/Set_theory
[9] N.Bourbaki, Theory of Sets , 1968, with a chapter about structures, see: https://en.wikipedia.org/wiki/%C3%89l%C3%A9ments_de_math%C3%A9matique
[10] = [5]
[11] Ludwig Josef Johann Wittgenstein ( 1889 – 1951): https://en.wikipedia.org/wiki/Ludwig_Wittgenstein
[12] Ludwig Wittgenstein, 1953: Philosophische Untersuchungen [PU], 1953: Philosophical Investigations [PI], translated by G. E. M. Anscombe /* For more details see: https://en.wikipedia.org/wiki/Philosophical_Investigations */
[13] Wikipedia EN, Speech acts: https://en.wikipedia.org/wiki/Speech_act
[14] While the world view constructed in a brain is ‘virtual’ compared to the ‘real world’ outside the brain (where the body outside the brain also functions as ‘real world’ in relation to the brain), the ‘virtual world’ in the brain functions for the brain mostly ‘as if it were the real world’. Only under certain conditions can the brain realize a ‘difference’ between the triggering outside real world and the ‘virtual substitute for the real world’: you want to use your bicycle ‘as usual’ and then suddenly you have to notice that it is not at that place where it ‘should be’. …
[15] Propositional Calculus, see wkp-en: https://en.wikipedia.org/wiki/Propositional_calculus#:~:text=Propositional%20calculus%20is%20a%20branch,of%20arguments%20based%20on%20them.
[16] Boolean algebra, see wkp-en: https://en.wikipedia.org/wiki/Boolean_algebra
[17] Boolean (or propositional) Logic: As one can see in the mentioned articles of the English wikipedia, the term ‘boolean logic’ is not common. The more logic-oriented authors prefer the term ‘boolean calculus’ [15] and the more math-oriented authors prefer the term ‘boolean algebra’ [16]. In the view of this author the general view is that of ‘language use’ with ‘logic inference’ as leading idea. Therefore the main topic is ‘logic’, in the case of propositional logic reduced to a simple calculus whose similarity with ‘normal language’ is widely ‘reduced’ to a play with abstract names and operators. Recommended: the historical comments in [15].
[18] Clearly, thinking alone can not necessarily induce a possible state which along the time line will become a ‘real state’. There are numerous factors ‘outside’ the individual thinking which are ‘driving forces’ to push real states to change. But thinking can in principle synchronize with other individual thinking and — in some cases — can get a ‘grip’ on real factors causing real changes.
[19] This kind of knowledge is not delivered by brain science alone but primarily from experimental (cognitive) psychology which examines observable behavior and ‘interprets’ this behavior with functional models within an empirical theory.
[20] Predicate Logic or First-Order Logic or … see: wkp-en: https://en.wikipedia.org/wiki/First-order_logic#:~:text=First%2Dorder%20logic%E2%80%94also%20known,%2C%20linguistics%2C%20and%20computer%20science.
[21] Gerd Doeben-Henisch, In Favour of Wikipedia, https://www.uffmm.org/2022/07/31/in-favour-of-wikipedia/, 31 July 2022
[22] The sun, see wkp-ed https://en.wikipedia.org/wiki/Sun (accessed 8 Aug 2022)
[23] By Clark, William C., and Alicia G. Harley – https://doi.org/10.1146/annurev-environ-012420-043621, Clark, William C., and Alicia G. Harley. 2020. “Sustainability Science: Toward a Synthesis.” Annual Review of Environment and Resources 45 (1): 331–86, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=109026069
[24] Sustainability in wkp-en: https://en.wikipedia.org/wiki/Sustainability#Dimensions_of_sustainability
[27] SDG 4 in wkp-en: https://en.wikipedia.org/wiki/Sustainable_Development_Goal_4
[28] Thomas Rid, Rise of the Machines. A Cybernetic History, W.W.Norton & Company, 2016, New York – London
[29] Doeben-Henisch, G., 2006, Reducing Negative Complexity by a Semiotic System In: Gudwin, R., & Queiroz, J., (Eds). Semiotics and Intelligent Systems Development. Hershey et al: Idea Group Publishing, 2006, pp.330-342
[30] Döben-Henisch, G., Reinforcing the global heartbeat: Introducing the planet earth simulator project, In M. Faßler & C. Terkowsky (Eds.), URBAN FICTIONS. Die Zukunft des Städtischen. München, Germany: Wilhelm Fink Verlag, 2006, pp.251-263
[29] The idea that individual disciplines are not good enough for the ‘whole of knowledge’ is expressed in a clear way in a video of the theoretical physicist and philosopher Carlo Rovelli: Carlo Rovelli on physics and philosophy, June 1, 2022, Video from the Perimeter Institute for Theoretical Physics. Theoretical physicist, philosopher, and international bestselling author Carlo Rovelli joins Lauren and Colin for a conversation about the quest for quantum gravity, the importance of unlearning outdated ideas, and a very unique way to get out of a speeding ticket.
[] By Azote for Stockholm Resilience Centre, Stockholm University – https://www.stockholmresilience.org/research/research-news/2016-06-14-how-food-connects-all-the-sdgs.html, CC BY 4.0, https://commons.wikimedia.org/w/index.php?curid=112497386
[] Sierra Club in wkp-en: https://en.wikipedia.org/wiki/Sierra_Club
[] Herbert Bruderer, Where is the Cradle of the Computer?, June 20, 2022, URL: https://cacm.acm.org/blogs/blog-cacm/262034-where-is-the-cradle-of-the-computer/fulltext (accessed: July 20, 2022)
[] UN. Secretary-General; World Commission on Environment and Development, 1987, Report of the World Commission on Environment and Development : note / by the Secretary-General., https://digitallibrary.un.org/record/139811 (accessed: July 20, 2022) (A more readable format: https://sustainabledevelopment.un.org/content/documents/5987our-common-future.pdf )
/* Comment: Gro Harlem Brundtland (Norway) has been the main coordinator of this document */
[] Chaudhuri, S.,et al.Neurosymbolic programming. Foundations and Trends in Programming Languages 7, 158-243 (2021).
[] Nello Cristianini, Teresa Scantamburlo, James Ladyman, The social turn of artificial intelligence, in: AI & SOCIETY, https://doi.org/10.1007/s00146-021-01289-8
[] Carl DiSalvo, Phoebe Sengers, and Hrönn Brynjarsdóttir, Mapping the landscape of sustainable hci, In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’10, page 1975–1984, New York, NY, USA, 2010. Association for Computing Machinery.
[] Claude Draude, Christian Gruhl, Gerrit Hornung, Jonathan Kropf, Jörn Lamla, Jan Marco Leimeister, Bernhard Sick, Gerd Stumme, Social Machines, in: Informatik Spektrum, https://doi.org/10.1007/s00287-021-01421-4
[] EU: High-Level Expert Group on AI (AI HLEG), A definition of AI: Main capabilities and scientific disciplines, European Commission communications published on 25 April 2018 (COM(2018) 237 final), 7 December 2018 (COM(2018) 795 final) and 8 April 2019 (COM(2019) 168 final). For our definition of Artificial Intelligence (AI), please refer to our document published on 8 April 2019: https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=56341
[] EU: High-Level Expert Group on AI (AI HLEG), Policy and investment recommendations for trustworthy Artificial Intelligence, 2019, https://digital-strategy.ec.europa.eu/en/library/policy-and-investment-recommendations-trustworthy-artificial-intelligence
[] European Union. Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC General Data Protection Regulation; http://eur-lex.europa.eu/eli/reg/2016/679/oj (effective from 25 May 2018) [26.2.2022]
[] C.S. Holling. Resilience and stability of ecological systems. Annual Review of Ecology and Systematics, 4(1):1–23, 1973
[] John P. van Gigch. 1991. System Design Modeling and Metamodeling. Springer US. DOI:https://doi.org/10.1007/978-1-4899-0676-2
[] Gudwin, R.R. (2003), On a Computational Model of the Peircean Semiosis, IEEE KIMAS 2003 Proceedings
[] J.A. Jacko and A. Sears, Eds., The Human-Computer Interaction Handbook. Fundamentals, Evolving Technologies, and emerging Applications. 1st edition, 2003.
[] LeCun, Y., Bengio, Y., & Hinton, G. Deep learning. Nature 521, 436-444 (2015).
[] Lenat, D. What AI can learn from Romeo & Juliet.Forbes (2019)
[] Pierre Lévy, Collective Intelligence. mankind’s emerging world in cyberspace, Perseus books, Cambridge (M A), 1997 (translated from the French Edition 1994 by Robert Bonnono)
[] Lexikon der Nachhaltigkeit, ‘Starke Nachhaltigkeit‘, https://www.nachhaltigkeit.info/artikel/schwache_vs_starke_nachhaltigkeit_1687.htm (accessed: July 21, 2022)
[] Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, and Toby Walsh. “Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report.” Stanford University, Stanford, CA, September 2021. Doc: http://ai100.stanford.edu/2021-report.
[] Kathryn Merrick. Value systems for developmental cognitive robotics: A survey. Cognitive Systems Research, 41:38 – 55, 2017
[] Illah Reza Nourbakhsh and Jennifer Keating, AI and Humanity, MIT Press, 2020 /* An examination of the implications for society of rapidly advancing artificial intelligence systems, combining a humanities perspective with technical analysis; includes exercises and discussion questions. */
[] Olazaran, M. , A sociological history of the neural network controversy. Advances in Computers37, 335-425 (1993).
[] Friedrich August Hayek (1945), The use of knowledge in society. The American Economic Review 35, 4 (1945), 519–530
[] Karl Popper, „A World of Propensities“, in: Karl Popper, „A World of Propensities“, Thoemmes Press, Bristol (lecture 1988, slightly expanded reprint 1990, repr. 1995)
[] Karl Popper, „Towards an Evolutionary Theory of Knowledge“, in: Karl Popper, „A World of Propensities“, Thoemmes Press, Bristol (lecture 1989, printed in 1990, repr. 1995)
[] Karl Popper, „All Life is Problem Solving“, article, originally a 1991 lecture in German, first published in the German book „Alles Leben ist Problemlösen“ (1994), then in the English book „All Life is Problem Solving“, 1999, Routledge, Taylor & Francis Group, London – New York
[] A. Sears and J.A. Jacko, Eds., The Human-Computer Interaction Handbook. Fundamentals, Evolving Technologies, and emerging Applications. 2nd edition, 2008.
[] Skaburskis, Andrejs (19 December 2008). “The origin of “wicked problems””. Planning Theory & Practice. 9 (2): 277-280. doi:10.1080/14649350802041654. At the end of Rittel’s presentation, West Churchman responded with that pensive but expressive movement of voice that some may well remember, ‘Hmm, those sound like “wicked problems.”‘
[] Thoppilan, R., et al. LaMDA: Language models for dialog applications. arXiv 2201.08239 (2022).
[] Wurm, Daniel; Zielinski, Oliver; Lübben, Neeske; Jansen, Maike; Ramesohl, Stephan (2021) : Wege in eine ökologische Machine Economy: Wir brauchen eine ‘Grüne Governance der Machine Economy’, um das Zusammenspiel von Internet of Things, Künstlicher Intelligenz und Distributed Ledger Technology ökologisch zu gestalten, Wuppertal Report, No. 22, Wuppertal Institut für Klima, Umwelt, Energie, Wuppertal, https://doi.org/10.48506/opus-7828
[] Aimee van Wynsberghe, Sustainable AI: AI for sustainability and the sustainability of AI, in: AI and Ethics (2021) 1:213–218, see: https://doi.org/10.1007/s43681
[] R. I. Damper (2000), Editorial for the special issue on ‘Emergent Properties of Complex Systems’: Emergence and levels of abstraction. International Journal of Systems Science 31, 7 (2000), 811–818. DOI:https://doi.org/10.1080/002077200406543
[] Gerd Doeben-Henisch (2004), The Planet Earth Simulator Project – A Case Study in Computational Semiotics, IEEE AFRICON 2004, pp.417 – 422
[] Eric Bonabeau (2009), Decisions 2.0: The power of collective intelligence. MIT Sloan Management Review 50, 2 (Winter 2009), 45-52.
[] Jim Giles (2005), Internet encyclopaedias go head to head. Nature 438, 7070 (Dec. 2005), 900–901. DOI:https://doi.org/10.1038/438900a
[] T. Bosse, C. M. Jonker, M. C. Schut, and J. Treur (2006), Collective representational content for shared extended mind. Cognitive Systems Research 7, 2-3 (2006), pp.151-174, DOI:https://doi.org/10.1016/j.cogsys.2005.11.007
[] Romina Cachia, Ramón Compañó, and Olivier Da Costa (2007), Grasping the potential of online social networks for foresight. Technological Forecasting and Social Change 74, 8 (2007), pp. 1179-1203. DOI:https://doi.org/10.1016/j.techfore.2007.05.006
[] Tom Gruber (2008), Collective knowledge systems: Where the social web meets the semantic web. Web Semantics: Science, Services and Agents on the World Wide Web 6, 1 (2008), 4–13. DOI:https://doi.org/10.1016/j.websem.2007.11.011
[] Luca Iandoli, Mark Klein, and Giuseppe Zollo (2009), Enabling on-line deliberation and collective decision-making through large-scale argumentation. International Journal of Decision Support System Technology 1, 1 (Jan. 2009), 69–92. DOI:https://doi.org/10.4018/jdsst.2009010105
[] Shuangling Luo, Haoxiang Xia, Taketoshi Yoshida, and Zhongtuo Wang (2009), Toward collective intelligence of online communities: A primitive conceptual model. Journal of Systems Science and Systems Engineering 18, 2 (01 June 2009), 203–221. DOI:https://doi.org/10.1007/s11518-009-5095-0
[] Dawn G. Gregg (2010), Designing for collective intelligence. Communications of the ACM 53, 4 (April 2010), 134–138. DOI:https://doi.org/10.1145/1721654.1721691
[] Rolf Pfeifer, Jan Henrik Sieg, Thierry Bücheler, and Rudolf Marcel Füchslin. 2010. Crowdsourcing, open innovation and collective intelligence in the scientific method: A research agenda and operational framework. (2010). DOI:https://doi.org/10.21256/zhaw-4094
[] Martijn C. Schut. 2010. On model design for simulation of collective intelligence. Information Sciences 180, 1 (2010), 132–155. DOI:https://doi.org/10.1016/j.ins.2009.08.006 Special Issue on Collective Intelligence
[] Dimitrios J. Vergados, Ioanna Lykourentzou, and Epaminondas Kapetanios (2010), A resource allocation framework for collective intelligence system engineering. In Proceedings of the International Conference on Management of Emergent Digital EcoSystems (MEDES’10). ACM, New York, NY, 182–188. DOI:https://doi.org/10.1145/1936254.1936285
[] Anita Williams Woolley, Christopher F. Chabris, Alex Pentland, Nada Hashmi, and Thomas W. Malone (2010), Evidence for a collective intelligence factor in the performance of human groups. Science 330, 6004 (2010), 686–688. DOI:https://doi.org/10.1126/science.1193147
[] Michael A. Woodley and Edward Bell (2011), Is collective intelligence (mostly) the General Factor of Personality? A comment on Woolley, Chabris, Pentland, Hashmi and Malone (2010). Intelligence 39, 2 (2011), 79–81. DOI:https://doi.org/10.1016/j.intell.2011.01.004
[] Joshua Introne, Robert Laubacher, Gary Olson, and Thomas Malone (2011), The climate CoLab: Large scale model-based collaborative planning. In Proceedings of the 2011 International Conference on Collaboration Technologies and Systems (CTS’11). 40–47. DOI:https://doi.org/10.1109/CTS.2011.5928663
[] Miguel de Castro Neto and Ana Espírtio Santo (2012), Emerging collective intelligence business models. In MCIS 2012 Proceedings. Mediterranean Conference on Information Systems. https://aisel.aisnet.org/mcis2012/14
[] Peng Liu, Zhizhong Li (2012), Task complexity: A review and conceptualization framework, International Journal of Industrial Ergonomics 42 (2012), pp. 553 – 568
[] Sean Wise, Robert A. Paton, and Thomas Gegenhuber. (2012), Value co-creation through collective intelligence in the public sector: A review of US and European initiatives. VINE 42, 2 (2012), 251–276. DOI:https://doi.org/10.1108/03055721211227273
[] Antonietta Grasso and Gregorio Convertino (2012), Collective intelligence in organizations: Tools and studies. Computer Supported Cooperative Work (CSCW) 21, 4 (01 Oct 2012), 357–369. DOI:https://doi.org/10.1007/s10606-012-9165-3
[] Sandro Georgi and Reinhard Jung (2012), Collective intelligence model: How to describe collective intelligence. In Advances in Intelligent and Soft Computing. Vol. 113. Springer, 53–64. DOI:https://doi.org/10.1007/978-3-642-25321-8_5
[] H. Santos, L. Ayres, C. Caminha, and V. Furtado (2012), Open government and citizen participation in law enforcement via crowd mapping. IEEE Intelligent Systems 27 (2012), 63–69. DOI:https://doi.org/10.1109/MIS.2012.80
[] Jörg Schatzmann & René Schäfer & Frederik Eichelbaum (2013), Foresight 2.0 – Definition, overview & evaluation, Eur J Futures Res (2013) 1:15 DOI 10.1007/s40309-013-0015-4
[] Sylvia Ann Hewlett, Melinda Marshall, and Laura Sherbin (2013), How diversity can drive innovation. Harvard Business Review 91, 12 (2013), 30–30
[] Tony Diggle (2013), Water: How collective intelligence initiatives can address this challenge. Foresight 15, 5 (2013), 342–353. DOI:https://doi.org/10.1108/FS-05-2012-0032
[] Hélène Landemore and Jon Elster. 2012. Collective Wisdom: Principles and Mechanisms. Cambridge University Press. DOI:https://doi.org/10.1017/CBO9780511846427
[] Jerome C. Glenn (2013), Collective intelligence and an application by the millennium project. World Futures Review 5, 3 (2013), 235–243. DOI:https://doi.org/10.1177/1946756713497331
[] Detlef Schoder, Peter A. Gloor, and Panagiotis Takis Metaxas (2013), Social media and collective intelligence—Ongoing and future research streams. KI – Künstliche Intelligenz 27, 1 (1 Feb. 2013), 9–15. DOI:https://doi.org/10.1007/s13218-012-0228-x
[] V. Singh, G. Singh, and S. Pande (2013), Emergence, self-organization and collective intelligence—Modeling the dynamics of complex collectives in social and organizational settings. In 2013 UKSim 15th International Conference on Computer Modelling and Simulation. 182–189. DOI:https://doi.org/10.1109/UKSim.2013.77
[] A. Kornrumpf and U. Baumöl (2014), A design science approach to collective intelligence systems. In 2014 47th Hawaii International Conference on System Sciences. 361–370. DOI:https://doi.org/10.1109/HICSS.2014.53
[] Michael A. Peters and Richard Heraud. 2015. Toward a political theory of social innovation: Collective intelligence and the co-creation of social goods. 3, 3 (2015), 7–23. https://researchcommons.waikato.ac.nz/handle/10289/9569
[] Juho Salminen. 2015. The Role of Collective Intelligence in Crowdsourcing Innovation. PhD dissertation. Lappeenranta University of Technology
[] Aelita Skarzauskiene and Monika Maciuliene (2015), Modelling the index of collective intelligence in online community projects. In International Conference on Cyber Warfare and Security. Academic Conferences International Limited, 313
[] AYA H. KIMURA and ABBY KINCHY (2016), Citizen Science: Probing the Virtues and Contexts of Participatory Research, Engaging Science, Technology, and Society 2 (2016), 331-361, DOI:10.17351/ests2016.099
[] Philip Tetlow, Dinesh Garg, Leigh Chase, Mark Mattingley-Scott, Nicholas Bronn, Kugendran Naidoo†, Emil Reinert (2022), Towards a Semantic Information Theory (Introducing Quantum Corollas), arXiv:2201.05478v1 [cs.IT] 14 Jan 2022, 28 pages
[] Melanie Mitchell, What Does It Mean to Align AI With Human Values?, Quanta Magazine, Quantized Columns, December 19, 2022, https://www.quantamagazine.org/what-does-it-mean-to-align-ai-with-human-values-20221213#
Comment by Gerd Doeben-Henisch:
[] Nick Bostrom. Superintelligence. Paths, Dangers, Strategies. Oxford University Press, Oxford (UK), 1 edition, 2014.
[] Scott Aaronson, Reform AI Alignment, Update: 22.November 2022, https://scottaaronson.blog/?p=6821
[] Andrew Y. Ng, Stuart J. Russell, Algorithms for Inverse Reinforcement Learning, ICML 2000: Proceedings of the Seventeenth International Conference on Machine LearningJune 2000 Pages 663–670
[] Pat Langley (ed.), ICML ’00: Proceedings of the Seventeenth International Conference on Machine Learning, Morgan Kaufmann Publishers Inc., 340 Pine Street, Sixth Floor, San Francisco, CA, United States, Conference 29 June 2000- 2 July 2000, 29.June 2000
[] Daniel S. Brown, Wonjoon Goo, Prabhat Nagarajan, Scott Niekum, Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations. You can read in the abstract: “A critical flaw of existing inverse reinforcement learning (IRL) methods is their inability to significantly outperform the demonstrator. This is because IRL typically seeks a reward function that makes the demonstrator appear near-optimal, rather than inferring the underlying intentions of the demonstrator that may have been poorly executed in practice. In this paper, we introduce a novel reward-learning-from-observation algorithm, Trajectory-ranked Reward EXtrapolation (T-REX), that extrapolates beyond a set of (approximately) ranked demonstrations in order to infer high-quality reward functions from a set of potentially poor demonstrations. When combined with deep reinforcement learning, T-REX outperforms state-of-the-art imitation learning and IRL methods on multiple Atari and MuJoCo benchmark tasks and achieves performance that is often more than twice the performance of the best demonstration. We also demonstrate that T-REX is robust to ranking noise and can accurately extrapolate intention by simply watching a learner noisily improve at a task over time.”
In the abstract you can read: “For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent’s interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any that have been previously learned from human feedback.”
In the abstract you can read: “Conceptual abstraction and analogy-making are key abilities underlying humans’ abilities to learn, reason, and robustly adapt their knowledge to new domains. Despite of a long history of research on constructing AI systems with these abilities, no current AI system is anywhere close to a capability of forming humanlike abstractions or analogies. This paper reviews the advantages and limitations of several approaches toward this goal, including symbolic methods, deep learning, and probabilistic program induction. The paper concludes with several proposals for designing challenge tasks and evaluation measures in order to make quantifiable and generalizable progress.”
In the abstract you can read: “Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment (“AI spring”) and periods of disappointment, loss of confidence, and reduced funding (“AI winter”). Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this paper I describe four fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I conclude by discussing the open questions spurred by these fallacies, including the age-old challenge of imbuing machines with humanlike common sense.”
[] Stuart Russell, (2019), Human Compatible: AI and the Problem of Control, Penguin books, Allen Lane; 1. Edition (8. Oktober 2019)
In the preface you can read: “This book is about the past, present, and future of our attempt to understand and create intelligence. This matters, not because AI is rapidly becoming a pervasive aspect of the present but because it is the dominant technology of the future. The world’s great powers are waking up to this fact, and the world’s largest corporations have known it for some time. We cannot predict exactly how the technology will develop or on what timeline. Nevertheless, we must plan for the possibility that machines will far exceed the human capacity for decision making in the real world. What then? Everything civilization has to offer is the product of our intelligence; gaining access to considerably greater intelligence would be the biggest event in human history. The purpose of the book is to explain why it might be the last event in human history and how to make sure that it is not.”
[] David Adkins, Bilal Alsallakh, Adeel Cheema, Narine Kokhlikyan, Emily McReynolds, Pushkar Mishra, Chavez Procope, Jeremy Sawruk, Erin Wang, Polina Zvyagina, (2022), Method Cards for Prescriptive Machine-Learning Transparency, 2022 IEEE/ACM 1st International Conference on AI Engineering – Software Engineering for AI (CAIN), CAIN’22, May 16–24, 2022, Pittsburgh, PA, USA, pp. 90 – 100, Association for Computing Machinery, ACM ISBN 978-1-4503-9275-4/22/05, New York, NY, USA, https://doi.org/10.1145/3522664.3528600
In the abstract you can read: “Specialized documentation techniques have been developed to communicate key facts about machine-learning (ML) systems and the datasets and models they rely on. Techniques such as Datasheets, AI FactSheets, and Model Cards have taken a mainly descriptive approach, providing various details about the system components. While the above information is essential for product developers and external experts to assess whether the ML system meets their requirements, other stakeholders might find it less actionable. In particular, ML engineers need guidance on how to mitigate potential shortcomings in order to fix bugs or improve the system’s performance. We propose a documentation artifact that aims to provide such guidance in a prescriptive way. Our proposal, called Method Cards, aims to increase the transparency and reproducibility of ML systems by allowing stakeholders to reproduce the models, understand the rationale behind their designs, and introduce adaptations in an informed way. We showcase our proposal with an example in small object detection, and demonstrate how Method Cards can communicate key considerations that help increase the transparency and reproducibility of the detection model. We further highlight avenues for improving the user experience of ML engineers based on Method Cards.”
[] John H. Miller, (2022), Ex Machina: Coevolving Machines and the Origins of the Social Universe, The SFI Press Scholars Series, 410 pages Paperback ISBN: 978-1947864429 , DOI: 10.37911/9781947864429
In the announcement of the book you can read: “If we could rewind the tape of the Earth’s deep history back to the beginning and start the world anew—would social behavior arise yet again? While the study of origins is foundational to many scientific fields, such as physics and biology, it has rarely been pursued in the social sciences. Yet knowledge of something’s origins often gives us new insights into the present. In Ex Machina, John H. Miller introduces a methodology for exploring systems of adaptive, interacting, choice-making agents, and uses this approach to identify conditions sufficient for the emergence of social behavior. Miller combines ideas from biology, computation, game theory, and the social sciences to evolve a set of interacting automata from asocial to social behavior. Readers will learn how systems of simple adaptive agents—seemingly locked into an asocial morass—can be rapidly transformed into a bountiful social world driven only by a series of small evolutionary changes. Such unexpected revolutions by evolution may provide an important clue to the emergence of social life.”
In the abstract you can read: “Analyzing the spatial and temporal properties of information flow with a multi-century perspective could illuminate the sustainability of human resource-use strategies. This paper uses historical and archaeological datasets to assess how spatial, temporal, cognitive, and cultural limitations impact the generation and flow of information about ecosystems within past societies, and thus lead to tradeoffs in sustainable practices. While it is well understood that conflicting priorities can inhibit successful outcomes, case studies from Eastern Polynesia, the North Atlantic, and the American Southwest suggest that imperfect information can also be a major impediment to sustainability. We formally develop a conceptual model of Environmental Information Flow and Perception (EnIFPe) to examine the scale of information flow to a society and the quality of the information needed to promote sustainable coupled natural-human systems. In our case studies, we assess key aspects of information flow by focusing on food web relationships and nutrient flows in socio-ecological systems, as well as the life cycles, population dynamics, and seasonal rhythms of organisms, the patterns and timing of species’ migration, and the trajectories of human-induced environmental change. We argue that the spatial and temporal dimensions of human environments shape society’s ability to wield information, while acknowledging that varied cultural factors also focus a society’s ability to act on such information. Our analyses demonstrate the analytical importance of completed experiments from the past, and their utility for contemporary debates concerning managing imperfect information and addressing conflicting priorities in modern environmental management and resource use.”
This text is part of a philosophy of science analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive posts dedicated to the HMI-Analysis for this software.
THE GENERALIZED OKSIMO THEORY PARADIGM
In the preceding sections it has been shown that the oksimo paradigm fits, in principle, into the theory paradigm as discussed by Popper. This is possible because some of Popper's concepts have been re-interpreted by re-analyzing the functioning of the symbolic dimension. All of Popper's requirements could be shown to hold, now even in an extended way.
SUSTAINABLE FUTURE
To describe the oksimo paradigm it is not necessary to mention the general perspective of sustainability as described by the United Nations [UN][1] as a wider context. But if one understands the oksimo paradigm more deeply and knows that among the 17 sustainable development goals [SDGs] the fourth goal [SDG4] is understood by the UN as the central key for the development of all the other SDGs [2], then one can read this as an invitation to think about the kind of knowledge which could be the ‘kernel technology’ for sustainability. A ‘technology’ is not simply ‘knowledge’; it is a process which enables the participants (here assumed to be human actors with built-in meaning functions) to share their experience of the world as well as their hopes, wishes, and dreams, so that these can become true in a reachable future. To be ‘sustainable’, such visions have to be realized in a fashion which keeps the whole of biological life alive, on earth as well as in the whole universe. Biological life is the highest known value with which the universe is gifted.
Knowledge as a kernel technology for a sustainable future of the whole of biological life has to be a process in which all biological life-forms, headed by the human actors, contribute their experience and capabilities to find those possible future states (visions, goals, …) which can really enable a sustainable future.
THE SYMBOLIC DIMENSION
To enable different isolated brains in different bodies to ‘cooperate’ and thereby to ‘coordinate’ their experience and their behavior, the only and most effective way known is ‘symbolic communication’: using expressions of some ordinary language whose ‘meaning’ has been learned by every member of the population, beginning with being born on this planet. Human actors (classified as the life-form ‘homo sapiens’) have the most elaborate language capability known, being able to associate all kinds of experience with expressions of an ordinary language. These ‘mappings’ between expressions and general experience take place ‘inside the brain’; they are highly ‘adaptive’, can change over time, and are mostly ‘synchronized’ with the mappings taking place in other brains. Such a mapping is here called a ‘meaning function’ [μ].
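To make the idea of a meaning function μ a bit more tangible, here is a minimal sketch (in Python, not part of the oksimo software) in which μ is simplified to a per-actor mapping from expressions to cognitive objects represented as plain string tags. The actor names and all mappings are invented only for this illustration; the sketch merely shows the imperfect overlap between two meaning functions and their adaptivity over time.

```python
# Minimal sketch (not the oksimo implementation): a meaning function 'mu'
# per actor, mapping expressions of an ordinary language to cognitive
# objects (here simplified to string tags), with imperfect synchrony
# between actors and the possibility of adaptation over time.

class Actor:
    def __init__(self, name, mu):
        self.name = name
        self.mu = dict(mu)          # expression -> cognitive object

    def understand(self, expression):
        """Map a perceived expression to an internal cognitive object (or None)."""
        return self.mu.get(expression)

    def learn(self, expression, cognitive_object):
        """Adapt the meaning function: mappings can change over time."""
        self.mu[expression] = cognitive_object


peter = Actor("Peter", {"rain": "water_falling_from_clouds"})
jenny = Actor("Jenny", {"rain": "water_falling_from_clouds", "drizzle": "light_rain"})

# Communication succeeds only as far as the meaning functions overlap:
for expr in ["rain", "drizzle"]:
    print(expr, "->", peter.understand(expr), "/", jenny.understand(expr))

# Peter adapts his meaning function after a shared experience:
peter.learn("drizzle", "light_rain")
```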
DIFFERENT KINDS OF EXPRESSIONS
The different scientific disciplines have developed many different views and models of how to describe the symbolic dimension, its ‘parts’, and its functioning. Here we assume only three different kinds of expressions, which can be analyzed further in nearly infinitely many details.
True Concrete Expressions [S_A]
The ‘everyday case’ occurs if human actors share a real actual situation and use their symbolic expressions to ‘talk about’ the shared situation, telling each other what is given according to their understanding, using their built-in meaning function μ. With regard to the shared knowledge and language these human actors can decide whether an expression E used in the description matches the observed situation or not. If the expression matches, then it is classified as a ‘true expression’. Otherwise it is either undefined or, if it ‘contradicts’ the situation directly, ‘false’. The set of all expressions assumed to be true in an actually given situation S is named here S_A. Let us look at an example: Peter says “it is raining”, and Jenny says “it is not raining”. If all would agree that it is raining, then Peter’s expression is classified as ‘true’ and Jenny’s expression as ‘false’. If different views exist in the group, then it is not clear what is true, false, or undefined in this group! This problem belongs to the pragmatic dimension of communication, where human actors have to find a way to clarify their views of the world. The right view of the situation depends on the different individual views located in the individual brains, and these views can be wrong. There exists no automatic procedure to get a ‘true’ vision of the real world.
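The classification of an expression relative to an agreed situation description S_A can be sketched as follows. This is only an illustrating toy model: S_A is a plain set of expressions, and marking negation with the prefix “not ” is an assumption of this sketch, not a claim about the oksimo software.

```python
# Minimal sketch: S_A as the set of expressions agreed to be true in the
# shared actual situation, and a classification of a candidate expression
# as 'true', 'false' or 'undefined' relative to S_A.

S_A = {"it is raining", "the street is wet"}

def classify(expression, situation):
    if expression in situation:
        return "true"
    if expression.startswith("not ") and expression[4:] in situation:
        return "false"
    if ("not " + expression) in situation:
        return "false"
    return "undefined"

print(classify("it is raining", S_A))        # true  (Peter's statement)
print(classify("not it is raining", S_A))    # false (Jenny's statement)
print(classify("the sun is shining", S_A))   # undefined
```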
General Assumptions [S_U]
It is typical for human actors that they collect knowledge about the world, including general assumptions like “Birds can fly”, “Ice is melting in the sun”, “In certain cases the covid19-virus can bring people to death”, etc. These expressions are usually understood as ‘general’ rules because they do not describe a concrete single case but speak of many possible cases. Such a general rule can be used within some logical deduction, as demonstrated by classical Greek logic: ‘IF it is true that “Birds can fly” AND we have a certain fact “R2D2 is a bird” THEN we can deduce the fact “R2D2 can fly”‘. The expression “R2D2 can fly” claims to be true. Whether this is ‘really’ the case has to be shown in a real situation, either actually or at some point in the future. The set of all assumed general assumptions is named here S_U.
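A minimal sketch of how a general assumption from S_U can be combined with a concrete fact to derive a new (claimed) expression, following the “Birds can fly” example. The simple pattern format with the placeholder X is an assumption made only for this illustration.

```python
# Minimal sketch: applying a general rule from S_U to a concrete fact.
# Rules are (condition, conclusion) patterns with a placeholder X.

S_U = [("X is a bird", "X can fly")]         # general rules
S_A = {"R2D2 is a bird"}                     # concrete facts

def deduce(rules, facts):
    derived = set()
    for cond, concl in rules:
        for fact in facts:
            # try to bind the placeholder X against the concrete fact
            if cond.startswith("X ") and fact.endswith(cond[2:]):
                subject = fact[: -len(cond[2:])].strip()
                derived.add(concl.replace("X", subject))
    return derived

print(deduce(S_U, S_A))   # {'R2D2 can fly'} -- a claim still to be checked against reality
```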
Possible Future States [S_V]
By experience and some ‘creative’ thinking human actors can imagine concrete situations which are not yet actually given but which are assumed to be ‘possible’; the possibility can be interpreted as some ‘future’ situation. If a real situation were reached which includes the envisioned state, then one could say that the vision has become ‘true’. Otherwise the envisioned state is ‘undefined’: perhaps it can become true, perhaps not. In human culture there exist many visions that have been around for hundreds or even thousands of years and in which people still ‘believe’ that they will become ‘true’ some day. The set of all expressions related to a vision is named here S_V.
REALIZING FUTURE [X, ⊢X]
If the set of expressions S_V related to a ‘vision’ (accompanied by many emotions, desires, details of all kinds) is not empty, then it is possible to look for those ‘actions’ which with the highest ‘probability’ π can ‘change’ a given situation S_A in such a way that the new situation S’ becomes more and more similar to the envisioned situation S_V. Thus a given goal (= vision) can inspire a ‘construction process’, which is typical for all kinds of engineering and creative thinking. Within the oksimo paradigm the general format of an expression describing a change is assumed as follows:
With regard to a given situation S
Check whether a certain set of expressions COND is a subset of the expressions of S
If this is the case then with probability π:
Remove all expressions of the set Eminus from S,
Add all expressions of the set Eplus to S
and update (compute) all parameters of the set Model
In a short format:
S’π = S – Eminus + Eplus & MODEL(S)
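The change format just described can be sketched as follows. This is a minimal illustration, not the oksimo implementation: a situation S is represented as a set of expressions and a change rule as a tuple (COND, Eminus, Eplus, π, model update); the example rule and all expressions are invented.

```python
# Minimal sketch of applying one change rule to a situation S.
import random

def apply_rule(S, model, rule):
    """Return the successor state S' and the updated model for one change rule."""
    cond, e_minus, e_plus, pi, update_model = rule
    if not cond.issubset(S):
        return S, model                     # rule not applicable
    if random.random() > pi:
        return S, model                     # applicable, but not fired this time
    S_next = (S - e_minus) | e_plus         # S' = S - Eminus + Eplus
    return S_next, update_model(model, S_next)

# Example rule: "if it is raining and the window is open, the floor gets wet"
rule = (
    {"it is raining", "the window is open"},                         # COND
    {"the floor is dry"},                                            # Eminus
    {"the floor is wet"},                                            # Eplus
    0.9,                                                             # pi
    lambda m, s: {**m, "wet_objects": m.get("wet_objects", 0) + 1},  # Model update
)

S = {"it is raining", "the window is open", "the floor is dry"}
print(apply_rule(S, {}, rule))
```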
All change rules together represent the set X. In the general theory paradigm the change rules X represent the inference rules, which together with a general ‘inference concept’ ⊢X constitute the ‘logic’ of the theory. This enables the following general logical relation:
{S_U, S_A} ⊢X <S_A, S1, S2, …, Sn>
with the continuous evaluation: |S_V ⊆ Si| > θ. During the whole construction it is possible to evaluate for each individual state Si whether, and to which degree, the expressions of the vision state S_V are part of it.
Such a logical deduction concept is called a ‘simulation’; a ‘simulator’ is used to repeat the individual deductions.
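Putting the pieces together, a minimal, self-contained sketch of such a ‘simulator’ could look like this: repeated application of the change rules X to the start state S_A, with a continuous evaluation of how far the vision S_V is contained in each state Si. The degree measure |S_V ∩ Si| / |S_V| and the threshold θ are one possible reading of the evaluation mentioned above; they, as well as the example states and rules, are assumptions of this sketch.

```python
# Minimal sketch of a simulator: iterate the change rules X over the start
# state S_A and evaluate in each step how far the vision S_V is contained
# in the current state Si.

def degree(S_V, S_i):
    """Fraction of the vision expressions already contained in the state S_i."""
    return len(S_V & S_i) / len(S_V) if S_V else 1.0

def simulate(S_A, S_V, rules, steps=10, theta=1.0):
    S = set(S_A)
    trace = [S]
    for _ in range(steps):
        for cond, e_minus, e_plus in rules:      # deterministic variant: pi = 1
            if cond.issubset(S):
                S = (S - e_minus) | e_plus
        trace.append(S)
        if degree(S_V, S) >= theta:              # vision reached (or exceeded)
            break
    return trace, degree(S_V, S)

S_A   = {"field is empty"}
S_V   = {"house is built"}
rules = [({"field is empty"}, {"field is empty"}, {"foundation is laid"}),
         ({"foundation is laid"}, set(), {"house is built"})]

trace, d = simulate(S_A, S_V, rules)
print(d, trace)
```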
POSSIBLE EXTENSIONS
The above outlined oksimo theory paradigm can easily be extended by some more features:
AUTONOMOUS ACTORS: The change rules X so far are ‘static’ rules. But we know from everyday life that there are many dynamic sources around which can cause change, especially biological and non-biological actors. Every such actor can be understood as an input-output system with an adaptive ‘behavior function’ φ. Such behavior cannot be modeled by ‘static’ rules alone. Therefore one can either define theoretical models of such ‘autonomous’ actors with their behavior and enlarge the set of change rules X with ‘autonomous change rules’ Xa, with Xa ⊆ X (see the sketch after this list). The other variant is to include in real time ‘living autonomous’ actors as ‘players’ having the role of an ‘autonomous’ rule and being enabled to act according to their ‘will’.
MACHINE INTELLIGENCE: Running a simulation will always give only ‘one path’ P in the space of possible states. Usually there are many more paths which can lead to a goal state S_V, and the accompanying parameters from Model can differ: more or less energy consumption, more or less financial losses, more or less time needed, etc. To improve the knowledge about the ‘good candidates’ in the possible state space one can introduce general machine intelligence algorithms to evaluate the state space and make proposals.
REAL-TIME PARAMETERS: The parameters of Model can be connected online with real measurements in near real time. This would allow the collected knowledge to be used to ‘monitor’ real processes in the world and, based on this knowledge, to recommend actions in response to certain states.
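As an illustration of the first extension (autonomous actors), the following sketch treats an actor as an input-output system with a behavior function φ that inspects the current state and returns its own (Eminus, Eplus) changes, applied alongside the static rules. The concrete actor and all expressions are invented for this example and do not describe the oksimo software itself.

```python
# Minimal sketch of 'autonomous change rules' Xa realized by behavior
# functions phi, applied in one simulation step alongside static rules X.

def phi_gardener(S):
    """Adaptive behavior function of a (simulated) autonomous actor."""
    if "plants are dry" in S:
        return {"plants are dry"}, {"plants are watered"}   # (Eminus, Eplus)
    return set(), set()

def step(S, static_rules, actors):
    # static change rules X
    for cond, e_minus, e_plus in static_rules:
        if cond.issubset(S):
            S = (S - e_minus) | e_plus
    # autonomous change rules Xa, realized by behavior functions phi
    for behavior in actors:
        e_minus, e_plus = behavior(S)
        S = (S - e_minus) | e_plus
    return S

S = {"sun is shining", "plants are dry"}
static_rules = [({"sun is shining"}, set(), {"soil is warm"})]
print(step(S, static_rules, [phi_gardener]))
```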
COMMENTS
[1] The 2030 Agenda for Sustainable Development, adopted by all United Nations Member States in 2015, provides a shared blueprint for peace and prosperity for people and the planet, now and into the future. At its heart are the 17 Sustainable Development Goals (SDGs), which are an urgent call for action by all countries – developed and developing – in a global partnership. They recognize that ending poverty and other deprivations must go hand-in-hand with strategies that improve health and education, reduce inequality, and spur economic growth – all while tackling climate change and working to preserve our oceans and forests. See PDF: https://sdgs.un.org/sites/default/files/publication/21252030%20Agenda%20for%20Sustainable%20Development%20web.pdf
[2] UN, SDG4, PDF, argumentation why SDG4 is fundamental for all other SDGs: https://sdgs.un.org/sites/default/files/publications/2275sdbeginswitheducation.pdf
Integrating Engineering and the Human Factor (info@uffmm.org) eJournal uffmm.org ISSN 2567-6458, February 27-March 16, 2021,
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de
Before one starts the HMI analysis, some stakeholders (in our case the users are stakeholders as well as users in one role) have to present some given situation, classifiable as a ‘problem’, to depart from, and a vision as the envisioned goal to be realized.
Here we give a short description of the problem for the CM:MI paradigm and the vision, what should be gained.
Problem: Mankind on the Planet Earth
In this project, mankind on the planet earth is understood as the primary problem. ‘Mankind’ is seen here as the life form called homo sapiens. Based on the findings of biological evolution one can state that homo sapiens has, besides many other wonderful capabilities, at least two extraordinary capabilities:
Outside to Inside
The whole body with the brain is able to continuously convert body-external events into internal, neural events. And the brain inside the body receives many events inside the body as external events too. Thus in the brain we can observe a mix of body-external (outside 1) and body-internal events (outside 2), realized as a set of billions of highly interrelated neural processes. Most of these neural processes are unconscious, a small part is conscious. Nevertheless these unconscious and conscious events are neurally interrelated. This overall conversion from outside 1 and outside 2 into neural processes can be seen as a mapping. As we know today from biology, psychology, and the brain sciences, this mapping is not a 1-1 mapping. The brain does a kind of filtering all the time, mostly unconsciously, keeping only those events which are judged by the brain to be important. Furthermore the brain is time-slicing all its sensory inputs and storing these time-slices (called ‘memories’), whereby these time-slices again are no 1-1 copies. The storing of time-slices is a complex (unconscious) process with many kinds of operations like structuring, associating, abstracting, evaluating, and more. From this one can deduce that the content of an individual brain and the surrounding reality of the own body, as well as the world outside the own body, can be highly different. All kinds of perceived and stored neural events which can be or can become conscious are here called conscious cognitive substrates or cognitive objects.
Inside to Outside (to Inside)
Generally it is known that homo sapiens can produce with its body events which have some impact on the world outside the body. One kind of such events is the production of all kinds of movements, including gestures, running, grasping with hands, painting, and writing, as well as sounds produced by the voice. Of special interest here are forms of communication between different humans, and even more specifically those communications enabled by the spoken sounds of a language as well as the written signs of a language. Spoken sounds as well as written signs are here called expressions associated with a known language. Expressions as such have no meaning (a non-speaker of a language L can hear or see expressions of the language L but he/she/x will never understand anything). But as everyday experience shows, nearly every child starts very soon to learn which kinds of expressions belong to a language and with which kinds of shared experiences they can be associated. This learning is related to many complex neural processes which map expressions internally onto conscious and unconscious cognitive objects (including expressions!). This mapping builds up an internal meaning function from expressions into cognitive objects and vice versa. Because expressions have a dual face (being internal neural structures as well as body-outside events produced by conversions from the inside to the body-outside) it is possible that a homo sapiens can transmit its internal encoding of cognitive objects into expressions from its inside to the outside, and thereby another homo sapiens can perceive the produced outside expression and map this outside expression onto an internal expression. As far as the meaning function of the receiving homo sapiens is sufficiently similar to the meaning function of the sending homo sapiens, there exists some probability that the receiving homo sapiens can activate from its memory cognitive objects which have some similarity with those of the sending homo sapiens.
Although we know today of different kinds of animals having some form of language, there is no known species which is comparable to homo sapiens with regard to language. This explains to a large extent why the homo sapiens population was able to cooperate in a way which not only can include many persons but also can stretch through long periods of time and can include highly complex cognitive objects and associated behavior.
Negative Complexity
In 2006 I introduced the term negative complexity in my writings to describe the fact that in the world surrounding an individual person there is an amount of language-encoded meaning available which is beyond the capacity of an individual brain to process. Thus, whatever kind of experience or knowledge is accumulated in libraries and databases, the higher the negative complexity grows, the less this knowledge can help individual persons, whole groups, or whole populations to make constructive use of it. What happens is that the intended, well-structured ‘sound’ of knowledge is turned into a noisy environment which crashes all kinds of intended structures into nothing or into badly deformed somethings.
Entangled Humans
From quantum mechanics we know the idea of entangled states. But we need not dig into quantum mechanics to find other phenomena which manifest entangled states. Look around in your everyday world. There exist many occasions where a human person is acting in a situation, but the bodily separateness is a fake. While sitting before a laptop in a room the person is communicating within an online session with other persons. And depending on the social role, the membership in some social institution, and being part of some project, this person will talk, perceive, feel, decide etc. with regard to the known rules of these social environments, which are represented as cognitive objects in its brain. Thus by knowledge, by cognition, the individual person is in its situation completely entangled with other persons who know these roles and rules and thereby also follow these rules in their behavior. Sitting with the body in a certain physical location somewhere on the planet does not matter in this moment. The primary reality is the cognitive space in the brains of the participating persons.
If you continue looking around in your everyday world you will probably detect that the everyday world is full of different kinds of cognitively induced entangled states of persons. These internalized structures function like protocols, like scripts, like rules in a game, telling everybody what is expected from him/her/x, and to the extent that people adhere to such internalized protocols, daily life has some structure, has some stability, and enables the planning of behavior where cooperation between different persons is necessary. In a cognitively enabled entangled state the individual person becomes a member of something greater, becoming a super person. Entangled persons can do things which usually are not possible as long as one works as a purely individual person.[1]
Entangled Humans and Negative Complexity
Although entangled human persons can in principle enable more complex events, structures, processes, engineering, and cultural work than single persons, human entanglement is still limited by the brain capacities as well as by the limits of normal communication. Increasing the amount of meaning-relevant artifacts or increasing the velocity of communication events makes things even worse. There are objective limits for human processing, which can run into negative complexity.
Future is not Waiting
The term ‘future‘ is cognitively empty: there exists nowhere an object which can be called ‘future’. What we have is some local actual presence (the Now), which the body is turning into internal representations of some kind (becoming the Past), but something like a future does not exist, nowhere. Our knowledge about the future is radically zero.
Nevertheless, because our bodies are part of a physical world (planet, solar system, …) and our entangled scientific work has identified some regularities of this physical world, these regularities can be used for some predictions of what could happen, with some probability, as assumed states at points where our clocks show a different time stamp. But because there are many processes running in parallel, composed of billions of parameters which can be tuned in many directions, a really good forecast is not simple and depends on many presuppositions.
Since the appearance of homo sapiens some hundred thousand years ago in Africa, homo sapiens has become a game changer which makes all such computations nearly impossible. Not at the beginning of its appearance, but in the course of time, homo sapiens increased its numbers, improved its skills in more and more areas, and meanwhile we know that homo sapiens has indeed started to crash more and more the conditions of its own life. And principled thinking points out that homo sapiens could even crash more than only planet earth. Every exemplar of homo sapiens has a built-in freedom which allows it at any time to decide to behave in a different way (although in everyday life we are mostly following some protocols). And this built-in freedom is guided by actual knowledge, by emotions, and by available resources. The same child can become a great musician, a great mathematician, a philosopher, a great political leader, an engineer, … but giving the child no resources, depriving it of important social contexts, giving it the wrong knowledge, it cannot manifest its freedom in full richness. As a human population we need the best out of all children.
Because the processes of the planet, the solar system etc. keep going on, we are in need of good forecasts of possible futures, beyond our classical concepts of sharing knowledge. This is where our vision enters.
VISION: DEVELOPING TOGETHER POSSIBLE FUTURES
To find possible and reliable shapes of possible futures we have to exploit all experiences, all knowledge, all ideas, all kinds of creativity by using maximal diversity. Because present knowledge can be false, as history tells us, we should not rule out all those ideas which seem too crazy at first glance. Real innovations are always different from what we are used to at the time. Thus the following text is a first rough outline of the vision:
Find a format
which allows any kinds of people
for any kind of given problem
with at least one vision of a possible improvement
together
to search and to find a path leading from the given problem (Now) to the envisioned improved state (future).
For all needed communication any kind of everyday language should be enough.
As needed this everyday language should be extendable with special expressions.
These considerations about possible paths into the wanted envisioned future state should continuously be supported by appropriate automatic simulations of such a path.
These simulations should include automatic evaluations based on the given envisioned state.
As far as possible adaptive algorithms should be available to support the search, finding and identification of the best cases (referenced by the visions) within human planning.
REFERENCES or COMMENTS
[1] One of the most common entangled states in daily life is the usage of normal language! A normal language L works only because the rules of usage of this language L are shared by all speaker-hearers of this language, and these rules are explicit cognitive structures (not necessarily conscious, mostly unconscious!).
In this section several case studies will be presented. It will be shown how the DAAI paradigm can be applied to many different contexts. Since the original version of the DAAI theory of Jan 18, 2020 the concept has been developed further, centering around the concept of a Collective Man-Machine Intelligence [CM:MI], to address any kinds of experts for any kind of simulation-based development, testing, and gaming. Additionally the concept can now be associated with any kind of embedded algorithmic intelligence [EAI] (different from the mainstream concept ‘artificial intelligence’). The new concept can be used with every normal language; no need for any special programming language! Go back to the overall framework.
COLLECTION OF PAPERS
There exists only a loose order between the different papers due to the character of this elaboration process: generally this is an experimental philosophical process. HMI Analysis applied for the CM:MI paradigm.
FROM DAAI to GCA. Turning Engineering into Generative Cultural Anthropology. This paper gives an outline how one can map the DAAI paradigm directly into the GCA paradigm (April-19,2020): case1-daai-gca-v1
A first GCA open research project [GCA-OR No.1]. This paper outlines a first open research project using the GCA. This will be the framework for the first implementations (May-5, 2020): GCAOR-v0-1
Engineering and Society. A Case Study for the DAAI Paradigm – Introduction. This paper illustrates important aspects of a cultural process looking to the acting actors where certain groups of people (experts of different kinds) can realize the generation, the exploration, and the testing of dynamical models as part of a surrounding society. Engineering is clearly not separated from society (April-9, 2020): case1-population-start-part0-v1
Bootstrapping some Citizens. This paper clarifies the set of general assumptions which can and which should be presupposed for every kind of a real world dynamical model (April-4, 2020): case1-population-start-v1-1
Hybrid Simulation Game Environment [HSGE]. This paper outlines the simulation environment by combining a usual web-conference tool with an interactive web page of our own (23 May 2020): HSGE-v2 (May-5, 2020): HSGE-v0-1
The Observer-World Framework. This paper describes the foundations of any kind of observer-based modeling or theory construction.(July 16, 2020)
This suggests that a symbiosis between creative humans and computing algorithms is an attractive pairing. For this we have to re-invent our official learning processes in schools and universities to train the next generation of humans in a more inspired and creative usage of algorithms in game-like learning processes.
CONTEXT
The overall context is given by the description of the Actor-Actor Interaction (AAI) paradigm as a whole. In this text the special relationship between engineering and the surrounding society is in the focus. And within this very broad and rich relationship the main interest lies in the ethical dimension here understood as those preferences of a society which are more supported than others. It is assumed that such preferences manifesting themselves in real actions within a space of many other options are pointing to hidden values which guide the decisions of the members of a society. Thus values are hypothetical constructs based on observable actions within a cognitively assumed space of possible alternatives. These cognitively represented possibilities are usually only given in a mixture of explicitly stated symbolic statements and different unconscious factors which are influencing the decisions which are causing the observable actions.
These assumptions do not yet represent a common opinion and are not condensed in some theoretical text. Nevertheless I am using these assumptions here because they help to shed some light on the rather complex process of finding a real solution to a stated problem which is rooted in the cognitive space of the participants of the engineering process. Working with these assumptions in concrete development processes can support a further clarification of all these concepts.
ENGINEERING AND SOCIETY
DUAL: REAL AND COGNITIVE
As assumed in the AAI paradigm the engineering process is that process which connects the event of stating a problem, combined with a first vision of a solution, with a final concrete working solution.
The main characteristic of such an engineering process is the dual character of a continuous interaction between the cognitive space of all participants of the process and real-world objects, actions, and processes. The real world as such is a loose collection of real things, to some extent connected by regularities inherent in natural things; but the visions of possible states, possible different connections, and possible new processes are bound to the cognitive space of biological actors, especially to humans as exemplars of the homo sapiens species.
Thus it is a major factor of training, learning, and education in general to see how the real world can be mapped into some cognitive structures, how the cognitive structures can be transformed by cognitive operations into new structures, and how these new cognitive structures can be re-mapped into the real world of bodies.
Within the cognitive dimension there exist nearly infinitely many possible alternatives, which all indicate possible states of a world whose feasibility is more or less convincing. Limited by time and resources it is usually not possible to explore for all these cognitively tapped spaces whether and how they work, what possible side effects they have, etc.
PREFERENCES
Somehow by nature, somehow by past experience, biological systems like homo sapiens have developed cultural procedures to induce preferences about how possible options are selected: which one should be selected, under which circumstances, and with which further constraints. In some situations these preferences can be helpful, in others they can hide possibilities which afterwards can be re-detected as being very valuable.
Thus every engineering process which starts a transformation process from some cognitively given point of view to a new cognitive point of view, with a following translation into some real thing, is sharing its cognitive space with possible preferences of the cognitive space of the surrounding society.
It is an open question whether the engineers as the experts have an experimental, creative attitude to explore without dogmatic constraints the possible cognitive spaces to find new solutions which can improve life, or not. If one assumes that there exist no absolute preferences, on account of the substantially limited knowledge of mankind at every point in time, and if one infers from this fact the necessity to extend the actual knowledge further to enable the mastering of an open, unknown future, then the engineers will try to explore seriously all possibilities without constraints, to extend the power of engineering deeper into the heart of the known as well as the unknown universe.
EXPLORING COGNITIVE POSSIBILITIES
At the start one has only a rough description of the problem and a rough vision of a wanted solution which gives some direction for the search for an optimal solution. This direction also represents a kind of preference regarding what is wanted as the outcome of the process.
On account of the inherent duality of human thinking and communication, embracing the cognitive space as well as the realm of real things, which are connected by complex mappings realized by the brain operating nearly completely unconsciously, a long process of concrete real and cognitive actions is necessary to materialize cognitive realities within a communication process. Main modes of materialization are the usage of symbolic languages, paintings (diagrams), physical models, algorithms for computation and simulations, and especially gaming (in several different modes).
As everybody knows, these communication processes are not simple, can be a source of confusion, and the coordination of different brains with different cognitive spaces as well as different layouts of unconscious factors is a difficult and very demanding endeavor.
The communication mode gaming is of a special interest here because it is one of the oldest and most natural modes to learn but in the official education processes in schools and universities (and in companies) it was until recently not part of the official curricula. But it is the only mode where one can exercise the dimensions of preferences explicitly in combination with an exploring process and — if one wants — with the explicit social dimension of having more than one brain involved.
In the last roughly 50-100 years the term project has gained more and more acceptance, and indeed the organization of projects resembles a game, but it is usually handled as a hierarchical, constraint-driven process where creativity and concurrent developing (= gaming) are not a main topic. Even if companies allow concurrent development teams, these teams are cognitively separated and the implicit cognitive structures are black boxes which cannot be evaluated as such.
In the AAI paradigm presupposed here, the open creative space has a high priority in order to increase the chance for innovation. Innovation is the most valuable property in the face of an unknown future!
While the open space for real creativity has to be exercised in all the mentioned modes of communication, the final gaming mode is of special importance. To enable a gaming process one has to define explicit win-lose states. This objectifies values and preferences that were previously hidden in the cognitive space. Such an objectification makes things transparent, enables more rationality, and allows the defined win-lose states to be tested explicitly for feasibility. Only tested hypotheses represent tested empirical knowledge. And because in a gaming mode whole groups or even all members of a social network can participate in a learning process about the functioning and possible outcome of a presented solution, everybody can be included. This implies a common sharing of experience and knowledge, which greatly simplifies the communication and therefore the coordination of the different brains with their unconscious parts.
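To make the idea of explicit win-lose states a bit more concrete, here is a minimal sketch in Python. It is not part of the AAI theory published so far; all names (GameState, win_lose, simulate_round) and all thresholds are hypothetical, and the point is only to show how a formerly hidden preference can be turned into a testable win condition over many simulated plays.

```python
# A minimal sketch (hypothetical names and thresholds): making a hidden
# preference explicit as a testable win-lose state of a simulated game.
from dataclasses import dataclass
import random

@dataclass
class GameState:
    tasks_solved: int      # sub-tasks completed by the players
    errors: int            # errors committed during play
    rounds_played: int

def win_lose(state: GameState) -> bool:
    """Explicit win condition: the formerly implicit preference
    'solve most tasks with few errors' made objective and testable."""
    return state.tasks_solved >= 8 and state.errors <= 2

def simulate_round(state: GameState) -> GameState:
    """Very crude stand-in for one round of play by a group of actors."""
    solved = random.random() < 0.7          # assumed success probability
    return GameState(
        tasks_solved=state.tasks_solved + int(solved),
        errors=state.errors + int(not solved),
        rounds_played=state.rounds_played + 1,
    )

# Run many games: the win-lose definition can now be tested empirically.
wins = 0
for _ in range(1000):
    s = GameState(0, 0, 0)
    for _ in range(10):
        s = simulate_round(s)
    wins += win_lose(s)
print(f"estimated win rate of the proposed solution: {wins / 1000:.2f}")
```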
TESTING AND EVALUATION
Testing a proposed solution is another expression for measuring the solution. Measuring is understood here as a basic comparison between the target to be measured (here the proposed solution) and a previously agreed norm which shall be used as the point of reference for the comparison.
But what can serve as such a previously agreed norm?
Some aspects can be mentioned here:
First of all there is the proposed solution as such, which is here a proposal for a possible assistive actor in an assumed environment for some intended executive actors who have to fulfill some job (task).
Part of this proposed solution are given constraints and non-functional requirements.
Part of this proposed solution are some preferences as win-lose states which have to be reached.
Another factor that is difficult to define are the executive actors if they are biological systems: systems with a built-in ability to act freely, to learn, and with a large unconscious realm that cannot be fully specified.
Given the explicit preferences, constrained by many assumptions, one can only test whether the invited test persons, understood as possible instances of the intended executive actors, are able to fulfill the defined task(s) within some predefined amount of time, within an allowed threshold of errors, with an expected percentage of solved sub-tasks, and with a sufficient subjective satisfaction with the whole process.
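As a hedged illustration of this kind of measurement, the following Python sketch compares observed test runs against a previously agreed norm. The field names and concrete thresholds are invented for the example and are not taken from the AAI theory.

```python
# A minimal sketch (hypothetical names and values) of the measurement
# described above: observed test-person data vs. a previously agreed norm.
from dataclasses import dataclass

@dataclass
class Norm:                      # the previously agreed point of reference
    max_seconds: float           # allowed time per task
    max_error_rate: float        # allowed fraction of erroneous actions
    min_solved_fraction: float   # required fraction of solved sub-tasks
    min_satisfaction: float      # required mean satisfaction (scale 1-5)

@dataclass
class TestRun:                   # observed behavior of one test person
    seconds: float
    error_rate: float
    solved_fraction: float
    satisfaction: float

def passes(run: TestRun, norm: Norm) -> bool:
    """Measurement = comparison of the observed run with the agreed norm."""
    return (run.seconds <= norm.max_seconds
            and run.error_rate <= norm.max_error_rate
            and run.solved_fraction >= norm.min_solved_fraction
            and run.satisfaction >= norm.min_satisfaction)

norm = Norm(max_seconds=120, max_error_rate=0.1,
            min_solved_fraction=0.8, min_satisfaction=3.5)
runs = [TestRun(95, 0.05, 0.9, 4.2), TestRun(140, 0.2, 0.7, 2.8)]
rate = sum(passes(r, norm) for r in runs) / len(runs)
print(f"fraction of test persons fulfilling the norm: {rate:.2f}")
```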
But because biological executive actors are learning systems, they will behave differently in repeated tests; they can furthermore change their motivations and their interests, they can change their emotional commitment, and because of their built-in basic freedom to act there can be no 100% probability that they will act at time t as they have acted all the time before.
Thus for all kinds of jobs where the process is more or less fixed, where nothing new will happen, the participation of biological executive actors is questionable. It seems (hypothesis) that biological executive actors are better placed in jobs where there is some minimal rate of curiosity, of innovation, and of creativity combined with learning.
If this hypothesis is empirically sound (as it seems to be), then all jobs where human persons are involved should have more the character of games than of anything else.
It is an interesting side note that current research in robotics under the label of developmental robotics is struck by the problem of how one can make robots learn continuously by following interesting preferences. Given a preference, an algorithm can, under certain circumstances, often work better than a human person at finding an optimal solution, but lacking such a preference the algorithm is lost. And at present there exists not the faintest idea how algorithms should acquire the kind of preferences which are interesting and important for an unknown future.
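The dependence of an algorithm on an explicitly given preference can be illustrated with a small, purely hypothetical Python sketch: a simple random hill-climbing search can optimize as soon as it is handed an objective function, but without such a function it has nothing to compare candidates against.

```python
# A minimal sketch, not tied to any concrete robotics system: the objective
# function below plays the role of an explicitly given preference.
import random

def preference(x: float) -> float:
    """Explicit preference handed to the algorithm: higher is better."""
    return -(x - 3.0) ** 2        # best value at x = 3 (arbitrary choice)

def hill_climb(objective, x: float = 0.0, steps: int = 1000) -> float:
    """Random hill climbing: keeps a candidate only if the preference says so."""
    for _ in range(steps):
        candidate = x + random.uniform(-0.1, 0.1)
        if objective(candidate) > objective(x):   # comparison via preference
            x = candidate
    return x

print(f"found optimum near x = {hill_climb(preference):.2f}")
# Remove 'preference' and the loop cannot even compare two candidates:
# this is the sense in which the algorithm is 'lost' without a preference.
```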
Humans, on the contrary, are known to be creative, innovative, and able to detect new preferences, but they have only limited capacities to explore these creative findings up to some telling endpoint.
This suggests that a symbiosis between creative humans and computing algorithms is an attractive pairing. For this we have to re-invent our official learning processes in schools and universities to train the next generation of humans in a more inspired and creative usage of algorithms within game-like learning processes.
In this blog a new approach to the old topic of ‘Human-Machine Interaction (HMI)’ is developed, turning the old human-machine dyad into the many-to-many relation of ‘Actor-Actor Interaction (AAI)’. Moreover, in this new AAI approach the classical ‘top-down’ approach of engineering is expanded with a truly ‘bottom-up’ approach, locating the center of development in the distributed knowledge of a population of users assisted by the AAI experts.
PROBLEM
From this perspective it is interesting to see how, on an international level, the citizens of a community or city are not at the center of research; instead the city and its substructures, here public libraries, are called ‘actors’, while the citizens as such are only an anonymous matter driving these structures to serve the international buzzword of a ‘smart city’ empowered by the ‘Internet of Things (IoT)’.
This perspective is put forward in a paper by Shannon Mersand et al. (2019) which reviews the main available papers on the role of public libraries in cities. It seems (I could not check the search space myself) that the paper gives a good overview of this topic in 48 cited papers.
The main idea underlined by the authors is that public libraries are already so-called ‘anchor institutions’ in a community, which either already include, or could be extended into, “spaces for innovation, collaboration and hands on learning that are open to adults and younger children as well”. (p.3312) Or, in another formulation, “that libraries are consciously working to become a third space; a place for learning in multiple domains and that provides resources in the form of both materials and active learning opportunities”. (p.3312)
The paper is rich in details, but in the context of the AAI paradigm I am interested only in the general perspective of how the roles of the actors are described which are identified as responsible for the process of problem solving.
The unofficial problem of a city is how to organize the city so that it responds to the needs of its citizens. There are some ‘official institutions’ which ‘officially’ have to fulfill this job. In democratic societies these institutions are ‘elected’. Ideally these official institutions are the experts which try to solve the problem for the citizens, who are the main stakeholders! To help in this job of organizing the ‘best fitting city layout’ there usually exists, at any point in time, a bunch of infrastructures. The modern ‘Internet of Things (IoT)’ is only one of many possible infrastructures.
To proceed in doing the job of organizing the ‘best fitting city layout’ there are generally two main strategies: ‘top-down’, as usual in most cities, or ‘bottom-up’, in nearly no cities.
In the top-down approach the experts organize the processes of the cities more or less on their own. They do not really include the expertise of their citizens, nor their knowledge, nor their desires and visions. The infrastructures are provided from a bird’s-eye perspective and an abstract systems thinking.
The case of the public libraries matches this top-down paradigm. At the end of their paper the authors classify public libraries not only as some ‘infrastructure’ but “… recognize the potential of public libraries … and to consider them as a key actor in the governance of the smart community”. (p.3312) The term ‘actor’ is very strong. It turns an institution into an actor with some autonomy in deciding what to do. The users of the library, the citizens, the primary stakeholders of the city, are not seen as actors; they are, here, the material to ‘feed’ (to use a picture) the actor library, which in turn has to serve the governance of the ‘smart community’.
DISCUSSION
Yes, this comment can be understood as a bit ‘harsh’, because one can read the text of the authors a bit differently, in the sense that the citizens are not only some matter to ‘feed’ the actor library; one can instead see the public library as an ‘environment’ for the citizens, who find in the libraries many possibilities to learn and to empower themselves. In this different reading the citizens are clearly seen as actors too.
This different reading is possible, but within an overall ‘top-down’ approach the citizens as actors are not really included as actors, only as passive receivers of infrastructure offers; in a top-down approach the main focus is the infrastructures, and of all the infrastructures the ‘smart’ ones are the most prominent, the Internet of Things.
If one remembers the two previous papers of Mila Gascó (2016) and Mila Gascó-Hernandez (2018), then this is a bit astonishing, because in these earlier papers she analyzed that the ‘failure’ of the smart technology strategy in Barcelona was due to the fact that the city government (the experts in our framework) did not sufficiently include the citizens as actors!
From the point of view of the AAI paradigm this ‘hiding of the citizens as main actors’ is simply due to the inadequate methodology of a top-down approach where a truly bottom-up approach is needed.
In the Oct-2, 2018 version of the AAI theory the bottom-up approach is not yet included. It has been worked out in the context of the new research project about ‘City Planning and eGaming‘ which in turn has been inspired by Mila Gascó-Hernandez!
REFERENCES
S. Mersand, M. Gascó-Hernandez, H. Udoh, and J.R. Gil-Garcia, “Public libraries as anchor institutions in smart communities: Current practices and future development”, Proceedings of the 52nd Hawaii International Conference on System Sciences, pages 3305–3314, 2019. URL: https://hdl.handle.net/10125/59766
Mila Gascó, “What makes a city smart? Lessons from Barcelona”, 2016 49th Hawaii International Conference on System Sciences (HICSS), pages 2983–2989, Jan 2016. DOI: 10.1109/HICSS.2016.373
Mila Gascó-Hernandez, “Building a smart city: Lessons from Barcelona”, Commun. ACM, 61(4):50–57, March 2018. ISSN 0001-0782. DOI: 10.1145/3117800. URL: http://doi.acm.org/10.1145/3117800
The online-book project published on the uffmm.org website has to be seen within a bigger idea which can be named ‘The better world project’.
As outlined in the figure above, the AAIwSE theory is the nucleus of a project which intends to enable a global learning space connecting individual persons as well as schools, universities, cities and companies, and even more if wanted.
There are other ideas around using the concept ‘better world’, but these other concepts are targeting other subjects. In the view taken here, the engineering perspective lays the ground for building new, more effective systems to enhance all aspects of life.
As one can already detect in the AAIwSE theory published so far, there exists a new and enlarged vision of the acting persons, the engineers, as the great artists of the real world. Taking this view seriously, there will also be a need for a new kind of spirituality which enables the acting persons to do all this with a vital interest in the future of life in the universe.
At present the following websites are directly involved in the ‘Better World Project’ idea: this site ‘uffmm.org’ (in English) and, in German, ‘cognitiveagent.org‘ and ‘Kommunalpolitik & eGaming‘. The last link points to an official project of the Frankfurt University of Applied Sciences (FRA-UAS) which will apply the AAI methods to all communities in Germany (about 11,000).
RESTART OF UFFMM AS SCIENTIFIC WORKPLACE.
For the Integrated Engineering of the Future (SW4IEF)
Campaigning for the Actor-Actor Systems Engineering (AASE) paradigm
Last Update June-22, 2018, 15:32 CET. See below: Case Studies — Templates – AASE Micro Edition – and Scheduling 2018 —
RESTART
This is a completely new restart of the old uffmm site. It is intended as a working place for those people who are interested in an integrated engineering of the future.
SYSTEMS ENGINEERING
A widely known and useful concept for a general approach to the engineering of problems is systems engineering (SE).
Open to nearly every kind of possible problem, a systems engineering process (SEP) organizes how to analyze the problem and how to turn this analysis into a possible design for a solution. This proposed solution is examined against important criteria and, if it reaches an optimal version, it is implemented as a real working system. After final evaluations this solution starts its career in the real world.
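As a rough illustration only (the phases and names below are simplified assumptions, not a normative SEP definition), the stages just mentioned can be chained in a short Python sketch: analysis, design, evaluation against criteria, and implementation.

```python
# A minimal, purely illustrative sketch of the SEP stages described above.
def analyze(problem: str) -> dict:
    """Turn a rough problem description into an analysis with requirements."""
    return {"problem": problem, "requirements": ["req-1", "req-2"]}

def design(analysis: dict) -> dict:
    """Turn the analysis into a proposed solution design."""
    return {"solution": f"design for {analysis['problem']}",
            "requirements": analysis["requirements"]}

def evaluate(proposal: dict) -> bool:
    """Examine the proposed solution against the important criteria."""
    return len(proposal["requirements"]) > 0   # placeholder criterion

def implement(proposal: dict) -> str:
    """Implement the accepted design as a real working system."""
    return f"working system based on: {proposal['solution']}"

problem = "organize assistive support for task X"   # placeholder problem
proposal = design(analyze(problem))
if evaluate(proposal):
    print(implement(proposal))   # the solution starts its career in the real world
```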
PHILOSOPHY OF SCIENCE
From a meta-scientific point of view, the systems engineering process can itself become the object of an analysis. This is usually done by a discipline called philosophy of science (PoS). Philosophy of science asks, e.g., what the ‘ingredients’ of a systems engineering process are, how these ingredients interact, how such a process can ‘fail’, and how such a process can be optimized. A philosophy of science perspective can therefore help to make a systems engineering process more transparent and thereby support an optimization of these processes.
A core idea of the philosophy of science perspective followed in this text is the assumption that a systems engineering process is primarily based on different kinds of actors (AC) whose interactions enable and direct the whole process. This assumption remains valid when the actors are no longer only biological systems like human persons and non-biological systems called machines, but also when the traditional machines (M) are increasingly replaced by ‘intelligent machines (IM)‘. Therefore the well-known paradigm of human-machine interaction (HMI), earlier ‘human-computer interaction (HCI)’, is replaced in this text by the new paradigm of Actor-Actor Interaction (AAI). In this new version the main perspective is not the difference between man on one side and machines on the other, but the kinds of interactions between actors of all kinds which are necessary and possible.
INTELLIGENT MACHINES
The concept of intelligent machines (IM) is understood here as a special case of the general actor (A) concept, which includes as other sub-cases biological systems, predominantly humans as instantiations of the species Homo sapiens. While until today the questions of biological intelligence and machine intelligence have usually been treated separately and differently, it is intended in this text to use one general concept of intelligence for all actors. This then allows more direct comparisons and evaluations. Whether biological actors are in some sense better than non-biological actors, or vice versa, can only seriously be discussed when the concept of intelligence used is the same.
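A minimal sketch, again with purely hypothetical class names, of how the general actor concept can treat biological and non-biological actors uniformly: both expose only an observable act method, which is all that will later be recorded in an actor story.

```python
# A minimal sketch (hypothetical names): one actor concept for humans
# and intelligent machines; only observable input/output counts.
from abc import ABC, abstractmethod

class Actor(ABC):
    """Any participant in an actor-actor interaction, biological or not."""
    @abstractmethod
    def act(self, observation: str) -> str:
        ...

class HumanActor(Actor):
    """Stand-in for a biological actor: a person decides interactively."""
    def act(self, observation: str) -> str:
        return input(f"[{observation}] your action: ")

class MachineActor(Actor):
    """Stand-in for an intelligent machine: here just a fixed rule."""
    def act(self, observation: str) -> str:
        return f"machine response to '{observation}'"

def interaction_step(sender: Actor, receiver: Actor, message: str) -> str:
    """One actor-actor interaction step: the observable output of one
    actor becomes the observable input of the other."""
    return receiver.act(sender.act(message))

# Usage: any pairing of actors works, because only behavior counts.
print(interaction_step(MachineActor(), MachineActor(), "start"))
```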
ACTOR STORY AND ACTOR MODELS
As will be explained in the following sections, the paradigm of actor-actor interaction uses the two main concepts of the actor story (AS) and the actor model (AM). Actor models are embedded in actor stories. Whether an actor model describes biological or non-biological actors does not matter. Independent of the inner structures of an actor model (which can be completely different), the actor story is always described completely in terms of observable behavior, which is the same for all kinds of actors. (Comment: the major scientific disciplines for the analysis of behavior are biology, psychology, and sociology.)
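The separation between the observable actor story and the embedded actor models can also be sketched as a data structure. This is only an illustrative guess at one possible encoding; ActorStory, ActorModel, and Event are hypothetical names and not part of the published theory.

```python
# A minimal sketch: the actor story records only observable behavior,
# while actor models (whatever their inner structure) are embedded in it.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Event:                  # one observable step in the actor story
    actor_id: str
    observed_input: str
    observed_output: str

@dataclass
class ActorModel:             # inner structure may differ per actor
    actor_id: str
    behave: Callable[[str], str]     # biological or non-biological behavior

@dataclass
class ActorStory:
    models: List[ActorModel]
    events: List[Event] = field(default_factory=list)

    def step(self, actor_id: str, observed_input: str) -> None:
        model = next(m for m in self.models if m.actor_id == actor_id)
        output = model.behave(observed_input)
        # only the observable behavior enters the story:
        self.events.append(Event(actor_id, observed_input, output))

story = ActorStory(models=[
    ActorModel("user-1", lambda s: f"user reaction to {s}"),
    ActorModel("machine-1", lambda s: f"machine reaction to {s}"),
])
story.step("user-1", "start screen")
story.step("machine-1", story.events[-1].observed_output)
print(story.events)
```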
AASE PARADIGM
In analogy to the so-called ‘object-oriented (OO)’ approach in software engineering (SWE), we campaign here for the ‘Actor-Actor (AA) Systems Engineering (SE)’ approach. This takes the systems engineering approach as its base concept and re-works the whole framework from the point of view of the actor-actor paradigm. AASE is seen here as a theory as well as a domain of applications.
To understand the different perspectives of the theory used, it can help to look at the figure ‘AASE-Paradigm Ontologies’. Within the systems engineering process (SEP) we have AAI experts as acting actors. To describe these we need a ‘meta-level’, realized by a ‘philosophy of the actor’. The AAI experts themselves elaborate, within an AAI analysis, an actor story (AS) as a framework for different kinds of intended actors. To describe the inner structures of these intended actors one needs different kinds of ‘actor models’. The domain of actor-model structures overlaps with the domains of ‘machine learning (ML)’ and ‘artificial intelligence (AI)’.
SOFTWARE
What will be described and developed separately from these theoretical considerations is an appropriate software environment which allows the construction of solutions within the AASE approach, including, e.g., the construction of intelligent machines. This software environment is called in this text the emerging-mind lab (EML) and it will be another public blog as well.
THEORY MICRO EDITION & CASE STUDIES
How we proceed
Because the overall framework of the intended integrated theory is too large to write down in one condensed text with all the necessary illustrating examples, we decided in December 2017 to follow a bottom-up approach by primarily writing case studies from different fields. While doing this we can introduce the general theory stepwise by developing a Micro Edition of the Theory in parallel to the case studies. Because the Theory Micro Edition already gained a sufficient minimal completeness in April 2018, we no longer need a separate template for case studies. We will use the Theory Micro Edition as the ‘template’ instead.
To keep the case studies as readable as possible, all needed mathematical concepts and formulas will be explained in a separate appendix section which is shared by all case studies. This allows an evolutionary increase in the formal apparatus used for the integrated theory.
Here you can find the current version of the theory, which will be continuously updated and extended with related topics.
At the end of the text you find a list of ToDos where everybody is invited to collaborate. The main editor is Gerd Doeben-Henisch, who decides whether a proposal fits into the final text or not.
This section describes the main developments in the history from HCI to AAI.
SCHEDULE 2018
The milestone for a first outline in book format was reached on June 22, 2018. The milestone for a first final version is scheduled for October 4, 2018.
Integrating Engineering and the Human Factor (info@uffmm.org) eJournal uffmm.org ISSN 2567-6458