A ‘text’ shall be written that speaks about the world, including all living beings, with ‘humans’ as the authors in the first instance. So far, we know of no case in which animals or plants write texts themselves, expressing their view of life. We only know of humans who write from ‘their human perspective’ about life, animals, and plants. Much can be criticized about this approach. Upon further reflection, one might even realize that ‘humans writing about other humans and themselves’ is not so trivial either. Even humans writing ‘about themselves’ is prone to error; it can go completely ‘awry,’ can be entirely ‘wrong,’ which raises the question of what is ‘true’ or ‘false.’ We should therefore give some thought to how we humans can talk about the world and ourselves in a way that gives us a chance not merely to ‘fantasize,’ but to grasp something that is ‘real,’ something that describes what truly characterizes us as humans, as living beings, as inhabitants of this planet… but then the question pops up again: what is ‘real’? Are we caught in a cycle of questions and answers, where the answers themselves, on closer inspection, turn out to be questions again?
First Steps
Life on Planet Earth
At the start of writing, we assume that there is a ‘Planet Earth’ and on this planet there is something we call ‘life,’ and we humans—belonging to the species Homo sapiens—are part of it.
Language
We also assume that we humans have the ability to communicate with each other using sounds. These sounds, which we use for communication, we call here ‘speech sounds’ to indicate that the totality of sounds for communication forms a ‘system’ which we ultimately call ‘language.’
Meaning
Since we humans on this planet can use completely different sounds for the ‘same objects’ in the same situation, this suggests that the ‘meaning’ of speech sounds is not firmly tied to the speech sounds themselves, but somehow has to do with what happens ‘in our minds.’ Unfortunately, we cannot look ‘into our minds.’ A lot seems to happen there, but this happening in the mind is ‘invisible.’ Nevertheless, in ‘everyday life,’ we experience that we can ‘agree’ with others on whether it is currently ‘raining,’ whether it smells ‘bad,’ or whether there is a trash bin on the sidewalk blocking the way, etc. So somehow the ‘happenings in the minds’ of different people seem to exhibit certain ‘agreements,’ so that not only I see something specific, but the other person does too, and we can even use the same speech sounds for it. And since a program like ChatGPT can translate my German speech sounds into, e.g., English speech sounds, I can see that another person who does not speak German uses, instead of my word ‘Mülltonne,’ the word ‘trash bin’ and then nods in agreement: ‘Yes, there is a trash bin.’ Would that be a case of a ‘true statement’?
Changes and Memories
Since we experience daily how everyday life constantly ‘changes,’ we know that something that just found agreement may no longer find it the next moment because the trash bin is no longer there. We can only notice these changes because we have something called ‘memory’: we can remember that just now at a specific place there was a trash bin, but now it’s not. Or is this memory just an illusion? Can I trust my memory? If now everyone else says there was no trash bin, but I remember there was, what does that mean?
Concrete Body
Yes, and then my body: time and again I need to drink something, eat something, I’m not arbitrarily fast, I need some space, … my body is something very concrete, with all sorts of ‘sensations,’ ‘needs,’ a specific ‘shape,’ … and it changes over time: it grows, it ages, it can become sick, … is it like a ‘machine’?
Galaxies of Cells
Today we know that our human body resembles less a ‘machine’ and more a ‘galaxy of cells.’ Our body has about 37 trillion (37 × 10¹²) body cells, with another roughly 100 trillion bacterial cells in the gut that are vital for our digestive system, and these cells together form the ‘body system.’ The truly incomprehensible thing is that these approximately 140 trillion cells are each completely autonomous living beings, with everything needed for life. And if you know how difficult it is for us humans to maintain cooperation among just five people over a long period, then you can at least begin to appreciate what it means that 140 trillion beings manage to communicate and coordinate their actions every second—over many years, even decades—so that the masterpiece ‘human body’ exists and functions.
Origin as a Question
And since there is no ‘commander’ who constantly tells all the cells what to do, this ‘miracle of the human system’ expands further into the dimension of where the concept comes from that enables this ‘super-galaxy of cells’ to be as they are. How does this work? How did it arise?
Looking Behind Phenomena
In the further course, it will be important to gradually penetrate the ‘surface of everyday phenomena’ starting from everyday life, to make visible those structures that are ‘behind the phenomena,’ those structures that hold everything together and at the same time constantly move, change everything.
Fundamental Dimension of Time
All this implies the phenomenon ‘time’ as a basic category of all reality. Without time, there is also no ‘truth’…
[1] Specialists in brain research will of course raise their hand right away, and will want to say that they can indeed ‘look into the head’ by now, but let’s wait and see what this ‘looking into the head’ entails.
[2] If we assume, for the number of stars in our home galaxy, the Milky Way (estimates range from 100 to 400 billion stars), a value of 200 billion, then our body system would correspond in scope to about 700 galaxies in the format of the Milky Way: 140 × 10¹² cells ÷ 200 × 10⁹ stars per galaxy = 700 galaxies, one cell for one star.
[3] Various disciplines of the natural sciences, certainly evolutionary biology above all, have partially illuminated many aspects of this mega-wonder over the last approx. 150 years. One can marvel at the physical view of our universe, but compared to the super-galaxies of life on Planet Earth, the physical universe seems downright ‘boring’… Don’t worry: ultimately both are interconnected: one explains the other.
Telling Stories
Fragments of Everyday Life—Without Context
We constantly talk about something: the food, the weather, the traffic, shopping prices, daily news, politics, the boss, colleagues, sports events, music, … mostly, these are ‘fragments’ from the larger whole that we call ‘everyday life’. People in one of the many crisis regions on this planet, especially those in natural disasters or even in war…, live concretely in a completely different world, a world of survival and death.
These fragments in the midst of life are concrete, concern us, but they do not tell a story by themselves about where they come from (bombs, rain, heat,…), why they occur, how they are connected with other fragments. The rain that pours down is a single event at a specific place at a specific time. The bridge that must be closed because it is too old does not reveal from itself why this particular bridge, why now, why couldn’t this be ‘foreseen’? The people who are ‘too many’ in a country or also ‘too few’: Why is that? Could this have been foreseen? What can we do? What should we do?
The stream of individual events hits us, more or less powerfully, perhaps even simply as ‘noise’: we are so accustomed to it that we no longer even perceive certain events. But these events as such do not tell a ‘story about themselves’; they just happen, seemingly irresistibly; some say ‘It’s fate’.
Need for Meaning
It is notable that we humans still try to give the whole a ‘meaning’, to seek an ‘explanation’ for why things are the way they are. And everyday life shows that we have a lot of ‘imagination’ concerning possible ‘connections’ or ’causes’. Looking back into the past, we often smile at the various attempts at explanation by our ancestors: as long as nothing was known about the details of our bodies and about life in general, any story was possible. In our time, with science established for about 150 years, there are still many millions of people (possibly billions?) who know nothing about science and are willing to believe almost any story just because another person tells this story convincingly.
Liberation from the Moment through Words
Because of this ability, with the ‘power of imagination’ to pack things one experiences into a ‘story’ that suggests ‘possible connections’, through which events gain a ‘conceptual sense’, a person can try to ‘liberate’ themselves from the apparent ‘absoluteness of the moment’ in a certain way: an event that can be placed into a ‘context’ loses its ‘absoluteness’. Just by this kind of narrative, the experiencing person gains a bit of ‘power’: in narrating a connection, the narrator can make the experience ‘a matter’ over which they can ‘dispose’ as they see fit. This ‘power through the word’ can alleviate the ‘fear’ that an event can trigger. This has permeated the history of humanity from the beginning, as far as archaeological evidence allows.
Perhaps it is not wrong to first identify humans not as ‘hunters and gatherers’ or as ‘farmers’ but as ‘those who tell stories’.
[1] Such a magic word in Greek philosophy was the concept of ‘breath’ (Greek “pneuma”). The breath not only characterized the individually living but was also generalized to a life principle of everything that connected both body, soul, and spirit as well as permeated the entire universe. In the light of today’s knowledge, this ‘explanation’ could no longer be told, but about 2300 years ago, this belief was a certain ‘intellectual standard’ among all intellectuals, the prevailing ‘worldview’; it was ‘believed’. Anyone who thought differently was outside this ‘language game’.
Organization of an Order
Thinking Creates Relationships
As soon as one can ‘name’ individual events, things, processes, properties of things, and more through ‘language’, it is evident that humans have the ability to not only ‘name’ using language but to embed the ‘named’ through ‘arrangement of words in linguistic expression’ into ‘conceived relationships’, thereby connecting the individually named items not in isolation but in thought with others. This fundamental human ability to ‘think relationships in one’s mind’, which cannot be ‘seen’ but can indeed be ‘thought’ [1], is of course not limited to single events or a single relationship. Ultimately, we humans can make ‘everything’ a subject, and we can ‘think’ any ‘possible relationship’ in our minds; there are no fundamental restrictions here.
Stories as a Natural Force
Not only history but also our present day is full of examples. Today, despite the incredible successes of modern science, the wildest stories with ‘purely thought relationships’ are being told almost everywhere and immediately believed through all channels worldwide, which should give us pause. Our fundamental characteristic, that we can tell stories to break the absoluteness of the moment, obviously has the character of a ‘natural force,’ deeply rooted within us, that we cannot ‘eradicate’; we might be able to ‘tame’ it, perhaps ‘cultivate’ it, but we cannot stop it. It is an ‘elemental characteristic’ of our thinking, that is: of our brain in the body.
Thought and Verified
The experience that we, the storytellers, can name events and arrange them into relationships, ultimately without limit, may indeed lead to chaos if the narrated network of relationships is ‘purely thought,’ without any reference to the ‘real world around us’; but it is also our greatest asset. With it, humans can not only fundamentally free themselves from the apparent absoluteness of the present; by telling stories we can also create starting points, ‘initially just thought relationships,’ which we can then concretely ‘verify’ in our everyday lives.
A System of Order
When someone randomly sees another person who looks very different from what they are used to, all sorts of ‘assumptions’ automatically form in each person about what kind of person this might be. If one stops at these assumptions, these wild guesses can ‘populate the head’ and the ‘world in the head’ gets populated with ‘potentially evil people’; eventually, they might simply become ‘evil’. However, if one makes contact with the other, they might find that the person is actually nice, interesting, funny, or the like. The ‘assumptions in the head’ then transform into ‘concrete experiences’ that differ from what was initially thought. ‘Assumptions’ combined with ‘verification’ can thus lead to the formation of ‘reality-near ideas of relationships’. This gives a person the chance to transform their ‘spontaneous network of thought relationships’, which can be wrong—and usually are—into a ‘verified network of relationships’. Since ultimately the thought relationships as a network provide us with a ‘system of order’ in which everyday things are embedded, it appears desirable to work with as many ‘verified thought relationships’ as possible.
[1] The breath of the person opposite me, which for the Greeks connected my counterpart with the life force of the universe, which in turn is also connected with the spirit and the soul…
Hypotheses and Science
Challenge: Methodically Organized Guessing
The ability to think of possible relationships, and to articulate them through language, is innate [1], but the ‘use’ of this ability in everyday life, for example to match thought relationships against the reality of everyday life, this ‘matching’/‘verifying’, is not innate. We can do it, but we don’t have to. It is therefore interesting to realize that since the first appearance of Homo sapiens on this planet [2], 99.95% of the time passed before the establishment of organized modern science about 150 years ago. This can be seen as an indication that the transition from ‘free guessing’ to ‘methodically organized systematic guessing’ must have been anything but easy. And if today a large part of people—despite schooling and even higher education [3]—still tends to lean towards ‘free guessing’ and struggles with organized verification, then there seems to be a threshold, by no means an easy one, that a person must overcome—and must continually overcome—to transition from ‘free’ to ‘methodically organized’ guessing.[4]
Starting Point for Science
The transition from everyday thinking to ‘scientific thinking’ is fluid. The generation of ‘thought relationships’ in conjunction with language, due to our ability of creativity/imagination, is ultimately also the starting point of science. While in everyday thinking we tend to spontaneously and pragmatically ‘verify’ ‘spontaneously thought relationships’, ‘science’ attempts to organize such verifications ‘systematically’ to then accept such ‘positively verified guesses’ as ’empirically verified guesses’ until proven otherwise as ‘conditionally true’. Instead of ‘guesses’, science likes to speak of ‘hypotheses’ or ‘working hypotheses’, but they remain ‘guesses’ through the power of our thinking and through the power of our imagination.[5]
[1] This means that the genetic information underlying the development of our bodies is designed so that our body with its brain is constructed during the growth phase in such a way that we have precisely this ability to ‘think of relationships’. It is interesting to ask again how it is possible that from a single cell about 37 trillion body cells (the approximately 100 trillion bacteria in the gut come ‘from outside’) can develop in such a way that they create the ‘impression of a human’ that we know.
[2] According to current knowledge, about 300,000 years ago in East Africa and North Africa, from where Homo sapiens then explored and conquered the entire world (there were still remnants of other human forms that had been there longer).
[3] I am not aware of representative empirical studies on how many people in a population tend to do this.
[4] Considering that we humans, as the life form Homo sapiens, only appeared on this planet after about 3.8 billion years, the 300,000 years of Homo sapiens make up roughly 0.008% of the total time since there has been life on planet Earth. Thus, not only are we as Homo sapiens a very late ‘product’ of the life process, but the ability to ‘systematically verify hypotheses’ also appears ‘very late’ in our Homo sapiens life process. Viewed across the entire life span, this ability seems extremely valuable, which is indeed true considering the incredible insights we as Homo sapiens have been able to gain with this form of thinking. The question is how we deal with this knowledge. This behavior of using systematically verified knowledge is not innate either.
[5] The ability of ‘imagination’ is not the opposite of ‘knowledge’, but is something completely different. ‘Imagination’ is a trait that ‘shows’ itself the moment we start to think, perhaps even in the fact ‘that’ we think at all. Since we can in principle think about ‘everything’ that is ‘accessible’ to our thinking, imagination is a factor that helps to ‘select’ what we think. In this respect, imagination is pre-posed to thinking.
Attention: This text has been translated from a German source using the software DeepL for nearly 97–99% of the text! The diagrams of the German version have been left out.
CONTEXT
This text represents the outline of a talk given at the conference “AI – Text and Validity. How do AI text generators change scientific discourse?” (August 25/26, 2023, TU Darmstadt). [1] A publication of all lectures is planned by the publisher Walter de Gruyter by the end of 2023/beginning of 2024. This publication will be announced here then.
Start of the Lecture
Dear audience,
This conference entitled “AI – Text and Validity. How do AI text generators change scientific discourses?” is centrally devoted to scientific discourses and the possible influence of AI text generators on these. However, the hot core ultimately remains the phenomenon of text itself, its validity.
In this conference many different views are presented that are possible on this topic.
TRANSDISCIPLINARY
My contribution to the topic tries to define the role of the so-called AI text generators by embedding the properties of ‘AI text generators’ in a ‘structural conceptual framework’ within a ‘transdisciplinary view’. This helps to highlight the specifics of scientific discourses. This can then further yield better ‘criteria for an extended assessment’ of AI text generators in their role for scientific discourses.
An additional aspect is the question of the structure of ‘collective intelligence’ using humans as an example, and how this can possibly unite with an ‘artificial intelligence’ in the context of scientific discourses.
‘Transdisciplinary’ in this context means to span a ‘meta-level’ from which it should be possible to describe today’s ‘diversity of text productions’ in a way that is expressive enough to distinguish ‘AI-based’ text production from ‘human’ text production.
HUMAN TEXT GENERATION
The formulation ‘scientific discourse’ is a special case of the more general concept ‘human text generation’.
This change of perspective is meta-theoretically necessary, since at first sight it is not the ‘text as such’ that decides about ‘validity and non-validity’, but the ‘actors’ who ‘produce and understand texts’. And with the occurrence of ‘different kinds of actors’ – here ‘humans’, there ‘machines’ – one cannot avoid addressing exactly those differences – if there are any – that play a weighty role in the ‘validity of texts’.
TEXT CAPABLE MACHINES
With the distinction between two different kinds of actors – here ‘humans’, there ‘machines’ – a first ‘fundamental asymmetry’ immediately strikes the eye: so-called ‘AI text generators’ are entities that have been ‘invented’ and ‘built’ by humans; it is furthermore humans who ‘use’ them, and the essential material used by so-called AI generators is again ‘texts’ that are considered a ‘human cultural property’.
In the case of so-called ‘AI-text-generators’, we shall first state only this much, that we are dealing with ‘machines’, which have ‘input’ and ‘output’, plus a minimal ‘learning ability’, and whose input and output can process ‘text-like objects’.
BIOLOGICAL — NON-BIOLOGICAL
On the meta-level, then, we are assumed to have, on the one hand, such actors which are minimally ‘text-capable machines’ – completely human products – and, on the other hand, actors we call ‘humans’. Humans, as a ‘homo-sapiens population’, belong to the set of ‘biological systems’, while ‘text-capable machines’ belong to the set of ‘non-biological systems’.
BLANK INTELLIGENCE TERM
The transformation of the term ‘AI text generator’ into the term ‘text capable machine’ undertaken here is intended to additionally illustrate that the widespread use of the term ‘AI’ for ‘artificial intelligence’ is rather misleading. To this day there exists no general concept of ‘intelligence’ in any scientific discipline that can be applied and accepted beyond individual disciplines. There is no real justification for the almost inflationary use of the term AI today other than that the term has been so drained of meaning that it can be used anytime, anywhere, without saying anything wrong. Something that has no meaning can be neither ‘true’ nor ‘false’.
PREREQUISITES FOR TEXT GENERATION
If now the homo-sapiens population is identified as the original actor for ‘text generation’ and ‘text comprehension’, it shall now first be examined which are ‘those special characteristics’ that enable a homo-sapiens population to generate and comprehend texts and to ‘use them successfully in the everyday life process’.
VALIDITY
A connecting point for the investigation of the special characteristics of a homo-sapiens text generation and a text understanding is the term ‘validity’, which occurs in the conference topic.
In the primary arena of biological life, in everyday processes, in everyday life, the ‘validity’ of a text has to do with ‘being correct’, being ‘applicable’. If a text is not planned from the beginning with a ‘fictional character’, but with a ‘reference to everyday events’, which everyone can ‘check’ in the context of his ‘perception of the world’, then ‘validity in everyday life’ has to do with the fact that the ‘correctness of a text’ can be checked. If the ‘statement of a text’ is ‘applicable’ in everyday life, if it is ‘correct’, then one also says that this statement is ‘valid’, one grants it ‘validity’, one also calls it ‘true’. Against this background, one might be inclined to continue and say: ‘if’ the statement of a text ‘does not apply’, then it has ‘no validity’; simplified to the formulation that the statement is ‘not true’ or simply ‘false’.
In ‘real everyday life’, however, the world is rarely ‘black’ and ‘white’: it is not uncommon that we are confronted with texts to which we are inclined to ascribe ‘a possible validity’ because of their ‘learned meaning’, although it may not be at all clear whether there is – or will be – a situation in everyday life in which the statement of the text actually applies. In such a case, the validity would then be ‘indeterminate’; the statement would be ‘neither true nor false’.
ASYMMETRY: APPLICABLE- NOT APPLICABLE
One can recognize a certain asymmetry here: The ‘applicability’ of a statement, its actual validity, is comparatively clear. The ‘not being applicable’, i.e. a ‘merely possible’ validity, on the other hand, is difficult to decide.
With this phenomenon of the ‘current non-decidability’ of a statement we touch both the problem of the ‘meaning’ of a statement — how far is at all clear what is meant? — as well as the problem of the ‘unfinishedness of our everyday life’, better known as ‘future’: whether a ‘current present’ continues as such, whether exactly like this, or whether completely different, depends on how we understand and estimate ‘future’ in general; what some take for granted as a possible future, can be simply ‘nonsense’ for others.
MEANING
This tension between ‘currently decidable’ and ‘currently not yet decidable’ additionally clarifies an ‘autonomous’ aspect of the phenomenon of meaning: if a certain knowledge has been formed in the brain and has been made usable as ‘meaning’ for a ‘language system’, then this ‘associated’ meaning gains its own ‘reality’ for the scope of knowledge: it is not the ‘reality beyond the brain’, but the ‘reality of one’s own thinking’, whereby this reality of thinking ‘seen from outside’ has something like ‘being virtual’.
If one wants to talk about this ‘special reality of meaning’ in the context of the ‘whole system’, then one has to resort to far-reaching assumptions in order to be able to install a ‘conceptual framework’ on the meta-level which is able to sufficiently describe the structure and function of meaning. For this, the following components are minimally assumed (‘knowledge’, ‘language’ as well as ‘meaning relation’):
KNOWLEDGE: There is the totality of ‘knowledge’ that ‘builds up’ in the homo-sapiens actor in the course of time in the brain: both due to continuous interactions of the ‘brain’ with the ‘environment of the body’, as well as due to interactions ‘with the body itself’, as well as due to interactions ‘of the brain with itself’.
LANGUAGE: To be distinguished from knowledge is the dynamic system of ‘potential means of expression’, here simplistically called ‘language’, which can unfold over time in interaction with ‘knowledge’.
MEANING RELATIONSHIP: Finally, there is the dynamic ‘meaning relation’, an interaction mechanism that can link any knowledge elements to any language means of expression at any time.
Each of these mentioned components ‘knowledge’, ‘language’ as well as ‘meaning relation’ is extremely complex; no less complex is their interaction.
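To make this three-component framework a little more tangible, here is a minimal sketch in Python. All names and entries (the knowledge elements, the word forms, the links) are hypothetical illustrations, not a claim about how knowledge, language, or meaning are actually realized in a brain:

```python
# KNOWLEDGE: internal elements built up over time (here just labeled items).
knowledge = {"k_rain", "k_trash_bin", "k_street"}

# LANGUAGE: potential means of expression (here simple word forms).
language = {"rain", "trash bin", "street", "Mülltonne"}

# MEANING RELATION: a dynamic mapping that can link any knowledge element to
# any expression; different speakers may link the same knowledge element to
# different expressions (cf. 'Mülltonne' vs. 'trash bin' in the text above).
meaning = {
    ("k_trash_bin", "trash bin"),
    ("k_trash_bin", "Mülltonne"),
    ("k_rain", "rain"),
}

def expressions_for(k):
    """All expressions currently linked to a knowledge element."""
    return sorted(expr for (kk, expr) in meaning if kk == k)

print(expressions_for("k_trash_bin"))  # ['Mülltonne', 'trash bin']
```

The point of the sketch is only structural: the meaning relation is a separate, changeable component, so the same knowledge element can acquire new expressions (or lose old ones) without the knowledge itself changing.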
FUTURE AND EMOTIONS
In addition to the phenomenon of meaning, it also became apparent in the phenomenon of being applicable that the decision of being applicable also depends on an ‘available everyday situation’ in which a current correspondence can be ‘concretely shown’ or not.
If, in addition to a ‘conceivable meaning’ in the mind, we do not currently have any everyday situation that sufficiently corresponds to this meaning in the mind, then there are always two possibilities: We can give the ‘status of a possible future’ to this imagined construct despite the lack of reality reference, or not.
If we would decide to assign the status of a possible future to a ‘meaning in the head’, then there arise usually two requirements: (i) Can it be made sufficiently plausible in the light of the available knowledge that the ‘imagined possible situation’ can be ‘transformed into a new real situation’ in the ‘foreseeable future’ starting from the current real situation? And (ii) Are there ‘sustainable reasons’ why one should ‘want and affirm’ this possible future?
The first requirement calls for a powerful ‘science’ that sheds light on whether it can work at all. The second requirement goes beyond this and brings the seemingly ‘irrational’ aspect of ‘emotionality’ into play under the garb of ‘sustainability’: it is not simply about ‘knowledge as such’, nor only about a ‘so-called sustainable knowledge’ that is supposed to help support the survival of life on planet Earth – and beyond –; it is rather also about ‘finding something good, affirming it, and then also wanting to decide for it’. These last aspects have so far been located rather beyond ‘rationality’; they are assigned to the diffuse area of ‘emotions’; which is strange, since every form of ‘usual rationality’ is grounded precisely in these ‘emotions’.[2]
SCIENTIFIC DISCOURSE AND EVERYDAY SITUATIONS
In the context of ‘rationality’ and ’emotionality’ just indicated, it is not uninteresting that in the conference topic ‘scientific discourse’ is thematized as a point of reference to clarify the status of text-capable machines.
The question is to what extent a ‘scientific discourse’ can serve as a reference point for a successful text at all?
For this purpose it can help to be aware of the fact that life on this planet earth takes place at every moment in an inconceivably large amount of ‘everyday situations’, which all take place simultaneously. Each ‘everyday situation’ represents a ‘present’ for the actors. And in the heads of the actors there is an individually different knowledge about how a present ‘can change’ or will change in a possible future.
This ‘knowledge in the heads’ of the actors involved can generally be ‘transformed into texts’ which in different ways ‘linguistically represent’ some of the aspects of everyday life.
The crucial point is that it is not enough for everyone to produce a text ‘for himself’ alone, quite ‘individually’, but that everyone must produce a ‘common text’ together ‘with everyone else’ who is also affected by the everyday situation. A ‘collective’ performance is required.
Nor is it a question of ‘any’ text, but one that is such that it allows for the ‘generation of possible continuations in the future’, that is, what is traditionally expected of a ‘scientific text’.
From the extensive discussion — since the times of Aristotle — of what ‘scientific’ should mean, what a ‘theory’ is, what an ’empirical theory’ should be, I sketch what I call here the ‘minimal concept of an empirical theory’.
The starting point is a ‘group of people’ (the ‘authors’) who want to create a ‘common text’.
This text is supposed to have the property that it allows ‘justifiable predictions’ for possible ‘future situations’, to which then ‘sometime’ in the future a ‘validity can be assigned’.
The authors are able to agree on a ‘starting situation’ which they transform by means of a ‘common language’ into a ‘source text’ [A].
It is agreed that this initial text may contain only ‘such linguistic expressions’ which can be shown to be ‘true’ ‘in the initial situation’.
In another text, the authors compile a set of ‘rules of change’ [V] that put into words ‘forms of change’ for a given situation.
Also in this case it is considered as agreed that only ‘such rules of change’ may be written down, of which all authors know that they have proved to be ‘true’ in ‘preceding everyday situations’.
The text with the rules of change V is on a ‘meta-level’ compared to the text A about the initial situation, which is on an ‘object-level’ relative to the text V.
The ‘interaction’ between the text V with the change rules and the text A with the initial situation is described in a separate ‘application text’ [F]: Here it is described when and how one may apply a change rule (in V) to a source text A and how this changes the ‘source text A’ to a ‘subsequent text A*’.
The application text F is thus on a next higher meta-level relative to the two texts A and V; applying it changes the source text A.
The moment a new subsequent text A* exists, the subsequent text A* becomes the new initial text A.
If the new initial text A is such that a change rule from V can be applied again, then the generation of a new subsequent text A* is repeated.
This ‘repeatability’ of the application can lead to the generation of many subsequent texts <A*1, …, A*n>.
A series of many subsequent texts <A*1, …, A*n> is usually called a ‘simulation’.
Depending on the nature of the source text A and the nature of the change rules in V, it may be that possible simulations ‘can go quite differently’. The set of possible scientific simulations thus represents ‘future’ not as a single, definite course, but as an ‘arbitrarily large set of possible courses’.
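The interplay of source text A, change rules V, and application text F described above can be sketched as a tiny text-rewriting system. The rule format (condition, remove, add) and the bridge example are illustrative assumptions, not the author's formalism:

```python
# Source text A: expressions agreed to be 'true' in the initial situation.
A0 = {"bridge(old)", "bridge(open)"}

# Change rules V (meta-level to A): each rule is a (condition, remove, add)
# triple, meaning "if condition holds in A, replace 'remove' by 'add'".
V = [
    ({"bridge(old)", "bridge(open)"}, {"bridge(open)"}, {"bridge(closed)"}),
]

def F(A, V):
    """Application text F: apply the first applicable rule in V to A,
    yielding the subsequent text A*; return None if no rule applies."""
    for cond, rem, add in V:
        if cond <= A:  # condition is a subset of the current text
            return (A - rem) | add
    return None

def simulate(A, V, max_steps=10):
    """Repeat the application, producing the series <A*1, ..., A*n>
    of subsequent texts, i.e. a 'simulation'."""
    trace = [A]
    for _ in range(max_steps):
        A_next = F(A, V)
        if A_next is None:
            break
        trace.append(A_next)
        A = A_next
    return trace

for step in simulate(A0, V):
    print(sorted(step))
```

With more than one applicable rule, F would have to choose, and different choices would yield different subsequent texts: exactly the 'arbitrarily large set of possible courses' mentioned above.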
The factors on which different courses depend are manifold. One factor is the authors themselves. Every author is, with his corporeality, himself entirely part of that very empirical world which is to be described in a scientific theory. And, as is well known, any human actor can change his mind at any moment. He can literally, in the next moment, do exactly the opposite of what he thought before. And thus the world is already no longer the same as previously assumed in the scientific description.
Even this simple example shows that the emotionality of ‘finding good, wanting, and deciding’ precedes the rationality of scientific theories. This continues in the so-called ‘sustainability discussion’.
SUSTAINABLE EMPIRICAL THEORY
With the ‘minimal concept of an empirical theory (ET)’ just introduced, a ‘minimal concept of a sustainable empirical theory (NET)’ can also be introduced directly.
While an empirical theory can span an arbitrarily large space of grounded simulations that make the space of many possible futures visible, everyday actors are left with the question of which of all this they want as ‘their future’. In the present we experience a situation in which mankind gives the impression of having agreed to destroy, ever more lastingly, the life beyond the human population, with the expected effect of ‘self-destruction’.
However, this self-destruction effect, which can be predicted in outline, is only one variant in the space of possible futures. Empirical science can indicate it in outline. To single out this variant among others, to accept it as ‘good’, to ‘want’ it, to ‘decide’ for it, lies in that hitherto hardly explored area of emotionality as the root of all rationality.[2]
If everyday actors have decided in favor of a certain rationally illuminated variant of a possible future, then they can evaluate at any time, with a suitable ‘evaluation procedure (EVAL)’, how many ‘percent (%) of the properties of the target state Z’ have been achieved so far, provided that the favored target state has been transformed into a suitable text Z.
In other words: the moment we have transformed everyday scenarios into a rationally tangible state via suitable texts, things take on a certain clarity and thereby become, in a sense, simple. That we make such transformations at all, and which aspects of a real or possible state we then focus on, is however antecedent to text-based rationality as an emotional dimension.[2]
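The evaluation procedure (EVAL) mentioned above can be sketched minimally. The source does not specify an implementation; the following assumes, purely for illustration, that the target text Z and the current situation A are sets of properties, so that the degree of fulfillment is simply the share of properties of Z already present in A.

```python
def eval_goal(situation, target):
    """EVAL sketch: percent of the target state Z realized in situation A."""
    if not target:
        return 100.0                       # empty target: trivially fulfilled
    achieved = len(situation & target)     # target properties already present
    return 100.0 * achieved / len(target)

# Hypothetical example:
Z = {"street is lit", "bin is emptied", "path is clear"}
A = {"street is lit", "path is clear", "it is raining"}
print(f"{eval_goal(A, Z):.1f}% of the target state achieved")
```

Which properties enter Z at all is exactly the emotional, pre-rational choice the text describes; the computation itself is the simple part.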
MAN-MACHINE
After these preliminary considerations, the final question is whether and how the main question of this conference, “How do AI text generators change scientific discourse?”, can be answered at all.
My previous remarks have attempted to show what it means for humans to collectively generate texts that meet the criteria of scientific discourse, including the requirements for empirical or even sustainable empirical theories.
In doing so, it becomes apparent that both in the generation of a collective scientific text and in its application in everyday life, a close interrelation with the shared experiential world, as well as with the dynamic knowledge and meaning components in each actor, plays a role.
The aspect of ‘validity’ is part of a dynamic world reference whose assessment as ‘true’ is constantly in flux; while one actor may tend to say “Yes, can be true”, another actor may just tend to the opposite. While some may tend to favor possible future option X, others may prefer future option Y. Rational arguments are absent; emotions speak. While one group has just decided to ‘believe’ and ‘implement’ plan Z, the others turn away, reject plan Z, and do something completely different.
This unsteady, uncertain character of interpreting and acting on the future has accompanied the Homo sapiens population from the very beginning. The not-yet-understood emotional complex constantly accompanies everyday life like a shadow.
Where and how can ‘text-enabled machines’ make a constructive contribution in this situation?
Assuming that there is a source text A, a change text V and an instruction F, today’s algorithms could calculate all possible simulations faster than humans could.
Assuming that there is also a target text Z, today’s algorithms could also compute an evaluation of the relationship between a current situation as A and the target text Z.
In other words: if an empirical or a sustainable-empirical theory were formulated with its necessary texts, then a present-day algorithm could automatically compute all possible simulations and the degree of target fulfillment faster than any human alone.
But what about (i) the elaboration of a theory, or (ii) the pre-rational decision for a certain empirical or even sustainable-empirical theory?
A clear answer to both questions seems hardly possible to me at the present time, since we humans still understand too little how we ourselves collectively form, select, check, compare and also reject theories in everyday life.
My working hypothesis on the subject is that we will very much need machines capable of learning in order to fulfill the task of developing useful sustainable empirical theories for our common everyday life in the future. But when this will happen in reality, and to what extent, seems largely unclear to me at this point in time.[2]
COMMENTS
[1] https://zevedi.de/en/topics/ki-text-2/
[2] Talking about ‘emotions’ in the sense of ‘factors in us’ that move us from the state ‘before the text’ to the state ‘written text’ hints at very many aspects. In a small exploratory text “State Change from Non-Writing to Writing. Working with chatGPT4 in parallel” ( https://www.uffmm.org/2023/08/28/state-change-from-non-writing-to-writing-working-with-chatgpt4-in-parallel/ ) the author has tried to address some of these aspects. While writing, it becomes clear that very many ‘individually subjective’ aspects play a role here, which of course do not appear ‘isolated’, but always flash up a reference to concrete contexts linked to the topic. Nevertheless, it is not the ‘objective context’ that forms the core statement, but the ‘individually subjective’ component that appears in the process of ‘putting into words’. This individually subjective component is tentatively used here as a criterion for ‘authentic texts’ in comparison to ‘automated texts’ like those that can be generated by all kinds of bots. In order to make this difference more tangible, the author decided to create an ‘automated text’ on the same topic at the same time as the quoted authentic text. For this purpose he used chatGPT4 from openAI. This is the beginning of a philosophical-literary experiment, perhaps to make the possible difference more visible in this way. For purely theoretical reasons it is clear that chatGPT4 can never generate ‘authentic texts’ in origin, unless it uses as a template an authentic text that it can modify. But then this would be a clear ‘fake document’. To prevent such an abuse, the author writes the authentic text first and then asks chatGPT4 to write something about the given topic, without chatGPT4 knowing the authentic text, since it has not yet found its way into the database of chatGPT4 via the Internet.
(This text is a translation from the German blog of the author. The translation is supported by the deepL Software)
CONTEXT
The meaning of and adherence to moral values in the context of everyday actions has always been a source of tension, debate, and tangible conflict.
This text will briefly illuminate why this is so, and why it will probably never be different as long as we humans are the way we are.
FINITE-INFINITE WORLD
In this text it is assumed that the reality in which we ‘find’ ourselves from childhood is a ‘finite’ world. By this is meant that no phenomenon we encounter in this world – ourselves included – is ‘infinite’. In other words, all resources we encounter are ‘finite’. Even ‘solar energy’, which is considered ‘renewable’ in today’s parlance, is ‘finite’, although this finiteness outlasts the lifetimes of many generations of humans.
But this ‘finiteness’ is no contradiction to the fact that our finite world is continuously in a ‘process of change’ fed from many sides. A ‘self-changing finiteness’ is thereby given, a something which in itself somehow ‘points beyond itself’! The ‘roots’ of this ‘immanent changeability’ are perhaps still largely unclear, but its ‘effects’ indicate that the respective ‘concrete finite’ is not the decisive thing; the ‘respective concrete finite’ is rather a kind of ‘indicator’ of an ‘immanent cause of change’ which ‘manifests itself’ by means of concrete finites in change. The ‘forms of concrete manifestations of change’ can therefore perhaps be a kind of ‘expression’ of something that ‘works immanently behind them’.
In physics there is the pair of terms ‘energy’ and ‘mass’, the latter as a synonym for ‘matter’. Atomic physics and quantum mechanics have taught us that the different ‘manifestations of mass/matter’ can only be a ‘state form of energy’. The everywhere and always assumed ‘energy’ is that ‘enabling factor’ which can ‘manifest’ itself in all the known forms of matter. ‘Changing matter’ can then be understood as a form of ‘information’ about the ‘enabling energy’.
If one takes what physics has found out so far about ‘energy’ as that form of ‘infinity’ which is accessible to us via the experiential world, then the various ‘manifestations of energy’ in diverse ‘forms of matter’ are forms of concrete finites which, however, are ultimately not really finite in the context of infinite energy. All known material finites are only ‘transitions’ in a nearly infinite space of possible finites, which is ultimately grounded in ‘infinite energy’. Whether there is another ‘infinity’ ‘beside’, ‘behind’, or ‘qualitatively quite different from’ the ‘experienceable infinity’ is thus completely open.[1]
EVERYDAY EXPERIENCES
Our normal life context is what we now call ‘everyday life’: a bundle of regular processes, often associated with characteristic behavioral roles. This includes the experience of having a ‘finite body’; that ‘processes take time in real terms’; that each process is characterized by its own ‘typical resource consumption’; that ‘all resources are finite’ (although there can be different time scales here (see the example with solar energy)).
But here too: the ’embeddedness’ of all resources and their consumption in a comprehensive variability turns all data into ‘snapshots’ which have their ‘truth’ not only ‘in the moment’, but in the ‘totality of the sequence’! Changes that are in themselves ‘small’ can, if they persist, assume sizes and achieve effects which alter a ‘known everyday life’ so far that long-known ‘views’ and long-practiced ‘behaviors’ are at some point ‘no longer correct’: in that case the format of one’s own thinking and behavior can come into increasing contradiction with the experiential world. Then the point has come where the immanent infinity ‘manifests itself’ in the everyday finiteness and ‘demonstrates’ to us that the ‘imagined cosmos in our head’ is just not the ‘true cosmos’. In the end this immanent infinity is ‘truer’ than the ‘apparent finiteness’.
HOMO SAPIENS (WE)
Besides the life-free material processes in this finite world, there have existed for approx. 3.5 billion years the manifestations we call ‘life’, and very late – quasi ‘just now’ – there appeared among the billions of life forms one which we call ‘Homo sapiens’. That is us.
Today’s knowledge of the ‘way’ life has ‘taken’ in these 3.5 billion years was and is only possible because science has learned to understand the ‘seemingly finite’ as a ‘snapshot’ of an ongoing process of change, which shows its ‘truth’ only in the ‘totality of the individual moments’. That we as human beings, as the ‘latecomers’ in this life-creation process, have the ability to ‘recognize’ successive ‘moments’ both ‘individually’ and ‘in sequence’ is due to the special nature of the ‘brain’ in the ‘body’ and the way in which our body ‘interacts’ with the surrounding world. So we do not know about the ‘existence of an immanent infinity’ ‘directly’, but only ‘indirectly’ through the ‘processes in the brain’ that can identify, store, process and ‘arrange’ moments in possible sequences in a ‘neuronally programmed way’. In short: our brain enables us, on the basis of a given neuronal and physical structure, to ‘construct’ an ‘image/model’ of a possible immanent infinity, which we assume to ‘represent’ the ‘events around us’ reasonably well.
THINKING
One characteristic attributed to Homo sapiens is called ‘thinking’, a term which to this day is described only vaguely, and very variously, by different sciences. From another Homo sapiens we learn about his thinking only through his way of ‘behaving’, a special case of which is ‘linguistic communication’.
Linguistic communication is characterized by the fact that it basically works with ‘abstract concepts’, to which as such no single object in the real world directly corresponds (‘cup’, ‘house’, ‘dog’, ‘tree’, ‘water’ etc.). Instead, the human brain assigns, ‘completely automatically’ (‘unconsciously’!), the most diverse concrete perceptions to one or the other abstract concept, in such a way that a human A can agree with a human B on whether one assigns this concrete phenomenon there in front to the abstract concept ‘cup’, ‘house’, ‘dog’, ‘tree’, or ‘water’. At some point in everyday life, person A knows which concrete phenomena can be meant when person B asks him whether he would like a ‘cup of tea’, or whether the ‘tree’ carries apples, etc.
This empirically proven ‘automatic formation’ of abstract concepts by our brain is based not only on a single moment; these automatic construction processes work with the ‘perceptual sequences’ of finite moments ’embedded in changes’, which the brain itself also automatically ‘creates’. ‘Change as such’ is thus not a ‘typical object’ of perception, but the ‘result of a process’ taking place in the brain, which constructs ‘sequences of single perceptions’; and these ‘calculated sequences’ enter as ‘elements’ into the formation of ‘abstract concepts’: a ‘house’ is from this point of view not a ‘static concept’, but a concept which can comprise many single properties and which is ‘dynamically generated’ as a ‘concept’, so that ‘new elements’ can be added or ‘existing elements’ be ‘taken away’ again.
MODEL: WORLD AS A PROCESS
(The words are from the German text)
Although there is no universally accepted comprehensive theory of human thought to date, there are many different ‘models’ (the everyday term for the more precise term ‘theories’) that attempt to approximate important aspects of human thought.
The preceding image shows the outlines of a minimally simple model of our thinking.
This model assumes that the surrounding world – with ourselves as components of that world – is to be understood as a ‘process’ in which, at a chosen ‘point in time’, one can describe in an idealized way all the ‘observable phenomena’ that are important to the observer at that point in time. This description of a ‘section of the world’ is here called ‘situation description’ at time t or simply ‘situation’ at t.
Then one needs ‘knowledge about possible changes’ of elements of the situation description, in the (simplified) form: ‘If X is an element of the situation description at t, then in a subsequent situation X is either deleted or replaced by a new X*’. There may be several alternatives for deletion or replacement, with different probabilities. Such ‘descriptions of changes’ are here, simplified, called ‘change rules’.
Additionally, as part of the model, there is a ‘game instruction’ (classically: an ‘inference concept’) which explains when and how to apply a change rule to a given situation Sit at t in such a way that at the subsequent time t+1 there is a situation Sit* in which the changes described by the change rule have been made.
Normally there is more than one change rule that can be applied at the same time. How to handle this is also part of the game instruction.
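The three components just described, a situation description, change rules with alternatives of different probabilities, and a ‘game instruction’ for applying them, can be sketched as follows. This is a minimal illustration with hypothetical names and values, not a fixed formalism from the text:

```python
import random

# A change rule: if X is in the situation at t, replace it at t+1 by one of
# several alternatives, chosen with the given probabilities (None = delete X).
RULES = {
    "sky is cloudy": [("it rains", 0.4), ("sky is clear", 0.5), (None, 0.1)],
}

def step(situation, rules, rng):
    """The 'game instruction': apply every applicable rule once."""
    nxt = set(situation)
    for x, alternatives in rules.items():
        if x in situation:
            outcomes, weights = zip(*alternatives)
            choice = rng.choices(outcomes, weights=weights)[0]
            nxt.discard(x)                 # X is deleted ...
            if choice is not None:
                nxt.add(choice)            # ... or replaced by a new X*
    return nxt

rng = random.Random(42)                    # seeded for reproducibility
sit = {"sky is cloudy", "street is dry"}
print(step(sit, RULES, rng))
```

Because the rule outcomes are probabilistic, repeated runs of `step` trace different courses through the space of possible subsequent situations.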
This minimal model can and must be seen against the background of continuous change.
For this structure of knowledge it is assumed that one can describe ‘situations’, possible changes of such a situation, and that one can have a concept how to apply descriptions of recognized possible changes to a given situation.
With the recognition of an immanent infinity manifested in many concrete finite situations, it is immediately clear that the set of assumed descriptions of change should correspond with the observable changes, otherwise the theory has little practical use. Likewise, of course, it is important that the assumed situation descriptions correspond with the observable world. Fulfilling the correspondence requirements or checking that they are true is anything but trivial.
ABSTRACT – REAL – INDETERMINATE
Regarding these ‘correspondence requirements’, here are some additional considerations in which the everyday perspective comes into view.
It is to be noted that a ‘model’ is not the environment itself, but only a ‘symbolic description’ of a section of the environment, from the point of view and with the understanding of a human ‘author’! Which properties of the environment a description refers to only the author himself knows: he ‘links’ the chosen ‘symbols’ (text or language) ‘in his head’ with certain properties of the environment, whereby these properties must also be represented ‘in his head’, quasi as ‘knowledge images’ of ‘perception events’ that were triggered by the environmental properties. These ‘knowledge images in the head’ are ‘real’ for the respective head; compared to the environment, however, they are basically only ‘fictitious’, unless there is currently a connection between the current fictitious ‘images in the head’ and the ‘current perceptions’ of ‘environmental events’ which makes the ‘concrete elements of perception’ appear as ‘elements of the fictitious images’. Then the ‘fictitious’ images would be ‘fictitious and real’.
Due to ‘memory’, whose ‘contents’ are in the ‘normal state’ more or less ‘unconscious’, we can however ‘remember’ that certain ‘fictitious images’ were once ‘fictitious and real’ in the past. This can lead to a tendency in everyday life to ascribe a ‘presumed reality’, even in the current present, to fictitious images that were once ‘real’ in the past. This tendency is probably of high practical importance in everyday life, and in many cases these ‘assumptions’ work. However, this ‘spontaneous taking-for-real’ can often be off the mark; it is a common source of error.
This ‘spontaneous taking-for-real’ can be disadvantageous for many reasons. For example, the fictitious images (as inescapably abstract images) may in themselves be only ‘partially appropriate’. The context of application may have changed. In general, the environment is ‘in flux’: facts that were given yesterday may be different today.
The reasons for the persistent changes are various. Besides changes which we can recognize from experience as an ‘identifiable pattern’, there are also changes which we cannot yet assign to a pattern; for us these can have a ‘random character’. Finally there are also the different ‘forms of life’, which despite all ‘partial determinateness’ are basically ‘not determined’ by their system structure (one can also call this ‘immanent freedom’). The behavior of these life forms can run contrary to all otherwise recognized patterns. Furthermore, life forms behave only partially ‘uniformly’, although everyday structures with their ‘rules of behavior’, and many other factors, can ‘push’ life forms with their behavior in a certain direction.
If one recalls at this point the preceding thoughts about ‘immanent infinity’, and the view that single, finite moments are only understandable as ‘part of a process’ whose ‘logic’ remains largely undeciphered to this day, then it is clear that any kind of ‘modeling’ within the comprehensive change processes can only have the character of a preliminary approximation. This is aggravated by the fact that the human actors are not only ‘passively receiving’ but always at the same time ‘actively acting’, and thereby influence the change process by their actions! These human influences arise from the same immanent infinity as all other changes. People (like life as a whole) are thus inevitably and really ‘co-creative’, with all the responsibilities that result from this.
MORALITY ABOVE ALL
What exactly is to be understood by ‘morality’ one must gather from many hundreds, or even more, of different texts. Every epoch, and indeed every region of this world, has developed different versions.
In this text it is assumed that ‘morality’ means those ‘views’ which should help an individual person (or a group or …) with decisions of the kind “Should I rather do A or B?” by providing ‘hints’ as to how this question can ‘best’ be answered.
If one recalls at this point what was said before about that form of thinking which allows ‘prognoses’ (thinking in explicit ‘models’ or ‘theories’), then an ‘evaluation’ of the ‘possible continuations’ is needed, independent of a current ‘situation description’ and independent of the possible ‘knowledge of change’. So there must be, ‘besides’ the description of a situation as it ‘is’, at least a ‘second level’ (a ‘meta-level’) which can ‘talk about’ the elements of the ‘object-level’ in such a way that, e.g., an ‘element A’ of the object-level can be said at the meta-level to be ‘good’ or ‘bad’ or ‘neutral’, possibly with a certain gradation. This can also concern several elements or whole subsets of the object-level. This can be done. But for it to be ‘rationally acceptable’, these valuations would have to be linked to ‘some form of motivation’ as to ‘why’ this valuation should be accepted. Without such a ‘motivation of valuations’, an evaluation would appear as ‘pure arbitrariness’.
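The two-level structure just described, an object-level of situation elements and a meta-level that assigns them graded valuations together with a ‘motivation’, can be sketched minimally. The names and values below are hypothetical illustrations of the structure only, not any concrete morality:

```python
from dataclasses import dataclass

@dataclass
class Valuation:
    """A meta-level judgment about an object-level element."""
    score: float       # -1.0 (bad) ... 0.0 (neutral) ... +1.0 (good)
    motivation: str    # without this, the valuation is 'pure arbitrariness'

# Object-level: elements of a situation description.
situation = {"water is polluted", "children play outside"}

# Meta-level: graded valuations, each linked to a motivation.
valuations = {
    "water is polluted": Valuation(-0.8, "endangers health and other life"),
    "children play outside": Valuation(+0.6, "supports development"),
}

for element in sorted(situation):
    v = valuations[element]
    print(f"{element!r}: {v.score:+.1f} because {v.motivation}")
```

The point of the `motivation` field is exactly the text's requirement: a score alone is arbitrary; only the attached ‘why’ makes it rationally discussable.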
At this point the ‘air’ becomes quite ‘thin’: in history so far, no convincing model of moral justification has become known which does not in the end depend on the decision of humans to set certain rules as ‘valid for all’ (family, village, tribe, …). Often the justifications can still be located in the concrete ‘circumstances of life’; just as often the concrete circumstances of life ‘recede into the background’ in the course of time, and instead abstract concepts are introduced which are endowed with a ‘normative power’ that eludes more concrete analysis. Rational access is then hardly possible, if at all.
In a time like the year 2023, in which the available knowledge is sufficient to recognize the interdependencies of literally everyone on everyone else, and in which the dynamics of change, with components such as ‘global warming’, can substantially threaten the ‘sustainable existence of life on earth’, ‘abstractly set normative terms’ appear not only ‘out of time’; no, they are highly dangerous, since they can substantially hinder the preservation of life in the further future.
META-MORAL (Philosophy)
The question then arises whether this ‘rational black hole’ of ‘justification-free normative concepts’ marks the end of human thinking, or whether thinking should instead just begin here.
Traditionally, ‘philosophy’ understands itself as that attitude of thinking, in which every ‘given’ – including any kind of normative concepts – can be made an ‘object of thinking’. And just the philosophical thinking has produced exactly this result in millennia of struggle: there is no point in thinking, from which all ought/all evaluating can be derived ‘just like that’.
In the space of philosophical thinking, on the meta-moral level, it is possible to ‘thematize’ more and more aspects of our situation as ‘mankind’ in a dynamic environment (with man himself as part of this environment), to ‘name’ them, to place them in ‘potential relations’, to conduct ‘thought experiments’ about ‘possible developments’; but this philosophical meta-moral knowledge is completely transparent and always identifiable. The inferences about why something seems ‘better’ than something else are always ’embedded’, ‘related’. Against this background, the demands for an ‘autonomous morality’, for an ‘absolute morality’ apart from philosophical thinking, appear ‘groundless’, ‘arbitrary’, ‘alien to the matter’. A rational justification of them is not possible.
Something ‘rationally unknowable’ may exist, indeed exists inescapably: our sheer existence, the actual real occurrence, for which so far there is no rational ‘explanation’, more precisely: not yet. But this is not a ‘free pass’ for irrationality. In ‘irrationality’ everything disappears, even the ‘rationally unknowable’, and this belongs to the most important ‘facts’ in the world of life.
COMMENTS
[1] The different forms of ‘infinity’ which have been introduced into mathematics with the works of Georg Cantor, and have been intensively investigated since, have nothing to do with the experienceable finiteness/infinity described in the text: https://en.wikipedia.org/wiki/Georg_Cantor . However, if one wants to ‘describe’ the ‘experience’ of real finiteness/infinity, one will possibly want to fall back on the descriptive means of mathematics. But it is not a foregone conclusion whether the mathematical concepts ‘harmonize’ with the empirical experience in question.
The whole text shows a dynamic which induces many changes. It is difficult to plan ‘in advance’.
Perhaps, some time, it will look like a ‘book’, at least ‘for a moment’.
I have started a ‘book project’ in parallel. This was motivated by the need to provide potential users of our new oksimo.R software with a coherent explanation of how the oksimo.R software, when used, generates an empirical theory in the format of a screenplay. The primary source of the book is in German and will be translated step by step here in the uffmm.blog.
INTRODUCTION
In a rather foundational paper about an idea of how one can generalize ‘systems engineering’ [*1] into the art of ‘theory engineering’ [1], a new conceptual framework has been outlined for a ‘sustainable applied empirical theory (SAET)’. Part of this new framework has been the idea that the classical recourse to groups of special experts (mostly ‘engineers’ in engineering) is too restrictive in the light of the new requirement of being sustainable: sustainability is primarily based on ‘diversity’ combined with the ‘ability to predict’ from this diversity probable future states which keep life alive. The aspect of diversity induces the challenge of seeing every citizen as a ‘natural expert’, because nobody can know in advance, and from some non-existent absolute point of truth, which knowledge is really important. History shows that the ‘mainstream’ is usually to a large degree ‘biased’ [*1b].
With the assumption that every citizen is a ‘natural expert’, science turns into a ‘general science’ in which all citizens are ‘natural members’ of science. I will call this more general concept of science ‘sustainable citizen science (SCS)’ or ‘Citizen Science 2.0 (CS2)’. The important point here is that a sustainable citizen science is not necessarily an ‘arbitrary’ process. While the requirement of ‘diversity’ relates to possible contents, ideas, experiments, and the like, it follows from the other requirement of ‘predictability’, of being able to make useful ‘forecasts’, that the given knowledge has to be in a format which allows, in a transparent way, the construction of consequences which ‘derive’ from the ‘given’ knowledge and enable ‘new’ knowledge. This ability to forecast has often been understood as the business of ‘logic’, providing an ‘inference concept’ given by ‘rules of deduction’ and a ‘practical pattern (on the meta-level)’ which defines how these rules have to be applied to satisfy the inference concept. But looking at real life, at everyday life or at modern engineering and economics, one can learn that ‘forecasting’ is a complex process including much more than only cognitive structures nicely fitting into some formulas. For this more realistic forecasting concept we will use here the wording ‘common logic’, and for the cognitive adventure where common logic is applied, the wording ‘common science’. ‘Common science’ is structurally not different from ‘usual science’, but it has a substantially wider scope and uses the whole of mankind as ‘experts’.
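The classical ‘inference concept’ mentioned above, rules of deduction plus a ‘practical pattern’ that says how to apply them, can be illustrated with a minimal forward-chaining sketch. The names and facts are hypothetical; ‘common logic’ in the sense of this text includes much more than such a mechanism:

```python
# Minimal sketch of an 'inference concept': rules of the form
# "if these premises are known, conclude this", applied repeatedly
# (the 'practical pattern') until nothing new can be derived.

def forward_chain(facts, rules):
    """Derive all consequences of the given facts under the given rules."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

# Hypothetical example:
facts = {"it rains"}
rules = [({"it rains"}, "street is wet"),
         ({"street is wet"}, "risk of slipping")]
print(forward_chain(facts, rules))
```

The transparency demanded in the text is visible here: every derived item can be traced back to given knowledge plus an explicit rule.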
The following chapters/sections try to illustrate this common-science view by visiting different special views, which all are only ‘parts of a whole’; a whole which we can ‘feel’ in every moment, but which we cannot yet completely grasp with our theoretical concepts.
CONTENT
Language (Main message: “The ordinary language is the ‘meta language’ to every special language. This can be used as a ‘hint’ to something really great: the mystery of the ‘self-creating’ power of the ordinary language which for most people is unknown although it happens every moment.”)
Concrete Abstract Statements (Main message: “… you will probably detect, that nearly all words of a language are ‘abstract words’ activating ‘abstract meanings’. …If you cannot provide … ‘concrete situations’ the intended meaning of your abstract words will stay ‘unclear’: they can mean ‘nothing or all’, depending from the decoding of the hearer.”)
True False Undefined (Main message: “… it reveals that ’empirical (observational) evidence’ is not necessarily an automatism: it presupposes appropriate meaning spaces embedded in sets of preferences, which are ‘observation friendly’.”)
Beyond Now (Main message: “With the aid of … sequences revealing possible changes the NOW is turned into a ‘moment’ embedded in a ‘process’, which is becoming the more important reality. The NOW is something, but the PROCESS is more.”)
Playing with the Future (Main message: “In this sense seems ‘language’ to be the master tool for every brain to mediate its dynamic meaning structures with symbolic fix points (= words, expressions) which as such do not change, but the meaning is ‘free to change’ in any direction. And this ‘built in ‘dynamics’ represents an ‘internal potential’ for uncountable many possible states, which could perhaps become ‘true’ in some ‘future state’. Thus ‘future’ can begin in these potentials, and thinking is the ‘playground’ for possible futures.(but see [18])”)
Forecasting – Prediction: What? (This chapter explains the cognitive machinery behind forecasting/predictions, how groups of human actors can elaborate shared descriptions, and how it is possible to start with sequences of singularities to build up a growing picture of the empirical world, which appears as a radically infinite and indeterministic space.)
!!! From here all the following chapters have to be re-written !!!
Boolean Logic (Explains what boolean logic is, how it enables the working of programmable machines, but that it is of nearly no help for the ‘heart’ of forecasting.)
/* Often people argue against the use of the wikipedia encyclopedia as not ‘scientific’ because the ‘content’ of an entry in this encyclopedia can ‘change’. This presupposes the ‘classical view’ of scientific texts as ‘stable’, which further presupposes that such a ‘stable text’ describes some ‘stable subject matter’. But this view of ‘steadiness’ as the major property of ‘true descriptions’ does not correspond with real scientific texts! The reality of empirical science, even in special disciplines like ‘physics’, is ‘change’. Looking at Aristotle’s view of nature, at Galileo Galilei, Newton, Einstein and many others, you will not find a ‘single steady picture’ of nature and science; and physics is only a very simple strand of science compared to the life sciences and many others. Thus wikipedia is a real scientific encyclopedia, giving you the breadth of world knowledge with all its strengths and limits at once. For another, more general argument, see In Favour for Wikipedia */
[*1] Meaning operator ‘…’: In this text (and in nearly all other texts of this author) the ‘inverted comma’ is used quite heavily. In everyday language this is not common. In some special languages (the theory of formal languages, programming languages, meta-logic) the inverted comma is used in special ways. In this text, which is primarily a philosophical text, the inverted-comma sign is used as a ‘meta-language operator’ to alert the reader that the ‘meaning’ of the word enclosed in inverted commas is ‘text specific’: in everyday language usage, a speaker uses a word and tacitly assumes that his ‘intended meaning’ will be understood by the hearer of his utterance ‘as it is’. And the speaker will adhere to this assumption until some hearer signals that her understanding is different. That such a difference is signaled is quite normal, because the ‘meaning’ associated with a language expression can be diverse, and the decision which of these multiple possible meanings is the ‘intended one’ in a certain context is often a bit ‘arbitrary’. Thus it can be, but need not be, a meta-language strategy to signal to the hearer (or here: the reader) that a certain expression in a communication is ‘intended’ with a special meaning which is perhaps not the commonly assumed one. Nevertheless, because the ‘common meaning’ is no ‘clear and sharp subject’, a ‘meaning operator’ with inverted commas also does not have a very sharp meaning. But in the ‘game of language’ it is more than nothing 🙂
[*1b] That the mainstream ‘is biased’ is not an accident, not a ‘strange state’, not a ‘failure’; it is the ‘normal state’, rooted in the deeper structure of how human actors are ‘built’ and ‘genetically’ and ‘culturally’ ‘programmed’. Thus the challenge to ‘survive’ as part of the ‘whole biosphere’ is not a ‘partial task’ of solving a single problem, but in some sense the problem of how to ‘shape the whole biosphere’ in a way that enables life in the universe beyond that point where the sun turns into a ‘red giant’, whereby life will become impossible on planet Earth (some billion years ahead)[22]. A remarkable text supporting this ‘complex view of sustainability’ can be found in Clark and Harley, summarized at the end of the text. [23]
[*2] The meaning of the expression ‘normal’ is comparable to a wicked problem. In a certain sense we act in our everyday world ‘as if there exists some standard’ for what is assumed to be ‘normal’. Consider, for instance, houses and buildings: to a certain degree the parts of a house have a ‘standard format’ assuming ‘normal people’. The whole traffic system and most parts of our ‘daily life’ follow certain ‘standards’ that make ‘planning’ possible. But a certain percentage of human persons are ‘different’ compared to these introduced standards. We say that they have a ‘handicap’ compared to this assumed ‘standard’, but this so-called ‘standard’ is neither 100% true, nor is the ‘given real world’ in its properties a ‘100% subject’. We have learned that ‘properties of the real world’ are distributed in a rather ‘statistical manner’, with different probabilities of occurrence. To ‘find our way’ among these varying occurrences we try to ‘mark’ the main occurrences as ‘normal’ to enable a basic structure for expectations and planning. Thus, when the expression ‘normal’ is used in this text, it refers to the ‘most common occurrences’.
[*3] Thus we have here a ‘threefold structure’ embracing ‘perception events, memory events, and expression events’. Perception events represent ‘concrete events’; memory events represent all kinds of abstract events but they all have a ‘handle’ which maps to subsets of concrete events; expression events are parts of an abstract language system, which as such is dynamically mapped onto the abstract events. The main source for our knowledge about perceptions, memory and expressions is experimental psychology enhanced by many other disciplines.
[*4] Characterizing language expressions by meaning – the fate of any grammar: the sentence ” … ‘words’ (= expressions) of a language which can activate such abstract meanings are understood as ‘abstract words’, ‘general words’, ‘category words’ or the like.” points to a deep property of every ordinary language, which represents the real power of language but at the same time its great weakness too: expressions as such have no meaning. Hundreds, thousands, millions of words arranged in ‘texts’ or ‘documents’ can show ‘statistical patterns’, and such patterns can give some hint which expressions occur ‘how often’ and in ‘which combinations’, but they can never give a clue to the associated meaning(s). For more than three thousand years humans have tried to describe ordinary language in a more systematic way called ‘grammar’. Due to this radical gap between ‘expressions’ as ‘observable empirical facts’ and ‘meaning constructs’ hidden inside the brain, it has always been a difficult job to ‘classify’ expressions as representing a certain ‘type’ of expression, like ‘nouns’, ‘predicates’, ‘adjectives’, ‘definite articles’ and the like. Without recourse to the assumed associated meaning such a classification is not possible, and on account of the fuzziness of every meaning, ‘sharp definitions’ of such ‘word classes’ have never been and are still not possible. One of the last big projects, perhaps the biggest ever, of a complete systematic grammar of a language was the grammar project of the ‘Akademie der Wissenschaften der DDR’ (‘Academy of Sciences of the GDR’) from 1981 with the title “Grundzüge einer deutschen Grammatik” (“Basic Features of a German Grammar”). A huge team of scientists worked together using many modern methods. But in the preface you can read that many important properties of the language were still not sufficiently well describable and explainable.
See: Karl Erich Heidolph, Walter Flämig, Wolfgang Motsch et al.: Grundzüge einer deutschen Grammatik. Akademie, Berlin 1981, 1028 pages.
[*5] Differing opinions about a given situation, manifested in uttered expressions, are a very common phenomenon in everyday communication. In some sense this is ‘natural’; it can happen, and it should be no substantial problem to ‘solve the riddle of being different’. But as you can experience, people’s ability to resolve the occurrence of differing opinions is often quite weak. Culture as a whole suffers from this.
[1] Gerd Doeben-Henisch, 2022, From SYSTEMS Engineering to THEORY Engineering, see: https://www.uffmm.org/2022/05/26/from-systems-engineering-to-theory-engineering/ (Remark: At the time of citation this post was not yet finished, because there are other posts ‘corresponding’ with that post which are not yet finished either. Knowledge is a dynamic network of interwoven views …).
[1d] ‘Usual science’ here means the game of science played without a sustainable format such as that of citizen science 2.0.
[2] Science, see e.g. wkp-en: https://en.wikipedia.org/wiki/Science
Citation = “In modern science, the term “theory” refers to scientific theories, a well-confirmed type of explanation of nature, made in a way consistent with the scientific method, and fulfilling the criteria required by modern science. Such theories are described in such a way that scientific tests should be able to provide empirical support for it, or empirical contradiction (“falsify”) of it. Scientific theories are the most reliable, rigorous, and comprehensive form of scientific knowledge,[1] in contrast to more common uses of the word “theory” that imply that something is unproven or speculative (which in formal terms is better characterized by the word hypothesis).[2] Scientific theories are distinguished from hypotheses, which are individual empirically testable conjectures, and from scientific laws, which are descriptive accounts of the way nature behaves under certain conditions.”
[2b] History of science in wkp-en: https://en.wikipedia.org/wiki/History_of_science#Scientific_Revolution_and_birth_of_New_Science
[3] Theory, see wkp-en: https://en.wikipedia.org/wiki/Theory#:~:text=A%20theory%20is%20a%20rational,or%20no%20discipline%20at%20all.
Citation = “A theory is a rational type of abstract thinking about a phenomenon, or the results of such thinking. The process of contemplative and rational thinking is often associated with such processes as observational study or research. Theories may be scientific, belong to a non-scientific discipline, or no discipline at all. Depending on the context, a theory’s assertions might, for example, include generalized explanations of how nature works. The word has its roots in ancient Greek, but in modern use it has taken on several related meanings.”
Citation = “In modern science, the term “theory” refers to scientific theories, a well-confirmed type of explanation of nature, made in a way consistent with the scientific method, and fulfilling the criteria required by modern science. Such theories are described in such a way that scientific tests should be able to provide empirical support for it, or empirical contradiction (“falsify”) of it. Scientific theories are the most reliable, rigorous, and comprehensive form of scientific knowledge,[1] in contrast to more common uses of the word “theory” that imply that something is unproven or speculative (which in formal terms is better characterized by the word hypothesis).[2] Scientific theories are distinguished from hypotheses, which are individual empirically testable conjectures, and from scientific laws, which are descriptive accounts of the way nature behaves under certain conditions.”
[4b] Empiricism in wkp-en: https://en.wikipedia.org/wiki/Empiricism
[4c] Scientific method in wkp-en: https://en.wikipedia.org/wiki/Scientific_method
Citation = “The scientific method is an empirical method of acquiring knowledge that has characterized the development of science since at least the 17th century (with notable practitioners in previous centuries). It involves careful observation, applying rigorous skepticism about what is observed, given that cognitive assumptions can distort how one interprets the observation. It involves formulating hypotheses, via induction, based on such observations; experimental and measurement-based statistical testing of deductions drawn from the hypotheses; and refinement (or elimination) of the hypotheses based on the experimental findings. These are principles of the scientific method, as distinguished from a definitive series of steps applicable to all scientific enterprises.[1][2][3]” [4c]
and
Citation = “The purpose of an experiment is to determine whether observations[A][a][b] agree with or conflict with the expectations deduced from a hypothesis.[6]: Book I, [6.54] pp.372, 408 [b] Experiments can take place anywhere from a garage to a remote mountaintop to CERN’s Large Hadron Collider. There are difficulties in a formulaic statement of method, however. Though the scientific method is often presented as a fixed sequence of steps, it represents rather a set of general principles.[7] Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always in the same order.[8][9]”
[5] Gerd Doeben-Henisch, “Is Mathematics a Fake? No! Discussing N.Bourbaki, Theory of Sets (1968) – Introduction”, 2022, https://www.uffmm.org/2022/06/06/n-bourbaki-theory-of-sets-1968-introduction/
[6] Logic, see wkp-en: https://en.wikipedia.org/wiki/Logic
[7] W. C. Kneale, The Development of Logic, Oxford University Press (1962)
[8] Set theory, in wkp-en: https://en.wikipedia.org/wiki/Set_theory
[9] N.Bourbaki, Theory of Sets , 1968, with a chapter about structures, see: https://en.wikipedia.org/wiki/%C3%89l%C3%A9ments_de_math%C3%A9matique
[10] = [5]
[11] Ludwig Josef Johann Wittgenstein ( 1889 – 1951): https://en.wikipedia.org/wiki/Ludwig_Wittgenstein
[12] Ludwig Wittgenstein, 1953: Philosophische Untersuchungen [PU], 1953: Philosophical Investigations [PI], translated by G. E. M. Anscombe /* For more details see: https://en.wikipedia.org/wiki/Philosophical_Investigations */
[13] Wikipedia EN, Speech acts: https://en.wikipedia.org/wiki/Speech_act
[14] While the world view constructed in a brain is ‘virtual’ compared to the ‘real world’ outside the brain (where the body outside the brain also functions as ‘real world’ in relation to the brain), the ‘virtual world’ in the brain functions for the brain mostly ‘as if it were the real world’. Only under certain conditions can the brain notice a ‘difference’ between the triggering outside real world and the ‘virtual substitute for the real world’: you want to use your bicycle ‘as usual’ and then suddenly you have to notice that it is not at the place where it ‘should be’. …
[15] Propositional Calculus, see wkp-en: https://en.wikipedia.org/wiki/Propositional_calculus#:~:text=Propositional%20calculus%20is%20a%20branch,of%20arguments%20based%20on%20them.
[16] Boolean algebra, see wkp-en: https://en.wikipedia.org/wiki/Boolean_algebra
[17] Boolean (or propositional) logic: As one can see in the cited articles of the English Wikipedia, the term ‘boolean logic’ is not common. The more logic-oriented authors prefer the term ‘propositional calculus’ [15] and the more math-oriented authors prefer the term ‘boolean algebra’ [16]. In this author’s view the general framework is that of ‘language use’ with ‘logical inference’ as the leading idea. Therefore the main topic is ‘logic’; in the case of propositional logic it is reduced to a simple calculus whose similarity with ‘normal language’ is largely ‘reduced’ to a play with abstract names and operators. Recommended: the historical comments in [15].
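As a small illustration (not taken from the cited sources), this ‘play with abstract names and operators’ can be sketched in a few lines of Python: formulas are built from atomic names and the operators ‘not’, ‘and’, ‘or’, ‘implies’, and a formula is a ‘logical truth’ exactly when it holds under every assignment of truth values to its names. The formula names and encoding below are purely illustrative.

```python
from itertools import product

def evaluate(formula, assignment):
    """Evaluate a propositional formula under a truth assignment.

    Formulas are nested tuples: atoms are strings like 'p';
    compound formulas start with 'not', 'and', 'or', or 'implies'.
    """
    if isinstance(formula, str):                 # an atomic name
        return assignment[formula]
    op, *args = formula
    if op == 'not':
        return not evaluate(args[0], assignment)
    if op == 'and':
        return evaluate(args[0], assignment) and evaluate(args[1], assignment)
    if op == 'or':
        return evaluate(args[0], assignment) or evaluate(args[1], assignment)
    if op == 'implies':
        return (not evaluate(args[0], assignment)) or evaluate(args[1], assignment)
    raise ValueError(f"unknown operator: {op}")

def is_tautology(formula, atoms):
    """True if the formula holds under every possible truth assignment."""
    return all(evaluate(formula, dict(zip(atoms, values)))
               for values in product([False, True], repeat=len(atoms)))

# Modus ponens, ((p -> q) and p) -> q, holds under all four assignments:
mp = ('implies', ('and', ('implies', 'p', 'q'), 'p'), 'q')
print(is_tautology(mp, ['p', 'q']))   # True
```

Note how the sketch never touches ‘meaning’: the names ‘p’ and ‘q’ could stand for anything, and only the combinatorics of the operators is checked, which is exactly the reduction described above.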
[18] Clearly, thinking alone cannot necessarily bring about a possible state which along the timeline will become a ‘real state’. There are numerous factors ‘outside’ individual thinking which are the ‘driving forces’ pushing real states to change. But thinking can in principle synchronize with other individuals’ thinking and, in some cases, can get a ‘grip’ on real factors causing real changes.
[19] This kind of knowledge is not delivered by brain science alone but primarily from experimental (cognitive) psychology which examines observable behavior and ‘interprets’ this behavior with functional models within an empirical theory.
[20] Predicate Logic or First-Order Logic or … see: wkp-en: https://en.wikipedia.org/wiki/First-order_logic#:~:text=First%2Dorder%20logic%E2%80%94also%20known,%2C%20linguistics%2C%20and%20computer%20science.
[21] Gerd Doeben-Henisch, In Favour of Wikipedia, https://www.uffmm.org/2022/07/31/in-favour-of-wikipedia/, 31 July 2022
[22] The sun, see wkp-ed https://en.wikipedia.org/wiki/Sun (accessed 8 Aug 2022)
[23] By Clark, William C., and Alicia G. Harley – https://doi.org/10.1146/annurev-environ-012420-043621, Clark, William C., and Alicia G. Harley. 2020. “Sustainability Science: Toward a Synthesis.” Annual Review of Environment and Resources 45 (1): 331–86, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=109026069
[24] Sustainability in wkp-en: https://en.wikipedia.org/wiki/Sustainability#Dimensions_of_sustainability
[27] SDG 4 in wkp-en: https://en.wikipedia.org/wiki/Sustainable_Development_Goal_4
[28] Thomas Rid, Rise of the Machines. A Cybernetic History, W.W.Norton & Company, 2016, New York – London
[29] Doeben-Henisch, G., 2006, Reducing Negative Complexity by a Semiotic System In: Gudwin, R., & Queiroz, J., (Eds). Semiotics and Intelligent Systems Development. Hershey et al: Idea Group Publishing, 2006, pp.330-342
[30] Döben-Henisch, G., Reinforcing the global heartbeat: Introducing the planet earth simulator project, In M. Faßler & C. Terkowsky (Eds.), URBAN FICTIONS. Die Zukunft des Städtischen. München, Germany: Wilhelm Fink Verlag, 2006, pp.251-263
[29] The idea that individual disciplines are not good enough for the ‘whole of knowledge’ is expressed clearly in a video by the theoretical physicist and philosopher Carlo Rovelli: Carlo Rovelli on physics and philosophy, June 1, 2022, video from the Perimeter Institute for Theoretical Physics. Theoretical physicist, philosopher, and international bestselling author Carlo Rovelli joins Lauren and Colin for a conversation about the quest for quantum gravity, the importance of unlearning outdated ideas, and a very unique way to get out of a speeding ticket.
[] By Azote for Stockholm Resilience Centre, Stockholm University – https://www.stockholmresilience.org/research/research-news/2016-06-14-how-food-connects-all-the-sdgs.html, CC BY 4.0, https://commons.wikimedia.org/w/index.php?curid=112497386
[] Sierra Club in wkp-en: https://en.wikipedia.org/wiki/Sierra_Club
[] Herbert Bruderer, Where is the Cradle of the Computer?, June 20, 2022, URL: https://cacm.acm.org/blogs/blog-cacm/262034-where-is-the-cradle-of-the-computer/fulltext (accessed: July 20, 2022)
[] UN. Secretary-General; World Commission on Environment and Development, 1987, Report of the World Commission on Environment and Development : note / by the Secretary-General., https://digitallibrary.un.org/record/139811 (accessed: July 20, 2022) (A more readable format: https://sustainabledevelopment.un.org/content/documents/5987our-common-future.pdf )
/* Comment: Gro Harlem Brundtland (Norway) was the main coordinator of this document */
[] Chaudhuri, S., et al., Neurosymbolic programming. Foundations and Trends in Programming Languages 7, 158–243 (2021).
[] Nello Cristianini, Teresa Scantamburlo, James Ladyman, The social turn of artificial intelligence, in: AI & SOCIETY, https://doi.org/10.1007/s00146-021-01289-8
[] Carl DiSalvo, Phoebe Sengers, and Hrönn Brynjarsdóttir, Mapping the landscape of sustainable hci, In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’10, page 1975–1984, New York, NY, USA, 2010. Association for Computing Machinery.
[] Claude Draude, Christian Gruhl, Gerrit Hornung, Jonathan Kropf, Jörn Lamla, Jan Marco Leimeister, Bernhard Sick, Gerd Stumme, Social Machines, in: Informatik Spektrum, https://doi.org/10.1007/s00287-021-01421-4
[] EU: High-Level Expert Group on AI (AI HLEG), A definition of AI: Main capabilities and scientific disciplines, European Commission communications published on 25 April 2018 (COM(2018) 237 final), 7 December 2018 (COM(2018) 795 final) and 8 April 2019 (COM(2019) 168 final). For our definition of Artificial Intelligence (AI), please refer to our document published on 8 April 2019: https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=56341
[] EU: High-Level Expert Group on AI (AI HLEG), Policy and investment recommendations for trustworthy Artificial Intelligence, 2019, https://digital-strategy.ec.europa.eu/en/library/policy-and-investment-recommendations-trustworthy-artificial-intelligence
[] European Union. Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation); http://eur-lex.europa.eu/eli/reg/2016/679/oj (effective from 25 May 2018) [26.2.2022]
[] C.S. Holling. Resilience and stability of ecological systems. Annual Review of Ecology and Systematics, 4(1):1–23, 1973
[] John P. van Gigch. 1991. System Design Modeling and Metamodeling. Springer US. DOI:https://doi.org/10.1007/978-1-4899-0676-2
[] Gudwin, R.R. (2003), On a Computational Model of the Peircean Semiosis, IEEE KIMAS 2003 Proceedings
[] J.A. Jacko and A. Sears, Eds., The Human-Computer Interaction Handbook. Fundamentals, Evolving Technologies, and emerging Applications. 1st edition, 2003.
[] LeCun, Y., Bengio, Y., & Hinton, G. Deep learning. Nature 521, 436-444 (2015).
[] Lenat, D. What AI can learn from Romeo & Juliet. Forbes (2019)
[] Pierre Lévy, Collective Intelligence. mankind’s emerging world in cyberspace, Perseus books, Cambridge (M A), 1997 (translated from the French Edition 1994 by Robert Bonnono)
[] Lexikon der Nachhaltigkeit, ‘Starke Nachhaltigkeit‘, https://www.nachhaltigkeit.info/artikel/schwache_vs_starke_nachhaltigkeit_1687.htm (accessed: July 21, 2022)
[] Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, and Toby Walsh. “Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report.” Stanford University, Stanford, CA, September 2021. Doc: http://ai100.stanford.edu/2021-report.
[] Kathryn Merrick. Value systems for developmental cognitive robotics: A survey. Cognitive Systems Research, 41:38 – 55, 2017
[] Illah Reza Nourbakhsh and Jennifer Keating, AI and Humanity, MIT Press, 2020 /* An examination of the implications for society of rapidly advancing artificial intelligence systems, combining a humanities perspective with technical analysis; includes exercises and discussion questions. */
[] Olazaran, M., A sociological history of the neural network controversy. Advances in Computers 37, 335–425 (1993).
[] Friedrich August Hayek (1945), The use of knowledge in society. The American Economic Review 35, 4 (1945), 519–530
[] Karl Popper, „A World of Propensities“, in: Karl Popper, „A World of Propensities“, Thoemmes Press, Bristol (lecture 1988, slightly expanded reprint 1990, repr. 1995)
[] Karl Popper, „Towards an Evolutionary Theory of Knowledge“, in: Karl Popper, „A World of Propensities“, Thoemmes Press, Bristol (lecture 1989, printed 1990, repr. 1995)
[] Karl Popper, „All Life is Problem Solving“, article, originally a 1991 lecture in German, first published in the (German) book „Alles Leben ist Problemlösen“ (1994), then in the (English) book „All Life is Problem Solving“, 1999, Routledge, Taylor & Francis Group, London – New York
[] A. Sears and J.A. Jacko, Eds., The Human-Computer Interaction Handbook. Fundamentals, Evolving Technologies, and emerging Applications. 2nd edition, 2008.
[] Skaburskis, Andrejs (19 December 2008). “The origin of “wicked problems””. Planning Theory & Practice. 9 (2): 277-280. doi:10.1080/14649350802041654. At the end of Rittel’s presentation, West Churchman responded with that pensive but expressive movement of voice that some may well remember, ‘Hmm, those sound like “wicked problems.”‘
[] Thoppilan, R., et al. LaMDA: Language models for dialog applications. arXiv 2201.08239 (2022).
[] Wurm, Daniel; Zielinski, Oliver; Lübben, Neeske; Jansen, Maike; Ramesohl, Stephan (2021) : Wege in eine ökologische Machine Economy: Wir brauchen eine ‘Grüne Governance der Machine Economy’, um das Zusammenspiel von Internet of Things, Künstlicher Intelligenz und Distributed Ledger Technology ökologisch zu gestalten, Wuppertal Report, No. 22, Wuppertal Institut für Klima, Umwelt, Energie, Wuppertal, https://doi.org/10.48506/opus-7828
[] Aimee van Wynsberghe, Sustainable AI: AI for sustainability and the sustainability of AI, in: AI and Ethics (2021) 1:213–218, see: https://doi.org/10.1007/s43681
[] R. I. Damper (2000), Editorial for the special issue on ‘Emergent Properties of Complex Systems’: Emergence and levels of abstraction. International Journal of Systems Science 31, 7 (2000), 811–818. DOI:https://doi.org/10.1080/002077200406543
[] Gerd Doeben-Henisch (2004), The Planet Earth Simulator Project – A Case Study in Computational Semiotics, IEEE AFRICON 2004, pp.417 – 422
[] Eric Bonabeau (2009), Decisions 2.0: The power of collective intelligence. MIT Sloan Management Review 50, 2 (Winter 2009), 45-52.
[] Jim Giles (2005), Internet encyclopaedias go head to head. Nature 438, 7070 (Dec. 2005), 900–901. DOI:https://doi.org/10.1038/438900a
[] T. Bosse, C. M. Jonker, M. C. Schut, and J. Treur (2006), Collective representational content for shared extended mind. Cognitive Systems Research 7, 2-3 (2006), pp.151-174, DOI:https://doi.org/10.1016/j.cogsys.2005.11.007
[] Romina Cachia, Ramón Compañó, and Olivier Da Costa (2007), Grasping the potential of online social networks for foresight. Technological Forecasting and Social Change 74, 8 (2007), pp. 1179–1203. DOI:https://doi.org/10.1016/j.techfore.2007.05.006
[] Tom Gruber (2008), Collective knowledge systems: Where the social web meets the semantic web. Web Semantics: Science, Services and Agents on the World Wide Web 6, 1 (2008), 4–13. DOI:https://doi.org/10.1016/j.websem.2007.11.011
[] Luca Iandoli, Mark Klein, and Giuseppe Zollo (2009), Enabling on-line deliberation and collective decision-making through large-scale argumentation. International Journal of Decision Support System Technology 1, 1 (Jan. 2009), 69–92. DOI:https://doi.org/10.4018/jdsst.2009010105
[] Shuangling Luo, Haoxiang Xia, Taketoshi Yoshida, and Zhongtuo Wang (2009), Toward collective intelligence of online communities: A primitive conceptual model. Journal of Systems Science and Systems Engineering 18, 2 (01 June 2009), 203–221. DOI:https://doi.org/10.1007/s11518-009-5095-0
[] Dawn G. Gregg (2010), Designing for collective intelligence. Communications of the ACM 53, 4 (April 2010), 134–138. DOI:https://doi.org/10.1145/1721654.1721691
[] Rolf Pfeifer, Jan Henrik Sieg, Thierry Bücheler, and Rudolf Marcel Füchslin. 2010. Crowdsourcing, open innovation and collective intelligence in the scientific method: A research agenda and operational framework. (2010). DOI:https://doi.org/10.21256/zhaw-4094
[] Martijn C. Schut. 2010. On model design for simulation of collective intelligence. Information Sciences 180, 1 (2010), 132–155. DOI:https://doi.org/10.1016/j.ins.2009.08.006 Special Issue on Collective Intelligence
[] Dimitrios J. Vergados, Ioanna Lykourentzou, and Epaminondas Kapetanios (2010), A resource allocation framework for collective intelligence system engineering. In Proceedings of the International Conference on Management of Emergent Digital EcoSystems (MEDES’10). ACM, New York, NY, 182–188. DOI:https://doi.org/10.1145/1936254.1936285
[] Anita Williams Woolley, Christopher F. Chabris, Alex Pentland, Nada Hashmi, and Thomas W. Malone (2010), Evidence for a collective intelligence factor in the performance of human groups. Science 330, 6004 (2010), 686–688. DOI:https://doi.org/10.1126/science.1193147
[] Michael A. Woodley and Edward Bell (2011), Is collective intelligence (mostly) the General Factor of Personality? A comment on Woolley, Chabris, Pentland, Hashmi and Malone (2010). Intelligence 39, 2 (2011), 79–81. DOI:https://doi.org/10.1016/j.intell.2011.01.004
[] Joshua Introne, Robert Laubacher, Gary Olson, and Thomas Malone (2011), The climate CoLab: Large scale model-based collaborative planning. In Proceedings of the 2011 International Conference on Collaboration Technologies and Systems (CTS’11). 40–47. DOI:https://doi.org/10.1109/CTS.2011.5928663
[] Miguel de Castro Neto and Ana Espírito Santo (2012), Emerging collective intelligence business models. In MCIS 2012 Proceedings. Mediterranean Conference on Information Systems. https://aisel.aisnet.org/mcis2012/14
[] Peng Liu, Zhizhong Li (2012), Task complexity: A review and conceptualization framework, International Journal of Industrial Ergonomics 42 (2012), pp. 553 – 568
[] Sean Wise, Robert A. Paton, and Thomas Gegenhuber. (2012), Value co-creation through collective intelligence in the public sector: A review of US and European initiatives. VINE 42, 2 (2012), 251–276. DOI:https://doi.org/10.1108/03055721211227273
[] Antonietta Grasso and Gregorio Convertino (2012), Collective intelligence in organizations: Tools and studies. Computer Supported Cooperative Work (CSCW) 21, 4 (01 Oct 2012), 357–369. DOI:https://doi.org/10.1007/s10606-012-9165-3
[] Sandro Georgi and Reinhard Jung (2012), Collective intelligence model: How to describe collective intelligence. In Advances in Intelligent and Soft Computing. Vol. 113. Springer, 53–64. DOI:https://doi.org/10.1007/978-3-642-25321-8_5
[] H. Santos, L. Ayres, C. Caminha, and V. Furtado (2012), Open government and citizen participation in law enforcement via crowd mapping. IEEE Intelligent Systems 27 (2012), 63–69. DOI:https://doi.org/10.1109/MIS.2012.80
[] Jörg Schatzmann & René Schäfer & Frederik Eichelbaum (2013), Foresight 2.0 – Definition, overview & evaluation, Eur J Futures Res (2013) 1:15 DOI 10.1007/s40309-013-0015-4
[] Sylvia Ann Hewlett, Melinda Marshall, and Laura Sherbin (2013), How diversity can drive innovation. Harvard Business Review 91, 12 (2013), 30–30
[] Tony Diggle (2013), Water: How collective intelligence initiatives can address this challenge. Foresight 15, 5 (2013), 342–353. DOI:https://doi.org/10.1108/FS-05-2012-0032
[] Hélène Landemore and Jon Elster. 2012. Collective Wisdom: Principles and Mechanisms. Cambridge University Press. DOI:https://doi.org/10.1017/CBO9780511846427
[] Jerome C. Glenn (2013), Collective intelligence and an application by the millennium project. World Futures Review 5, 3 (2013), 235–243. DOI:https://doi.org/10.1177/1946756713497331
[] Detlef Schoder, Peter A. Gloor, and Panagiotis Takis Metaxas (2013), Social media and collective intelligence—Ongoing and future research streams. KI – Künstliche Intelligenz 27, 1 (1 Feb. 2013), 9–15. DOI:https://doi.org/10.1007/s13218-012-0228-x
[] V. Singh, G. Singh, and S. Pande (2013), Emergence, self-organization and collective intelligence—Modeling the dynamics of complex collectives in social and organizational settings. In 2013 UKSim 15th International Conference on Computer Modelling and Simulation. 182–189. DOI:https://doi.org/10.1109/UKSim.2013.77
[] A. Kornrumpf and U. Baumöl (2014), A design science approach to collective intelligence systems. In 2014 47th Hawaii International Conference on System Sciences. 361–370. DOI:https://doi.org/10.1109/HICSS.2014.53
[] Michael A. Peters and Richard Heraud. 2015. Toward a political theory of social innovation: Collective intelligence and the co-creation of social goods. 3, 3 (2015), 7–23. https://researchcommons.waikato.ac.nz/handle/10289/9569
[] Juho Salminen. 2015. The Role of Collective Intelligence in Crowdsourcing Innovation. PhD dissertation. Lappeenranta University of Technology
[] Aelita Skarzauskiene and Monika Maciuliene (2015), Modelling the index of collective intelligence in online community projects. In International Conference on Cyber Warfare and Security. Academic Conferences International Limited, 313
[] Aya H. Kimura and Abby Kinchy (2016), Citizen Science: Probing the Virtues and Contexts of Participatory Research, Engaging Science, Technology, and Society 2 (2016), 331–361, DOI:10.17351/ests2016.099
[] Philip Tetlow, Dinesh Garg, Leigh Chase, Mark Mattingley-Scott, Nicholas Bronn, Kugendran Naidoo†, Emil Reinert (2022), Towards a Semantic Information Theory (Introducing Quantum Corollas), arXiv:2201.05478v1 [cs.IT] 14 Jan 2022, 28 pages
[] Melanie Mitchell, What Does It Mean to Align AI With Human Values?, Quanta Magazine, Quantized Columns, 19 December 2022, https://www.quantamagazine.org/what-does-it-mean-to-align-ai-with-human-values-20221213#
Comment by Gerd Doeben-Henisch:
[] Nick Bostrom. Superintelligence. Paths, Dangers, Strategies. Oxford University Press, Oxford (UK), 1 edition, 2014.
[] Scott Aaronson, Reform AI Alignment, Update: 22.November 2022, https://scottaaronson.blog/?p=6821
[] Andrew Y. Ng, Stuart J. Russell, Algorithms for Inverse Reinforcement Learning, ICML 2000: Proceedings of the Seventeenth International Conference on Machine Learning, June 2000, pages 663–670
[] Pat Langley (ed.), ICML ’00: Proceedings of the Seventeenth International Conference on Machine Learning, Morgan Kaufmann Publishers Inc., 340 Pine Street, Sixth Floor, San Francisco, CA, United States, conference 29 June – 2 July 2000
[] Daniel S. Brown, Wonjoon Goo, Prabhat Nagarajan, Scott Niekum, Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations. You can read in the abstract: “A critical flaw of existing inverse reinforcement learning (IRL) methods is their inability to significantly outperform the demonstrator. This is because IRL typically seeks a reward function that makes the demonstrator appear near-optimal, rather than inferring the underlying intentions of the demonstrator that may have been poorly executed in practice. In this paper, we introduce a novel reward-learning-from-observation algorithm, Trajectory-ranked Reward EXtrapolation (T-REX), that extrapolates beyond a set of (approximately) ranked demonstrations in order to infer high-quality reward functions from a set of potentially poor demonstrations. When combined with deep reinforcement learning, T-REX outperforms state-of-the-art imitation learning and IRL methods on multiple Atari and MuJoCo benchmark tasks and achieves performance that is often more than twice the performance of the best demonstration. We also demonstrate that T-REX is robust to ranking noise and can accurately extrapolate intention by simply watching a learner noisily improve at a task over time.”
In the abstract you can read: “For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent’s interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any that have been previously learned from human feedback.”
In the abstract you can read: “Conceptual abstraction and analogy-making are key abilities underlying humans’ abilities to learn, reason, and robustly adapt their knowledge to new domains. Despite of a long history of research on constructing AI systems with these abilities, no current AI system is anywhere close to a capability of forming humanlike abstractions or analogies. This paper reviews the advantages and limitations of several approaches toward this goal, including symbolic methods, deep learning, and probabilistic program induction. The paper concludes with several proposals for designing challenge tasks and evaluation measures in order to make quantifiable and generalizable progress …”
In the abstract you can read: “Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment (“AI spring”) and periods of disappointment, loss of confidence, and reduced funding (“AI winter”). Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this paper I describe four fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I conclude by discussing the open questions spurred by these fallacies, including the age-old challenge of imbuing machines with humanlike common sense.”
[] Stuart Russell, (2019), Human Compatible: AI and the Problem of Control, Penguin Books, Allen Lane; 1st edition (October 8, 2019)
In the preface you can read: “This book is about the past, present, and future of our attempt to understand and create intelligence. This matters, not because AI is rapidly becoming a pervasive aspect of the present but because it is the dominant technology of the future. The world’s great powers are waking up to this fact, and the world’s largest corporations have known it for some time. We cannot predict exactly how the technology will develop or on what timeline. Nevertheless, we must plan for the possibility that machines will far exceed the human capacity for decision making in the real world. What then? Everything civilization has to offer is the product of our intelligence; gaining access to considerably greater intelligence would be the biggest event in human history. The purpose of the book is to explain why it might be the last event in human history and how to make sure that it is not.”
[] David Adkins, Bilal Alsallakh, Adeel Cheema, Narine Kokhlikyan, Emily McReynolds, Pushkar Mishra, Chavez Procope, Jeremy Sawruk, Erin Wang, Polina Zvyagina, (2022), Method Cards for Prescriptive Machine-Learning Transparency, 2022 IEEE/ACM 1st International Conference on AI Engineering – Software Engineering for AI (CAIN), CAIN’22, May 16–24, 2022, Pittsburgh, PA, USA, pp. 90 – 100, Association for Computing Machinery, ACM ISBN 978-1-4503-9275-4/22/05, New York, NY, USA, https://doi.org/10.1145/3522664.3528600
In the abstract you can read: “Specialized documentation techniques have been developed to communicate key facts about machine-learning (ML) systems and the datasets and models they rely on. Techniques such as Datasheets, AI FactSheets, and Model Cards have taken a mainly descriptive approach, providing various details about the system components. While the above information is essential for product developers and external experts to assess whether the ML system meets their requirements, other stakeholders might find it less actionable. In particular, ML engineers need guidance on how to mitigate potential shortcomings in order to fix bugs or improve the system’s performance. We propose a documentation artifact that aims to provide such guidance in a prescriptive way. Our proposal, called Method Cards, aims to increase the transparency and reproducibility of ML systems by allowing stakeholders to reproduce the models, understand the rationale behind their designs, and introduce adaptations in an informed way. We showcase our proposal with an example in small object detection, and demonstrate how Method Cards can communicate key considerations that help increase the transparency and reproducibility of the detection model. We further highlight avenues for improving the user experience of ML engineers based on Method Cards.”
[] John H. Miller, (2022), Ex Machina: Coevolving Machines and the Origins of the Social Universe, The SFI Press Scholars Series, 410 pages, Paperback ISBN: 978-1947864429, DOI: 10.37911/9781947864429
In the announcement of the book you can read: “If we could rewind the tape of the Earth’s deep history back to the beginning and start the world anew—would social behavior arise yet again? While the study of origins is foundational to many scientific fields, such as physics and biology, it has rarely been pursued in the social sciences. Yet knowledge of something’s origins often gives us new insights into the present. In Ex Machina, John H. Miller introduces a methodology for exploring systems of adaptive, interacting, choice-making agents, and uses this approach to identify conditions sufficient for the emergence of social behavior. Miller combines ideas from biology, computation, game theory, and the social sciences to evolve a set of interacting automata from asocial to social behavior. Readers will learn how systems of simple adaptive agents—seemingly locked into an asocial morass—can be rapidly transformed into a bountiful social world driven only by a series of small evolutionary changes. Such unexpected revolutions by evolution may provide an important clue to the emergence of social life.”
In the abstract you can read: “Analyzing the spatial and temporal properties of information flow with a multi-century perspective could illuminate the sustainability of human resource-use strategies. This paper uses historical and archaeological datasets to assess how spatial, temporal, cognitive, and cultural limitations impact the generation and flow of information about ecosystems within past societies, and thus lead to tradeoffs in sustainable practices. While it is well understood that conflicting priorities can inhibit successful outcomes, case studies from Eastern Polynesia, the North Atlantic, and the American Southwest suggest that imperfect information can also be a major impediment to sustainability. We formally develop a conceptual model of Environmental Information Flow and Perception (EnIFPe) to examine the scale of information flow to a society and the quality of the information needed to promote sustainable coupled natural-human systems. In our case studies, we assess key aspects of information flow by focusing on food web relationships and nutrient flows in socio-ecological systems, as well as the life cycles, population dynamics, and seasonal rhythms of organisms, the patterns and timing of species’ migration, and the trajectories of human-induced environmental change. We argue that the spatial and temporal dimensions of human environments shape society’s ability to wield information, while acknowledging that varied cultural factors also focus a society’s ability to act on such information. Our analyses demonstrate the analytical importance of completed experiments from the past, and their utility for contemporary debates concerning managing imperfect information and addressing conflicting priorities in modern environmental management and resource use.”
This text is part of a philosophy of science analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive posts dedicated to the HMI-Analysis for this software.
POPPER’S POSITION IN CHAPTERS 1-17
In my reading of chapters 1-17 of Popper’s The Logic of Scientific Discovery [1] I see three main concepts which are interrelated: (i) the concept of a scientific theory, (ii) the point of view of a meta-theory about scientific theories, and (iii) possible empirical interpretations of scientific theories.
Scientific Theory
According to Popper, a scientific theory is a collection of universal statements AX, accompanied by a concept of logical inference ⊢, which allows the deduction of a certain theorem t if one adds some concrete assumptions H.
Example: Theory T1 = <AX1,⊢>
AX1= {Birds can fly}
H1= {Peter is a bird}
⊢: Peter can fly
Because there exists a concrete object classified as a bird, and this concrete bird named ‘Peter’ can fly, one can infer that the universal statement is verified by this concrete bird. But the question remains open whether all observable concrete objects classifiable as birds can fly.
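The little inference above can be sketched in code. This is a toy illustration only, not anything from Popper’s text; the predicate encoding, the rule representation, and the function name are my own assumptions:

```python
# AX1: the universal statement 'Birds can fly', encoded as a rule
# premise-predicate -> conclusion-predicate (an assumption of this sketch).
AXIOMS = [("bird", "can_fly")]

# H1: the additional concrete assumption 'Peter is a bird'.
FACTS = {("bird", "Peter")}

def derive(axioms, facts):
    """One pass of the inference relation ⊢: apply each universal rule
    to every matching concrete fact and collect the results."""
    derived = set(facts)
    for premise, conclusion in axioms:
        for predicate, obj in facts:
            if predicate == premise:
                derived.add((conclusion, obj))
    return derived

theorems = derive(AXIOMS, FACTS)
print(("can_fly", "Peter") in theorems)  # True: 'Peter can fly' is deducible
```

The point of the sketch is only that the theorem follows mechanically from AX1 and H1; it says nothing about whether AX1 is true of all birds.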
One could continue with observations of several hundred concrete birds, but according to Popper this would not prove the theory T1 completely true. Such a procedure can only support a numerical universality, understood as a conjunction of finitely many observations about concrete birds like ‘Peter can fly’ & ‘Mary can fly’ & … & ‘AH2 can fly’. (cf. p.62)
The only procedure applicable to a universal theory, according to Popper, is to falsify the theory by a single observation like ‘Doxy is a bird’ and ‘Doxy cannot fly’. Then one can construct the following inference:
AX1= {Birds can fly}
H2= {Doxy is a bird, Doxy cannot fly}
⊢: ‘Doxy can fly’ & ~’Doxy can fly’
If a statement A can be inferred and simultaneously its negation ~A, then this is called a logical contradiction:
{AX1, H2} ⊢‘Doxy can fly’ & ~’Doxy can fly’
In this case the set {AX1, H2} is called inconsistent.
If a set of statements is classified as inconsistent, then everything can be derived from it. In this case one can no longer distinguish between true and false statements.
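This test for a contradiction A & ~A can be sketched as a small program. The encoding is an assumption of this sketch: statements are plain strings, with a leading ‘~’ marking negation:

```python
def is_inconsistent(statements):
    """A set of statements is inconsistent if it contains some
    statement A together with its negation ~A."""
    return any(("~" + s) in statements
               for s in statements if not s.startswith("~"))

# The derived consequences of {AX1, H2} from the example above:
derived = {"Doxy is a bird", "Doxy can fly", "~Doxy can fly"}
print(is_inconsistent(derived))  # True: 'Doxy can fly' & ~'Doxy can fly'
```

The check is purely syntactic: it flags the pair A, ~A without knowing anything about birds or flying, just as logical inconsistency is a formal property of the statement set.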
Thus, while an increasing number of confirmed observations can only increase the trust in the axioms of a scientific theory T without enabling an absolute proof, a falsification of the theory T can destroy its ability to distinguish between true and false statements.
Another idea associated with this structure of a scientific theory is that the universal statements, using universal concepts, are strictly speaking speculative ideas which deserve some faith that they will prove themselves every time one tries. (cf. p.33, 63)
Meta Theory, Logic of Scientific Discovery, Philosophy of Science
Talking about scientific theories has at least two aspects: scientific theories as objects and those who talk about these objects.
Those who talk about these objects are usually philosophers of science, a special kind of philosopher, e.g. a person like Popper.
Reading Popper’s text, one can identify the following elements which seem important for describing scientific theories in a broader framework:
A scientific theory, from the point of view of philosophy of science, represents a structure like the following (minimal version):
MT=<S, A[μ], E, L, AX, ⊢, ET, E+, E-, true, false, contradiction, inconsistent>
In a shared empirical situation S there are some human actors A acting as experts who produce expressions E of some language L. Based on their built-in adaptive meaning function μ the human actors A can relate properties of the situation S to expressions E of L. Those expressions which are considered observable and classified as true are called true expressions E+, the others are called false expressions E-. Both sets are proper subsets of E: E+ ⊂ E and E- ⊂ E. Additionally the experts can define a special set of expressions called axioms AX, universal statements which allow the logical derivation of expressions called theorems ET of the theory T, which are called logically true. If one combines the set of axioms AX with some set of empirically true expressions E+ as {AX, E+}, then one can logically derive either only expressions which are logically true as well as empirically true, or one can derive logically true expressions which are empirically true and empirically false at the same time, see the example from the paragraph before:
{AX1, H2} ⊢‘Doxy can fly’ & ~’Doxy can fly’
Such a case of a logically derived contradiction A & ~A tells us that the set of axioms AX, unified with the empirically true expressions, has become inconsistent when confronted with the known true empirical expressions: the axioms AX unified with true empirical expressions can no longer distinguish between true and false expressions.
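The minimal structure MT and the clash between logical derivation and empirical classification can be sketched in code. This is a hedged illustration only: the class name, the field types, and the method are my own assumptions, mirroring the symbols of the structure MT above:

```python
from dataclasses import dataclass, field

@dataclass
class MetaTheory:
    E: set          # all expressions of the language L
    E_plus: set     # expressions classified as empirically true (E+)
    E_minus: set    # expressions classified as empirically false (E-)
    AX: set         # axioms: universal statements
    ET: set = field(default_factory=set)  # logically derived theorems

    def contradicts_experience(self) -> bool:
        """True if some logically derived theorem is at the same time
        classified as empirically false."""
        return bool(self.ET & self.E_minus)

mt = MetaTheory(
    E={"Doxy can fly", "Doxy cannot fly"},
    E_plus={"Doxy cannot fly"},
    E_minus={"Doxy can fly"},
    AX={"Birds can fly"},
    ET={"Doxy can fly"},   # derived from AX together with 'Doxy is a bird'
)
print(mt.contradicts_experience())  # True: the Doxy case from above
```

The sketch shows only the bookkeeping: the theorem ‘Doxy can fly’ is logically true relative to AX but sits in E-, which is exactly the situation that renders {AX, E+} inconsistent.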
Popper gives some general requirements for the axioms of a theory (cf. p.71):
Axioms must be free from contradiction.
The axioms must be independent, i.e. they must not contain any axiom deducible from the remaining axioms.
The axioms should be sufficient for the deduction of all statements belonging to the theory which is to be axiomatized.
While requirements (1) and (2) are purely logical and can be proved directly, requirement (3) is different: to know whether the theory covers all statements which the experts intend as the subject area presupposes that all aspects of the empirical environment are already known. In the case of true empirical theories this does not seem plausible. Rather we have to assume an open process which generates hypothetical universal expressions that ideally will not be falsified; but if they are, then the theory has to be adapted to the new insights.
Empirical Interpretation(s)
Popper assumes that the universal statements of scientific theories are linguistic representations, and this means they are systems of signs or symbols. (cf. p.60) Expressions as such have no meaning. Meaning comes into play only if the human actors use their built-in meaning function and set up a coordinated meaning function which allows all participating experts to map properties of the empirical situation S into the used expressions as E+ (expressions classified as being actually true), E- (expressions classified as being actually false), or AX (expressions having an abstract meaning space which can become true or false depending on the activated meaning function).
Examples:
(1) Two human actors in a situation S agree about the fact that there is ‘something’ which they classify as a ‘bird’. Thus someone could say ‘There is something which is a bird’, ‘There is some bird’, or ‘There is a bird’. If there are two somethings which are ‘understood’ as being birds, they could say ‘There are two birds’, or ‘There is a blue bird’ (if one has the color ‘blue’) and ‘There is a red bird’, or ‘There are two birds; one is blue and the other is red’. This shows that human actors can relate their ‘concrete perceptions’ to more abstract concepts and can map these concepts into expressions. According to Popper, in this ‘bottom-up’ way only numerically universal concepts can be constructed. But logically there are only two cases: concrete (one) or abstract (more than one). To say that there is a ‘something’, or to say that there is a ‘bird’, establishes a general concept which is independent of the number of its possible instances.
(2) These concrete somethings, each classified as a ‘bird’, can ‘move’ from one position to another by ‘walking’ or by ‘flying’. While ‘walking’ they change their position in contact with the ‘ground’, while during ‘flying’ they ‘go up in the air’. If a human actor throws a stone up in the air, the stone will come back to the ground. A bird going up in the air can stay there and move around in the air for a long while. Thus ‘flying’ is different from ‘throwing something’ up in the air.
(3) The expression ‘A bird can fly’, understood as an expression which can be connected to the daily experience of bird-objects moving around in the air, can be empirically interpreted, but only if such a mapping, called a meaning function, exists. Without a meaning function the expression ‘A bird can fly’ has no meaning as such.
(4) Other expressions like ‘X can fly’, ‘A bird can Y’, or ‘Y(X)’ share the same fate: without a meaning function they have no meaning, but associated with a meaning function they can be interpreted. For instance, if the form of the expression ‘Y(X)’ shall be interpreted as ‘Predicate(Object)’, and a possible ‘instance’ for the predicate could be ‘can fly’ and for the object ‘a bird’, then we get ‘Can Fly(a Bird)’, translated as ‘The object “a bird” has the property “can fly”’, or shortly ‘A bird can fly’. This would usually be a possible candidate for the everyday meaning function which relates this expression to those somethings which can move up in the air.
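The role of the meaning function μ can be sketched as follows. This is purely illustrative: the dictionary-based representation of the shared situation S and the parsing of the form ‘Y(X)’ are assumptions of this sketch, not anything Popper proposes:

```python
# The shared empirical situation S, reduced to a lookup table recording
# which objects were observed to show which property (an assumption).
SITUATION = {("can_fly", "a_bird"): True,
             ("can_fly", "a_stone"): False}

def mu(expression):
    """A toy meaning function: interpret an expression of the form
    'Predicate(Object)' against the situation S. Without such a mapping
    the expression has no meaning; with it, it becomes true or false."""
    predicate, obj = expression.rstrip(")").split("(")
    return SITUATION.get((predicate, obj))  # None if nothing observable

print(mu("can_fly(a_bird)"))   # True
print(mu("can_fly(a_stone)"))  # False
```

An expression not covered by the situation, e.g. ‘can_sing(a_bird)’, gets no truth value at all from this μ, which mirrors the point that expressions in isolation are meaningless.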
Axioms and Empirical Interpretations
The basic idea of a system of axioms AX is, according to Popper, that the axioms, as universal expressions, represent a system of equations whose general terms can be substituted by certain values. The set of admissible values is different from the set of inadmissible values. The relation between the values which can be substituted for the terms is called satisfaction: the values satisfy the terms with regard to the relations. Popper introduces the term ‘model’ for a set of admissible values which satisfies the equations. (cf. p.72f)
But Popper has difficulties with an axiomatic system interpreted as a system of equations, since it cannot be refuted by the falsification of its consequences; for these too must be analytic. (cf. p.73) His main problem with axioms is that “the concepts which are to be used in the axiomatic system should be universal names, which cannot be defined by empirical indications, pointing, etc. They can be defined if at all only explicitly, with the help of other universal names; otherwise they can only be left undefined. That some universal names should remain undefined is therefore quite unavoidable; and herein lies the difficulty…” (p.74)
On the other hand Popper knows that “…it is usually possible for the primitive concepts of an axiomatic system such as geometry to be correlated with, or interpreted by, the concepts of another system, e.g. physics…. In such cases it may be possible to define the fundamental concepts of the new system with the help of concepts which were originally used in some of the old systems.” (p.75)
But the translation of the expressions of one system (geometry) into the expressions of another system (physics) does not necessarily solve his problem of the non-empirical character of universal terms. Physics especially also uses universal or abstract terms which as such have no meaning. To verify or falsify physical theories one has to show how the abstract terms of physics can be related to observable matters which can be decided to be true or not.
Thus the argument goes back to Popper’s primary problem: that universal names cannot be directly interpreted in an empirically decidable way.
As the preceding examples (1)-(4) show, for human actors it is no principal problem to relate any kind of abstract expression to some concrete real matters. The solution to the problem is given by the fact that expressions E of some language L are never used in isolation! The usage of expressions is always tied to human actors using them as part of a language L which comprises, together with the set of possible expressions E, the built-in meaning function μ which can map expressions into internal structures IS related to perceptions of the surrounding empirical situation S. Although these internal structures are processed internally in highly complex ways and are, as we know today, no 1-to-1 mappings of the surrounding empirical situation S, they are related to S, and therefore every kind of expression, even those with so-called abstract or universal concepts, can be mapped into something real if the human actors agree about such mappings!
Example:
Let us have a look at another example.
Take the system of axioms AX as the following schema: AX = {a+b=c}. This schema as such has no clear meaning. But if the experts interpret it as an operation ‘+’ with some arguments as part of a mathematical theory, then one can construct a simple (partial) model m as follows: m = {<1,2,3>, <2,3,5>}. The values are again given as a set of symbols which as such need not have a meaning, but in common usage they will be interpreted as numbers which can satisfy the general schema of the equation. In this secondary interpretation m becomes a logically true (partial) model for the axioms AX, whose empirical meaning is still unclear.
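The satisfaction relation between the schema and the value tuples can be sketched in a few lines. An illustrative sketch only; the function name is my own:

```python
def satisfies(triple):
    """A tuple <a,b,c> satisfies the schema a+b=c iff a + b equals c."""
    a, b, c = triple
    return a + b == c

# The (partial) model m from the example above:
m = {(1, 2, 3), (2, 3, 5)}
print(all(satisfies(t) for t in m))  # True: every tuple of m satisfies AX

# An inadmissible tuple does not satisfy the schema:
print(satisfies((1, 2, 2)))  # False
```

Whether the numbers in m correspond to anything in the empirical world is not decided by this check; satisfaction here is a purely logical relation between values and the schema.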
It is conceivable to use this formalism to describe empirical facts, like a group of humans collecting some objects. Different people bring objects; the individual contributions are recorded on a sheet of paper, and at the same time the objects are put into some box. From time to time someone looks into the box and counts the objects. If it has been noted that A brought 1 egg and B brought 2 eggs, then according to the theory there should be 3 eggs in the box. But perhaps only 2 can be found. Then there is a difference between the logically derived forecast of the theory, 1+2 = 3, and the empirically measured value, 1+2 = 2. If one defined every measurement a+b=c’ with c’ ≠ c as a contradiction of the theoretically given a+b=c, then with ‘1+2 = 3’ & ~‘1+2 = 3’ we would have a logically derived contradiction which leads to the inconsistency of the assumed system. But in reality the usual reaction of the counting person would not be to declare the system inconsistent, but rather to suggest that some unknown actor has, against the agreed rules, taken one egg from the box. To prove this suggestion the counter would have to find this unknown actor and show that he has taken the egg … perhaps not a simple task … And what will the next authority do: believe the suggestion of the counting person, or blame the counter of eventually having taken the missing egg himself? But would this make sense? Why then should the counter write down how many eggs have been delivered, thereby making the difference visible? …
Thus interpreting some abstract expression with regard to some observable reality is not a principal problem, but it can eventually be unsolvable for purely practical reasons, leaving questions of empirical soundness open.
SOURCES
[1] Karl Popper, The Logic of Scientific Discovery, First published 1935 in German as Logik der Forschung, then 1959 in English by Basic Books, New York (more editions have been published later; I am using the eBook version of Routledge (2002))
In this review I discuss the ideas of the book The Psychology of Science (1966) by A. Maslow. His book is in a certain sense outstanding because its point of view is, in one respect, inspired by an artificial borderline between the mainstream view of empirical science and the mainstream view of psychotherapy; in another respect the book discusses a possible integrated view of empirical science with psychotherapy as an integral part. The point of view of the reviewer is the new paradigm of a Generative Cultural Anthropology [GCA]. Part I of this review gives a summary of the content of the book as understood by the reviewer, and Part II reports some considerations on the relationship between Maslow’s point of view and that of GCA.
Changes: July 20, 2019 (rewriting of the introduction)
CONTEXT
This Philosophy Lab section of the uffmm science blog is the latest extension of the uffmm blog, added in July 2019. It has been provoked by the meta-reflections about the AAI engineering approach.
SCOPE OF SECTION
This section deals with the following topics:
How can we talk about science, including the scientists (and engineers!) as its main actors? In a certain sense one can say that science is mainly a specific way of communicating and of verifying the communicated content. This presupposes that there is something called knowledge, located in the heads of the actors.
The presupposed knowledge usually targets different scopes encoded in different languages. The language enables or delimits meaning, and meaning objects can either enable or delimit a certain language. As part of society, and as exemplars of the species Homo sapiens, scientists share the general behavioral tendency to assimilate majority behavior and majority meanings. This can reduce the realm of knowledge in many ways. Biological life in general is the opposite of physical entropy, autopoietically generating more and more complexity over time. This is due to a built-in creativity and the freedom to select. Thus life is always oscillating between conformity and experiment.
The survival of modern societies depends highly on the ability to communicate with maximal sharing of experience, by exploring possible state spaces with their pros and cons fast and extensively. Knowledge must be visible to all around the clock: computable, modular, constructive, in the format of interactive games with transparent rules. Machines should be re-formatted as primarily helping humans, not the other way around.
To enable such new open and dynamic knowledge spaces one has to redefine computing machines, extending the Turing machine (TM) concept to a world machine (WM) concept which offers several new services for social groups, whole cities, or countries. In the future there will be no distinction between man and machine, because there will be a complete symbiotic unification: the machines will have become an integral part of a personality, an extension of the body in some new way, probably far beyond the cyborg paradigm.
The basic creativity and freedom of biological life is further developed in a fundamental, all-embracing spirituality of life in the universe, targeting a re-creation of the whole universe by using the universe for the universe.
Integrating Engineering and the Human Factor (info@uffmm.org) eJournal uffmm.org ISSN 2567-6458