The abstract elements introduced so far are still few, but they already allow us to delineate a certain ‘abstract space’. So far there are:
abstract elements in current memory (also ‘consciousness’) based on concrete perception,
which can then pass over into stored abstract – and dynamic – elements of potential memory,
further abstract concepts of nth order in current as well as in potential memory,
abstract elements in current memory (also ‘consciousness’) based on concrete perception which function as linguistic elements,
which can then also pass over into stored abstract – and dynamic – elements of potential (linguistic) memory,
likewise abstract linguistic concepts of nth order in current as well as in potential memory,
abstract relations between abstract linguistic elements and other abstract elements of current as well as potential memory (‘meaning relations’),
linguistic expressions for the description of factual changes, and
linguistic expressions for the description of analytic changes.
The generation of abstract linguistic elements thus allows, in many ways, the description of changes to something given: either (i) a change is only ‘described’ as an ‘unconditional’ event, or (ii) one works with ‘rules of change’, which clearly distinguish between ‘condition’ and ‘effect’. This second case, with change rules, can be related to many varieties of ‘logical inference’. In fact, any known form of ‘logic’ can be ‘emulated’ with this general concept of change rules.
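The idea of a change rule that clearly separates ‘condition’ and ‘effect’ can be made concrete with a small sketch. The following Python fragment is only an illustration under our own assumptions (the function name `apply_rule` and the encoding of a situation as a set of sentences are ours, not part of any particular logic or software):

```python
# Minimal sketch of a 'change rule': a situation is a set of
# sentences; a rule fires only if all its condition sentences
# are present, and its effect removes and adds sentences.
# This encoding is an assumption for illustration only.

def apply_rule(state, condition, add, remove):
    """Apply a change rule to a state if its condition holds."""
    if condition <= state:              # all condition sentences present?
        return (state - remove) | add   # effect: remove, then add
    return state                        # condition not met: no change

state = {"It is raining."}
rule = ({"It is raining."}, {"The street is wet."}, set())
for sentence in sorted(apply_rule(state, *rule)):
    print(sentence)
# It is raining.
# The street is wet.
```

In this reading, an ‘unconditional’ event of case (i) is simply a rule with an empty condition, which always fires.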
This idea, only hinted at here, will be explored in some detail and demonstrated in various applications as we proceed.
Glimpses of an Ontology
Already these few considerations about ‘abstract elements’ show that there are different forms of ‘being’.[1]
In the scheme of FIG. 1, there are those givens in the real external world which can become the trigger of perceptions. However, our brain cannot directly recognize these ‘real givens’, only their ‘effects in the nervous system’: first (i) as ‘perceptual event’, then (ii) as ‘memory construct’, distinguished into (ii.1) ‘current memory’ (working memory, short-term memory, …) and (ii.2) ‘potential memory’ (long-term memory, various functional classifications, …).[2]
If one calls the ‘contents’ of perception and current memory ‘conscious’ [3], then the primary form of ‘being’, which we can directly get hold of, would be those ‘conscious contents’, which our brain ‘presents’ to us from all its neuronal calculations. Our ‘current perceptions’ then stand for the ‘reality out there’, although we actually cannot grasp ‘the reality out there’ ‘directly, immediately’, but only ‘mediated, indirectly’.
Insofar as we are ‘aware’ of ‘current contents’ that ‘potential memory’ makes ‘available’ to us (usually called ‘remembering’ in everyday life; as a result, a ‘memory’), we also have some form of ‘primary being’ available, but this primary being need not have any current perceptual counterpart; hence we classify it as ‘only remembered’ or ‘only thought’ or ‘abstract’ without ‘concrete’ perceptual reference.
For the question of the correspondence in content between ‘real givenness’ and ‘perceived givenness’, as well as between ‘perceived givenness’ and ‘remembered givenness’, there are countless findings, all of which indicate that these two relations are not ‘1-to-1’ mappings under the aspect of ‘mapping similarity’. This has multiple reasons.
In the case of perceptual similarity with the triggering real givens, the interaction between the real givens and the respective sense organs already plays a role, then the processing of the primary sense data by the sense organ itself, as well as the subsequent processing in the nervous system. The brain works with ‘time slices’, with ‘selection/condensation’ and with ‘interpretation’. The latter results from the ‘echo’ from potential memory that ‘comments’ on current neural events. In addition, different ‘emotions’ can influence the perceptual process. [4] The ‘final’ product of transmission, processing, selection, interpretation and emotions is then what we call ‘perceptual content’.
In the case of ‘memory similarity’, the processing involved in ‘abstracting’ and ‘storing’, the continuous ‘activations’ of memory contents, as well as the ‘interactions’ between remembered things indicate that ‘memory contents’ can change significantly in the course of time without the respective person, who is currently remembering, being able to read this from the memory contents themselves. In order to recognize these changes, one needs ‘records’ of preceding points in time (photos, films, protocols, …), which can provide clues to the real circumstances against which one can compare one’s memories.[5]
As one can see from these considerations, the question of ‘being’ is not a trivial one. Single fragments of perceptions or memories are rarely 1-to-1 ‘representatives’ of possible real conditions. In addition, there is the high ‘rate of change’ of the real world, not least through the activities of humans themselves.
COMMENTS
[1] The word ‘being’ is one of the oldest and most popular concepts in philosophy. In the case of European philosophy, the concept of ‘being’ appears in the context of classical Greek philosophy and spreads through the centuries and millennia throughout Europe, and then into those cultures that had/have an exchange of ideas with European culture. Philosophers have called, and still call, the systematic occupation with the concept of ‘being’ ‘ontology’. See the article ‘Ontology’ in wkp-en: https://en.wikipedia.org/wiki/Ontology .
[2] On the subject of ‘perception’ and ‘memory’ there is a huge literature in various empirical disciplines. The most important may well be ‘biology’, ‘experimental psychology’ and ‘brain science’, supplemented by philosophical ‘phenomenology’, and then combinations of these such as ‘neuro-psychology’ or ‘neuro-phenomenology’, etc. In addition there are countless other special disciplines such as ‘linguistics’ and ‘neuro-linguistics’.
[3] A question that remains open is how the concept of ‘consciousness’, which is common in everyday life, is to be placed in this context. Like the concept of ‘being’, the concept of ‘consciousness’ has been and still is very prominent in recent European philosophy, but it has also received strong attention in many empirical disciplines; especially in the field of tension between philosophical phenomenology, psychology and brain research, there is a long and intense debate about what is to be understood by ‘consciousness’. Currently (2023) there is no clear, universally accepted outcome of these discussions. Of the many available working hypotheses, the author of this text considers the connection to the empirical models of ‘current memory’ in close connection with the models of ‘perception’ to be the most comprehensible so far. In this context also the concept of the ‘unconscious’ would be easy to explain. For an overview see the entry ‘consciousness’ in wkp-en: https://en.wikipedia.org/wiki/Consciousness
[4] In everyday life we constantly experience that different people perceive the same real events differently, depending on which ‘mood’ they are in, which current needs they have at the moment, which ‘previous knowledge’ they have, and what their real position to the real situation is, to name just a few factors that can play a role.
[5] Classical examples for the lack of quality of memories have always been ‘testimonies’ to certain events. Testimonies almost never agree ‘1-to-1’, at best ‘structurally’, and even in this there can be ‘deviations’ of varying strength.
In Part 1, the beginning of a simple example was presented, in which an actor (here: ‘Gerd’) is sitting in his office, feels hungry, and imagines that he does not want to be hungry. In Part 1, he decides to leave his office and go out to eat. Embedded in the mini-theory of this example, several concepts are explained: text types (ACTUAL description, TARGET description, CHANGE description), rule application, the context of the oksimo.R software, theory testing, inference testing, goal-fulfillment testing, starting a simulation, and logical inference.
In Part 2, the mini-theory will be completed. The story ends with the actor Gerd not feeling hungry anymore (at least not for the moment :-)).
Continuation of the story
An oksimo.R theory can be understood simply as a ‘story’, a kind of ‘script’, although this story has all the properties of a full empirical theory (more on theory below).
The story so far is simply told:
Starting point (Scene 1): Gerd is sitting in his office. Gerd is hungry. Target: Gerd is not hungry.
Scene 2: Gerd leaves his office. Gerd is hungry. Target achievement so far: 0%.
The transition from Scene 1 to Scene 2 was only possible because a change rule was adopted which states that Scene 1 can be changed if the condition ‘Gerd is hungry’ holds. Since this is the case, the property ‘Gerd is sitting in his office’ was removed and the new property ‘Gerd is leaving his office’ was added.
For a further continuation, a rule is currently missing. However, the only change rule so far can be reapplied over and over again, so that Scene 2 is repeated any number of times (like a record player hitting a broken groove, repeating the same track endlessly until we turn it off).
This ‘repeatability’ can become a problem if one is not careful. Here is an example of such unwanted repetition.
Unwanted repetition(s)
Since there is a Greek bistro ‘around the corner to the left’ where Gerd could eat a snack, we write down the following new change rule:
CHANGE Description 2:
IF:
Gerd is hungry.
THEN:
Add as a property to the ACTUAL situation: Gerd decides to go to the Greek around the corner.
Remove as a property from the ACTUAL situation: — Nothing —
APPLICATION of the change description:
Since the condition ‘Gerd is hungry.’ is met, the rule could be applied and we would get the following result with this rule:
THEN:
NEW ACTUAL situation (with rule 2):
Gerd decides to go to the Greek around the corner. Gerd is hungry.
However, rule 1 is still there; it does not disappear (although removing it would be conceivable as an option). This rule has the same condition as rule 2 and can therefore also be applied. It would produce the following result:
NEW ACTUAL situation (with rule 1):
Gerd leaves his office. Gerd is hungry.
A ‘union’ of the continuation according to rule 1 and the continuation according to rule 2 leads to the following result:
Gerd decides to go to the Greek around the corner. Gerd is hungry. Gerd leaves his office.
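This ‘union’ of continuations can be sketched in a few lines. The function `step` and the tuple encoding `(condition, add, remove)` are our own assumptions for illustration, not the actual oksimo.R code:

```python
# Both rules have the same condition 'Gerd is hungry.', so both fire
# on Scene 1, and their effects are united into one new situation.
# The (condition, add, remove) encoding is an assumption of this sketch.

def step(state, rules):
    new_state = set(state)
    for condition, add, remove in rules:
        if condition <= state:
            new_state = (new_state - remove) | add
    return new_state

rule1 = ({"Gerd is hungry."}, {"Gerd leaves his office."},
         {"Gerd is sitting in his office."})
rule2 = ({"Gerd is hungry."},
         {"Gerd decides to go to the Greek around the corner."}, set())

scene1 = {"Gerd is hungry.", "Gerd is sitting in his office."}
for sentence in sorted(step(scene1, [rule1, rule2])):
    print(sentence)
# Gerd decides to go to the Greek around the corner.
# Gerd is hungry.
# Gerd leaves his office.
```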
With the oksimo.R software (level 2) this would look like this:
Entering a Change Rule
Rule: Food1-Location1 Conditions: Gerd is hungry. Positive Effects: Gerd decides to go to the Greek around the corner.
Negative Effects: — Nothing —
Starting a New Simulation
With code number one you can start a new simulation. We need the following ‘ingredients’:
Your vision:
Gerd is not hungry.
Initial states:
Gerd is hungry.,Gerd is sitting in his office.
Round 1
Current states: Gerd is hungry.,Gerd leaves his office.,Gerd decides to go to the Greek around the corner.
Current visions: Gerd is not hungry.
0.00 percent of your vision was achieved by reaching the following states:
None
Round 2
Current states: Gerd is hungry.,Gerd leaves his office.,Gerd decides to go to the Greek around the corner.
Current visions: Gerd is not hungry.
0.00 percent of your vision was achieved by reaching the following states:
None
Already after two simulation cycles one recognizes that everything repeats itself. And with knowledge of the change rules, one knows that both are ‘activated’ again and again as long as their condition is fulfilled, which is the case in this concrete example. This points to a general structure of rule-driven changes with situational reference.
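That the simulation repeats itself can also be checked mechanically: one applies all applicable rules per round and stops as soon as a state recurs. The rule encoding below is our own sketch, not the oksimo.R internals:

```python
# Apply all rules whose condition holds and unite their effects,
# then iterate until a round reproduces an earlier state.
# The (condition, add, remove) encoding is an assumption of this sketch.
rules = [
    # rule 1: Gerd leaves his office
    ({"Gerd is hungry."}, {"Gerd leaves his office."},
     {"Gerd is sitting in his office."}),
    # rule 2: Gerd decides to go to the Greek
    ({"Gerd is hungry."},
     {"Gerd decides to go to the Greek around the corner."}, set()),
]

state = {"Gerd is hungry.", "Gerd is sitting in his office."}
seen = []
while state not in seen:
    seen.append(state)
    nxt = set(state)
    for condition, add, remove in rules:
        if condition <= state:
            nxt = (nxt - remove) | add
    state = nxt
print(f"round {len(seen)} reproduces the state of round {len(seen) - 1}")
# round 2 reproduces the state of round 1
```

As long as the non-specific trigger ‘Gerd is hungry.’ holds, both rules fire every round and the state can never change again.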
On the meta-logic of situational change rules
At this point it should be remembered again that an ACTUAL description is nothing more than a ‘set of linguistic expressions’ of the respective language chosen. Here the English language is used. In the original source of the oksimo.org blog the German language is used. Any other language is also possible.
However, from the point of view of the respective actor working with such ACTUAL descriptions, every linguistic expression used in the space of his ‘linguistic understanding’ additionally has a ‘special meaning’, which can partially ‘correlate’ with ‘properties of the external body world’ in a ‘specific way’. So, if someone reads the expression ‘Gerd’, he will mostly associate with it the idea that it is the ‘name of an individual’. And when one reads the linguistic expression ‘… sitting in his office’, one will usually think of a ‘room in a building’. Both notions, ‘name of an individual’ as well as ‘room in a building’, normally have the property that one can ‘relate’ concrete ‘objects of the external body world’ to them via ‘individual perception’. This can happen in many ways, e.g. someone else says to me, “Look (pointing to a person), this is Gerd”, or I come into room 204 in building 1 of the Frankfurt University of Applied Sciences and someone says to me, “Look, this is Gerd’s office”. In both cases, a concrete perception can then connect with an ‘imagined conception’ in such a way that the inherently ‘abstract’ conception of an individual person in a room connects (associates) with a bundle of sensually perceived properties.
With this background knowledge one can then understand why an ACTUAL description as a set of linguistic expressions has ‘two faces’: (i) at first sight there is only a set of linguistic expressions without any recognizable further property, and (ii), starting from the linguistic expressions, mediated by the linguistic meaning knowledge of a speaker-hearer of the respective language, a set of meanings appears, which in the case of an ACTUAL description must, by agreement, all have at least one concrete reference to the external body world. Roughly, one can therefore say at this point that every linguistic expression of a normal language can be linked (associated) with a ‘property’ of the external body world. In this second sense, an ACTUAL description then represents not only a ‘set of linguistic expressions’ but at the same time (language comprehension in the actor presupposed) a ‘set of body-world properties’. The removal of a linguistic expression then means at the same time the removal of a property, and the addition of a linguistic expression the addition of a property.
Due to this generally assumed ‘linguistic dimension of meaning’ in each involved actor, ACTUAL descriptions thus potentially represent a connection between the virtual images in the brain of an actor and possible sensually perceptible correlates in an external body world linked to it, for which a ‘self-driven dynamic’ is assumed. By this is meant that the world of our sensual perception (linked with our memory!) apparently constantly ‘partially changes’ and simultaneously ‘partially stays constant’. The ‘extension’ of the ‘set of properties of the external body world’ seems to be almost ‘infinite’, and likewise the possible extent of the changes.
Against this (largely hypothetical) background, any ACTUAL description always appears as a ‘very small selection’ from this set of body-world properties, and a concrete ACTUAL description forms a kind of ‘snapshot’ of a continuously dynamic event, which can only be ‘traced’ in a highly simplified way via the explicitly formulated rules of change. In particular, there is the problem of how to keep an ACTUAL description ‘up to date’ when the external body world is continuously changing due to its ‘inherent dynamics’, without any oksimo.R theory-builder actor having formulated a single rule of change. In other words, an ACTUAL description ‘becomes obsolete’ by itself if the ‘coupling’ of the ACTUAL description to the external body world is not ensured with ‘appropriate’ change rules. In order to do this, one needs a ‘translator’ who continuously ‘maps’ the changes of the external body world into the linguistic meaning space of the actors, who then generate corresponding sets of linguistic expressions.
Further possible requirements for a process
After these meta-logical considerations about the function of ACTUAL descriptions in the interplay with an assumed external body world with its own inherent dynamics, some further aspects shall be brought up here, which are/can be significant for the creation of a ‘plan’.
So far the small oksimo.R theory – the current story – has the following format:
Initial state (Scene 0):
Gerd is hungry. Gerd is sitting in his office.
The vision:
Gerd is not hungry.
Scene 1:
Gerd is hungry. Gerd leaves his office. Gerd decides to go to the Greek around the corner.
Success: 0.00 percent
Scene 2:
Gerd is hungry. Gerd leaves his office. Gerd decides to go to the Greek around the corner.
Success: 0.00 percent
The goal is still that the actor Gerd reaches the ‘Greek around the corner’, so that he can eat something there and his feeling of hunger disappears.
For this, on the one hand, there must be rules that move the actor ‘through space’ to the ‘Greek around the corner’, on the other hand, the rules must be such that they cannot activate properties that should no longer occur in the process at all.
A rule like ‘Food-Location1’, which ensures that Gerd leaves his office, should not be applied again at a ‘later time’; the same holds for the rule ‘Food1-Decision1’, which describes the decision that Gerd wants to go to the ‘Greek around the corner’.
Since the activation of a change rule depends on its ‘condition’, the condition for a rule should be such that the ‘triggering property’ is as ‘process-specific’ as possible. This is not the case for the property ‘Gerd is hungry’, which is valid throughout the whole story until the actual eating: all rules with this ‘non-specific trigger’ would be activated again and again, until at some point the eating produces the new property ‘Gerd is not hungry’.
This raises the question of how an ACTUAL description should be formatted such that, in addition to ‘long-living’ properties, there are also ‘short-living’ properties that can actually serve selectively as ‘triggers for rule activation’.
Time information is often not enough
In everyday life we are used to linking events to a certain time, thereby assuming the existence of clocks that are synchronized worldwide, perhaps extended by a calendar with days, weeks, months and years. Such a tool can easily be introduced into an oksimo.R theory. But this solves the problem only partially. For many events one knows in advance neither ‘whether’ they will occur at all nor ‘when’ this will happen. In that case, the only possibility is to link a ‘subsequent event’ directly to a certain ‘preceding’ event: for example, it only makes sense to open the umbrella when it actually rains, and there is usually no exact date when this event will occur.
Design perspectives: Goal and precision
What use are these considerations in the specific example where a ‘sequence’ is sought that leads to Gerd experiencing that his feeling of hunger disappears?
Two general considerations may be helpful here:
Thinking from the end (goal)
What ‘accuracy’ is required/desired?
If one knows a goal (which is not self-evident; often one first has to find out what a meaningful goal could be), then one can try to think ‘backwards’ from the goal by being guided by the question, ‘Which action A do I have to do to achieve result B?’. In the case of the desired goal state ‘Gerd is not hungry’, the usual experience would be to eat something ‘appropriate’, which leads to the ‘disappearance of the feeling of hunger’ (most of the time). Then you have to know what that ‘food’ might be, where to get it, and what you would have to do to get there (let’s ignore the case of someone just bringing something from home to eat). From such ‘backward-thinking’ a hypothetical sequence of actions can emerge, which can become the basis for a ‘plan’, which the actor will work out ‘in his head’ and then implement piecemeal by corresponding ‘real actions’.
The question of ‘accuracy of representation’ (of a story, of a theory) is not easy to answer. If engineers have to program a robot that is supposed to be able to perform certain operations, then this will normally require an almost merciless accuracy (apart from the case that there are already many ready-made modules that can take care of ‘small stuff’ (such as so-called ‘machine learning’ after successful training)). If it is the author of a crime novel or the author of a screenplay, then besides ‘factual aspects’ very much also the ‘effect on the readers / viewers’ must be considered. In the case of achieving a concrete goal in a concrete world, the potential success of the implementation of a description depends entirely on whether the concrete requirements of the world – here the everyday world – are completely satisfied. Of course, the reader/listener/user of a description also plays a major role: If we can assume that we are dealing with ‘experts’ who ‘know’ the process to be performed well, we can perhaps work with hints only; if we are dealing more with ‘newcomers’, then we must provide very detailed information. Sometimes a purely text-based description is not sufficient; more is then needed: pictures, videos or even your own training.
With a target and with ‘everyday’ accuracy
In the concrete case, there exists a target and ‘everyday experience’ is to be taken as a yardstick for accuracy; the latter, of course, leaves much ‘room for interpretation’.
Starting from the goal ‘thought backwards’ the following chain of actions seems plausible as a ‘hypothetical plan’:
‘Gerd is not hungry’ because:
‘Gerd is eating his stew’ because:
‘Gerd gets his order’ because:
‘Gerd is ordering a stew’ because:
‘Gerd is standing in front of the counter’ because:
‘Gerd enters the bistro’ because:
‘Gerd goes to the Greek around the corner’ because:
‘Gerd decides to go to the Greek around the corner’ because:
‘Gerd is hungry’, ‘Gerd is in his office’, because:
… there is a ‘cut’ here: arbitrary decision where to start the story/theory …
In fact, at any moment, there is not only one choice, and many things can happen during the ‘execution’ of this ‘plan’, which can result in a change of the plan. And, of course, there are many more possible aspects that could (or should) be relevant for the execution of this plan.
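The ‘backward thinking’ above can be played through mechanically: starting from the goal, one follows the ‘because’ links until the chosen ‘cut’. The dictionary encoding below is our own illustration of the chain in the text (for simplicity, only ‘Gerd is hungry’ is kept as the starting condition; ‘Gerd is in his office’ is omitted):

```python
# 'because' links taken from the hypothetical plan in the text;
# the dict encoding and the function name are our own assumptions.
because = {
    "Gerd is not hungry.": "Gerd is eating his stew.",
    "Gerd is eating his stew.": "Gerd gets his order.",
    "Gerd gets his order.": "Gerd is ordering a stew.",
    "Gerd is ordering a stew.": "Gerd is standing in front of the counter.",
    "Gerd is standing in front of the counter.": "Gerd enters the bistro.",
    "Gerd enters the bistro.": "Gerd goes to the Greek around the corner.",
    "Gerd goes to the Greek around the corner.":
        "Gerd decides to go to the Greek around the corner.",
    "Gerd decides to go to the Greek around the corner.": "Gerd is hungry.",
}

def plan_backwards(goal):
    """Walk backwards from the goal until the 'cut' (no more links)."""
    chain = [goal]
    while chain[-1] in because:
        chain.append(because[chain[-1]])
    return list(reversed(chain))   # forward-readable hypothetical plan

for action in plan_backwards("Gerd is not hungry."):
    print(action)
```

The result is the chain of actions read forwards, which can then become the basis for change rules.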
Constant and variable properties
As observed earlier, there are properties that are ‘rather constant’ and those that are ‘short-lived’. For example, in the context of the ‘plan’ above, the property ‘Gerd is hungry’ is constant from the beginning until the event ‘Gerd is not hungry’. Another property like ‘Gerd leaves his office’ is rather short-lived.
If we take the above hypothetical plan as a reference point, the following distribution of ‘rather constant’ and ‘rather short-lived’ properties suggests itself (left column ‘rather constant’, right column ‘rather short-lived’):

Gerd is hungry. | Gerd is in his office.
Gerd is hungry. | Gerd decides …
Gerd is hungry. | Gerd walks …
Gerd is hungry. | Gerd enters …
Gerd is hungry. | Gerd stands in front of …
Gerd is hungry. | Gerd orders …
Gerd is hungry. | Gerd gets …
Gerd is hungry. | Gerd eats …
Gerd is not hungry. |
A simple strategy to avoid inappropriate repetitions would be the one in which the condition of a change rule refers to a ‘rather short-lived’ property that ‘automatically’ disappears with the implementation of a change rule.
Example (short form):
If ‘Gerd is hungry’ and ‘Gerd is in his office’, then add: ‘Gerd decides to …’.
If ‘Gerd is hungry’ and ‘Gerd decides …’, then add: ‘Gerd goes …’, delete: ‘Gerd is in his office’.
If ‘Gerd is hungry’ and ‘Gerd goes …’, then add: ‘Gerd enters …’, delete: ‘Gerd goes …’.
If ‘Gerd is hungry’ and ‘Gerd enters …’, then add: ‘Gerd stands in front of …’, delete: ‘Gerd enters …’.
If ‘Gerd is hungry’ and ‘Gerd stands in front of …’, then add: ‘Gerd orders …’, delete: ‘Gerd stands in front of …’.
If ‘Gerd is hungry’ and ‘Gerd orders …’, then add: ‘Gerd gets …’, delete: ‘Gerd orders …’.
If ‘Gerd is hungry’ and ‘Gerd gets …’, then add: ‘Gerd eats …’, delete: ‘Gerd gets …’.
If ‘Gerd is hungry’ and ‘Gerd eats …’, then add: ‘Gerd is not hungry’, delete: ‘Gerd eats …’.
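This rule chain can be played through in a few lines. The engine below is a simplified sketch under our own assumptions (sets of sentences, all applicable rules applied per round); the property texts are shortened stand-ins for the full sentences:

```python
# Each rule's condition contains a 'short-lived' trigger that the rule
# itself deletes, so no rule can fire twice: no unwanted repetitions.
H = "Gerd is hungry."
chain = ["Gerd is in his office.", "Gerd decides ...", "Gerd goes ...",
         "Gerd enters ...", "Gerd stands in front of ...",
         "Gerd orders ...", "Gerd gets ...", "Gerd eats ..."]

# One rule per step: if hungry and the current trigger holds, add the
# next trigger and delete the current one; the last rule ends the hunger.
rules = [({H, cur}, {nxt}, {cur}) for cur, nxt in zip(chain, chain[1:])]
rules.append(({H, chain[-1]}, {"Gerd is not hungry."}, {chain[-1], H}))

state = {H, chain[0]}
rounds = 0
while True:
    new_state = set(state)
    for cond, add, rem in rules:
        if cond <= state:
            new_state = (new_state - rem) | add
    if new_state == state:     # no rule applicable anymore: stop
        break
    state = new_state
    rounds += 1
print(rounds, sorted(state))   # 8 ['Gerd is not hungry.']
```

Exactly one rule fires per round, and after eight rounds the process halts by itself in the target state.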
This small example already shows very clearly the ‘double nature’ of our everyday reality: one side is what we do ourselves, the other is the ‘effects’ of our doing in the external body world. When someone intends to ‘walk’ and then actually walks, he moves his body, which ‘automatically’ changes the position of the body in the external body world. Normally, one does not describe these ‘effects’ explicitly, because every person knows from everyday experience that this is so. But if one wants to create a ‘description’ of the external body world with its properties such that an ACTUAL description contains everything that is important for the description of a process, then one must also make some of the ‘implicit properties’ ‘explicit’ by including them in the description. Most important is attention to the ‘more ephemeral’ (temporary) properties, whose presence or absence is crucial for many actions.
Simulation extension
The extended simulation adopts the action outline from ‘backward thinking’ (see above). New change rules are formulated for this purpose.
The previous ACTUAL description is retained:
Eat1
Gerd is sitting in his office. Gerd is hungry.
The current TARGET description is retained:
Eat1-v1
Gerd is not hungry.
The following change rules are reformulated:
Eat1-Decision1
Rule: Eat1-Decision1 Conditions: Gerd is hungry. Gerd is sitting in his office. Positive Effects: Gerd goes to the Greek. Gerd decides to go to the Greek restaurant around the corner. Negative Effects: Gerd is sitting in his office.
Eat1-Enter1
Rule: Eat1-Enter1 Conditions: Gerd goes to the Greek.
Positive Effects: Gerd is in the bistro. Gerd enters the bistro.
Negative Effects: Gerd goes to the Greek.
Gerd decides to go to the Greek restaurant around the corner.
Eat1-Stand-Before1
Rule: Eat1-Stand-Before1 Conditions: Gerd enters the bistro. Positive Effects: Gerd stands in front of the counter.
Negative Effects: Gerd enters the bistro.
Eat1-Order1
Rule: Eat1-Order1 Conditions: Gerd stands in front of the counter. Positive Effects: Gerd orders a stew.
Negative Effects: Gerd stands in front of the counter.
Eat1-Come1
Rule: Eat1-Come1 Conditions: Gerd orders a stew. Positive Effects: Gerd gets his stew. Negative Effects: Gerd orders a stew.
Eat1-Food1
Rule: Eat1-Food1 Conditions: Gerd gets his stew. Positive Effects: Gerd eats his stew.
Negative Effects: Gerd gets his stew.
Eat1-Not-Hungry1
Rule: Eat1-Not-Hungry1 Conditions: Gerd eats his stew. Positive Effects: Gerd is not hungry.
Negative Effects: Gerd is hungry. Gerd eats his stew.
Collecting single Rules in one Rules Document
If you wanted to start a new simulation now, you would normally have to enter each rule individually. When experimenting, this can quickly become very annoying. Instead, you can combine all rules that ‘thematically’ ‘belong together’ in a ‘rule document’. Then you only need to enter the name of the rule document in the future.
In the present case, a rule document with the name ‘Eat1-RQuantity1’ is created. This document then includes the following rules:
Eat1-Decision1
Eat1-Enter1
Eat1-Stand-Before1
Eat1-Order1
Eat1-Come1
Eat1-Food1
Eat1-Not-Hungry1
To start a new simulation, you then only need to enter the following:
Your vision:
Gerd is not hungry.
Initial states:
Gerd is hungry.,Gerd is sitting in his office.
Initial math states
Round 1
Current states: Gerd is hungry.,Gerd decides to go to the Greek restaurant around the corner.,Gerd goes to the Greek.
Current visions: Gerd is not hungry.
Current values:
0.00 percent of your vision was achieved by reaching the following states:
None
Round 2
Current states: Gerd is hungry.,Gerd is in the bistro.,Gerd enters the bistro.
Current visions: Gerd is not hungry.
Current values:
0.00 percent of your vision was achieved by reaching the following states:
None
Round 3
Current states: Gerd is hungry.,Gerd stands in front of the counter.,Gerd is in the bistro.
Current visions: Gerd is not hungry.
Current values:
0.00 percent of your vision was achieved by reaching the following states:
None
Round 4
Current states: Gerd is hungry.,Gerd orders a stew.,Gerd is in the bistro.
Current visions: Gerd is not hungry.
Current values:
0.00 percent of your vision was achieved by reaching the following states:
None
Round 5
Current states: Gerd is hungry.,Gerd gets his stew.,Gerd is in the bistro.
Current visions: Gerd is not hungry.
Current values:
0.00 percent of your vision was achieved by reaching the following states:
None
Round 6
Current states: Gerd is hungry.,Gerd is in the bistro.,Gerd eats his stew.
Current visions: Gerd is not hungry.
Current values:
0.00 percent of your vision was achieved by reaching the following states:
None
Round 7
Current states: Gerd is not hungry.,Gerd is in the bistro.
Current visions: Gerd is not hungry.
Current values:
100.00 percent of your vision was achieved by reaching the following states:
Gerd is not hungry.,
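The ‘x percent of your vision was achieved’ line reported after each round can be understood as a simple set comparison between the current state and the vision. The following is a hedged sketch of that goal-fulfillment test (the function name `vision_percent` is our own, not part of the oksimo.R software):

```python
# Goal-fulfillment test: which vision sentences are present in the
# current state, and what percentage of the vision do they cover?
def vision_percent(state, vision):
    reached = sorted(state & vision)
    return 100.0 * len(reached) / len(vision), reached

vision = {"Gerd is not hungry."}

# Round 1: nothing of the vision is reached yet.
print(vision_percent({"Gerd is hungry.", "Gerd goes to the Greek."}, vision))
# (0.0, [])

# Round 7: the vision is fully reached.
print(vision_percent({"Gerd is not hungry.", "Gerd is in the bistro."}, vision))
# (100.0, ['Gerd is not hungry.'])
```

With a vision of several sentences, partial fulfillment (e.g. 50%) would be reported in the same way.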
The whole text shows a dynamic that induces many changes, which are difficult to plan ‘in advance’.
Perhaps, some time, it will look like a ‘book’, at least ‘for a moment’.
I have started a ‘book project’ in parallel. This was motivated by the need to provide potential users of our new oksimo.R software with a coherent explanation of how the oksimo.R software, when used, generates an empirical theory in the format of a screenplay. The primary source of the book is in German and will be translated step by step here in the uffmm.blog.
INTRODUCTION
In a rather foundational paper about the idea of how one can generalize ‘systems engineering’ [*1] to the art of ‘theory engineering’ [1], a new conceptual framework has been outlined for a ‘sustainable applied empirical theory (SAET)’. Part of this new framework is the idea that the classical recourse to groups of special experts (mostly ‘engineers’ in engineering) is too restrictive in light of the new requirement of being sustainable: sustainability is primarily based on ‘diversity’, combined with the ‘ability to predict’ from this diversity probable future states which keep life alive. The aspect of diversity induces the challenge to see every citizen as a ‘natural expert’, because nobody can know in advance, and from some non-existing absolute point of truth, which knowledge is really important. History shows that the ‘mainstream’ is usually biased to a large degree [*1b].
With this assumption that every citizen is a ‘natural expert’, science turns into a ‘general science’ in which all citizens are ‘natural members’. I will call this more general concept of science ‘sustainable citizen science (SCS)’ or ‘Citizen Science 2.0 (CS2)’. The important point here is that a sustainable citizen science is not necessarily an ‘arbitrary’ process. While the requirement of ‘diversity’ relates to possible contents, ideas, experiments, and the like, it follows from the other requirement of ‘predictability’, of being able to make some useful ‘forecasts’, that the given knowledge has to be in a format which allows, in a transparent way, the construction of some consequences which ‘derive’ from the ‘given’ knowledge and enable some ‘new’ knowledge. This ability of forecasting has often been understood as the business of ‘logic’, providing an ‘inference concept’ given by ‘rules of deduction’ and a ‘practical pattern (on the meta level)’ which defines how these rules have to be applied to satisfy the inference concept. But looking at real life, at everyday life or at modern engineering and economy, one can learn that ‘forecasting’ is a complex process including much more than only cognitive structures nicely fitting into some formulas. For this more realistic forecasting concept we will use here the wording ‘common logic’, and for the cognitive adventure where common logic is applied, the wording ‘common science’. ‘Common science’ is structurally not different from ‘usual science’, but it has a substantially wider scope and uses the whole of mankind as ‘experts’.
The following chapters/sections try to illustrate this common-science view by visiting different special views, which all are only ‘parts of a whole’: a whole which we can ‘feel’ in every moment, but which we cannot yet completely grasp with our theoretical concepts.
CONTENT
Language (Main message: “The ordinary language is the ‘meta language’ to every special language. This can be used as a ‘hint’ to something really great: the mystery of the ‘self-creating’ power of the ordinary language which for most people is unknown although it happens every moment.”)
Concrete Abstract Statements (Main message: “… you will probably detect that nearly all words of a language are ‘abstract words’ activating ‘abstract meanings’. … If you cannot provide … ‘concrete situations’, the intended meaning of your abstract words will stay ‘unclear’: they can mean ‘nothing or all’, depending on the decoding of the hearer.”)
True False Undefined (Main message: “… it reveals that ‘empirical (observational) evidence’ is not necessarily an automatism: it presupposes appropriate meaning spaces embedded in sets of preferences which are ‘observation friendly’.”)
Beyond Now (Main message: “With the aid of … sequences revealing possible changes the NOW is turned into a ‘moment’ embedded in a ‘process’, which is becoming the more important reality. The NOW is something, but the PROCESS is more.“)
Playing with the Future (Main message: “In this sense ‘language’ seems to be the master tool for every brain to mediate its dynamic meaning structures with symbolic fix points (= words, expressions) which as such do not change, while the meaning is ‘free to change’ in any direction. And this built-in ‘dynamics’ represents an ‘internal potential’ for uncountably many possible states which could perhaps become ‘true’ in some ‘future state’. Thus ‘future’ can begin in these potentials, and thinking is the ‘playground’ for possible futures. (But see [18].)”)
Forecasting – Prediction: What? (This chapter explains the cognitive machinery behind forecasting/predictions, how groups of human actors can elaborate shared descriptions, and how it is possible to start with sequences of singularities to build up a growing picture of the empirical world, which appears as a radically infinite and indeterministic space.)
!!! From here all the following chapters have to be re-written !!!
Boolean Logic (Explains what Boolean logic is and how it enables the working of programmable machines, but also that it is of nearly no help for the ‘heart’ of forecasting.)
/* Often people argue against the usage of the Wikipedia encyclopedia as not ‘scientific’, because the ‘content’ of an entry in this encyclopedia can ‘change’. This presupposes the ‘classical view’ of scientific texts as being ‘stable’, which presupposes further that such a ‘stable text’ describes some ‘stable subject matter’. But this view of ‘steadiness’ as the major property of ‘true descriptions’ is in no correspondence with real scientific texts! The reality of empirical science — even in special disciplines like ‘physics’ — is ‘change’. Looking at Aristotle’s view of nature, at Galileo Galilei, Newton, Einstein and many others, you will not find a ‘single steady picture’ of nature and science, and physics is only a very simple strand of science compared to the life sciences and many others. Thus Wikipedia is a real scientific encyclopedia, giving you the breadth of world knowledge with all its strengths and limits at once. For another, more general argument, see In Favour of Wikipedia */
[*1] Meaning operator ‘…’: In this text (and in nearly all other texts of this author) the ‘inverted comma’ is used quite heavily. In everyday language this is not common. In some special languages (the theory of formal languages, programming languages, meta-logic) the inverted comma is used in some special way. In this text, which is primarily a philosophical text, the inverted comma is used as a ‘meta-language operator’ to make the reader aware that the ‘meaning’ of the word enclosed in inverted commas is ‘text specific’: in everyday language usage a speaker uses a word and assumes tacitly that his ‘intended meaning’ will be understood by the hearer of his utterance ‘as it is’. And the speaker will adhere to this assumption until some hearer signals that her understanding is different. That such a difference is signaled is quite normal, because the ‘meaning’ associated with a language expression can be diverse, and the decision which one of these multiple possible meanings is the ‘intended one’ in a certain context is often a bit ‘arbitrary’. Thus it can be — but need not be — a meta-language strategy to signal to the hearer (or here: the reader) that a certain expression in a communication is ‘intended’ with a special meaning which perhaps is not the commonly assumed one. Nevertheless, because the ‘common meaning’ is no ‘clear and sharp subject’, a ‘meaning operator’ with inverted commas also has no very sharp meaning. But in the ‘game of language’ it is more than nothing 🙂
[*1b] That the mainstream ‘is biased’ is not an accident, not a ‘strange state’, not a ‘failure’; it is the ‘normal state’, based on the deeper structure of how human actors are ‘built’ and ‘genetically’ and ‘culturally’ ‘programmed’. Thus the challenge to ‘survive’ as part of the ‘whole biosphere’ is not a ‘partial task’ of solving a single problem, but in some sense the problem of how to ‘shape the whole biosphere’ in a way which enables life in the universe beyond that point where the sun turns into a ‘red giant’, whereby life will become impossible on the planet earth (some billion years ahead) [22]. A remarkable text supporting this ‘complex view of sustainability’ can be found in Clark and Harley, summarized at the end of the text. [23]
[*2] The meaning of the expression ‘normal’ is comparable to a wicked problem. In a certain sense we act in our everyday world ‘as if’ there exists some standard for what is assumed to be ‘normal’. Look for instance at houses and buildings: to a certain degree the parts of a house have a ‘standard format’ assuming ‘normal people’. The whole traffic system and most parts of our ‘daily life’ follow certain ‘standards’ making ‘planning’ possible. But there exists a certain percentage of human persons who are ‘different’ compared to these introduced standards. We say that they have a ‘handicap’ compared to this assumed ‘standard’, but this so-called ‘standard’ is neither 100% true, nor is the ‘given real world’ in its properties a ‘100% subject’. We have learned that ‘properties of the real world’ are distributed in a rather ‘statistical’ manner, with different probabilities of occurrence. To ‘find our way’ through these varying occurrences we try to ‘mark’ the main occurrences as ‘normal’, to enable a basic structure for expectations and planning. Thus, if in this text the expression ‘normal’ is used, it refers to the ‘most common occurrences’.
[*3] Thus we have here a ‘threefold structure’ embracing ‘perception events, memory events, and expression events’. Perception events represent ‘concrete events’; memory events represent all kinds of abstract events but they all have a ‘handle’ which maps to subsets of concrete events; expression events are parts of an abstract language system, which as such is dynamically mapped onto the abstract events. The main source for our knowledge about perceptions, memory and expressions is experimental psychology enhanced by many other disciplines.
[*4] Characterizing language expressions by meaning – the fate of any grammar: the sentence “… ‘words’ (= expressions) of a language which can activate such abstract meanings are understood as ‘abstract words’, ‘general words’, ‘category words’ or the like.” points to a deep property of every ordinary language, which represents the real power of language and at the same time its great weakness: expressions as such have no meaning. Hundreds, thousands, millions of words arranged in ‘texts’ and ‘documents’ can show some ‘statistical patterns’, and as such these patterns can give some hint which expressions occur ‘how often’ and in ‘which combinations’, but they can never give a clue to the associated meaning(s). For more than three thousand years humans have tried to describe ordinary language in a more systematic way called ‘grammar’. Due to this radical gap between ‘expressions’ as ‘observable empirical facts’ and ‘meaning constructs’ hidden inside the brain, it has always been a difficult job to ‘classify’ expressions as representing a certain ‘type’ of expression like ‘nouns’, ‘predicates’, ‘adjectives’, ‘definite articles’ and the like. Without recourse to the assumed associated meaning such a classification is not possible. On account of the fuzziness of every meaning, ‘sharp definitions’ of such ‘word classes’ never were and still are not possible. One of the last big — perhaps the biggest ever — projects of a complete systematic grammar of a language was the grammar project of the ‘Akademie der Wissenschaften der DDR’ (‘Academy of Sciences of the GDR’) from 1981 with the title “Grundzüge einer deutschen Grammatik” (“Basic features of a German grammar”). A huge team of scientists worked together using many modern methods. But in the preface you can read that many important properties of the language are still not sufficiently well describable and explainable.
See: Karl Erich Heidolph, Walter Flämig, Wolfgang Motsch et al.: Grundzüge einer deutschen Grammatik. Akademie, Berlin 1981, 1028 pages.
[*5] Differing opinions about a given situation, manifested in uttered expressions, are a very common phenomenon in everyday communication. In some sense this is ‘natural’; it can happen, and it should be no substantial problem to ‘solve the riddle of being different’. But, as you can experience, the ability of people to resolve differing opinions is often quite weak. Culture as a whole suffers from this.
[1] Gerd Doeben-Henisch, 2022, From SYSTEMS Engineering to THEORY Engineering, see: https://www.uffmm.org/2022/05/26/from-systems-engineering-to-theory-engineering/ (Remark: At the time of citation this post was not yet finished, because there are other posts ‘corresponding’ with it which are also not finished. Knowledge is a dynamic network of interwoven views …).
[1d] ‘Usual science’ is the game of science without a sustainable format like citizen science 2.0.
[2] Science, see e.g. wkp-en: https://en.wikipedia.org/wiki/Science
[2b] History of science in wkp-en: https://en.wikipedia.org/wiki/History_of_science#Scientific_Revolution_and_birth_of_New_Science
[3] Theory, see wkp-en: https://en.wikipedia.org/wiki/Theory#:~:text=A%20theory%20is%20a%20rational,or%20no%20discipline%20at%20all.
Citation = “A theory is a rational type of abstract thinking about a phenomenon, or the results of such thinking. The process of contemplative and rational thinking is often associated with such processes as observational study or research. Theories may be scientific, belong to a non-scientific discipline, or no discipline at all. Depending on the context, a theory’s assertions might, for example, include generalized explanations of how nature works. The word has its roots in ancient Greek, but in modern use it has taken on several related meanings.”
Citation = “In modern science, the term “theory” refers to scientific theories, a well-confirmed type of explanation of nature, made in a way consistent with the scientific method, and fulfilling the criteria required by modern science. Such theories are described in such a way that scientific tests should be able to provide empirical support for it, or empirical contradiction (“falsify”) of it. Scientific theories are the most reliable, rigorous, and comprehensive form of scientific knowledge,[1] in contrast to more common uses of the word “theory” that imply that something is unproven or speculative (which in formal terms is better characterized by the word hypothesis).[2] Scientific theories are distinguished from hypotheses, which are individual empirically testable conjectures, and from scientific laws, which are descriptive accounts of the way nature behaves under certain conditions.”
[4b] Empiricism in wkp-en: https://en.wikipedia.org/wiki/Empiricism
[4c] Scientific method in wkp-en: https://en.wikipedia.org/wiki/Scientific_method
Citation = “The scientific method is an empirical method of acquiring knowledge that has characterized the development of science since at least the 17th century (with notable practitioners in previous centuries). It involves careful observation, applying rigorous skepticism about what is observed, given that cognitive assumptions can distort how one interprets the observation. It involves formulating hypotheses, via induction, based on such observations; experimental and measurement-based statistical testing of deductions drawn from the hypotheses; and refinement (or elimination) of the hypotheses based on the experimental findings. These are principles of the scientific method, as distinguished from a definitive series of steps applicable to all scientific enterprises.”[1][2][3] [4c]
and
Citation = “The purpose of an experiment is to determine whether observations[A][a][b] agree with or conflict with the expectations deduced from a hypothesis.[6]: Book I, [6.54] pp.372, 408 [b] Experiments can take place anywhere from a garage to a remote mountaintop to CERN’s Large Hadron Collider. There are difficulties in a formulaic statement of method, however. Though the scientific method is often presented as a fixed sequence of steps, it represents rather a set of general principles.[7] Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always in the same order.[8][9]”
[5] Gerd Doeben-Henisch, “Is Mathematics a Fake? No! Discussing N.Bourbaki, Theory of Sets (1968) – Introduction”, 2022, https://www.uffmm.org/2022/06/06/n-bourbaki-theory-of-sets-1968-introduction/
[6] Logic, see wkp-en: https://en.wikipedia.org/wiki/Logic
[7] W. C. Kneale, The Development of Logic, Oxford University Press (1962)
[8] Set theory, in wkp-en: https://en.wikipedia.org/wiki/Set_theory
[9] N.Bourbaki, Theory of Sets , 1968, with a chapter about structures, see: https://en.wikipedia.org/wiki/%C3%89l%C3%A9ments_de_math%C3%A9matique
[10] = [5]
[11] Ludwig Josef Johann Wittgenstein ( 1889 – 1951): https://en.wikipedia.org/wiki/Ludwig_Wittgenstein
[12] Ludwig Wittgenstein, 1953: Philosophische Untersuchungen [PU], 1953: Philosophical Investigations [PI], translated by G. E. M. Anscombe /* For more details see: https://en.wikipedia.org/wiki/Philosophical_Investigations */
[13] Wikipedia EN, Speech acts: https://en.wikipedia.org/wiki/Speech_act
[14] While the world view constructed in a brain is ‘virtual’ compared to the ‘real world’ outside the brain (where the body outside the brain also functions as ‘real world’ in relation to the brain), the ‘virtual world’ in the brain functions for the brain mostly ‘as if it were the real world’. Only under certain conditions can the brain realize a ‘difference’ between the triggering outside real world and the ‘virtual substitute for the real world’: you want to use your bicycle ‘as usual’, and then suddenly you have to notice that it is not at the place where it ‘should be’. …
[15] Propositional Calculus, see wkp-en: https://en.wikipedia.org/wiki/Propositional_calculus#:~:text=Propositional%20calculus%20is%20a%20branch,of%20arguments%20based%20on%20them.
[16] Boolean algebra, see wkp-en: https://en.wikipedia.org/wiki/Boolean_algebra
[17] Boolean (or propositional) logic: As one can see in the mentioned articles of the English Wikipedia, the term ‘Boolean logic’ is not common. The more logic-oriented authors prefer the term ‘propositional calculus’ [15] and the more math-oriented authors prefer the term ‘Boolean algebra’ [16]. In the view of this author the general perspective is that of ‘language use’ with ‘logical inference’ as the leading idea. Therefore the main topic is ‘logic’, in the case of propositional logic reduced to a simple calculus whose similarity with ‘normal language’ is widely ‘reduced’ to a play with abstract names and operators. Recommended: the historical comments in [15].
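This ‘play with abstract names and operators’ can be shown in a few lines. The sketch below is the editor's illustration, not taken from [15] or [16]; the variable names p, q and the helper `truth_table` are arbitrary choices for the example.

```python
from itertools import product

# Propositional ('Boolean') logic as a play with abstract names and operators:
# a formula is just a function over truth values, and a truth table enumerates
# every possible assignment to the abstract names.

def truth_table(variables, formula):
    """Return {assignment-tuple: truth value} for all assignments."""
    table = {}
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))  # bind abstract names to truth values
        table[values] = formula(env)
    return table

# 'p implies q', encoded with the operators 'not' and 'or': (not p) or q
implication = lambda env: (not env["p"]) or env["q"]

for assignment, value in truth_table(["p", "q"], implication).items():
    print(assignment, value)
```

Note how nothing in the calculus refers to any ‘meaning’ of p and q: only the pattern of truth values matters, which is exactly the ‘reduction’ described above.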
[18] Clearly, thinking alone cannot necessarily induce a possible state which, along the time line, will become a ‘real state’. There are numerous factors ‘outside’ the individual thinking which are ‘driving forces’ pushing real states to change. But thinking can in principle synchronize with other individual thinking and — in some cases — can get a ‘grip’ on real factors causing real changes.
[19] This kind of knowledge is not delivered by brain science alone but primarily by experimental (cognitive) psychology, which examines observable behavior and ‘interprets’ this behavior with functional models within an empirical theory.
[20] Predicate Logic or First-Order Logic or … see: wkp-en: https://en.wikipedia.org/wiki/First-order_logic#:~:text=First%2Dorder%20logic%E2%80%94also%20known,%2C%20linguistics%2C%20and%20computer%20science.
[21] Gerd Doeben-Henisch, In Favour of Wikipedia, https://www.uffmm.org/2022/07/31/in-favour-of-wikipedia/, 31 July 2022
[22] The sun, see wkp-en: https://en.wikipedia.org/wiki/Sun (accessed 8 Aug 2022)
[23] Clark, William C., and Alicia G. Harley. 2020. “Sustainability Science: Toward a Synthesis.” Annual Review of Environment and Resources 45 (1): 331–86, https://doi.org/10.1146/annurev-environ-012420-043621 (figure CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=109026069)
[24] Sustainability in wkp-en: https://en.wikipedia.org/wiki/Sustainability#Dimensions_of_sustainability
[27] SDG 4 in wkp-en: https://en.wikipedia.org/wiki/Sustainable_Development_Goal_4
[28] Thomas Rid, Rise of the Machines. A Cybernetic History, W.W.Norton & Company, 2016, New York – London
[29] Doeben-Henisch, G., 2006, Reducing Negative Complexity by a Semiotic System In: Gudwin, R., & Queiroz, J., (Eds). Semiotics and Intelligent Systems Development. Hershey et al: Idea Group Publishing, 2006, pp.330-342
[30] Döben-Henisch, G., Reinforcing the global heartbeat: Introducing the planet earth simulator project, In M. Faßler & C. Terkowsky (Eds.), URBAN FICTIONS. Die Zukunft des Städtischen. München, Germany: Wilhelm Fink Verlag, 2006, pp.251-263
[29] The idea that individual disciplines are not good enough for the ‘whole of knowledge’ is expressed in a clear way in a video of the theoretical physicist and philosopher Carlo Rovelli: Carlo Rovelli on physics and philosophy, June 1, 2022, video from the Perimeter Institute for Theoretical Physics; a conversation about the quest for quantum gravity and the importance of unlearning outdated ideas.
[] By Azote for Stockholm Resilience Centre, Stockholm University – https://www.stockholmresilience.org/research/research-news/2016-06-14-how-food-connects-all-the-sdgs.html, CC BY 4.0, https://commons.wikimedia.org/w/index.php?curid=112497386
[] Sierra Club in wkp-en: https://en.wikipedia.org/wiki/Sierra_Club
[] Herbert Bruderer, Where is the Cradle of the Computer?, June 20, 2022, URL: https://cacm.acm.org/blogs/blog-cacm/262034-where-is-the-cradle-of-the-computer/fulltext (accessed: July 20, 2022)
[] UN. Secretary-General; World Commission on Environment and Development, 1987, Report of the World Commission on Environment and Development : note / by the Secretary-General., https://digitallibrary.un.org/record/139811 (accessed: July 20, 2022) (A more readable format: https://sustainabledevelopment.un.org/content/documents/5987our-common-future.pdf )
/* Comment: Gro Harlem Brundtland (Norway) has been the main coordinator of this document */
[] Chaudhuri, S., et al., Neurosymbolic programming. Foundations and Trends in Programming Languages 7, 158–243 (2021).
[] Nello Cristianini, Teresa Scantamburlo, James Ladyman, The social turn of artificial intelligence, in: AI & SOCIETY, https://doi.org/10.1007/s00146-021-01289-8
[] Carl DiSalvo, Phoebe Sengers, and Hrönn Brynjarsdóttir, Mapping the landscape of sustainable hci, In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’10, page 1975–1984, New York, NY, USA, 2010. Association for Computing Machinery.
[] Claude Draude, Christian Gruhl, Gerrit Hornung, Jonathan Kropf, Jörn Lamla, Jan Marco Leimeister, Bernhard Sick, Gerd Stumme, Social Machines, in: Informatik Spektrum, https://doi.org/10.1007/s00287-021-01421-4
[] EU: High-Level Expert Group on AI (AI HLEG), A definition of AI: Main capabilities and scientific disciplines, European Commission communications published on 25 April 2018 (COM(2018) 237 final), 7 December 2018 (COM(2018) 795 final) and 8 April 2019 (COM(2019) 168 final). For our definition of Artificial Intelligence (AI), please refer to our document published on 8 April 2019: https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=56341
[] EU: High-Level Expert Group on AI (AI HLEG), Policy and investment recommendations for trustworthy Artificial Intelligence, 2019, https://digital-strategy.ec.europa.eu/en/library/policy-and-investment-recommendations-trustworthy-artificial-intelligence
[] European Union. Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation); http://eur-lex.europa.eu/eli/reg/2016/679/oj (effective from 25 May 2018) [accessed 26.2.2022]
[] C.S. Holling. Resilience and stability of ecological systems. Annual Review of Ecology and Systematics, 4(1):1–23, 1973
[] John P. van Gigch. 1991. System Design Modeling and Metamodeling. Springer US. DOI:https://doi.org/10.1007/978-1-4899-0676-2
[] Gudwin, R.R. (2003), On a Computational Model of the Peircean Semiosis, IEEE KIMAS 2003 Proceedings
[] J.A. Jacko and A. Sears, Eds., The Human-Computer Interaction Handbook. Fundamentals, Evolving Technologies, and emerging Applications. 1st edition, 2003.
[] LeCun, Y., Bengio, Y., & Hinton, G. Deep learning. Nature 521, 436-444 (2015).
[] Lenat, D., What AI can learn from Romeo & Juliet. Forbes (2019)
[] Pierre Lévy, Collective Intelligence. mankind’s emerging world in cyberspace, Perseus books, Cambridge (M A), 1997 (translated from the French Edition 1994 by Robert Bonnono)
[] Lexikon der Nachhaltigkeit, ‘Starke Nachhaltigkeit’, https://www.nachhaltigkeit.info/artikel/schwache_vs_starke_nachhaltigkeit_1687.htm (accessed: July 21, 2022)
[] Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, and Toby Walsh. “Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report.” Stanford University, Stanford, CA, September 2021. Doc: http://ai100.stanford.edu/2021-report.
[] Kathryn Merrick. Value systems for developmental cognitive robotics: A survey. Cognitive Systems Research, 41:38 – 55, 2017
[] Illah Reza Nourbakhsh and Jennifer Keating, AI and Humanity, MIT Press, 2020 /* An examination of the implications for society of rapidly advancing artificial intelligence systems, combining a humanities perspective with technical analysis; includes exercises and discussion questions. */
[] Olazaran, M., A sociological history of the neural network controversy. Advances in Computers 37, 335–425 (1993).
[] Friedrich August Hayek (1945), The use of knowledge in society. The American Economic Review 35, 4 (1945), 519–530
[] Karl Popper, “A World of Propensities”, in: Karl Popper, “A World of Propensities”, Thoemmes Press, Bristol (lecture 1988, slightly expanded reprint 1990, repr. 1995)
[] Karl Popper, “Towards an Evolutionary Theory of Knowledge”, in: Karl Popper, “A World of Propensities”, Thoemmes Press, Bristol (lecture 1989, printed 1990, repr. 1995)
[] Karl Popper, “All Life is Problem Solving”, article, originally a 1991 lecture in German, first published in the German book “Alles Leben ist Problemlösen” (1994), then in the English book “All Life is Problem Solving”, 1999, Routledge, Taylor & Francis Group, London – New York
[] A. Sears and J.A. Jacko, Eds., The Human-Computer Interaction Handbook. Fundamentals, Evolving Technologies, and emerging Applications. 2nd edition, 2008.
[] Skaburskis, Andrejs (19 December 2008). “The origin of “wicked problems””. Planning Theory & Practice. 9 (2): 277-280. doi:10.1080/14649350802041654. At the end of Rittel’s presentation, West Churchman responded with that pensive but expressive movement of voice that some may well remember, ‘Hmm, those sound like “wicked problems.”‘
[] Thoppilan, R., et al. LaMDA: Language models for dialog applications. arXiv 2201.08239 (2022).
[] Wurm, Daniel; Zielinski, Oliver; Lübben, Neeske; Jansen, Maike; Ramesohl, Stephan (2021) : Wege in eine ökologische Machine Economy: Wir brauchen eine ‘Grüne Governance der Machine Economy’, um das Zusammenspiel von Internet of Things, Künstlicher Intelligenz und Distributed Ledger Technology ökologisch zu gestalten, Wuppertal Report, No. 22, Wuppertal Institut für Klima, Umwelt, Energie, Wuppertal, https://doi.org/10.48506/opus-7828
[] Aimee van Wynsberghe, Sustainable AI: AI for sustainability and the sustainability of AI, in: AI and Ethics (2021) 1:213–218, see: https://doi.org/10.1007/s43681
[] R. I. Damper (2000), Editorial for the special issue on ‘Emergent Properties of Complex Systems’: Emergence and levels of abstraction. International Journal of Systems Science 31, 7 (2000), 811–818. DOI:https://doi.org/10.1080/002077200406543
[] Gerd Doeben-Henisch (2004), The Planet Earth Simulator Project – A Case Study in Computational Semiotics, IEEE AFRICON 2004, pp.417 – 422
[] Eric Bonabeau (2009), Decisions 2.0: The power of collective intelligence. MIT Sloan Management Review 50, 2 (Winter 2009), 45-52.
[] Jim Giles (2005), Internet encyclopaedias go head to head. Nature 438, 7070 (Dec. 2005), 900–901. DOI:https://doi.org/10.1038/438900a
[] T. Bosse, C. M. Jonker, M. C. Schut, and J. Treur (2006), Collective representational content for shared extended mind. Cognitive Systems Research 7, 2-3 (2006), pp.151-174, DOI:https://doi.org/10.1016/j.cogsys.2005.11.007
[] Romina Cachia, Ramón Compañó, and Olivier Da Costa (2007), Grasping the potential of online social networks for foresight. Technological Forecasting and Social Change 74, 8 (2007), pp. 1179–1203. DOI:https://doi.org/10.1016/j.techfore.2007.05.006
[] Tom Gruber (2008), Collective knowledge systems: Where the social web meets the semantic web. Web Semantics: Science, Services and Agents on the World Wide Web 6, 1 (2008), 4–13. DOI:https://doi.org/10.1016/j.websem.2007.11.011
[] Luca Iandoli, Mark Klein, and Giuseppe Zollo (2009), Enabling on-line deliberation and collective decision-making through large-scale argumentation. International Journal of Decision Support System Technology 1, 1 (Jan. 2009), 69–92. DOI:https://doi.org/10.4018/jdsst.2009010105
[] Shuangling Luo, Haoxiang Xia, Taketoshi Yoshida, and Zhongtuo Wang (2009), Toward collective intelligence of online communities: A primitive conceptual model. Journal of Systems Science and Systems Engineering 18, 2 (01 June 2009), 203–221. DOI:https://doi.org/10.1007/s11518-009-5095-0
[] Dawn G. Gregg (2010), Designing for collective intelligence. Communications of the ACM 53, 4 (April 2010), 134–138. DOI:https://doi.org/10.1145/1721654.1721691
[] Rolf Pfeifer, Jan Henrik Sieg, Thierry Bücheler, and Rudolf Marcel Füchslin. 2010. Crowdsourcing, open innovation and collective intelligence in the scientific method: A research agenda and operational framework. (2010). DOI:https://doi.org/10.21256/zhaw-4094
[] Martijn C. Schut. 2010. On model design for simulation of collective intelligence. Information Sciences 180, 1 (2010), 132–155. DOI:https://doi.org/10.1016/j.ins.2009.08.006 Special Issue on Collective Intelligence
[] Dimitrios J. Vergados, Ioanna Lykourentzou, and Epaminondas Kapetanios (2010), A resource allocation framework for collective intelligence system engineering. In Proceedings of the International Conference on Management of Emergent Digital EcoSystems (MEDES’10). ACM, New York, NY, 182–188. DOI:https://doi.org/10.1145/1936254.1936285
[] Anita Williams Woolley, Christopher F. Chabris, Alex Pentland, Nada Hashmi, and Thomas W. Malone (2010), Evidence for a collective intelligence factor in the performance of human groups. Science 330, 6004 (2010), 686–688. DOI:https://doi.org/10.1126/science.1193147
[] Michael A. Woodley and Edward Bell (2011), Is collective intelligence (mostly) the General Factor of Personality? A comment on Woolley, Chabris, Pentland, Hashmi and Malone (2010). Intelligence 39, 2 (2011), 79–81. DOI:https://doi.org/10.1016/j.intell.2011.01.004
[] Joshua Introne, Robert Laubacher, Gary Olson, and Thomas Malone (2011), The climate CoLab: Large scale model-based collaborative planning. In Proceedings of the 2011 International Conference on Collaboration Technologies and Systems (CTS’11). 40–47. DOI:https://doi.org/10.1109/CTS.2011.5928663
[] Miguel de Castro Neto and Ana Espírtio Santo (2012), Emerging collective intelligence business models. In MCIS 2012 Proceedings. Mediterranean Conference on Information Systems. https://aisel.aisnet.org/mcis2012/14
[] Peng Liu, Zhizhong Li (2012), Task complexity: A review and conceptualization framework, International Journal of Industrial Ergonomics 42 (2012), pp. 553 – 568
[] Sean Wise, Robert A. Paton, and Thomas Gegenhuber. (2012), Value co-creation through collective intelligence in the public sector: A review of US and European initiatives. VINE 42, 2 (2012), 251–276. DOI:https://doi.org/10.1108/03055721211227273
[] Antonietta Grasso and Gregorio Convertino (2012), Collective intelligence in organizations: Tools and studies. Computer Supported Cooperative Work (CSCW) 21, 4 (01 Oct 2012), 357–369. DOI:https://doi.org/10.1007/s10606-012-9165-3
[] Sandro Georgi and Reinhard Jung (2012), Collective intelligence model: How to describe collective intelligence. In Advances in Intelligent and Soft Computing. Vol. 113. Springer, 53–64. DOI:https://doi.org/10.1007/978-3-642-25321-8_5
[] H. Santos, L. Ayres, C. Caminha, and V. Furtado (2012), Open government and citizen participation in law enforcement via crowd mapping. IEEE Intelligent Systems 27 (2012), 63–69. DOI:https://doi.org/10.1109/MIS.2012.80
[] Jörg Schatzmann & René Schäfer & Frederik Eichelbaum (2013), Foresight 2.0 – Definition, overview & evaluation, Eur J Futures Res (2013) 1:15 DOI 10.1007/s40309-013-0015-4
[] Sylvia Ann Hewlett, Melinda Marshall, and Laura Sherbin (2013), How diversity can drive innovation. Harvard Business Review 91, 12 (2013), 30–30
[] Tony Diggle (2013), Water: How collective intelligence initiatives can address this challenge. Foresight 15, 5 (2013), 342–353. DOI:https://doi.org/10.1108/FS-05-2012-0032
[] Hélène Landemore and Jon Elster (2012), Collective Wisdom: Principles and Mechanisms. Cambridge University Press. DOI:https://doi.org/10.1017/CBO9780511846427
[] Jerome C. Glenn (2013), Collective intelligence and an application by the millennium project. World Futures Review 5, 3 (2013), 235–243. DOI:https://doi.org/10.1177/1946756713497331
[] Detlef Schoder, Peter A. Gloor, and Panagiotis Takis Metaxas (2013), Social media and collective intelligence—Ongoing and future research streams. KI – Künstliche Intelligenz 27, 1 (1 Feb. 2013), 9–15. DOI:https://doi.org/10.1007/s13218-012-0228-x
[] V. Singh, G. Singh, and S. Pande (2013), Emergence, self-organization and collective intelligence—Modeling the dynamics of complex collectives in social and organizational settings. In 2013 UKSim 15th International Conference on Computer Modelling and Simulation. 182–189. DOI:https://doi.org/10.1109/UKSim.2013.77
[] A. Kornrumpf and U. Baumöl (2014), A design science approach to collective intelligence systems. In 2014 47th Hawaii International Conference on System Sciences. 361–370. DOI:https://doi.org/10.1109/HICSS.2014.53
[] Michael A. Peters and Richard Heraud. 2015. Toward a political theory of social innovation: Collective intelligence and the co-creation of social goods. 3, 3 (2015), 7–23. https://researchcommons.waikato.ac.nz/handle/10289/9569
[] Juho Salminen. 2015. The Role of Collective Intelligence in Crowdsourcing Innovation. PhD dissertation. Lappeenranta University of Technology
[] Aelita Skarzauskiene and Monika Maciuliene (2015), Modelling the index of collective intelligence in online community projects. In International Conference on Cyber Warfare and Security. Academic Conferences International Limited, 313
[] Aya H. Kimura and Abby Kinchy (2016), Citizen Science: Probing the Virtues and Contexts of Participatory Research, Engaging Science, Technology, and Society 2 (2016), 331–361, DOI:10.17351/ests2016.099
[] Philip Tetlow, Dinesh Garg, Leigh Chase, Mark Mattingley-Scott, Nicholas Bronn, Kugendran Naidoo†, Emil Reinert (2022), Towards a Semantic Information Theory (Introducing Quantum Corollas), arXiv:2201.05478v1 [cs.IT] 14 Jan 2022, 28 pages
[] Melanie Mitchell (2022), What Does It Mean to Align AI With Human Values?, Quanta Magazine, Quantized Columns, 19 December 2022, https://www.quantamagazine.org/what-does-it-mean-to-align-ai-with-human-values-20221213#
Comment by Gerd Doeben-Henisch:
[] Nick Bostrom. Superintelligence. Paths, Dangers, Strategies. Oxford University Press, Oxford (UK), 1 edition, 2014.
[] Scott Aaronson, Reform AI Alignment, Update: 22.November 2022, https://scottaaronson.blog/?p=6821
[] Andrew Y. Ng, Stuart J. Russell (2000), Algorithms for Inverse Reinforcement Learning, ICML 2000: Proceedings of the Seventeenth International Conference on Machine Learning, June 2000, pp. 663–670
[] Pat Langley (ed.) (2000), ICML ’00: Proceedings of the Seventeenth International Conference on Machine Learning, Morgan Kaufmann Publishers Inc., 340 Pine Street, Sixth Floor, San Francisco, CA, United States, conference 29 June – 2 July 2000
[] Daniel S. Brown, Wonjoon Goo, Prabhat Nagarajan, and Scott Niekum, Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations. You can read in the abstract: “A critical flaw of existing inverse reinforcement learning (IRL) methods is their inability to significantly outperform the demonstrator. This is because IRL typically seeks a reward function that makes the demonstrator appear near-optimal, rather than inferring the underlying intentions of the demonstrator that may have been poorly executed in practice. In this paper, we introduce a novel reward-learning-from-observation algorithm, Trajectory-ranked Reward EXtrapolation (T-REX), that extrapolates beyond a set of (approximately) ranked demonstrations in order to infer high-quality reward functions from a set of potentially poor demonstrations. When combined with deep reinforcement learning, T-REX outperforms state-of-the-art imitation learning and IRL methods on multiple Atari and MuJoCo benchmark tasks and achieves performance that is often more than twice the performance of the best demonstration. We also demonstrate that T-REX is robust to ranking noise and can accurately extrapolate intention by simply watching a learner noisily improve at a task over time.”
In the abstract you can read: “For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent’s interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any that have been previously learned from human feedback.”
In the abstract you can read: “Conceptual abstraction and analogy-making are key abilities underlying humans’ abilities to learn, reason, and robustly adapt their knowledge to new domains. Despite of a long history of research on constructing AI systems with these abilities, no current AI system is anywhere close to a capability of forming humanlike abstractions or analogies. This paper reviews the advantages and limitations of several approaches toward this goal, including symbolic methods, deep learning, and probabilistic program induction. The paper concludes with several proposals for designing challenge tasks and evaluation measures in order to make quantifiable and generalizable progress …”
In the abstract you can read: “Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment (“AI spring”) and periods of disappointment, loss of confidence, and reduced funding (“AI winter”). Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this paper I describe four fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I conclude by discussing the open questions spurred by these fallacies, including the age-old challenge of imbuing machines with humanlike common sense.”
[] Stuart Russell (2019), Human Compatible: AI and the Problem of Control, Penguin Books, Allen Lane; 1st edition (8 October 2019)
In the preface you can read: “This book is about the past, present, and future of our attempt to understand and create intelligence. This matters, not because AI is rapidly becoming a pervasive aspect of the present but because it is the dominant technology of the future. The world’s great powers are waking up to this fact, and the world’s largest corporations have known it for some time. We cannot predict exactly how the technology will develop or on what timeline. Nevertheless, we must plan for the possibility that machines will far exceed the human capacity for decision making in the real world. What then? Everything civilization has to offer is the product of our intelligence; gaining access to considerably greater intelligence would be the biggest event in human history. The purpose of the book is to explain why it might be the last event in human history and how to make sure that it is not.”
[] David Adkins, Bilal Alsallakh, Adeel Cheema, Narine Kokhlikyan, Emily McReynolds, Pushkar Mishra, Chavez Procope, Jeremy Sawruk, Erin Wang, Polina Zvyagina, (2022), Method Cards for Prescriptive Machine-Learning Transparency, 2022 IEEE/ACM 1st International Conference on AI Engineering – Software Engineering for AI (CAIN), CAIN’22, May 16–24, 2022, Pittsburgh, PA, USA, pp. 90 – 100, Association for Computing Machinery, ACM ISBN 978-1-4503-9275-4/22/05, New York, NY, USA, https://doi.org/10.1145/3522664.3528600
In the abstract you can read: “Specialized documentation techniques have been developed to communicate key facts about machine-learning (ML) systems and the datasets and models they rely on. Techniques such as Datasheets, AI FactSheets, and Model Cards have taken a mainly descriptive approach, providing various details about the system components. While the above information is essential for product developers and external experts to assess whether the ML system meets their requirements, other stakeholders might find it less actionable. In particular, ML engineers need guidance on how to mitigate potential shortcomings in order to fix bugs or improve the system’s performance. We propose a documentation artifact that aims to provide such guidance in a prescriptive way. Our proposal, called Method Cards, aims to increase the transparency and reproducibility of ML systems by allowing stakeholders to reproduce the models, understand the rationale behind their designs, and introduce adaptations in an informed way. We showcase our proposal with an example in small object detection, and demonstrate how Method Cards can communicate key considerations that help increase the transparency and reproducibility of the detection model. We further highlight avenues for improving the user experience of ML engineers based on Method Cards.”
[] John H. Miller, (2022), Ex Machina: Coevolving Machines and the Origins of the Social Universe, The SFI Press Scholars Series, 410 pages Paperback ISBN: 978-1947864429 , DOI: 10.37911/9781947864429
In the announcement of the book you can read: “If we could rewind the tape of the Earth’s deep history back to the beginning and start the world anew—would social behavior arise yet again? While the study of origins is foundational to many scientific fields, such as physics and biology, it has rarely been pursued in the social sciences. Yet knowledge of something’s origins often gives us new insights into the present. In Ex Machina, John H. Miller introduces a methodology for exploring systems of adaptive, interacting, choice-making agents, and uses this approach to identify conditions sufficient for the emergence of social behavior. Miller combines ideas from biology, computation, game theory, and the social sciences to evolve a set of interacting automata from asocial to social behavior. Readers will learn how systems of simple adaptive agents—seemingly locked into an asocial morass—can be rapidly transformed into a bountiful social world driven only by a series of small evolutionary changes. Such unexpected revolutions by evolution may provide an important clue to the emergence of social life.”
In the abstract you can read: “Analyzing the spatial and temporal properties of information flow with a multi-century perspective could illuminate the sustainability of human resource-use strategies. This paper uses historical and archaeological datasets to assess how spatial, temporal, cognitive, and cultural limitations impact the generation and flow of information about ecosystems within past societies, and thus lead to tradeoffs in sustainable practices. While it is well understood that conflicting priorities can inhibit successful outcomes, case studies from Eastern Polynesia, the North Atlantic, and the American Southwest suggest that imperfect information can also be a major impediment to sustainability. We formally develop a conceptual model of Environmental Information Flow and Perception (EnIFPe) to examine the scale of information flow to a society and the quality of the information needed to promote sustainable coupled natural-human systems. In our case studies, we assess key aspects of information flow by focusing on food web relationships and nutrient flows in socio-ecological systems, as well as the life cycles, population dynamics, and seasonal rhythms of organisms, the patterns and timing of species’ migration, and the trajectories of human-induced environmental change. We argue that the spatial and temporal dimensions of human environments shape society’s ability to wield information, while acknowledging that varied cultural factors also focus a society’s ability to act on such information. Our analyses demonstrate the analytical importance of completed experiments from the past, and their utility for contemporary debates concerning managing imperfect information and addressing conflicting priorities in modern environmental management and resource use.”
In a preceding post I illustrated how one can apply the concept of an empirical theory — strongly inspired by Karl Popper — to an everyday problem: a county and its demographic problem(s). In this post I would like to develop this idea a little further.
AN EMPIRICAL THEORY AS A DEVELOPMENT PROCESS
CITIZENs – natural experts
As a starting point we assume citizens, understood as our ‘natural experts’, who are members of a democratic society with political parties, a freely elected parliament that can create helpful laws for societal life, and authorities serving the needs of the citizens.
SYMBOLIC DESCRIPTIONS
To coordinate their actions through sufficient communication, the citizens produce symbolic descriptions making public how they see the ‘given situation’, which kinds of ‘future states’ (‘goals’) they want to achieve, and a list of ‘actions’ which can ‘change/transform’ the given situation stepwise into the envisioned future state.
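Such a symbolic description can be sketched as data. The structure below is purely illustrative — the property names and the lambda-based actions are my own assumptions, not the actual oksimo format:

```python
# A symbolic description: a 'given situation', a 'goal' (future state),
# and 'actions' which can change/transform the situation stepwise.
description = {
    "situation": {"playgrounds": 2, "children": 120},
    "goal": {"playgrounds": 5},
    "actions": [
        {"name": "build playground",
         "condition": lambda s: s["playgrounds"] < 5,
         "effect": lambda s: {**s, "playgrounds": s["playgrounds"] + 1}},
    ],
}

def goal_reached(situation, goal):
    # The envisioned future state is reached when every goal property holds.
    return all(situation.get(k) == v for k, v in goal.items())

print(goal_reached(description["situation"], description["goal"]))  # False
```

At the start the goal is not yet reached; applying the action three times would transform the given situation into the envisioned one.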
LEVELS OF ABSTRACTIONS
Using an everyday language — possibly enriched with some mathematical expressions — one can talk about our world of experience on different levels of abstraction. To get a rather wide scope, one starts with the most abstract concepts and then breaks them down more and more with concrete properties/features until these concrete expressions ‘touch the real experience’. In most cases it is helpful not to describe everything in one description but to partition ‘the whole’ into several more concrete descriptions that capture the main points. Afterwards it should be possible to ‘unify’ these more concrete descriptions into one large picture showing how they all ‘work together’.
LOGICAL INFERENCE BY SIMULATION
A very useful property of empirical theories is the possibility of deriving, from given assumptions and assumed rules of inference, possible consequences which are ‘true’ if the assumptions and the rules of inference are ‘true’.
The above outlined descriptions are seen in this post as texts which satisfy the requirements of an empirical theory, such that the ‘simulator’ is able to derive from these assumptions all possible ‘true’ consequences if these assumptions are assumed to be ‘true’. In particular, the simulator delivers not just one single consequence but a whole ‘sequence of consequences’ following each other in time.
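A minimal sketch of such a simulator, assuming states are simple property maps and rules are (condition, effect) pairs — both assumptions of mine for illustration, not the actual oksimo implementation:

```python
def simulate(situation, rules, steps):
    """Apply every rule whose condition holds in the current state and
    collect the resulting 'sequence of consequences' following in time."""
    history = [situation]
    for _ in range(steps):
        state = history[-1]
        for cond, effect in rules:
            if cond(state):
                state = effect(state)
        history.append(state)
    return history

# One rule: while the count is below 3, increase it by one per step.
rules = [(lambda s: s["count"] < 3,
          lambda s: {**s, "count": s["count"] + 1})]
trace = simulate({"count": 0}, rules, 5)
print([s["count"] for s in trace])  # [0, 1, 2, 3, 3, 3]
```

Note that the output is a whole trace of states, not a single consequence; once no rule fires, the state simply persists.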
PURE WWW KNOWLEDGE SPACE
This simple outline describes the application format of the oksimo software, which is understood here as a kind of ‘theory machine’ for everybody.
It is assumed that a symbolic description is given as a pure text file or as a given HTML page somewhere in the world wide web [WWW].
The simulator, realized as an oksimo program, can load such a file and run a simulation. The output will be sent back as an HTML page.
No special database is needed inside the oksimo application. All oksimo-related HTML pages placed by citizens somewhere in the WWW constitute a ‘global public knowledge space’ accessible to everybody.
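The workflow just described — load a pure text description, simulate it, send the result back as HTML — can be sketched as follows. The line-based ‘property: value’ text format is a hypothetical stand-in, not the real oksimo file format:

```python
import html

def parse_description(text):
    """Parse a toy plain-text description: one 'property: value' line
    per property (an invented format, for illustration only)."""
    state = {}
    for line in text.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            state[key.strip()] = int(value.strip())
    return state

def to_html(states):
    """Render a simulation trace as a minimal HTML page, mirroring the
    idea that the simulator sends its output back as HTML."""
    items = "".join(f"<li>{html.escape(str(s))}</li>" for s in states)
    return f"<html><body><ol>{items}</ol></body></html>"

start = parse_description("playgrounds: 2\nchildren: 120")
print(start)  # {'playgrounds': 2, 'children': 120}
```

Everything stays plain text and HTML, which is what makes the resulting pages shareable in a ‘global public knowledge space’ without a dedicated database.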
DISTRIBUTED OKSIMO INSTANCES
An oksimo server behind the address ‘oksimo.com’ can, for a simulation demand, spawn a ‘simulator instance’ running one simulation; many simulations can run in parallel. A simulation can also be connected in real time to Internet-of-Things [IoT] instances to receive empirical data used in the simulation. In ‘interactive mode’ an oksimo simulation furthermore allows the participation of ‘actors’ which function as ‘dynamic rule instances’: they receive input from the simulated given situation and can respond ‘on their own’. This turns a simulation into an ‘open process’, like the ones we encounter in everyday real processes. An ‘actor’ need not be a ‘human’ actor; it can also be a ‘non-human’ actor. Furthermore it is possible to establish a ‘simulation meta-level’: because a simulation as a whole represents a ‘full theory’, one can feed this whole theory to an ‘artificial intelligence algorithm’ which does not run only one simulation but searches the space of ‘all possible simulations’ and thereby identifies those sub-spaces which are — according to the defined goals — ‘zones of special interest’.
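The ‘interactive mode’ with actors as dynamic rule instances can be sketched like this. Both the step function and the thermostat actor are my own illustrations of the idea, not oksimo code:

```python
def interactive_step(state, change_rules, actors):
    """One simulation step in 'interactive mode': first the fixed change
    rules fire, then each actor -- a 'dynamic rule instance' -- receives
    the current state and responds on its own (human or non-human)."""
    for cond, effect in change_rules:
        if cond(state):
            state = effect(state)
    for actor in actors:
        state = actor(state)
    return state

# A non-human actor: a thermostat-like agent reacting to the state.
def thermostat(state):
    if state["temp"] > 22:
        return {**state, "heating": False}
    return {**state, "heating": True}

state = {"temp": 25, "heating": True}
state = interactive_step(state, [], [thermostat])
print(state["heating"])  # False
```

Because each actor computes its response only when the step runs, the simulation becomes an ‘open process’: its course is not fully determined by the rules written down in advance.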
In the uffmm review section the different papers and books are discussed from the point of view of the oksimo paradigm. [2] Here the author reads the book “Logic. The Theory Of Inquiry” by John Dewey, 1938. [1]
Part I – Chapter I
THE PROBLEM OF LOGICAL SUBJECT-MATTER
In this chapter Dewey tries to characterize the subject-matter of logic. Looking backwards from the year 1938, one surveys a history of thought of at least 2,500 years dealing in one sense or another with what has been called ‘logic’. His rough judgment is that the participants of the “proximate subject-matter of logic” language game seem to be widely in agreement about what it is, but in the case of the “ultimate subject-matter of logic” language game there seem to exist different or even conflicting opinions.(cf. p.8)
Logic as a philosophic theory
Dewey illustrates the variety of views about the ultimate subject-matter of logic by citing several different positions.(cf. p.10) Having done this, Dewey puts all these views together into a kind of ‘meta-view’, stating that logic “is a branch of philosophic theory and therefore can express different philosophies.”(p.10) But exercising philosophy “itself must satisfy logical requirements.”(p.10)
And in general he thinks that “any statement that logic is so-and-so, can … be offered only as a hypothesis and an indication of a position to be developed.”(p.11)
Thus we see here that Dewey declares the ultimate logical subject-matter to be grounded in some philosophical perspective which should be able “to order and account for what has been called the proximate subject-matter.”(p.11) But the philosophical theory “must possess the property of verifiable existence in some domain, no matter how hypothetical it is in reference to the field in which it is proposed to apply it.”(p.11) This is an interesting point, because it raises the question in which sense a philosophical foundation of logic can offer verifiable existence.
Inquiry
Dewey gives a hint of a possible answer by stating “that all logical forms … arise within the operation of inquiry and are concerned with control of inquiry so that it may yield warranted assertions.”(p.11) While the inquiry as a process is real, the emergence of logical forms has to be located in the different kinds of interactions between the researchers and some additional environment in the process. Here some verifiable reality should be involved, reflected in the accompanying language expressions used by the researchers for communication. This implies further that the language expressions used — which can even talk about other language expressions — are associated with propositions which can be shown to be valid.[4]
And — with some interesting similarity with the modern concept of ‘diversity’ — he claims that in avoidance of any kind of dogmatism “any hypothesis, no matter how unfamiliar, should have a fair chance and be judged by its results.”(p.12)
While Dewey is quite clear in using the concept of inquiry as a process leading to results which depend on the starting point and the realized processes, he additionally mentions concepts like ‘methods’, ‘norms’, ‘instrumentalities’, and ‘procedures’, but these concepts remain rather fuzzy.(cf. p.14f)
Warranted assertibility
Part of an inquiry are the individual actors, who have psychological states like ‘doubt’, ‘belief’ or ‘understanding’ (knowledge).(p.15) But from these concepts nothing follows about the needed logical forms or rules.(cf. p.16f) Instead Dewey repeats his requirement with the words: “In scientific inquiry, the criterion of what is taken to be settled, or to be knowledge, is being so settled that it is available as a resource in further inquiry; not being settled in such a way as not to be subject to revision in further inquiry.”(p.17) Therefore, instead of using fuzzy concepts like (subjective) ‘doubt’, ‘belief’ or ‘knowledge’, he prefers the concept of “warranted assertibility”. This says not only that you can assert something, but that you can assert it with ‘warranty’, based on the known process which has led to this result.(cf. p.10)
Introducing rationality
At this point the story takes a first ‘new turn’, because Dewey now introduces a first characterization of the concept ‘rationality’ (which for him is synonymous with ‘reasonableness’). While the basic terms of the descriptions in an inquiry process are at least partially descriptive (empirical) expressions, they are not completely “devoid of rational standing”.(cf. p.17) Furthermore, the classification of final situations in an inquiry as ‘results’ — understood as ‘confirmations’ of initial assumptions, questions or problems — is only given in relations that talk about the whole process, and thereby about matters which are not rooted in limited descriptive facts alone. Or, as Dewey states it, “relations which exist between means (methods) employed and conclusions attained as their consequence.”(p.17) Therefore the following practical principle is valid: “It is reasonable to search for and select the means that will, with the maximum probability, yield the consequences which are intended.”(p.18) And: “Hence,… the descriptive statement of methods that achieve progressively stable beliefs, or warranted assertibility, is also a rational statement in case the relation between them as means and assertibility as consequence is ascertained.”(p.18)
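Dewey’s practical principle can be read computationally: select the means that will, with maximum probability, yield the intended consequences. The candidate means and their success probabilities below are invented for the example:

```python
def select_means(candidates):
    """Given a map from a candidate means to its observed success
    probability, select the means with the maximum probability."""
    return max(candidates, key=candidates.get)

# Hypothetical success rates observed in earlier inquiries:
observed = {"survey": 0.4, "pilot study": 0.7, "full experiment": 0.9}
print(select_means(observed))  # full experiment
```

The rationality here lies not in the individual descriptive facts but in the ascertained relation between means and consequences, which is exactly what the probability map encodes.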
Suggested framework for ‘rationality’
Although Dewey does not exactly define the format of the relations between selected means and successful consequences, it seems ‘intuitively’ clear that the researchers have to have some ‘idea’ of such a relation, which then serves as a new ‘ground for abstract meaning’ in their ‘thinking’. Within the oksimo paradigm [2] one could describe the problem at hand as follows:
The researchers participating in an inquiry process have perceptions of the process.
They have associated cognitive processing as well as language processing, where both are bi-directionally mapped onto each other, but not 1-to-1.
They can describe, in temporal order, the individual properties, objects, actors, actions etc. which are part of the process.
With their cognitive processing they can build more abstract concepts based on these primary concepts.
They can encode these more abstract cognitive structures and processes in propositions (and expressions) which correspond to these more abstract cognitive entities.
They can construct rule-like cognitive structures (within the oksimo paradigm called ‘change rules‘) with corresponding propositions (and expressions).
They can evaluate whether those change rules describe ‘successful‘ consequences.
Change rules with successful consequences can become building blocks for those rules which can be used for inferences/deductions.
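The change rules and their evaluation listed above can be sketched as follows — a toy reading of the idea, not the oksimo implementation; all names are illustrative:

```python
def make_change_rule(condition, effect):
    """A 'change rule' in the sense above: a clearly separated
    'condition' part and an 'effect' part."""
    return {"condition": condition, "effect": effect}

def apply_rule(rule, state):
    # The effect fires only if the condition holds in the state.
    return rule["effect"](state) if rule["condition"](state) else state

def successful(rule, cases, goal):
    """Evaluate a change rule: does applying it yield 'successful'
    consequences, i.e. does every test case end up satisfying the goal?"""
    return all(goal(apply_rule(rule, s)) for s in cases)

# A rule that resolves 'doubt' whenever it is present:
rule = make_change_rule(lambda s: s["doubt"],
                        lambda s: {**s, "doubt": False})
print(successful(rule, [{"doubt": True}, {"doubt": False}],
                 goal=lambda s: not s["doubt"]))  # True
```

Rules that pass such an evaluation are exactly the candidates for becoming building blocks of further inferences/deductions.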
Thus one can look at the formal aspect of the relations which can be generated by an inference mechanism, but such a formal inference need not yield results which are empirically sound. Whether this is the case is a task of its own, dealing with the encoded meaning of the inferred expressions and the outcome of the inquiry.(cf. p.19,21)
Limitations of formal logic
From this it follows that the concrete logical operators, as part of the inference machinery, have to be qualified by their role within the more general relation between goals, means and success. The standard operators of modern formal logic are few, and they are designed for a domain with a meaning space of only two objects: ‘being true’ and ‘being false’. In the real world of everyday experience we have a nearly infinite space of meanings; for describing this large everyday meaning space the standard logic of today is too limited. Normal language teaches us how to generate as many operators as we need simply by using normal language. Deriving operators directly from normal language is not only more powerful but at the same time much easier to apply.[2]
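How ‘operators generated from normal language’ might look can be hinted at with a sketch; the phrase-to-function mapping below is entirely my own illustration, not the oksimo mechanism:

```python
# Instead of only the two truth values of standard logic, operators are
# keyed by normal-language phrases (the thresholds are arbitrary choices).
operators = {
    "is much larger than": lambda a, b: a > 2 * b,
    "is roughly equal to": lambda a, b: abs(a - b) <= 0.1 * max(abs(a), abs(b), 1),
    "increases to":        lambda before, after: after > before,
}

def holds(a, phrase, b):
    """Evaluate a normal-language operator phrase between two values."""
    return operators[phrase](a, b)

print(holds(10, "is much larger than", 3))    # True
print(holds(100, "is roughly equal to", 95))  # True
```

The point of the sketch: the operator inventory is open-ended, because any new phrase can be paired with an evaluation, whereas standard two-valued logic fixes the operator set in advance.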
Inquiry process – re-formulated
Let us fix a first hypothesis here. The ideas of Dewey can be re-framed with the following assumptions:
By doing an inquiry process with some problem (question, …) at the start and proceeding with clearly defined actions, we can reach final states which are either classified as a positive answer (success) to the initial problem or not.
If there exists a repeatable inquiry process with positive answers, the whole process can be understood as a new ‘recipe’ (= complex operation, procedure, complex method, complex rule, law, …) for how to get positive answers to certain kinds of questions.
If a recipe is available from preceding experiments one can use this recipe to ‘plan’ a new process to reach a certain ‘result’ (‘outcome’, ‘answer’, …).
The share of failures among the total number of trials in applying a recipe can be used as a measure of the probability and quality of the recipe.
The description of a recipe needs a meta-level of ‘looking at’ the process. This meta-level description is made sound (‘valid’) by the interaction with reality, but as such it includes some abstraction which enables a minimal rationality.
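The proposed measure for the probability and quality of a recipe can be written down directly as a success rate over trials (the trial data below is invented):

```python
def recipe_quality(trials):
    """A simple measure for the probability/quality of a 'recipe':
    the share of successful applications among all trials."""
    if not trials:
        return None  # no experience with this recipe yet
    return sum(1 for outcome in trials if outcome) / len(trials)

# 8 successes out of 10 applications of a hypothetical recipe:
print(recipe_quality([True] * 8 + [False] * 2))  # 0.8
```

Each further application of the recipe updates this estimate, which fits Dewey’s requirement that what counts as settled remains subject to revision in further inquiry.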
Habit
At this point Dewey introduces another term, ‘habit’, which is not really clear and does not really explain more; but — for whatever reason — he introduces it.(cf. p.21f)
The intuition behind the term ‘habit’ is that, independent of the language dimension, there exists the real process driven by real actors doing real actions. It is further — tacitly — assumed that these real actors have some ‘internal processing’ which is ‘causing’ the observable actions. If these observable actions can be understood/interpreted as an ‘inquiry process’ leading to some ‘positive answers’, then Dewey calls the underlying processes taken together a ‘habit’: “Any habit is a way or manner of action, not a particular act or deed.”(p.20) If one observes such a real process one can describe it with language expressions; then it gets the format of a ‘rule’, a ‘principle’ or a ‘law’.(cf. p.20)
If one were to throw away the concept ‘habit’, nothing would be missing. Whatever internal processes are assumed, a description of them is bound to their observability and depends on some minimal language mechanisms. These must be explained; everything beyond them is not necessary to explain rational behavior.[5]
At the end of chapter I Dewey points to some additional aspects in the context of logic. One aspect is the progressive character of logic as a discipline in the course of history.(cf. p.22)[6]
Operational
Another aspect is introduced by his statement “The subject-matter of logic is determined operationally.”(p.22) And he characterizes the meaning of the term ‘operational’ as representing the “conditions by which subject-matter is (1) rendered fit to serve as means and (2) actually functions as such means in effecting the objective transformation which is the end of the inquiry.”(p.22) Thus, again, the concept of inquiry is the general framework organizing means to get to a successful end. This inquiry has an empirical material (or ‘existential‘) basis which additionally can be described symbolically. The material basis can be characterized by parts of it called ‘means’ which are necessary to enable objective transformations leading to the end of the inquiry.(cf. p.22f)
One has to consider at this point that the existential (empirical) basis of every inquiry process should not mislead one into the view that it can work without a symbolic dimension! Apart from extremely simple processes, every process needs, for its coordination between different brains, a symbolic communication using certain expressions of a language. Thus the cognitive concepts of the empirical means and the rules followed can only be ‘fixed’ and made ‘clear’ through the use of accompanying symbolic expressions.
Postulational logic
Another aspect mentioned by Dewey is given by the statement: “Logical forms are postulational.“(p.24) Embedded in the framework of an inquiry, Dewey identifies requirements (demands, postulates, …) at the beginning of the inquiry which have to be fulfilled through the inquiry process. And Dewey sees such requirements as part of the inquiry process itself.(cf. p.24f) If during such an inquiry process some kinds of logical postulates are used, they have no right of their own independent of the real process! They can only be used as long as they are in agreement with the real process. With the words of Dewey: “A postulate is thus neither arbitrary nor externally a priori. It is not the former because it issues from the relation of means to the end to be reached. It is not the latter, because it is not imposed upon inquiry from without, but is an acknowledgement of that to which the undertaking of inquiry commits us.”(p.26)
Logic naturalistic
Dewey comments further on the topic that “Logic is a naturalistic theory.“(p.27) In some sense this is trivial, because humans are biological systems and therefore every process is a biological (natural) process, logical thinking included.
Logic is social
Dewey mentions further that “Logic is a social discipline.“(p.27) This follows from the fact that “man is naturally a being that lives in association with others in communities possessing language, and therefore enjoying a transmitted culture. Inquiry is a mode of activity that is socially conditioned and that has cultural consequences.”(p.27) And therefore: “Any theory of logic has to take some stand on the question whether symbols are ready-made clothing for meanings that subsist independently, or whether they are necessary conditions for the existence of meanings — in terms often used, whether language is the dress of ‘thought’ or is something without which ‘thought’ cannot be.”(p.27f) This can also be put in the following general formula by Dewey: “…in every interaction that involves intelligent direction, the physical environment is part of a more inclusive social or cultural environment.”(p.28) The central means of culture is language, which “is the medium in which culture exists and through which it is transmitted. Phenomena that are not recorded cannot be even discussed. Language is the record that perpetuates occurrences and renders them amenable to public consideration. On the other hand, ideas or meanings that exist only in symbols that are not communicable are fantastic beyond imagination”.(p.28)
Autonomous logic
The final aspect of logic mentioned by Dewey concerns the position which states that “Logic is autonomous“.(p.29) Although the position of the autonomy of logic — in various varieties — is very common in history, Dewey argues against it. The main point is — as already discussed before — that the open framework of an inquiry gives the main point of reference, and logic must fit into this framework.[7]
SOME DISCUSSION
For a discussion of these ideas of Dewey see the next upcoming post.
COMMENTS
[1] John Dewey, Logic. The Theory Of Inquiry, New York, Henry Holt and Company, 1938 (see: https://archive.org/details/JohnDeweyLogicTheTheoryOfInquiry with several formats; I am using the kindle (= mobi) format: https://archive.org/download/JohnDeweyLogicTheTheoryOfInquiry/%5BJohn_Dewey%5D_Logic_-_The_Theory_of_Inquiry.mobi , which is very convenient for working directly with the text. Additionally I am using the free reader ‘foliate’ under ubuntu 20.04: https://github.com/johnfactotum/foliate/releases). The page numbers in the text of the review — like (p.13) — are the page numbers of the ebook as indicated in the ebook-reader foliate. (There exists no kindle version for linux, although amazon couldn’t work without linux servers!)
[2] Gerd Doeben-Henisch, 2021, uffmm.org, THE OKSIMO PARADIGM An Introduction (Version 2), https://www.uffmm.org/wp-content/uploads/2021/03/oksimo-v1-part1-v2.pdf
[3] The new oksimo paradigm does exactly this. See oksimo.org
[4] For the conceptual framework for the term ‘proposition’ see the preceding part 2, where the author describes the basic epistemological assumptions of the oksimo paradigm.
[5] Clearly it is possible and desirable to extend our knowledge about the internal processing of human persons. This is mainly the subject-matter of biology, brain research, and physiology. Other disciplines are close by, like psychology, ethology, linguistics, phonetics etc. The main problem with all these disciplines is that they are methodologically disconnected: a really integrated theory is not yet possible and does not exist. Examples of integration like neuropsychology are far from what they should be.
[6] A very good overview about the development of logic can be found in the book The Development of Logic by William and Martha Kneale. First published 1962 with many successive corrected reprints by Clarendon Press, Oxford (and other cities.)
[7] Today we have the general problem that the concept of formal logic has developed the concept of logical inference in so many divergent directions that it is not a simple problem to evaluate all these different ‘kinds of logic’.
MEDIA
This is another unplugged recording dealing with the main idea of Dewey in chapter I: what logic is and how logic relates to a scientific inquiry.
This text is part of a philosophy of science analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive posts dedicated to the HMI-Analysis for this software.
THE OKSIMO THEORY PARADIGM
The following text is a short illustration of how the general theory concept as extracted from the text of Popper can be applied to the oksimo simulation software concept.
The starting point is the meta-theoretical schema as follows:
MT=<S, A[μ], E, L, AX, ⊢, ET, E+, E-, true, false, contradiction, inconsistent>
In the oksimo case we also have a given empirical context S, a non-empty set of human actors A[μ] with a built-in meaning function for the expressions E of some language L, some axioms AX as a subset of the expressions E, an inference concept ⊢, and all the other concepts.
The human actors A[μ] can write documents with the expressions E of language L. In one document S_U they can write down some universal facts which they believe to be true (e.g. ‘Birds can fly’). In another document S_E they can write down some empirical facts from the given situation S, like ‘There is something named James. James is a bird’. And somehow they wish that James should be able to fly, so they write down a vision text S_V with ‘James can fly’.
The interesting question is whether it is possible to generate a situation S_E.i in the future, which includes the fact ‘James can fly’.
With the knowledge already given they can build the change rule: IF it is valid that {Birds can fly. James is a bird} THEN with probability π = 1 add the expression Eplus = {‘James can fly’} to the actual situation S_E.i; Eminus = {}. This rule is then an element of the set of change rules X.
The simulator ⊢X works according to the schema S’ = S – Eminus + Eplus.
Because we have S = S_U + S_E we get
S’ = {Birds can fly. Something is named James. James is a bird.} – Eminus + Eplus
S’ = {Birds can fly. Something is named James. James is a bird.} – {} + {James can fly}
S’ = {Birds can fly. Something is named James. James is a bird. James can fly}
With regard to the vision which is used for evaluation one can state additionally:
|{James can fly} ⊆ {Birds can fly. Something is named James. James is a bird. James can fly}|= 1 ≥ 1
Thus the goal has been reached with a score of 1, meaning 100%.
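The derivation above can be sketched in code. The following Python fragment is a minimal toy version of the simulator schema S’ = S – Eminus + Eplus and of the goal evaluation; the names (`ChangeRule`, `simulate`, `goal_reached`) are illustrative only and not part of the actual oksimo software.

```python
# Toy version of the simulator schema S' = S - Eminus + Eplus.
import random

class ChangeRule:
    def __init__(self, condition, e_plus, e_minus, prob=1.0):
        self.condition = set(condition)  # expressions that must hold in S
        self.e_plus = set(e_plus)        # Eplus: expressions to add
        self.e_minus = set(e_minus)      # Eminus: expressions to remove
        self.prob = prob                 # probability pi of firing

def simulate(state, rules, rng=random.random):
    """One simulation step over all rules whose condition holds in S."""
    new_state = set(state)
    for r in rules:
        if r.condition <= state and rng() <= r.prob:
            new_state = (new_state - r.e_minus) | r.e_plus
    return new_state

def goal_reached(vision, state):
    """Fraction of the vision expressions contained in the state (0..1)."""
    return len(vision & state) / len(vision)

S_U = {"Birds can fly."}
S_E = {"Something is named James.", "James is a bird."}
rule = ChangeRule(condition={"Birds can fly.", "James is a bird."},
                  e_plus={"James can fly."}, e_minus=set())

S1 = simulate(S_U | S_E, [rule])
print(goal_reached({"James can fly."}, S1))  # 1.0, i.e. 100%
```

Representing a situation as a plain set of expression strings is, of course, a drastic simplification of what the text describes, but it makes the add/remove mechanics of the change rules directly visible.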
THE ROLE OF MEANING
What makes a certain difference between classical concepts of an empirical theory and the oksimo paradigm is the role of meaning in the oksimo paradigm. While the classical empirical theory concept uses formal (mathematical) languages for its descriptions, with the associated — nearly unsolvable — problem of how to relate these concepts to the intended empirical world, the oksimo paradigm assumes the opposite: the starting point is always ordinary language as the basic language, which on demand can be extended by special expressions (like e.g. set-theoretical expressions, numbers etc.).
Furthermore, in the oksimo paradigm it is assumed that the human actors with their built-in meaning function are nearly always able to decide whether an expression e of the used expressions E of the ordinary language L matches certain properties of the given situation S. Thus the human actors are those who have the authority to decide, by their meaning, whether some expression is actually true or not.
The same holds for possible goals (visions) and possible inference rules (= change rules). Whether some consequence Y shall happen if some condition X is satisfied by a given actual situation S can only be decided by the human actors. There is no other knowledge available than that which is in the heads of the human actors. [1] This knowledge can be narrow, it can even be wrong, but human actors can only decide with the knowledge that is available to them.
If they use change rules (= inference rules) based on their knowledge and derive some follow-up situation as a theorem, then it can happen that there exists no empirical situation S which matches the theorem. This would be a case of undefined truth. If the theorem t were a contradiction to the given situation S, then it would be clear that the theory is inconsistent and therefore something seems to be wrong. Another case could be that the theorem t matches a situation. This would confirm the belief in the theory.
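The three outcomes just described (confirmation, inconsistency, undefined truth) can be sketched as a toy classifier, under the simplifying assumptions that ‘matching a situation’ is reduced to set membership and that negation is marked by a ‘~’ prefix; all names are illustrative, not oksimo API.

```python
# Toy classifier for the three outcomes of a derived theorem t:
# 'confirmed', 'inconsistent', or 'undefined' truth.
def negate(t):
    return t[1:] if t.startswith("~") else "~" + t

def evaluate(theorem, situation):
    if theorem in situation:
        return "confirmed"     # t matches the empirical situation S
    if negate(theorem) in situation:
        return "inconsistent"  # t contradicts S: the theory is in trouble
    return "undefined"         # no matching empirical situation exists

S = {"James is a bird", "~James can swim"}
print(evaluate("James is a bird", S))   # confirmed
print(evaluate("James can swim", S))    # inconsistent
print(evaluate("James can dance", S))   # undefined
```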
COMMENTS
[1] Well-known knowledge tools are libraries (since long ago) and databases (more recently). The expressions stored there can only be of use (i) if a human actor knows about them and (ii) knows how to use them. As the amount of stored expressions increases, the portion of expressions which can be cognitively processed by human actors decreases. This decrease in the usable portion can serve as a measure of negative complexity, which indicates a growing deterioration of the human knowledge space. The idea that certain kinds of algorithms can analyze these growing amounts of expressions instead of the human actors themselves is only constructive if the human actor can use the results of these computations within his knowledge space. For general reasons this possibility is very small, and with increasing negative complexity it is declining.
POPPER’S POSITION IN THE CHAPTERS 1-17
In my reading of the chapters 1-17 of Popper’s The Logic of Scientific Discovery [1] I see the following three main concepts which are interrelated: (i) the concept of a scientific theory, (ii) the point of view of a meta-theory about scientific theories, and (iii) possible empirical interpretations of scientific theories.
Scientific Theory
A scientific theory is according to Popper a collection of universal statements AX, accompanied by a concept of logical inference ⊢, which allows the deduction of a certain theorem t if one makes some additional concrete assumptions H.
Example: Theory T1 = <AX1,⊢>
AX1= {Birds can fly}
H1= {Peter is a bird}
⊢: Peter can fly
Because there exists a concrete object which is classified as a bird, and this concrete bird with the name ‘Peter’ can fly, one can infer that the universal statement is verified by this concrete bird. But the question remains open whether all observable concrete objects classifiable as birds can fly.
One could continue with observations of several hundreds of concrete birds, but according to Popper this would not prove the theory T1 completely true. Such a procedure can only support a numerical universality, understood as a conjunction of finitely many observations about concrete birds like ‘Peter can fly’ & ‘Mary can fly’ & …. & ‘AH2 can fly’.(cf. p.62)
The only procedure which is applicable to a universal theory according to Popper is to falsify a theory by only one observation like ‘Doxy is a bird’ and ‘Doxy cannot fly’. Then one could construct the following inference:
AX1= {Birds can fly}
H2= {Doxy is a bird, Doxy cannot fly}
⊢: ‘Doxy can fly’ & ~’Doxy can fly’
If a statement A can be inferred and simultaneously the negation ~A then this is called a logical contradiction:
{AX1, H2} ⊢‘Doxy can fly’ & ~’Doxy can fly’
In this case the set {AX1, H2} is called inconsistent.
If a set of statements is classified as inconsistent, then you can derive everything from this set. In this case you can no longer distinguish between true and false statements.
Thus, while an increase in the number of confirmed observations can only increase the trust in the axioms of a scientific theory T without enabling an absolute proof, a falsification of a theory T can destroy the ability of this theory to distinguish between true and false statements.
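The falsification step above can be sketched as a small mechanical check, assuming derived statements are plain strings and negation is marked with a ‘~’ prefix (a convention of this sketch, not Popper’s notation):

```python
# Inconsistency check: a derived set of statements is inconsistent
# if it contains some statement A together with its negation ~A.
def negate(stmt):
    return stmt[1:] if stmt.startswith("~") else "~" + stmt

def is_inconsistent(statements):
    s = set(statements)
    return any(negate(a) in s for a in s)

# AX1 with the confirming observation about Peter: consistent.
derived_ok = {"Peter is a bird", "Peter can fly"}
# AX1 with the falsifying observation about Doxy: 'Doxy can fly' is
# derived from the axiom while '~Doxy can fly' is observed.
derived_bad = {"Doxy is a bird", "Doxy can fly", "~Doxy can fly"}

print(is_inconsistent(derived_ok))   # False
print(is_inconsistent(derived_bad))  # True
```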
Another idea associated with this structure of a scientific theory is that the universal statements using universal concepts are, strictly speaking, speculative ideas which deserve some faith that these concepts will be provable every time one tries.(cf. p.33, 63)
Meta Theory, Logic of Scientific Discovery, Philosophy of Science
Talking about scientific theories has at least two aspects: scientific theories as objects and those who talk about these objects.
Those who talk about them are usually Philosophers of Science, who are only a special kind of Philosophers, e.g. a person like Popper.
Reading the text of Popper one can identify the following elements which seem to be important for describing scientific theories in a broader framework:
A scientific theory from a point of view of Philosophy of Science represents a structure like the following one (minimal version):
MT=<S, A[μ], E, L, AX, ⊢, ET, E+, E-, true, false, contradiction, inconsistent>
In a shared empirical situation S there are some human actors A as experts producing expressions E of some language L. Based on their built-in adaptive meaning function μ the human actors A can relate properties of the situation S with expressions E of L. Those expressions E which are considered to be observable and classified to be true are called true expressions E+, others are called false expressions E-. Both sets of expressions are proper subsets of E: E+ ⊂ E and E- ⊂ E. Additionally, the experts can define some special set of expressions called axioms AX, which are universal statements allowing the logical derivation of expressions called theorems ET of the theory T, which are called logically true. If one combines the set of axioms AX with some set of empirically true expressions E+ as {AX, E+}, then one can logically derive either only expressions which are logically true as well as empirically true, or one can derive logically true expressions which are empirically true and empirically false at the same time, as in the example from the preceding paragraph:
{AX1, H2} ⊢‘Doxy can fly’ & ~’Doxy can fly’
Such a case of a logically derived contradiction A and ~A tells us that the set of axioms AX unified with the empirically true expressions has become inconsistent when confronted with the known true empirical expressions: the axioms AX unified with true empirical expressions cannot distinguish between true and false expressions.
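As a minimal structural sketch, the constraints on the sets E, AX, E+ and E- named in the MT schema can be expressed as subset checks; the class and field names here are illustrative only, an assumption of this sketch rather than a definition from the text.

```python
# Minimal structural sketch of MT: the sets AX, E+ and E- must be
# subsets of E, and no expression may be both true and false.
from dataclasses import dataclass

@dataclass(frozen=True)
class MetaTheory:
    E: frozenset        # all expressions of the language L
    AX: frozenset       # axioms: universal statements
    E_plus: frozenset   # expressions classified empirically true
    E_minus: frozenset  # expressions classified empirically false

    def well_formed(self):
        return (self.AX <= self.E
                and self.E_plus <= self.E
                and self.E_minus <= self.E
                and not (self.E_plus & self.E_minus))

mt = MetaTheory(
    E=frozenset({"Birds can fly", "Doxy is a bird", "Doxy cannot fly"}),
    AX=frozenset({"Birds can fly"}),
    E_plus=frozenset({"Doxy is a bird", "Doxy cannot fly"}),
    E_minus=frozenset(),
)
print(mt.well_formed())  # True
```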
Popper gives some general requirements for the axioms of a theory (cf. p.71):
(1) Axioms must be free from contradiction.
(2) The axioms must be independent, i.e. they must not contain any axiom deducible from the remaining axioms.
(3) The axioms should be sufficient for the deduction of all statements belonging to the theory which is to be axiomatized.
While requirements (1) and (2) are purely logical and can be proved directly, requirement (3) is different: to know whether the theory covers all statements which are intended by the experts as the subject area presupposes that all aspects of an empirical environment are already known. In the case of true empirical theories this seems not to be plausible. Rather we have to assume an open process which generates some hypothetical universal expressions which ideally will not be falsified, but if they are, then the theory has to be adapted to the new insights.
Empirical Interpretation(s)
Popper assumes that the universal statements of scientific theories are linguistic representations, and this means they are systems of signs or symbols.(cf. p.60) Expressions as such have no meaning. Meaning comes into play only if the human actors use their built-in meaning function and set up a coordinated meaning function which allows all participating experts to map properties of the empirical situation S into the used expressions as E+ (expressions classified as being actually true), or E- (expressions classified as being actually false), or AX (expressions having an abstract meaning space which can become true or false depending on the activated meaning function).
Examples:
(1) Two human actors in a situation S agree about the fact that there is ‘something’ which they classify as a ‘bird’. Thus someone could say ‘There is something which is a bird’ or ‘There is some bird’ or ‘There is a bird’. If there are two somethings which are ‘understood’ as being a bird, then they could say ‘There are two birds’, or ‘There is a blue bird’ (if the one has the color ‘blue’) and ‘There is a red bird’, or ‘There are two birds. The one is blue and the other is red’. This shows that human actors can relate their ‘concrete perceptions’ to more abstract concepts and can map these concepts into expressions. According to Popper, in this ‘bottom-up’ way only numerical universal concepts can be constructed. But logically there are only two cases: concrete (one) or abstract (more than one). To say that there is a ‘something’ or to say there is a ‘bird’ establishes a general concept which is independent of the number of its possible instances.
(2) These concrete somethings, each classified as a ‘bird’, can ‘move’ from one position to another by ‘walking’ or by ‘flying’. While ‘walking’ they change their position in contact with the ‘ground’, while during ‘flying’ they ‘go up in the air’. If a human actor throws a stone up in the air, the stone will come back to the ground. A bird which goes up in the air can stay there and move around in the air for a long while. Thus ‘flying’ is different from ‘throwing something’ up in the air.
(3) The expression ‘A bird can fly’, understood as an expression which can be connected to the daily experience of bird-objects moving around in the air, can be empirically interpreted, but only if there exists such a mapping, called a meaning function. Without a meaning function the expression ‘A bird can fly’ has no meaning as such.
(4) Other expressions like ‘X can fly’ or ‘A bird can Y’ or ‘Y(X)’ share the same fate: without a meaning function they have no meaning, but associated with a meaning function they can be interpreted. For instance, saying that the form of the expression ‘Y(X)’ shall be interpreted as ‘Predicate(Object)’, and that a possible ‘instance’ for the predicate could be ‘Can Fly’ and for the object ‘a Bird’, we would get ‘Can Fly(a Bird)’, translated as ‘The object ‘a Bird’ has the property ‘can fly”, or shortly ‘A Bird can fly’. This would usually be used as a possible candidate for the daily meaning function which relates this expression to those somethings which can move up in the air.
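A toy version of such a meaning function can make the point concrete, assuming — purely for illustration — that the meaning function is simplified to a dictionary mapping the schematic terms to concrete meanings:

```python
# Toy meaning function mu: the schema 'Y(X)' is meaningless on its own
# and becomes interpretable only via a mapping to concrete meanings.
mu = {"Y": "can fly", "X": "a Bird"}  # predicate and object instances

def interpret(mapping):
    """Instantiate the schematic form 'Y(X)'; None = no meaning."""
    pred, obj = mapping.get("Y"), mapping.get("X")
    if pred is None or obj is None:
        return None
    return f"The object '{obj}' has the property '{pred}'"

print(interpret(mu))  # The object 'a Bird' has the property 'can fly'
print(interpret({}))  # None: without a meaning function, no meaning
```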
Axioms and Empirical Interpretations
The basic idea of a system of axioms AX is — according to Popper — that the axioms as universal expressions represent a system of equations where the general terms should be able to be substituted by certain values. The set of admissible values is different from the set of inadmissible values. The relation between those values which can be substituted for the terms is called satisfaction: the values satisfy the terms with regard to the relations! And Popper introduces the term ‘model‘ for a set of admissible values which can satisfy the equations.(cf. p.72f)
But Popper has difficulties with an axiomatic system interpreted as a system of equations, since it cannot be refuted by the falsification of its consequences; for these too must be analytic.(cf. p.73) His main problem with axioms is that “the concepts which are to be used in the axiomatic system should be universal names, which cannot be defined by empirical indications, pointing, etc. They can be defined if at all only explicitly, with the help of other universal names; otherwise they can only be left undefined. That some universal names should remain undefined is therefore quite unavoidable; and herein lies the difficulty…” (p.74)
On the other hand Popper knows that “…it is usually possible for the primitive concepts of an axiomatic system such as geometry to be correlated with, or interpreted by, the concepts of another system, e.g. physics…. In such cases it may be possible to define the fundamental concepts of the new system with the help of concepts which were originally used in some of the old systems.”(p.75)
But the translation of the expressions of one system (geometry) into the expressions of another system (physics) does not necessarily solve his problem of the non-empirical character of universal terms. Physics especially also uses universal or abstract terms which as such have no meaning. To verify or falsify physical theories one has to show how the abstract terms of physics can be related to observable matters which can be decided to be true or not.
Thus the argument goes back to the primary problem of Popper: that universal names cannot be directly interpreted in an empirically decidable way.
As the preceding examples (1) – (4) show, for human actors it is no principal problem to relate any kind of abstract expression to some concrete real matters. The solution to the problem is given by the fact that expressions E of some language L are never used in isolation! The usage of expressions is always bound to human actors using them as part of a language L, which consists of the set of possible expressions E together with the built-in meaning function μ, which can map expressions into internal structures IS related to perceptions of the surrounding empirical situation S. Although these internal structures are processed internally in highly complex manners and are — as we know today — no 1-to-1 mappings of the surrounding empirical situation S, they are related to S, and therefore every kind of expression — even those with so-called abstract or universal concepts — can be mapped into something real if the human actors agree about such mappings!
Example:
Let us have a look at another example.
Take the system of axioms AX as the following schema: AX = {a+b=c}. This schema as such has no clear meaning. But if the experts interpret it as an operation ‘+’ with some arguments as part of a math theory, then one can construct a simple (partial) model m as follows: m = {<1,2,3>, <2,3,5>}. The values are again given as a set of symbols which as such need not have a meaning, but in common usage they will be interpreted as sets of numbers which can satisfy the general concept of the equation. In this secondary interpretation m becomes a logically true (partial) model for the axiom AX, whose empirical meaning is still unclear.
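Under the secondary interpretation of the values as numbers, the partial model m can be checked mechanically; this small sketch only illustrates the satisfaction relation described above:

```python
# The (partial) model m = {<1,2,3>, <2,3,5>}: tuples of values that
# satisfy the schematic equation a + b = c under the numerical reading.
def satisfies(triple):
    a, b, c = triple
    return a + b == c

m = {(1, 2, 3), (2, 3, 5)}
print(all(satisfies(t) for t in m))  # True: m is a (partial) model of AX
print(satisfies((1, 2, 4)))          # False: inadmissible values
```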
It is conceivable to use this formalism to describe empirical facts, like the description of a group of humans collecting some objects. Different people bring objects; the individual contributions are reported on a sheet of paper, and at the same time they put their objects in some box. Sometimes someone looks into the box and counts the objects in it. If it has been noted that A brought 1 egg and B brought 2 eggs, then according to the theory there should be 3 eggs in the box. But perhaps only 2 can be found. Then there would be a difference between the logically derived forecast of the theory, 1+2 = 3, and the empirically measured value, 1+2 = 2. If one defined every measurement a+b=c’ as a contradiction in the case where a+b=c is theoretically given and c’ ≠ c, then we would have with ‘1+2 = 3′ & ~’1+2 = 3’ a logically derived contradiction, which leads to the inconsistency of the assumed system. But in reality the usual reaction of the counting person would not be to declare the system inconsistent, but rather to suggest that some unknown actor has, against the agreed rules, taken one egg from the box. To prove this suggestion he would have to find this unknown actor and show that he has taken the egg … perhaps not a simple task … But what will the next authority do: will the authority believe the suggestion of the counting person, or will the authority blame the counter that he himself might have taken the missing egg? But would this make sense? Why should the counter write the notes about how many eggs have been delivered, thereby making a difference visible? …
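The egg-counting discrepancy can be sketched as a comparison between the theoretical forecast and the measured value; the function names below are illustrative only:

```python
# Egg-counting scenario: the theory forecasts c = a + b; a deviating
# measurement c' yields the formal contradiction described above.
def forecast(a, b):
    return a + b  # theoretical value of c

def contradicts(a, b, measured):
    return forecast(a, b) != measured

print(forecast(1, 2))          # 3 eggs expected in the box
print(contradicts(1, 2, 2))    # True: forecast 3 vs measured 2
print(contradicts(1, 2, 3))    # False: theory confirmed
```

Whether such a mismatch is treated as a formal inconsistency or as a hint that an egg was taken is, as the text argues, a decision of the human actors, not of the formalism.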
Thus interpreting some abstract expression with regard to some observable reality is not a principal problem, but it can eventually be unsolvable for purely practical reasons, leaving questions of empirical soundness open.
SOURCES
[1] Karl Popper, The Logic of Scientific Discovery, First published 1935 in German as Logik der Forschung, then 1959 in English by Basic Books, New York (more editions have been published later; I am using the eBook version of Routledge (2002))
Integrating Engineering and the Human Factor (info@uffmm.org) eJournal uffmm.org ISSN 2567-6458