In a preceding post I illustrated how the concept of an empirical theory, strongly inspired by Karl Popper, can be applied to an everyday problem: a county and its demographic problem(s). In this post I would like to develop this idea a little further.
AN EMPIRICAL THEORY AS A DEVELOPMENT PROCESS
CITIZENS – natural experts
As a starting point we assume citizens, understood as our ‘natural experts’: members of a democratic society with political parties, a freely elected parliament which can create helpful laws for societal life, and authorities serving the needs of the citizens.
SYMBOLIC DESCRIPTIONS
To coordinate their actions through sufficient communication, the citizens produce symbolic descriptions to make public how they see the ‘given situation’, which kinds of ‘future states’ (‘goals’) they want to achieve, and a list of ‘actions’ which can ‘change/transform’ the given situation stepwise into the envisioned future state.
LEVELS OF ABSTRACTIONS
Using an everyday language (possibly enriched with some mathematical expressions) one can talk about our world of experience on different levels of abstraction. To get a rather wide scope, one starts with the most abstract concepts and then breaks these abstract concepts down, step by step, with more concrete properties/features until the concrete expressions ‘touch real experience’. In most cases it is helpful not to describe everything in one description, but to partition ‘the whole’ into several more concrete descriptions that capture the main points. Afterwards it should be possible to ‘unify’ these more concrete descriptions into one large picture showing how they all ‘work together’.
LOGICAL INFERENCE BY SIMULATION
A very useful property of empirical theories is the possibility to derive, from given assumptions and assumed rules of inference, possible consequences which are ‘true’ if the assumptions and the rules of inference are ‘true’.
The descriptions outlined above are seen in this post as texts which satisfy the requirements of an empirical theory, such that the ‘simulator’ is able to derive from these assumptions all possible ‘true’ consequences, provided the assumptions are assumed to be ‘true’. In particular, the simulator delivers not just one single consequence but a whole ‘sequence of consequences’ following each other in time.
PURE WWW KNOWLEDGE SPACE
This simple outline describes the application format of the oksimo software which is understood here as a kind of a ‘theory machine’ for everybody.
It is assumed that a symbolic description is given as a pure text file or as a given HTML page somewhere in the world wide web [WWW].
The simulator, realized as an oksimo program, can load such a file and run a simulation. The output will be sent back as an HTML page.
No special database is needed inside the oksimo application. All oksimo-related HTML pages placed by citizens somewhere in the WWW constitute a ‘global public knowledge space’ accessible by everybody.
DISTRIBUTED OKSIMO INSTANCES
An oksimo server behind the address ‘oksimo.com’ can produce, for a simulation demand, a ‘simulator instance’ running one simulation. There can be many simulations running in parallel. A simulation can also be connected in real time to Internet-of-Things [IoT] instances to receive empirical data used in the simulation. In ‘interactive mode’ an oksimo simulation furthermore allows the participation of ‘actors’ which function as ‘dynamic rule instances’: they receive input from the simulated situation and can respond ‘on their own’. This turns a simulation into an ‘open process’, as we encounter in ‘everyday real processes’. An ‘actor’ need not necessarily be a ‘human’ actor; it can also be a ‘non-human’ actor. Furthermore it is possible to establish a ‘simulation meta-level’: because a simulation as a whole represents a ‘full theory’, one can feed this whole theory to an ‘artificial intelligence algorithm’ which does not run only one simulation but checks the space of ‘all possible simulations’ and thereby identifies those sub-spaces which are, according to the defined goals, ‘zones of special interest’.
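The idea of checking the space of ‘all possible simulations’ can be illustrated with a small sketch. The following Python fragment is only an illustration of the idea under simplifying assumptions (situations as sets of atomic expressions, change rules as (condition, add, remove) triples); it is not the oksimo implementation, and all names are hypothetical:

```python
from collections import deque

def explore(initial: frozenset, rules, goal: frozenset, max_depth: int = 5):
    """Breadth-first sketch of the 'simulation meta-level': instead of
    running one simulation, enumerate all situations reachable via the
    change rules up to max_depth and collect those satisfying the goal
    ('zones of special interest')."""
    seen = {initial}
    queue = deque([(initial, 0)])
    interesting = []
    while queue:
        state, depth = queue.popleft()
        if goal <= state:                      # goal expressions all present
            interesting.append(state)
        if depth == max_depth:
            continue
        for cond, add, rem in rules:
            if cond <= state:                  # rule condition satisfied
                nxt = (state - rem) | add      # S' = S - Eminus + Eplus
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
    return interesting
```

With two toy rules a→b and b→c and the goal {c}, the search finds exactly the reachable situation containing c.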
In the uffmm review section the different papers and books are discussed from the point of view of the oksimo paradigm. [2] Here the author reads the book “Logic. The Theory Of Inquiry” by John Dewey, 1938. [1]
Part I – Chapter I
THE PROBLEM OF LOGICAL SUBJECT-MATTER
In this chapter Dewey tries to characterize the subject-matter of logic. From the year 1938 one can look back on a long history of thought, of at least 2500 years, dealing in one sense or another with what has been called ‘logic’. His rough judgment is that the participants of the “proximate subject-matter of logic” language game seem to be widely in agreement about what it is, but in the case of the “ultimate subject-matter of logic” language game there seem to exist different or even conflicting opinions.(cf. p.8)
Logic as a philosophic theory
Dewey illustrates the variety of views about the ultimate subject-matter of logic by citing several different positions.(cf. p.10) Having done this, Dewey puts all these views together into a kind of ‘meta-view’, stating that logic “is a branch of philosophic theory and therefore can express different philosophies.”(p.10) But exercising philosophy “itself must satisfy logical requirements.”(p.10)
And in general he thinks that “any statement that logic is so-and-so, can … be offered only as a hypothesis and an indication of a position to be developed.”(p.11)
Thus we see here that Dewey grounds the ultimate logical subject-matter in some philosophical perspective which should be able “to order and account for what has been called the proximate subject-matter.”(p.11) But the philosophical theory “must possess the property of verifiable existence in some domain, no matter how hypothetical it is in reference to the field in which it is proposed to apply it.”(p.11) This is an interesting point because it implies the question in which sense a philosophical foundation of logic can offer verifiable existence.
Inquiry
Dewey gives a hint for a possible answer by stating “that all logical forms … arise within the operation of inquiry and are concerned with control of inquiry so that it may yield warranted assertions.”(p.11) While the inquiry as a process is real, the emergence of logical forms has to be located in the different kinds of interactions between the researchers and the environment during the process. Some verifiable reality should be involved here, reflected in the accompanying language expressions used by the researchers for communication. This implies further that the language expressions used (which can even talk about other language expressions) are associated with propositions which can be shown to be valid.[4]
And, with some interesting similarity to the modern concept of ‘diversity’, he claims that, avoiding any kind of dogmatism, “any hypothesis, no matter how unfamiliar, should have a fair chance and be judged by its results.”(p.12)
While Dewey is quite clear in using the concept of inquiry as a process leading to results which depend on the starting point and the realized processes, he additionally mentions concepts like ‘methods’, ‘norms’, ‘instrumentalities’, and ‘procedures’, but these concepts remain rather fuzzy.(cf. p.14f)
Warranted assertibility
Part of an inquiry are the individual actors, who have psychological states like ‘doubt’, ‘belief’, or ‘understanding’ (knowledge).(p.15) But from these concepts nothing follows about the needed logical forms or rules.(cf. p.16f) Instead Dewey repeats his requirement with the words “In scientific inquiry, the criterion of what is taken to be settled, or to be knowledge, is being so settled that it is available as a resource in further inquiry; not being settled in such a way as not to be subject to revision in further inquiry.”(p.17) And therefore, instead of using fuzzy (subjective) concepts like ‘doubt’, ‘belief’, or ‘knowledge’, he prefers the concept of “warranted assertibility”. This says not only that you can assert something, but that you can assert it with a ‘warranty’, based on the known process which has led to this result.(cf. p.10)
Introducing rationality
At this point the story takes a first ‘new turn’, because Dewey now introduces a first characterization of the concept of ‘rationality’ (which for him is synonymous with ‘reasonableness’). While the basic terms of the descriptions in an inquiry process are at least partially descriptive (empirical) expressions, they are not completely “devoid of rational standing”.(cf. p.17) Furthermore, the classification of final situations of an inquiry as ‘results’, which can be understood as ‘confirmations’ of initial assumptions, questions, or problems, is only given in relations concerning the whole process; thereby these relations talk about matters which are not rooted in limited descriptive facts alone. Or, as Dewey states it, “relations which exist between means (methods) employed and conclusions attained as their consequence.”(p.17) Therefore the following practical principle is valid: “It is reasonable to search for and select the means that will, with the maximum probability, yield the consequences which are intended.”(p.18) And: “Hence,… the descriptive statement of methods that achieve progressively stable beliefs, or warranted assertibility, is also a rational statement in case the relation between them as means and assertibility as consequence is ascertained.”(p.18)
Suggested framework for ‘rationality’
Although Dewey does not exactly define the format of the relations between selected means and successful consequences, it seems ‘intuitively’ clear that the researchers must have some ‘idea’ of such a relation, which then serves as a new ‘ground for abstract meaning’ in their ‘thinking’. Within the oksimo paradigm [2] one could describe the problem at hand as follows:
The researchers participating in an inquiry process have perceptions of the process.
They have associated cognitive processing as well as language processing, where both are bi-directional mapped into each other, but not 1-to-1.
They can describe the individual properties, objects, actors, actions etc. which are part of the process in a timely order.
They can with their cognitive processing build more abstract concepts based on these primary concepts.
They can encode these more abstract cognitive structures and processes in propositions (and expressions) which correspond to these more abstract cognitive entities.
They can construct rule-like cognitive structures (within the oksimo paradigm called ‘change rules‘) with corresponding propositions (and expressions).
They can evaluate these change rules as to whether they describe ‘successful‘ consequences.
Change rules with successful consequences can become building blocks for those rules, which can be used for inferences/ deductions.
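The ‘change rules’ of the last two points could, as a minimal sketch, be represented as follows. This is an illustration in Python under the assumption that expressions can be treated as atomic strings; the names are hypothetical and not the actual oksimo data model:

```python
from dataclasses import dataclass, field

@dataclass
class ChangeRule:
    # Expressions that must hold in the current situation for the rule to fire
    condition: set
    # Expressions added to the situation (Eplus) and removed from it (Eminus)
    add: set = field(default_factory=set)
    remove: set = field(default_factory=set)

    def applies_to(self, situation: set) -> bool:
        # A rule applies when all its condition expressions are present
        return self.condition <= situation

    def apply(self, situation: set) -> set:
        # One inference step: S' = S - Eminus + Eplus
        return (situation - self.remove) | self.add
```

A rule evaluated as ‘successful’ in repeated inquiries could then serve as a building block for further inferences, as described above.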
Thus one can look at the formal aspect of formal relations which can be generated by an inference mechanism, but such a formal inference need not necessarily yield results which are empirically sound. Whether this is the case is a job of its own, dealing with the encoded meaning of the inferred expressions and the outcome of the inquiry.(cf. p.19, 21)
Limitations of formal logic
From this follows that the concrete logical operators, as part of the inference machinery, have to be qualified by their role within the more general relation between goals, means, and success. The standard operators of modern formal logic are only a few, and they are designed for a domain with a meaning space of only two objects: ‘being true’ and ‘being false’. In the real world of everyday experience we have a nearly infinite space of meanings. To describe this large everyday meaning space, the standard logic of today is too limited. Normal language teaches us how we can generate as many operators as we need simply by using normal language. Inferring operators directly from normal language is not only more powerful but at the same time much easier to apply.[2]
Inquiry process – re-formulated
Let us fix a first hypothesis here. The ideas of Dewey can be re-framed with the following assumptions:
By doing an inquiry process, with some problem (question, …) at the start and proceeding with clearly defined actions, we can reach final states which are either classified as a positive answer (success) to the problem of the beginning or not.
If there exists a repeatable inquiry process with positive answers, the whole process can be understood as a new ‘recipe’ (= complex operation, procedure, complex method, complex rule, law, …) for how to get positive answers to certain kinds of questions.
If a recipe is available from preceding experiments one can use this recipe to ‘plan’ a new process to reach a certain ‘result’ (‘outcome’, ‘answer’, …).
The proportion of failures among the total number of trials in applying a recipe can be used as a measure of the probability and quality of the recipe.
The description of a recipe needs a meta-level of ‘looking at’ the process. This meta-level description is made sound (‘valid’) by the interaction with reality, but as such the description includes some abstraction which enables a minimal rationality.
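The measure mentioned in the list above can be sketched as a simple relative frequency. This is only one possible reading of the idea, not a definition taken from Dewey; the function name is hypothetical:

```python
def recipe_quality(failures: int, trials: int) -> float:
    """Empirical success probability of a 'recipe', estimated as the
    relative frequency of trials that did not fail (one possible
    reading of a 'measure for the probability and quality' of a
    recipe)."""
    if trials <= 0:
        raise ValueError("quality is undefined without trials")
    return (trials - failures) / trials
```

For example, 8 failures in 50 applications of a recipe would yield a quality of 0.84.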
Habit
At this point Dewey introduces another term, ‘habit’, which is not really very clear and does not really explain more; but, for whatever reason, he introduces it.(cf. p.21f)
The intuition behind the term ‘habit’ is that independent of the language dimension there exists the real process, driven by real actors doing real actions. It is further (tacitly) assumed that these real actors have some ‘internal processing’ which is ‘causing’ the observable actions. If these observable actions can be understood/interpreted as an ‘inquiry process’ leading to some ‘positive answers’, then Dewey calls the underlying processes, taken together, a ‘habit’: “Any habit is a way or manner of action, not a particular act or deed.”(p.20) If one observes such a real process, one can describe it with language expressions; then it gets the format of a ‘rule’, a ‘principle’, or a ‘law’.(cf. p.20)
If one threw away the concept of ‘habit’, nothing would be missing. Whichever internal processes are assumed, a description of them will be bound to their observability and will depend on some minimal language mechanisms. These must be explained. Everything beyond them is not necessary to explain rational behavior.[5]
At the end of chapter I Dewey points to some additional aspects in the context of logic. One aspect is the progressive character of logic as discipline in the course of history.(cf. p.22)[6]
Operational
Another aspect is introduced by his statement “The subject-matter of logic is determined operationally.”(p.22) And he characterizes the meaning of the term ‘operational’ as representing the “conditions by which subject-matter is (1) rendered fit to serve as means and (2) actually functions as such means in effecting the objective transformation which is the end of the inquiry.”(p.22) Thus, again, the concept of inquiry is the general framework organizing means to get to a successful end. This inquiry has an empirical material (or ‘existential‘) basis which additionally can be described symbolically. The material basis can be characterized by parts of it called ‘means’ which are necessary to enable objective transformations leading to the end of the inquiry.(cf. p.22f)
One has to consider at this point that the existential (empirical) basis of every inquiry process should not mislead one into the view that this can work without a symbolic dimension! Except for extremely simple processes, every process needs, for its coordination between different brains, a symbolic communication which has to use certain expressions of a language. Thus the cognitive concepts of the empirical means and the rules followed can only be ‘fixed’ and made ‘clear’ with the usage of accompanying symbolic expressions.
Postulational logic
Another aspect mentioned by Dewey is given by the statement: “Logical forms are postulational.“(p.24) Embedded in the framework of an inquiry, Dewey identifies requirements (demands, postulates, …) at the beginning of the inquiry which have to be fulfilled through the inquiry process. And Dewey sees such requirements as part of the inquiry process itself.(cf. p.24f) If during such an inquiry process some kinds of logical postulates are used, they have no right of their own independent of the real process! They can only be used as long as they are in agreement with the real process. With the words of Dewey: “A postulate is thus neither arbitrary nor externally a priori. It is not the former because it issues from the relation of means to the end to be reached. It is not the latter, because it is not imposed upon inquiry from without, but is an acknowledgement of that to which the undertaking of inquiry commits us.”(p.26)
Logic naturalistic
Dewey comments further on the topic that “Logic is a naturalistic theory.“(p.27) In some sense this is trivial, because humans are biological systems and therefore every process is a biological (natural) process, logical thinking included.
Logic is social
Dewey mentions further that “Logic is a social discipline.“(p.27) This follows from the fact that “man is naturally a being that lives in association with others in communities possessing language, and therefore enjoying a transmitted culture. Inquiry is a mode of activity that is socially conditioned and that has cultural consequences.”(p.27) And therefore: “Any theory of logic has to take some stand on the question whether symbols are ready-made clothing for meanings that subsist independently, or whether they are necessary conditions for the existence of meanings — in terms often used, whether language is the dress of ‘thought’ or is something without which ‘thought’ cannot be.”(p.27f) Dewey also puts this into the following general formula: “…in every interaction that involves intelligent direction, the physical environment is part of a more inclusive social or cultural environment.”(p.28) The central means of culture is language, which “is the medium in which culture exists and through which it is transmitted. Phenomena that are not recorded cannot be even discussed. Language is the record that perpetuates occurrences and renders them amenable to public consideration. On the other hand, ideas or meanings that exist only in symbols that are not communicable are fantastic beyond imagination”.(p.28)
Autonomous logic
The final aspect of logic mentioned by Dewey concerns the position which states that “Logic is autonomous“.(p.29) Although the position of the autonomy of logic, in various varieties, is very common in history, Dewey argues against it. The main point is, as already discussed before, that the open framework of an inquiry gives the main point of reference, and logic must fit this framework.[7]
SOME DISCUSSION
For a discussion of these ideas of Dewey see the next upcoming post.
COMMENTS
[1] John Dewey, Logic. The Theory Of Inquiry, New York, Henry Holt and Company, 1938 (see: https://archive.org/details/JohnDeweyLogicTheTheoryOfInquiry with several formats; I am using the kindle (= mobi) format: https://archive.org/download/JohnDeweyLogicTheTheoryOfInquiry/%5BJohn_Dewey%5D_Logic_-_The_Theory_of_Inquiry.mobi , which is very convenient for working directly with the text. Additionally I am using the free reader ‘foliate’ under ubuntu 20.04: https://github.com/johnfactotum/foliate/releases). The page numbers in the text of the review — like (p.13) — are the page numbers of the ebook as indicated in the ebook reader foliate. (There exists no kindle version for linux, although amazon couldn’t work without linux servers!)
[2] Gerd Doeben-Henisch, 2021, uffmm.org, THE OKSIMO PARADIGM An Introduction (Version 2), https://www.uffmm.org/wp-content/uploads/2021/03/oksimo-v1-part1-v2.pdf
[3] The new oksimo paradigm does exactly this. See oksimo.org
[4] For the conceptual framework for the term ‘proposition’ see the preceding part 2, where the author describes the basic epistemological assumptions of the oksimo paradigm.
[5] Clearly it is possible and desirable to extend our knowledge about the internal processing of human persons. This is mainly the subject-matter of biology, brain research, and physiology. Other disciplines, like psychology, ethology, linguistics, phonetics, etc., are close by. The main problem with all these disciplines is that they are methodologically disconnected: a really integrated theory is not yet possible and does not exist. Examples of integration like neuro-psychology are far from what they should be.
[6] A very good overview about the development of logic can be found in the book The Development of Logic by William and Martha Kneale. First published 1962 with many successive corrected reprints by Clarendon Press, Oxford (and other cities.)
[7] Today we have the general problem that the concept of formal logic has developed the concept of logical inference in so many divergent directions that it is not a simple problem to evaluate all these different ‘kinds of logic’.
MEDIA
This is another unplugged recording dealing with the main idea of Dewey in chapter I: what logic is and how logic relates to scientific inquiry.
This text is part of a philosophy of science analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive posts dedicated to the HMI-Analysis for this software.
THE OKSIMO THEORY PARADIGM
The following text is a short illustration of how the general theory concept, as extracted from the text of Popper, can be applied to the oksimo simulation software concept.
The starting point is the meta-theoretical schema:
MT=<S, A[μ], E, L, AX, ⊢, ET, E+, E-, true, false, contradiction, inconsistent>
In the oksimo case we also have a given empirical context S, a non-empty set of human actors A[μ] with a built-in meaning function for the expressions E of some language L, some axioms AX as a subset of the expressions E, an inference concept ⊢, and all the other concepts.
The human actors A[μ] can write documents with the expressions E of language L. In one document S_U they can write down universal facts which they believe to be true (e.g. ‘Birds can fly’). In another document S_E they can write down empirical facts from the given situation S, like ‘There is something named James. James is a bird’. And somehow they wish that James should be able to fly, so they write down a vision text S_V with ‘James can fly’.
The interesting question is whether it is possible to generate a situation S_E.i in the future, which includes the fact ‘James can fly’.
With the knowledge already given they can build the change rule: IF it is valid that {Birds can fly. James is a bird} THEN with probability π = 1 add the expressions Eplus = {‘James can fly’} to the actual situation S_E.i, with Eminus = {}. This rule is then an element of the set of change rules X.
The simulator ⊢X works according to the schema S’ = S – Eminus + Eplus.
Because we have S = S_U + S_E we get:
S’ = {Birds can fly. Something is named James. James is a bird.} – Eminus + Eplus
S’ = {Birds can fly. Something is named James. James is a bird.} – {} + {James can fly}
S’ = {Birds can fly. Something is named James. James is a bird. James can fly}
With regard to the vision which is used for evaluation one can state additionally:
|{James can fly} ⊆ {Birds can fly. Something is named James. James is a bird. James can fly}|= 1 ≥ 1
Thus the goal has been reached with a degree of 1, meaning 100%.
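The whole derivation above can be reproduced with a few lines of Python, treating the sentences as atomic strings. This is only a sketch of the described schema S’ = S – Eminus + Eplus, not the actual oksimo software:

```python
# Universal facts (S_U), empirical facts (S_E), and the vision (S_V),
# with each sentence treated as an atomic string:
S_U = {"Birds can fly."}
S_E = {"Something is named James.", "James is a bird."}
S_V = {"James can fly."}

S = S_U | S_E  # the actual situation

# The single change rule: IF the condition holds THEN add Eplus (Eminus = {})
condition = {"Birds can fly.", "James is a bird."}
Eplus = {"James can fly."}
Eminus: set = set()

# One simulator step: S' = S - Eminus + Eplus, if the condition is satisfied
S_next = (S - Eminus) | Eplus if condition <= S else S

# Goal evaluation: fraction of vision statements contained in S'
fulfillment = len(S_V & S_next) / len(S_V)
print(fulfillment)  # 1.0, i.e. the goal is reached with 100 %
```

The evaluation at the end corresponds to the subset check against the vision text: every vision statement occurs in the successor situation.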
THE ROLE OF MEANING
What makes a certain difference between classical concepts of an empirical theory and the oksimo paradigm is the role of meaning in the oksimo paradigm. While the classical empirical theory concept uses formal (mathematical) languages for its descriptions, with the associated, nearly unsolvable, problem of how to relate these concepts to the intended empirical world, the oksimo paradigm assumes the opposite: the starting point is always ordinary language as the basic language, which on demand can be extended by special expressions (like, e.g., set-theoretical expressions, numbers, etc.).
Furthermore, in the oksimo paradigm it is assumed that the human actors with their built-in meaning function are nearly always able to decide whether an expression e of the used expressions E of the ordinary language L matches certain properties of the given situation S. Thus the human actors are those who have the authority to decide, by their meaning, whether some expression is actually true or not.
The same holds for possible goals (visions) and possible inference rules (= change rules). Whether some consequence Y shall happen if some condition X is satisfied by a given actual situation S can only be decided by the human actors. There is no other knowledge available than that which is in the heads of the human actors. [1] This knowledge can be narrow, it can even be wrong, but human actors can only decide with the knowledge that is available to them.
If they use change rules (= inference rules) based on their knowledge and derive some follow-up situation as a theorem, then it can happen that there exists no empirical situation S which matches the theorem. This would be an undefined truth case. If the theorem t were a contradiction to the given situation S, then it would be clear that the theory is inconsistent and therefore something seems to be wrong. Another case could be that the theorem t matches a situation. This would confirm the belief in the theory.
COMMENTS
[1] Well-known knowledge tools are libraries (since long ago) and databases (more recently). The expressions stored there can only be of use (i) if a human actor knows about them and (ii) knows how to use them. As the amount of stored expressions increases, the portion of expressions that can be cognitively processed by human actors decreases. This decrease in the usable portion can be used as a measure of negative complexity, which indicates a growing deterioration of the human knowledge space. The idea that certain kinds of algorithms can analyze these growing amounts of expressions instead of the human actors themselves is only constructive if the human actor can use the results of these computations within his knowledge space. For general reasons this possibility is very small, and with increasing negative complexity it is declining.
POPPERs POSITION IN THE CHAPTERS 1-17
In my reading of the chapters 1-17 of Popper’s The Logic of Scientific Discovery [1] I see the following three main concepts which are interrelated: (i) the concept of a scientific theory, (ii) the point of view of a meta-theory about scientific theories, and (iii) possible empirical interpretations of scientific theories.
Scientific Theory
A scientific theory is according to Popper a collection of universal statements AX, accompanied by a concept of logical inference ⊢, which allows the deduction of a certain theorem t if one makes some additional concrete assumptions H.
Example: Theory T1 = <AX1,⊢>
AX1= {Birds can fly}
H1= {Peter is a bird}
⊢: Peter can fly
Because there exists a concrete object which is classified as a bird, and this concrete bird with the name ‘Peter’ can fly, one can infer that the universal statement is verified by this concrete bird. But the question remains open whether all observable concrete objects classifiable as birds can fly.
One could continue with observations of several hundred concrete birds, but according to Popper this would not prove the theory T1 completely true. Such a procedure can only support a numerical universality, understood as a conjunction of finitely many observations about concrete birds, like ‘Peter can fly’ & ‘Mary can fly’ & …. & ’AH2 can fly’.(cf. p.62)
The only procedure which is applicable to a universal theory, according to Popper, is to falsify the theory by only one observation like ‘Doxy is a bird’ and ‘Doxy cannot fly’. Then one can construct the following inference:
AX1= {Birds can fly}
H2= {Doxy is a bird, Doxy cannot fly}
⊢: ‘Doxy can fly’ & ~’Doxy can fly’
If a statement A can be inferred and simultaneously the negation ~A then this is called a logical contradiction:
{AX1, H2} ⊢ ‘Doxy can fly’ & ~’Doxy can fly’
In this case the set {AX1, H2} is called inconsistent.
If a set of statements is classified as inconsistent, then you can derive everything from this set. In this case you can no longer distinguish between true and false statements.
Thus, while an increase in the number of confirmed observations can only increase the trust in the axioms of a scientific theory T without enabling an absolute proof, a falsification of a theory T can destroy the ability of this theory to distinguish between true and false statements.
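The role of a contradiction for the usability of a theory can be sketched with a minimal check. Treating statements as atomic strings and marking negation with a ‘~’ prefix are simplifying assumptions for illustration only:

```python
def negate(stmt: str) -> str:
    # '~' marks negation; negating twice yields the original statement
    return stmt[1:] if stmt.startswith("~") else "~" + stmt

def is_inconsistent(statements: set) -> bool:
    # A set of statements is inconsistent if it contains some A together with ~A
    return any(negate(s) in statements for s in statements)

# The consequences of {AX1, H2} about Doxy contain a contradiction:
print(is_inconsistent({"Doxy can fly", "~Doxy can fly"}))  # True

# The confirming observations about Peter produce no contradiction:
print(is_inconsistent({"Peter is a bird", "Peter can fly"}))  # False
```

Once such a check fires, the statement set can no longer separate true from false, which is exactly the failure mode described above.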
Another idea associated with this structure of a scientific theory is that the universal statements using universal concepts are, strictly speaking, speculative ideas which deserve some faith that these concepts will be provable every time one tries it.(cf. p.33, 63)
Meta Theory, Logic of Scientific Discovery, Philosophy of Science
Talking about scientific theories has at least two aspects: scientific theories as objects and those who talk about these objects.
Those who talk about them are usually philosophers of science, who are a special kind of philosophers, e.g. a person like Popper.
Reading the text of Popper, one can identify the following elements which seem important for describing scientific theories in a broader framework:
A scientific theory from a point of view of Philosophy of Science represents a structure like the following one (minimal version):
MT=<S, A[μ], E, L, AX, ⊢, ET, E+, E-, true, false, contradiction, inconsistent>
In a shared empirical situation S there are some human actors A as experts producing expressions E of some language L. Based on their built-in adaptive meaning function μ, the human actors A can relate properties of the situation S to expressions E of L. Those expressions E which are considered observable and classified as true are called true expressions E+; others are called false expressions E-. Both sets of expressions are proper subsets of E: E+ ⊂ E and E- ⊂ E. Additionally, the experts can define a special set of expressions called axioms AX: universal statements which allow the logical derivation of expressions ET, the theorems of the theory T, which are called logically true. If one combines the set of axioms AX with some set of empirically true expressions E+ as {AX, E+}, then one can logically derive either only expressions which are logically true as well as empirically true, or one can derive logically true expressions which are empirically true and empirically false at the same time; see the example from the paragraph before:
{AX1, H2} ⊢ ‘Doxy can fly’ & ~’Doxy can fly’
Such a case of a logically derived contradiction A and ~A tells us that the set of axioms AX, unified with the known true empirical expressions, has become inconsistent: the axioms AX unified with true empirical expressions can no longer distinguish between true and false expressions.
Popper gives some general requirements for the axioms of a theory (cf. p.71):
(1) The axioms must be free from contradiction.
(2) The axioms must be independent, i.e. they must not contain any axiom deducible from the remaining axioms.
(3) The axioms should be sufficient for the deduction of all statements belonging to the theory which is to be axiomatized.
While the requirements (1) and (2) are purely logical and can be proved directly, requirement (3) is different: to know whether the theory covers all statements which the experts intend as the subject area presupposes that all aspects of the empirical environment are already known. In the case of true empirical theories this seems not to be plausible. Rather we have to assume an open process which generates some hypothetical universal expressions which ideally will not be falsified; but if they are, then the theory has to be adapted to the new insights.
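That requirements (1) and (2) are purely logical and mechanically checkable can be shown with a small sketch for propositional axioms; the two example axioms (p → q and p) are hypothetical choices of mine, not from Popper:

```python
# Sketch: checking requirement (1) consistency and (2) independence by
# brute force over all truth assignments (only feasible for tiny systems).
from itertools import product

def models(formula, symbols):
    """All truth assignments over `symbols` that satisfy `formula`."""
    return [dict(zip(symbols, vals))
            for vals in product([True, False], repeat=len(symbols))
            if formula(dict(zip(symbols, vals)))]

AX = {
    "A1": lambda v: (not v["p"]) or v["q"],  # axiom A1: p -> q
    "A2": lambda v: v["p"],                  # axiom A2: p
}
symbols = ["p", "q"]

# (1) Free from contradiction: the conjunction of all axioms has a model.
consistent = bool(models(lambda v: all(a(v) for a in AX.values()), symbols))

# (2) Independent: for each axiom there is a model of the remaining axioms
# in which it is false, so it is not deducible from the others.
def independent(name):
    others = [a for n, a in AX.items() if n != name]
    return bool(models(lambda v: all(a(v) for a in others)
                       and not AX[name](v), symbols))

print(consistent)                       # True (p=True, q=True works)
print(all(independent(n) for n in AX))  # True (neither entails the other)
```

Requirement (3), by contrast, cannot be decided this way, since it refers to the open, not fully known empirical subject area.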
Empirical Interpretation(s)
Popper assumes that the universal statements of scientific theories are linguistic representations, and this means they are systems of signs or symbols. (cf. p.60) Expressions as such have no meaning. Meaning comes into play only if the human actors use their built-in meaning function and set up a coordinated meaning function which allows all participating experts to map properties of the empirical situation S onto the used expressions as E+ (expressions classified as being actually true), or E- (expressions classified as being actually false), or AX (expressions having an abstract meaning space which can become true or false depending on the activated meaning function).
Examples:
(1) Two human actors in a situation S agree about the fact that there is ‘something’ which they classify as a ‘bird’. Thus someone could say ‘There is something which is a bird’ or ‘There is some bird’ or ‘There is a bird’. If there are two somethings which are ‘understood’ as being a bird, then they could say ‘There are two birds’, or ‘There is a blue bird’ (if the one has the color ‘blue’) and ‘There is a red bird’, or ‘There are two birds; the one is blue and the other is red’. This shows that human actors can relate their ‘concrete perceptions’ to more abstract concepts and can map these concepts into expressions. According to Popper, in this ‘bottom-up’ way only numerical universal concepts can be constructed. But logically there are only two cases: concrete (one) or abstract (more than one). To say that there is a ‘something’, or to say that there is a ‘bird’, establishes a general concept which is independent of the number of its possible instances.
(2) These concrete somethings, each classified as a ‘bird’, can ‘move’ from one position to another by ‘walking’ or by ‘flying’. While ‘walking’ they change their position connected to the ‘ground’, while during ‘flying’ they ‘go up in the air’. If a human actor throws a stone up in the air, the stone will come back to the ground. A bird which goes up in the air can stay there and move around in the air for a long while. Thus ‘flying’ is different from ‘throwing something’ up in the air.
(3) The expression ‘A bird can fly’, understood as an expression which can be connected to the daily experience of bird-objects moving around in the air, can be empirically interpreted, but only if there exists such a mapping called a meaning function. Without a meaning function the expression ‘A bird can fly’ has no meaning as such.
(4) Other expressions like ‘X can fly’ or ‘A bird can Y’ or ‘Y(X)’ share the same fate: without a meaning function they have no meaning, but associated with a meaning function they can be interpreted. For instance, saying that the form of the expression ‘Y(X)’ shall be interpreted as ‘Predicate(Object)’, and that a possible ‘instance’ for a predicate could be ‘can fly’ and for an object ‘a bird’, then we would get ‘Can Fly(a Bird)’, translated as ‘The object “a Bird” has the property “can fly”’, or shortly ‘A Bird can fly’. This would usually be used as a possible candidate for the daily meaning function which relates this expression to those somethings which can move up in the air.
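The role of a meaning function can be sketched in a few lines; here S is a toy stand-in for the shared empirical situation, and the names are mine, chosen only for illustration:

```python
# Sketch: without mu the expression 'Y(X)' is just a string; with mu it can
# be checked against a (toy) situation S and sorted into E+ or E-.

# The shared situation S: which somethings actually can fly.
S = {"a Bird": {"can_fly": True},
     "a Stone": {"can_fly": False}}

def mu(expression):
    """Meaning function: map 'Predicate(Object)' onto the situation S."""
    predicate, obj = expression.rstrip(")").split("(")
    return S[obj][predicate]

print(mu("can_fly(a Bird)"))   # True  -> the expression belongs to E+
print(mu("can_fly(a Stone)"))  # False -> the expression belongs to E-
```

The string ‘can_fly(a Bird)’ carries no meaning by itself; only the agreed mapping mu relates it to the situation and makes it classifiable as true or false.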
Axioms and Empirical Interpretations
The basic idea of a system of axioms AX is, according to Popper, that the axioms as universal expressions represent a system of equations in which the general terms can be substituted by certain values. The set of admissible values is different from the set of inadmissible values. The relation between those values which can be substituted for the terms is called satisfaction: the values satisfy the terms with regard to the relations! And Popper introduces the term ‘model’ for that set of admissible values which can satisfy the equations. (cf. p.72f)
But Popper has difficulties with an axiomatic system interpreted as a system of equations, since it cannot be refuted by the falsification of its consequences; for these too must be analytic (cf. p.73). His main problem with axioms is that “the concepts which are to be used in the axiomatic system should be universal names, which cannot be defined by empirical indications, pointing, etc. They can be defined if at all only explicitly, with the help of other universal names; otherwise they can only be left undefined. That some universal names should remain undefined is therefore quite unavoidable; and herein lies the difficulty…” (p.74)
On the other hand Popper knows that “…it is usually possible for the primitive concepts of an axiomatic system such as geometry to be correlated with, or interpreted by, the concepts of another system, e.g. physics…. In such cases it may be possible to define the fundamental concepts of the new system with the help of concepts which were originally used in some of the old systems.” (p.75)
But the translation of the expressions of one system (geometry) into the expressions of another system (physics) does not necessarily solve his problem of the non-empirical character of universal terms. Physics especially also uses universal or abstract terms which as such have no meaning. To verify or falsify physical theories one has to show how the abstract terms of physics can be related to observable matters which can be decided to be true or not.
Thus the argument goes back to Popper’s primary problem that universal names cannot be directly interpreted in an empirically decidable way.
As the preceding examples (1) – (4) show, for human actors it is no problem in principle to relate any kind of abstract expression to some concrete real matters. The solution to the problem lies in the fact that expressions E of some language L are never used in isolation! The usage of expressions is always bound to human actors using them as part of a language L, which comprises not only the set of possible expressions E but also the built-in meaning function μ, which can map expressions into internal structures IS related to perceptions of the surrounding empirical situation S. Although these internal structures are processed internally in highly complex manners and are, as we know today, no 1-to-1 mappings of the surrounding empirical situation S, they are related to S, and therefore every kind of expression, even those with so-called abstract or universal concepts, can be mapped into something real if the human actors agree about such mappings!
Example:
Let us have a look at another example.
Take the system of axioms AX given by the following schema: AX = {a+b=c}. This schema as such has no clear meaning. But if the experts interpret it as an operation ‘+’ with some arguments as part of a math theory, then one can construct a simple (partial) model m as follows: m = {<1,2,3>, <2,3,5>}. The values are again given as a set of symbols which as such need not have a meaning, but in common usage they will be interpreted as sets of numbers which can satisfy the general concept of the equation. In this secondary interpretation m becomes a logically true (partial) model for the axiom AX, whose empirical meaning is still unclear.
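The notion of satisfaction used here can be made concrete in a few lines; the candidate triples are the two from the text plus one inadmissible triple added by me for contrast:

```python
# Sketch: the schema a+b=c as an axiom; the admissible value triples
# (those that satisfy the interpreted equation) form a partial model m.

def satisfies(triple):
    a, b, c = triple
    return a + b == c                 # the interpreted axiom a+b=c

candidates = [(1, 2, 3), (2, 3, 5), (2, 2, 5)]
m = [t for t in candidates if satisfies(t)]
print(m)                              # [(1, 2, 3), (2, 3, 5)]
```

The triples <1,2,3> and <2,3,5> satisfy the equation and so belong to the model m; the added triple <2,2,5> does not, illustrating the difference between admissible and inadmissible values.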
It is conceivable that one uses this formalism to describe empirical facts, like the description of a group of humans collecting some objects. Different people bring objects; the individual contributions are reported on a sheet of paper, and at the same time they put their objects into some box. Sometimes someone looks into the box and counts its objects. If it has been noted that A brought 1 egg and B brought 2 eggs, then according to the theory there should be 3 eggs in the box. But perhaps only 2 can be found. Then there would be a difference between the logically derived forecast of the theory, 1+2 = 3, and the empirically measured value, 1+2 = 2. If one defines every measurement a+b=c’ as a contradiction in the case where a+b=c is theoretically given and c’ ≠ c, then we would have with ‘1+2 = 3′ & ~’1+2 = 3’ a logically derived contradiction which leads to the inconsistency of the assumed system. But in reality the usual reaction of the counting person would not be to declare the system inconsistent, but rather to suggest that some unknown actor has, against the agreed rules, taken one egg from the box. To prove this suggestion he would have to find this unknown actor and show that he has taken the egg … perhaps not a simple task … But what will the next authority do: will the authority believe the suggestion of the counting person, or will the authority blame the counter that eventually he himself has taken the missing egg? But would this make sense? Why should the counter write down how many eggs have been delivered, to make a difference visible? …
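The egg story boils down to comparing a theoretical forecast with a measured value; a minimal sketch, with names of my own choosing:

```python
# Sketch: the theory forecasts a+b=c from the noted contributions; a
# measured count c' with c' != c yields a contradiction with the forecast.

def forecast(contributions):
    """Theoretical forecast: the box holds the sum of the noted items."""
    return sum(contributions)

def check(contributions, counted):
    expected = forecast(contributions)
    if counted == expected:
        return "consistent"
    # In practice one would rather suspect a hidden actor (a taken egg)
    # than declare the whole system of axioms inconsistent.
    return f"contradiction: forecast {expected}, counted {counted}"

print(check([1, 2], 3))   # consistent
print(check([1, 2], 2))   # contradiction: forecast 3, counted 2
```

The interesting point is not the arithmetic but the reaction to the mismatch: formally it is a contradiction, pragmatically it triggers a search for an unrecorded cause.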
Thus to interpret some abstract expression with regard to some observable reality is not a problem in principle, but it can eventually be unsolvable for purely practical reasons, leaving questions of empirical soundness open.
SOURCES
[1] Karl Popper, The Logic of Scientific Discovery. First published 1935 in German as Logik der Forschung; English edition 1959 by Basic Books, New York. (More editions have been published later; I am using the eBook version of Routledge, 2002.)
Integrating Engineering and the Human Factor (info@uffmm.org) eJournal uffmm.org ISSN 2567-6458