POPPER – Objective Knowledge (1971). Summary, Comments, How to Develop Further


eJournal: uffmm.org
ISSN 2567-6458, 7.March 2022 – 12.March 2022, 10:55h
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

BLOG-CONTEXT

This post is part of the Philosophy of Science theme which is part of the uffmm blog.

PREFACE

In this post a short summary of Popper’s view of an empirical theory is outlined, as he describes it in his article “Conjectural Knowledge: My Solution of the Problem of Induction” from 1971.[1] Popper’s view is then commented on, and the relationship to the author’s oksimo paradigm is outlined.

Empirical Theory according to Popper in a Nutshell

Figure: Popper’s concept from 1971 of an empirical theory, compressed into a nutshell. Graphic by Gerd Doeben-Henisch, based on the article, using Popper’s summarizing ideas on pages 29-31.

POPPER’S POSITION 1971

In this article from 1971 Popper discusses several positions. Finally he offers the following ‘demarcation’ between only two cases: ‘Pseudo Science’ and ‘Empirical Science’. (See p.29) This triggers the question: how is it possible to declare something an ‘objective empirical theory’ without claiming to have some ‘absolute truth’?

Although Popper denies having any kind of absolute truth, he will “not give up the search for truth”, which finally leads to a “true explanatory theory”. (cf. p.29) “Truth” plays the “role of a regulative idea”. (cf. p.30) Thus, according to Popper, one can “guess for truth”, and some of the hypotheses “may well be true”. (cf. p.30)

In Popper’s view, ‘observation’ finally shows up as that behaviour which enables the production of ‘statements’ as the ‘empirical basis’ for all arguments. (cf. p.30) Empirical statements are a ‘function of the used language’. (cf. p.31)

This dimension of language leads Popper to the concept of ‘deductive logic’, which describes formal mechanisms to derive, from a set of statements assumed to be true, those statements which are ‘true’ by logical deduction alone. If derived statements are ‘logically false’, this can be used to classify the set of assumed statements as ‘logically not consistent’. (cf. p.31)

COMMENTS ON POPPER’S 1971 POSITION 50 YEARS LATER

The preceding outline of Popper’s position reveals a minimalist account of the ingredients of an ‘objective empirical theory’. But we, the readers of these ideas, are living 50 years later. Our minds are shaped differently. The author of this text thinks that Popper is basically ‘true’, although some points in Popper’s argument deserve comments.

Subjective – Absolute

Popper is moving between two boundaries: one boundary is so-called ‘subjective belief’, which can support any idea and thereby can include pure nonsense; the other boundary is ‘absolute truth’, which would be required to hold at all times and in all places, although the ‘known world’ evidently shows steady change.

Empirical Basis

In searching for a possible position between these boundaries which would allow a minimum of ‘rationality’, he is looking for an ‘empirical basis’ as a point of reference for a ‘rational theory’. He locates such an empirical basis in ‘observation statements’, which can be used for ‘testing a theory’.

In his view a ‘rational empirical theory’ has to have a ‘set of statements’ (often called the ‘assumptions’ or ‘axioms’ of the theory) which are assumed to ‘describe the observable world’ in such a way that these statements can be ‘confirmed’ or ‘falsified’.

Confirmation – Falsification

A ‘confirmation’ does not imply that the confirmed statement is ‘absolutely true’ (his basic conviction); but one can experience that a confirmed statement can function as a ‘hypothesis/conjecture’ which ‘works in the actual observation’. This does not exclude that it will perhaps not work in a future test. The pragmatic difference between ‘interesting conjectures’ and those of less interest is that a ‘repeated confirmation’ increases the ‘probability’ that such a confirmation can happen again. An ‘increasing probability’ can induce an ‘increased expectation’. Nevertheless, increased probabilities and associated increased expectations are no substitute for ‘truth’.

A test which shows ‘no confirmation’ for a statement logically derived from the theory is difficult to interpret:

Case (i): A theory claims that a statement S refers to a proposition A which is ‘true in a certain experiment’, but in the real experiment the observation reveals a proposition B which translates to non-A, which can be interpreted as ‘the opposite of A is the case’ (= is ‘true’). This outcome is interpreted in the way that the proposition B, understood as ‘non-A’, contradicts ‘A’, and this in turn is interpreted to mean that the statement S of the theory represents a partial contradiction to the observable world.

Case (ii): A theory claims that a statement S refers to a proposition A which is ‘true in a certain experiment’, but in the real experiment the observation reveals a proposition B ‘being the case’ (= being ‘true’) which is a different proposition, and this outcome cannot be related to the proposition ‘A’ forecast by the theory. If the statement ‘cannot be interpreted sufficiently well’, then the situation is neither ‘true’ nor ‘false’; it is ‘undefined’.

Discussion: Case (ii) reveals that there exists an observable (empirical) fact which is not related to a certain ‘logically derived’ statement with proposition A. There can be many reasons why the observation did not generate the ‘expected proposition A’. If one assumes that the observation is tied to an ‘agreed process of generating an outcome M’, which can be ‘repeated at will’ by ‘everybody’, then the observed fact of a ‘proposition B distinct from proposition A’ can be interpreted in the way that the expectation of the theory cannot be reproduced with the agreed procedure M. This leaves open the question whether another procedure M’ could exist which produces an outcome ‘A’. For the actors running the procedure M this case is, with regard to the logically derived statement S about proposition A, ‘unclear’, ‘not defined’, a ‘non-confirmation’; at the same time it is no confirmation either.

Discussion: Case (i) seems, at first glance, clearer in its interpretation. Assume here too that the observation is associated with an agreed procedure M producing the proposition B, which can be interpreted as non-A (B = non-A). If everybody accepts this ‘classification’ of B as ‘non-A’, then for ‘purely logical reasons’ (depending on the assumed concept of logic!) ‘non-A’ contradicts ‘A’. But in the ‘real world’ with ‘real observations’ things are usually not as ‘clear-cut’ as a theory may assume. The observable outcome B of an agreed procedure M can show a broad spectrum of ‘similarities’ with proposition A, varying between 100% and less. Even if one repeats the agreed procedure M several times, it can show a ‘sequence of propositions <B1, B2, …, Bn>’ none of which is exactly 100% similar to proposition A. To speak in such a case (the normal case!) of a logical contradiction is difficult if not impossible. Popper’s 1971 idea of a possible ‘falsification’ of a theory would then become difficult to interpret. A possible remedy for this situation could be to modify a theory in such a way that it forecasts only statements with a proposition A which is represented as a ‘field of possible instances A = <a1, a2, …, am>’, where every ‘ai‘ represents some kind of variation. In that modified case it would be ‘more probable’ to judge a non-confirmation between A as <a1, a2, …, am> and B as <B1, B2, …, Bn>, because one takes the ‘variability’ of a proposition into account.[3]
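This modified picture of (non-)confirmation can be illustrated with a small toy sketch by the present author (not part of Popper’s text): a forecast proposition A is treated as a field of admissible variants, an observation as a sequence of outcomes B, and a crude string-similarity measure (Python’s difflib ratio) with an arbitrary threshold of 0.8 stands in for an agreed concept of similarity between propositions.

```python
# Toy sketch: judge (non-)confirmation between a field of forecast
# variants A = <a1, ..., am> and observed outcomes B = <B1, ..., Bn>.
# The similarity measure and the threshold are illustrative assumptions.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude stand-in for an agreed similarity measure between propositions."""
    return SequenceMatcher(None, a, b).ratio()

def judge(forecast_variants, observations, threshold=0.8):
    """Return 'confirmation' if some observed outcome is similar enough
    to some admissible variant of the forecast, else 'non-confirmation'."""
    best = max(similarity(a, b) for a in forecast_variants for b in observations)
    return ("confirmation" if best >= threshold else "non-confirmation", best)

verdict, score = judge(["the bird flies", "the bird is flying"],
                       ["the bird is flying upward"])
print(verdict)
```

With a single rigid forecast and a clearly different observation the same function returns ‘non-confirmation’, which mirrors the point above: falsification becomes a graded judgment once variability is admitted.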

Having discussed the case of ‘non-confirmation’ in this modified way leads back again to the case of ‘confirmation’: the ‘fuzziness’ of observable facts, even in the context of agreed procedures M of observation which are repeatable by everyone (usually called measurement), calls for a broader concept of ‘similarity’ between ‘derived propositions’ and ‘observed propositions’. This has long been a hotly debated point in the philosophy of science (see e.g. [4]). Until now no generally accepted solution to this problem exists.

Thus Popper’s clear idea of giving a theory candidate a minimum of rationality by relating the theory in an agreed way to empirical observations becomes, in the ‘dust of reality’, a difficult case. It is interesting that the ‘late Popper’ (1988-1991) modified his view on this subject a little further in the direction of the interpretation of observable events. (cf. [5])

Logic as an Organon

In the discussion of the possible confirmation or falsification of a theory, Popper uses two different perspectives: (i) in a broader sense he talks about the ‘process of justification’ of the theoretical statements with regard to an empirical basis, relying on the ‘regulative idea of truth’, and (ii) in a more specialized sense he talks about ‘deductive logic as an organon of criticism’. Both perspectives demand more clarification.

While the meaning of the concept ‘theory’ is rather vague (statements, which have to be confirmed or falsified with respect to observational statements), the concept ‘deductive logic as an organon’ isn’t really clearer.

Until today we have two big paradigms of logic: (i) ‘classical logic’ inspired by Aristotle (with many variants) and (ii) ‘modern formal logic’ (cf. [6]) in combination with modern mathematics (cf. [7],[8]). Both paradigms represent a whole universe of different variants, and their combination into concrete formal empirical theories shows more than one paradigm as well. (cf. [4], [8], [10])

As outlined in the figure above, the principal idea of logic in general follows this schema: one has a set of expressions of some language L which are assumed to be classified as ‘true expressions’. According to an agreed procedure of ‘derivation’ one can derive (deduce, infer, …) other expressions of the language which are classified as ‘true’ as well, provided the assumptions hold.[11]
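This schema can be made concrete with a minimal sketch by the present author (not Popper’s): ‘derivation’ is reduced to closing a set of assumed-true expressions under simple if-then rules; all statements and rules are invented for the illustration.

```python
# Minimal sketch of the schema: from expressions assumed 'true', an
# agreed derivation procedure yields further expressions classified
# 'true' if the assumptions hold. Rules are (premises, conclusion) pairs.
def derive(assumed_true: set, rules: list) -> set:
    """Close the assumed-true set under the rules: whenever all premises
    of a rule are in the set, add its conclusion, until nothing changes."""
    true = set(assumed_true)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= true and conclusion not in true:
                true.add(conclusion)
                changed = True
    return true

rules = [({"Birds can fly", "James is a bird"}, "James can fly")]
print(derive({"Birds can fly", "James is a bird"}, rules))
```

Note that the procedure says nothing about what ‘true’ means; it only propagates the label, which is exactly the minimalist stance of formal logic discussed next.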

The important point here is that the modern concept of logic explains neither what ‘true’ means nor how exactly a procedure looks which enables the classification of an expression as ‘being true’. Logic works with the minimalist assumption that the ‘user of logic’ is using statements which he assumes to be ‘true’, independent of how this classification came into being. This frees the user of logic from the cumbersome process of clarifying the meaning and the existence of something which makes a statement ‘true’; but on the other hand the user of modern logic has no real control over whether his ‘concept of derivation’ makes any sense in a real world from which observation statements are generated claiming to be ‘empirically true’, and whether the relationships between these observational statements are appropriately ‘represented’ by the formal derivation concept. Until today there exists no ‘meta-theory’ which explains the relationship between the derivation concepts of formal logic (there are many such concepts!) and the ‘dynamics of real events’.

Thus, if Popper mentions formal logic as a tool for handling the assumed true statements of a theory, it is not really clear whether such a formal logical derivation is really appropriate to explain the ‘relationships between assumed true statements’ without knowing which kind of reality is ‘designated’/‘referred to’ by such statements and their relationships to each other.

(Formalized) Theory and Logic

In his paper Popper does not explain very much what he concretely means by a (formalized) theory. Today there exist many different proposals for formalized theories usable as ‘empirical theories’, but there is no commonly agreed final ‘template’ of a ‘formal empirical theory’.

Nevertheless we need some minimal conception to be able to discuss some properties of a theory more concretely. I will address this problem in another post, accompanied by concrete applications.

COMMENTS

[1] Karl R. Popper, Conjectural Knowledge: My Solution of the Problem of Induction, in: [2], pp. 1-31

[2] Karl R. Popper, Objective Knowledge. An Evolutionary Approach, Oxford University Press, London, 1972 (reprint with corrections 1973)

[3] In our everyday use of ‘normal’ language it is the normal case that a statement S like ‘There is a cup on the table’ can be interpreted in many different ways, depending on which concrete thing (= proposition B of the above examples) called a ‘cup’ or a ‘table’ can be observed.

[4] F. Suppe (Ed.), The Structure of Scientific Theories, University of Illinois Press, Urbana, 2nd edition, 1979

[5] Gerd Doeben-Henisch, 2022,(SPÄTER) POPPER – WISSENSCHAFT – PHILOSOPHIE – OKSIMO-DISKURSRAUM, in: eJournal: Philosophie Jetzt – Menschenbild, ISSN 2365-5062, 22.-23.Februar 2022,
URL: https://www.cognitiveagent.org/2022/02/22/popper-wissenschaft-philosophie-oksimo-paradigma/

[6] William Kneale and Martha Kneale, The Development of Logic, Oxford University Press, Oxford, 1962; with several corrections and reprints 1986

[7] Jean Dieudonné, Geschichte der Mathematik 1700-1900, Friedrich Vieweg & Sohn, Braunschweig – Wiesbaden, 1985 (from the French edition “Abrégé d’histoire des mathématiques 1700-1900”, Hermann, Paris, 1978)

[8] Philip J. Davis & Reuben Hersh, The Mathematical Experience, Houghton Mifflin Company, Boston, 1981

[9] Nicolas Bourbaki, Elements of Mathematics. Theory of Sets, Springer-Verlag, Berlin, 1968

[10] Wolfgang Balzer, C. Ulises Moulines, Joseph D. Sneed, An Architectonic for Science. The Structuralist Program, D. Reidel Publ. Company, Dordrecht – Boston – Lancaster – Tokyo, 1987

[11] The usage of the terms ‘expression’, ‘proposition’, and ‘statement’ in this text is as follows: An ‘expression‘ is a string of signs from some alphabet A which is accepted as a ‘well-formed expression’ of some language L. A ‘statement‘ is an utterance of some actor using expressions of the language L to talk ‘about’ some ‘experience’ (from the world of bodies or from his consciousness), which is understood as the ‘meaning‘ of the statement. The relationship between the expressions of the statement and the meaning is located ‘in the actor’ and has been ‘learned’ by interactions with the world and with himself. This hypothetical relationship is here called the ‘meaning function φ’. A ‘proposition‘ is (i) the inner construct of the meaning of a statement (here called the ‘intended proposition’) and (ii) that part of the experience which is correlated with the inner construct of the stated meaning (here called the ‘occurring proposition’). The special relationship between the intended proposition and the occurring proposition is often expressed as ‘referring to’ or ‘designating’. A statement is said to ‘hold’, to be ‘true’, or to ‘be the case’ if there exists an occurring proposition which is ‘similar enough’ to the intended proposition of the statement. If such an occurring proposition is lacking, then the designation of the statement is ‘undefined’ or ‘non-confirming’ the expectation.

Follow-up Post

For a follow-up post see here.

OKSIMO MEETS POPPER. The Oksimo Theory Paradigm

eJournal: uffmm.org
ISSN 2567-6458, 2.April – 2.April 2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text is part of a philosophy of science analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive posts dedicated to the HMI analysis for this software.

THE OKSIMO THEORY PARADIGM

Figure 1: The Oksimo Theory Paradigm

The following text is a short illustration of how the general theory concept extracted from Popper’s text can be applied to the oksimo simulation software concept.

The starting point is the meta-theoretical schema as follows:

MT=<S, A[μ], E, L, AX, ⊢, ET, E+, E-, true, false, contradiction, inconsistent>

In the oksimo case we also have a given empirical context S, a non-empty set of human actors A[μ] with a built-in meaning function for the expressions E of some language L, some axioms AX as a subset of the expressions E, an inference concept ⊢, and all the other concepts.

The human actors A[μ] can write some documents with the expressions E of language L. In one document S_U they can write down some universal facts which they believe to be true (e.g. ‘Birds can fly’). In another document S_E they can write down some empirical facts from the given situation S, like ‘There is something named James. James is a bird’. And somehow they wish that James should be able to fly, thus they write down a vision text S_V with ‘James can fly’.

The interesting question is whether it is possible to generate a situation S_E.i in the future, which includes the fact ‘James can fly’.

With the knowledge already given they can build the change rule: IF it is valid that {Birds can fly. James is a bird} THEN with probability π = 1 add the expression Eplus = {‘James can fly’} to the actual situation S_E.i; Eminus = {}. This rule is then an element of the set of change rules X.

The simulator applies the rules in X according to the schema S’ = S – Eminus + Eplus.

Because we have S = S_U + S_E we get:

S’ = {Birds can fly. Something is named James. James is a bird.} – Eminus + Eplus

S’ = {Birds can fly. Something is named James. James is a bird.} – {} + {James can fly}

S’ = {Birds can fly. Something is named James. James is a bird. James can fly}

With regard to the vision which is used for evaluation one can state additionally:

|{James can fly} ∩ {Birds can fly. Something is named James. James is a bird. James can fly}| = 1 ≥ 1

Thus the goal has been reached with 1, meaning 100%.
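The simulation step and the goal evaluation just walked through can be sketched in a few lines. This is only the present author’s minimal reading of the schema S’ = S – Eminus + Eplus, with rules and the goal measure simplified to sets of sentences; all function names are illustrative, not part of the oksimo software.

```python
# Sketch of one oksimo-style simulation step and the goal evaluation.
# A rule is (condition, pi, Eplus, Eminus): if the condition holds in
# the situation, the rule fires with probability pi.
import random

def apply_rules(situation: set, rules: list) -> set:
    """One step: for each firing rule compute S' = S - Eminus + Eplus."""
    new_situation = set(situation)
    for condition, pi, eplus, eminus in rules:
        if condition <= situation and random.random() <= pi:
            new_situation = (new_situation - eminus) | eplus
    return new_situation

def goal_reached(vision: set, situation: set) -> float:
    """Share of vision statements contained in the situation."""
    return len(vision & situation) / len(vision)

S = {"Birds can fly", "Something is named James", "James is a bird"}
X = [({"Birds can fly", "James is a bird"}, 1.0, {"James can fly"}, set())]
S1 = apply_rules(S, X)
print(goal_reached({"James can fly"}, S1))  # 1.0
```

With π = 1 the rule fires deterministically, so the vision {‘James can fly’} is fully contained in S’ and the goal measure is 1, i.e. 100%.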

THE ROLE OF MEANING

What makes a certain difference between classical concepts of an empirical theory and the oksimo paradigm is the role of meaning. While the classical empirical theory concept uses formal (mathematical) languages for its descriptions, with the associated (nearly unsolvable) problem of how to relate these formal concepts to the intended empirical world, the oksimo paradigm assumes the opposite: the starting point is always ordinary language as the basic language, which on demand can be extended by special expressions (like e.g. set-theoretical expressions, numbers, etc.).

Furthermore, the oksimo paradigm assumes that the human actors with their built-in meaning function are nearly always able to decide whether an expression e of the used expressions E of the ordinary language L matches certain properties of the given situation S. Thus the human actors are those who have the authority to decide, by their meaning, whether some expression is actually true or not.

The same holds for possible goals (visions) and possible inference rules (= change rules). Whether some consequence Y shall happen if some condition X is satisfied by a given actual situation S can only be decided by the human actors. There is no other knowledge available than that which is in the heads of the human actors.[1] This knowledge can be narrow, it can even be wrong, but human actors can only decide with the knowledge that is available to them.

If they use change rules (= inference rules) based on their knowledge and derive some follow-up situation as a theorem, then it can happen that there exists no empirical situation S matching the theorem. This would be an undefined truth case. If the theorem t were a contradiction to the given situation S, then it would be clear that the theory is inconsistent and therefore something seems to be wrong. Another case could be that the theorem t matches a situation. This would confirm the belief in the theory.

COMMENTS

[1] Well-known knowledge tools are, since long, libraries and, since not so long, databases. The expressions stored there can only be of use (i) if a human actor knows about them and (ii) knows how to use them. As the amount of stored expressions increases, the portion of expressions that can be cognitively processed by human actors decreases. This decrease in the usable portion can be used as a measure of negative complexity, which indicates a growing deterioration of the human knowledge space. The idea that certain kinds of algorithms can analyze these growing amounts of expressions instead of the human actors themselves is only constructive if the human actor can use the results of these computations within his knowledge space. For general reasons this possibility is very small, and with increasing negative complexity it is declining.


OKSIMO MEETS POPPER. Popper’s Position

eJournal: uffmm.org
ISSN 2567-6458, 31.March – 31.March 2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text is part of a philosophy of science analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive posts dedicated to the HMI analysis for this software.

POPPER’S POSITION IN CHAPTERS 1-17

In my reading of the chapters 1-17 of Popper’s The Logic of Scientific Discovery [1] I see the following three main concepts which are interrelated: (i) the concept of a scientific theory, (ii) the point of view of a meta-theory about scientific theories, and (iii) possible empirical interpretations of scientific theories.

Scientific Theory

A scientific theory is, according to Popper, a collection of universal statements AX, accompanied by a concept of logical inference ⊢, which allows the deduction of a certain theorem t if one makes some additional concrete assumptions H.

Example: Theory T1 = <AX1, ⊢>

AX1 = {Birds can fly}

H1 = {Peter is a bird}

⊢: Peter can fly

Because there exists a concrete object which is classified as a bird, and this concrete bird with the name ‘Peter’ can fly, one can infer that the universal statement is verified by this concrete bird. But the question remains open whether all observable concrete objects classifiable as birds can fly.

One could continue with observations of several hundred concrete birds, but according to Popper this would not prove the theory T1 completely true. Such a procedure can only support a numerical universality, understood as a conjunction of finitely many observations about concrete birds like ‘Peter can fly’ & ‘Mary can fly’ & … & ‘AH2 can fly’. (cf. p.62)

The only procedure applicable to a universal theory according to Popper is to falsify the theory by only one observation like ‘Doxy is a bird’ and ‘Doxy cannot fly’. Then one can construct the following inference:

AX1 = {Birds can fly}

H2 = {Doxy is a bird, Doxy cannot fly}

⊢: ‘Doxy can fly’ & ~’Doxy can fly’

If a statement A can be inferred and simultaneously its negation ~A, then this is called a logical contradiction:

{AX1, H2} ⊢ ‘Doxy can fly’ & ~’Doxy can fly’

In this case the set {AX1, H2} is called inconsistent.

If a set of statements is classified as inconsistent, then one can derive everything from this set. In this case one can no longer distinguish between true and false statements.
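The claim that an inconsistent set yields everything can be made concrete with a small sketch by the present author (not from Popper’s text). Propositions are simplified to atomic literals with ‘~’ as negation: a set of premises entails a conclusion iff every truth assignment satisfying the premises also satisfies the conclusion; if no assignment satisfies the premises, this holds vacuously for any conclusion.

```python
# Illustration of 'ex falso quodlibet' by brute-force truth tables.
# Literals are strings; a leading '~' marks negation.
from itertools import product

def satisfies(assignment: dict, literal: str) -> bool:
    if literal.startswith("~"):
        return not assignment[literal[1:]]
    return assignment[literal]

def entails(premises: list, conclusion: str) -> bool:
    """True iff every assignment satisfying all premises satisfies
    the conclusion; vacuously true for unsatisfiable premises."""
    atoms = sorted({l.lstrip("~") for l in premises + [conclusion]})
    for values in product([True, False], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        if all(satisfies(assignment, p) for p in premises):
            if not satisfies(assignment, conclusion):
                return False
    return True

inconsistent = ["Doxy can fly", "~Doxy can fly"]
print(entails(inconsistent, "The moon is cheese"))  # True: anything follows
```

A consistent set, by contrast, still separates true from false conclusions, which is exactly the ability that falsification destroys.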

Thus, while an increase in the number of confirmed observations can only increase the trust in the axioms of a scientific theory T without enabling an absolute proof, a falsification of a theory T can destroy the ability of this theory to distinguish between true and false statements.

Another idea associated with this structure of a scientific theory is that the universal statements, using universal concepts, are strictly speaking speculative ideas which deserve some faith that these concepts will prove themselves every time one tries. (cf. p.33, 63)

Meta Theory, Logic of Scientific Discovery, Philosophy of Science

Talking about scientific theories has at least two aspects: scientific theories as objects and those who talk about these objects.

Those who talk about them are usually philosophers of science, who are only a special kind of philosopher, e.g. a person like Popper.

Reading Popper’s text one can identify the following elements which seem important for describing scientific theories in a broader framework:

A scientific theory from the point of view of philosophy of science represents a structure like the following one (minimal version):

MT=<S, A[μ], E, L, AX, ⊢, ET, E+, E-, true, false, contradiction, inconsistent>

In a shared empirical situation S there are some human actors A as experts producing expressions E of some language L. Based on their built-in adaptive meaning function μ, the human actors A can relate properties of the situation S to expressions E of L. Those expressions E which are considered observable and classified as true are called true expressions E+; others are called false expressions E-. Both sets are subsets of E: E+ ⊂ E and E- ⊂ E. Additionally the experts can define some special set of expressions called axioms AX, universal statements which allow the logical derivation (⊢) of expressions called theorems ET of the theory T, which are called logically true. If one combines the set of axioms AX with some set of empirically true expressions E+ as {AX, E+}, then one can logically derive either only expressions which are logically true as well as empirically true, or logically true expressions which are empirically true and empirically false at the same time, see the example from the preceding paragraph:

{AX1, H2} ⊢ ‘Doxy can fly’ & ~’Doxy can fly’

Such a case of a logically derived contradiction A and ~A tells us that the set of axioms AX, unified with the empirically true expressions and confronted with the known true empirical expressions, is becoming inconsistent: the axioms AX unified with true empirical expressions can no longer distinguish between true and false expressions.

Popper gives some general requirements for the axioms of a theory (cf. p.71):

  1. Axioms must be free from contradiction.
  2. The axioms must be independent, i.e. they must not contain any axiom deducible from the remaining axioms.
  3. The axioms should be sufficient for the deduction of all statements belonging to the theory which is to be axiomatized.

While requirements (1) and (2) are purely logical and can be proved directly, requirement (3) is different: to know whether the theory covers all statements intended by the experts as the subject area presupposes that all aspects of the empirical environment are already known. In the case of true empirical theories this does not seem plausible. Rather we have to assume an open process which generates some hypothetical universal expressions which ideally will not be falsified; but if they are, then the theory has to be adapted to the new insights.

Empirical Interpretation(s)

Popper assumes that the universal statements of scientific theories are linguistic representations, and this means they are systems of signs or symbols. (cf. p.60) Expressions as such have no meaning. Meaning comes into play only if the human actors use their built-in meaning function and set up a coordinated meaning function which allows all participating experts to map properties of the empirical situation S into the used expressions as E+ (expressions classified as being actually true), or E- (expressions classified as being actually false), or AX (expressions with an abstract meaning space which can become true or false depending on the activated meaning function).

Examples:

  1. Two human actors in a situation S agree about the fact that there is ‘something’ which they classify as a ‘bird’. Thus someone could say ‘There is something which is a bird’ or ‘There is some bird’ or ‘There is a bird’. If there are two somethings which are ‘understood’ as being birds, then they could say ‘There are two birds’, or ‘There is a blue bird’ (if the one has the colour ‘blue’) and ‘There is a red bird’, or ‘There are two birds. The one is blue and the other is red’. This shows that human actors can relate their ‘concrete perceptions’ to more abstract concepts and can map these concepts into expressions. According to Popper, in this ‘bottom-up’ way only numerical universal concepts can be constructed. But logically there are only two cases: concrete (one) or abstract (more than one). To say that there is a ‘something’ or that there is a ‘bird’ establishes a general concept which is independent of the number of its possible instances.
  2. These concrete somethings, each classified as a ‘bird’, can ‘move’ from one position to another by ‘walking’ or by ‘flying’. While ‘walking’ they change their position connected to the ‘ground’, while during ‘flying’ they ‘go up in the air’. If a human actor throws a stone up in the air, the stone will come back to the ground. A bird which goes up in the air can stay there and move around in the air for a long while. Thus ‘flying’ is different from ‘throwing something’ up in the air.
  3. The expression ‘A bird can fly’, understood as an expression which can be connected to the daily experience of bird-objects moving around in the air, can be empirically interpreted, but only if there exists such a mapping, called a meaning function. Without a meaning function the expression ‘A bird can fly’ has no meaning as such.
  4. Other expressions like ‘X can fly’ or ‘A bird can Y’ or ‘Y(X)’ have the same fate: without a meaning function they have no meaning, but associated with a meaning function they can be interpreted. For instance, saying that the form of the expression ‘Y(X)’ shall be interpreted as ‘Predicate(Object)’, and that a possible ‘instance’ for a predicate could be ‘Can Fly’ and for an object ‘a bird’, then we get ‘Can Fly(a Bird)’, translated as ‘The object ‘a Bird’ has the property ‘can fly’’, or shortly ‘A Bird can fly’. This would usually be used as a possible candidate for the daily meaning function which relates this expression to those somethings which can move up in the air.
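Example (4) can be illustrated with a toy sketch by the present author (all names and the situation encoding are invented): an expression of the form ‘Predicate(Object)’ only receives a truth value once a meaning function maps it onto an agreed check against the observed situation.

```python
# Toy meaning function: map an expression 'Predicate(Object)' onto a
# check against an observed situation, encoded here as (object, predicate)
# pairs. Without such a mapping the expression has no meaning.
situation = {("bird-1", "can_fly"), ("stone-1", "falls")}

def meaning_function(expression: str) -> bool:
    """Interpret 'Y(X)' as: does object X have property Y in the
    observed situation?"""
    predicate, obj = expression.rstrip(")").split("(")
    return (obj, predicate) in situation

print(meaning_function("can_fly(bird-1)"))  # True
```

The expression string itself is only a sign sequence; the dictionary of observed pairs plays the role of the agreed coordination between experts.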

Axioms and Empirical Interpretations

The basic idea of a system of axioms AX is, according to Popper, that the axioms as universal expressions represent a system of equations where the general terms should be able to be substituted by certain values. The set of admissible values is different from the set of inadmissible values. The relation between those values which can be substituted for the terms is called satisfaction: the values satisfy the terms with regard to the relations! And Popper introduces the term ‘model‘ for that set of admissible terms which can satisfy the equations. (cf. p.72f)

But Popper has difficulties with an axiomatic system interpreted as a system of equations, since it “cannot be refuted by the falsification of its consequences; for these too must be analytic”. (cf. p.73) His main problem with axioms is that “the concepts which are to be used in the axiomatic system should be universal names, which cannot be defined by empirical indications, pointing, etc. They can be defined if at all only explicitly, with the help of other universal names; otherwise they can only be left undefined. That some universal names should remain undefined is therefore quite unavoidable; and herein lies the difficulty…” (p.74)

On the other hand Popper knows that “…it is usually possible for the primitive concepts of an axiomatic system such as geometry to be correlated with, or interpreted by, the concepts of another system, e.g. physics…. In such cases it may be possible to define the fundamental concepts of the new system with the help of concepts which were originally used in some of the old systems.” (p.75)

But the translation of the expressions of one system (geometry) into the expressions of another system (physics) does not necessarily solve his problem of the non-empirical character of universal terms. Physics in particular also uses universal or abstract terms which as such have no meaning. To verify or falsify physical theories one has to show how the abstract terms of physics can be related to observable matters which can be decided to be true or not.

Thus the argument leads back to Popper’s primary problem: universal names cannot be directly interpreted in an empirically decidable way.

As the preceding examples (1) – (4) show, for human actors it is no fundamental problem to relate any kind of abstract expression to some concrete real matters. The solution lies in the fact that expressions E of some language L are never used in isolation! The usage of expressions is always bound to human actors who use them as part of a language L, which comprises, together with the set of possible expressions E, also the built-in meaning function μ that can map expressions into internal structures IS related to perceptions of the surrounding empirical situation S. Although these internal structures are processed internally in highly complex ways and are — as we know today — no 1-to-1 mappings of the surrounding empirical situation S, they are related to S, and therefore every kind of expression — even those with so-called abstract or universal concepts — can be mapped onto something real if the human actors agree about such mappings!

Example:

Let us have a look at another example.

Take the system of axioms AX given by the following schema: AX = {a+b=c}. This schema as such has no clear meaning. But if experts interpret it as an operation ‘+’ applied to some arguments as part of a mathematical theory, then one can construct a simple (partial) model m as follows: m = {<1,2,3>, <2,3,5>}. The values are again given as a set of symbols which as such need not have a meaning, but in common usage they will be interpreted as numbers which can satisfy the general equation. Under this secondary interpretation m becomes a logically true (partial) model of the axiom AX, whose empirical meaning is still unclear.
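The notion of a tuple of values ‘satisfying’ the axiom schema can be made concrete with a small check. This is an illustrative sketch only, assuming the secondary interpretation of ‘+’ as addition over numbers; it is not Popper’s own formalism.

```python
# Illustrative sketch: interpret the axiom schema 'a+b=c' as addition
# over numbers and test which value tuples satisfy it.

def satisfies(a: int, b: int, c: int) -> bool:
    """A tuple <a,b,c> satisfies the axiom if a + b equals c."""
    return a + b == c

# The partial model m from the text: every tuple satisfies the axiom.
m = [(1, 2, 3), (2, 3, 5)]
print(all(satisfies(a, b, c) for a, b, c in m))  # True

# A tuple outside the model does not satisfy the equation.
print(satisfies(2, 2, 5))  # False
```

Note that the check says only that m is a *logically* admissible model; whether the symbols refer to anything empirical remains, as the text says, unclear.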

It is conceivable to use this formalism to describe empirical facts, such as a group of humans collecting objects. Different people bring objects; the individual contributions are recorded on a sheet of paper, and at the same time the objects are put into a box. From time to time someone looks into the box and counts the objects. If it has been noted that A brought 1 egg and B brought 2 eggs, then according to the theory there should be 3 eggs in the box. But perhaps only 2 can be found. Then there is a difference between the logically derived forecast of the theory, 1+2 = 3, and the empirically measured value, 1+2 = 2. If one defined every measurement a+b=c’ with c’ ≠ c as a contradiction to the theoretically given a+b=c, then with ‘1+2 = 3′ & ~’1+2 = 3’ we would have a logically derived contradiction which renders the assumed system inconsistent.

But in reality the usual reaction of the counting person would not be to declare the system inconsistent, but rather to suggest that some unknown actor has, against the agreed rules, taken one egg from the box. To prove this suggestion he would have to find this unknown actor and show that he has taken the egg … perhaps not a simple task … And what will the next authority do: will it believe the suggestion of the counting person, or will it blame the counter, saying that perhaps he himself has taken the missing egg? But would this make sense? Why should the counter write down how many eggs were delivered, thereby making the difference visible? …
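The egg-counting scenario reduces to comparing the theory’s forecast with a measured value. The following toy sketch (the function names `forecast` and `check` are mine, introduced for illustration) shows the two possible outcomes; the comment in the deviating branch records the practical reaction described above.

```python
# Toy sketch of the egg example: the theory forecasts a + b eggs,
# the measurement reports what is actually counted in the box.

def forecast(a: int, b: int) -> int:
    """Logically derived prediction of the theory: a + b = c."""
    return a + b

def check(a: int, b: int, measured: int) -> str:
    """Compare the theoretical forecast with the measured value."""
    predicted = forecast(a, b)
    if measured == predicted:
        return "consistent"
    # In practice one would not declare the theory inconsistent here,
    # but look for a disturbing factor (e.g. a removed egg).
    return f"deviation: predicted {predicted}, measured {measured}"

print(check(1, 2, 3))  # consistent
print(check(1, 2, 2))  # deviation: predicted 3, measured 2
```

The point of the sketch is that a deviation alone does not decide between ‘the system is inconsistent’ and ‘some unrecorded event disturbed the setup’; that decision lies outside the formalism.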

Thus interpreting some abstract expression with regard to some observable reality is not a fundamental problem, but it can turn out to be unsolvable for purely practical reasons, leaving questions of empirical soundness open.

SOURCES

[1] Karl Popper, The Logic of Scientific Discovery. First published 1935 in German as Logik der Forschung; English edition 1959 by Basic Books, New York. (More editions have been published later; I am using the eBook version of Routledge (2002).)