Category Archives: measurement

THE COLLECTIVE MAN-MACHINE INTELLIGENCE PARADIGM WITHIN SUSTAINABLE DEVELOPMENT

eJournal: uffmm.org
ISSN 2567-6458, March 23, 2023 – April 4, 2023
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text starts the topic of the Collective Man-Machine Intelligence Paradigm within Sustainable Development.

OUTLINE

For most readers the diverse content of this blog is hard to understand as belonging to one coherent picture. But indeed, there exists one coherent picture, and this is its first publication.

FIGURE : This figure outlines for the first time the intended view of the new ‘Collective Man-Machine Intelligence’ paradigm within a certain view of ‘Sustainable Development’. The different kinds of algorithms mentioned are arbitrary; only the ‘oksimo.R Software’ has a general meaning, pointing to a new type of software which is at the same time editor and simulator of a real (sustainable) empirical theory, and which can also be used for gaming.

Looking deeper into this figure you can perhaps get a rough idea of which kinds of questions had to be answered before this unified view could be formulated. Every subset of this view is backed up by complete formal specifications and even formal theories. Telling the story ‘afterwards’ is often ‘simple’, but finding all the different parts of the ‘overall picture’ one after the other is rather tedious. In the end I needed about 50 years of research …

In the next weeks I will add some more comments. As always, many ‘threads’ are working in parallel, and I have to complete some of the others first.

The Everyday Application Scenario

(The following text is an English translation from an originally German text partially generated with the www.DeepL.com/Translator (free version))

Given a meta-theoretical concept of a ‘sustainable empirical theory (SET)’ accompanied by the meta-theoretical concept of ‘collective intelligence (CI)’, it isn’t obvious how these components work together in an everyday scenario. The following figure gives a rough outline of the framework which, probably, has to be assumed.

FIGURE : Outline of the everyday scenario applying a sustainable empirical theory (SET) together with ‘collective intelligence (CI)’. For more explanations see the text.

CONCEPTS AND PROCESSES

Abstract (meta-theoretical) concepts alone are not sufficient to change the real world. There always has to be some ‘translation’ of abstract meanings into concrete, real processes which ‘work in everyday real environments’. Thus every ‘concept’ needs a bundle of ‘processes’ associated with the meaning of the abstract concept which are capable of bringing the abstract meaning ‘to life’.

Theory Concept

A structural concept describes, e.g. on a meta-level, what a ‘sustainable empirical theory’ is and compares this concept with the concepts ‘game’ and ‘theater play’. Since it can quickly become very time-consuming to write down complete theories by hand, it can be very helpful to have software (there is one under the name ‘oksimo.R’) that supports citizens in writing down the ‘text of a theory’ together with other citizens in ‘normal language’ and also in ‘simulating’ it as needed; furthermore, it would be good to be able to ‘play’ a theory interactively (and ultimately much more).

Having the text of a theory, trying it out and developing it further is one thing. But the way to a theory can be tedious and long. It requires a great deal of ‘experience’, ‘knowledge’ and multiple forms of what is usually very vaguely called ‘intelligence’.

Concept Collective Intelligence

Intelligence typically occurs in the context of ‘biological systems’, in ‘humans’ and ‘non-humans’. More recently there are also first, vague examples of intelligence being realized by ‘machines’. In the end, all these different phenomena, which are roughly summarized under the term ‘intelligence’, form a pattern which, from a certain point of view, could be considered a ‘collective intelligence’. There are many prominent examples of this in the field of ‘non-human biological systems’, and especially in ‘human biological systems’ with their ‘coordinated behavior’ in connection with their ‘symbolic languages’.

The great challenge of the future is to bring together these different ‘types of individual and collective intelligence’ into a real constructive-collective intelligence.

Concept Empirical Data

The most general form of a language is the so-called ‘normal language’ or ‘everyday language’. It contains in one concept everything we know today about languages.

An interesting aspect is the fact that everyday language serves, for each specialized language (logic, mathematics, …), as the ‘meta-language’ on whose basis the specialized language is ‘introduced’.

The possible ‘elements of meaning and structures of meaning’, out of which the everyday language structures have been formed, originate from the space of everyday life and its world of events.

While the normal perceptual processes, in coordination among the different speaker-listeners, can already provide a lot of valuable descriptions of everyday properties and processes, specialized observation processes in the form of ‘standardized measurement processes’ can considerably increase the accuracy of descriptions. The central point is that all participating speaker-listeners interested in a ‘certain topic’ (physics, chemistry, spatial relations, game moves, …) agree on ‘description procedures’ for all ‘important properties’, which everyone performs in the same transparent and reproducible way.

Processes in Everyday Life

As pointed out above, whatever conceptual structures may have been agreed upon, they can only ‘come into effect’ (‘come to life’) if there are enough people who are willing to live all those ‘processes’ concretely within the framework of everyday life. This requires space, time, the necessary resources and a sufficiently strong and persistent ‘motivation’ to live these processes every day anew.

Thus, in addition to humans, animals and plants and their needs, there is now a huge amount of artificial structures (houses, roads, machines,…), each of which also makes certain demands on its environment. Knowing these requirements and ‘coordinating/managing’ them in such a way that they enable positive ‘synergies’ is a huge challenge, which – according to the impression in 2023 – often overtaxes mankind.

FORECASTING – PREDICTION: What?

eJournal: uffmm.org
ISSN 2567-6458, August 19, 2022 – August 25, 2022, 14:26 h
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text is part of the subject COMMON SCIENCE as Sustainable Applied Empirical Theory, besides ENGINEERING, in a SOCIETY. It is a preliminary version, which is intended to become part of a book.

FORECASTING – PREDICTION: What?

optimal prediction

In the introduction of the main text it has been underlined that within a sustainable empirical theory it is not only necessary to widen the scope with a maximum of diversity, but at the same time also necessary to enable optimal predictions about the ‘possible states of a possible future’.

the meaning machinery

In the text after this introduction it has been outlined that between human actors the most powerful tool for the clarification of the given situation, the NOW, is everyday language, with a ‘built-in’ potential in every human actor for infinite meanings. This individual internal meaning space, as part of the individual cognitive structure, is equipped with an ‘abstract – concrete’ meaning structure, with the ability to distinguish between ‘true’ and ‘not true’, and furthermore with the ability to ‘play around’ with meanings in a ‘new way’.

COORDINATION

Thus every human actor can generate within his cognitive dimension states or situations accompanied by potential new processes leading to new states. To share these ‘internal meanings’ with other brains, to ‘compare’ properties of one’s ‘own’ thinking with properties of the thinking of ‘others’, the only chance is to communicate with other human actors, mediated by the shared everyday language. If this communication is successful, the possibility arises to ‘coordinate’ one’s own thinking about states and possible actions with others. A ‘joint undertaking’ becomes possible.

shared thinking

To simplify the process of communication it is possible that a human actor does not ‘wait’ until some point in the future to communicate the content of his thinking; even ‘while the thinking process is going on’ a human actor can ‘translate his thinking’ into language expressions which ‘fit the processed meanings’ as well as possible. Another human actor can then observe this language activity, try to ‘understand’, and try to ‘respond’ to the observations with his own language expressions. Such an ‘interplay’ of expressions in the context of multiple thinking processes can directly show either a ‘congruence’ or a ‘difference’. This can help each participant in the communication to clarify his own thinking. At the same time an exchange of language expressions associated with possible meanings inside the different brains can ‘stimulate’ different kinds of memory and thinking processes, and through this the space of shared meanings can be ‘enlarged’.

phenomenal space 1 and 2

Human actors, with their ability to construct meaning spaces and to share parts of a meaning space by language communication, are embedded with their bodies in a ‘body-external environment’, usually called ‘external world’ or ‘nature’, associated with the property of being ‘real’.

Equipped with a body with multiple different kinds of ‘sensors’, some of the environmental properties can stimulate these sensors, which in turn send neuronal signals to the embedded brain. The first stage of the ‘processing of sensor signals’ is usually called ‘perception’. Perception is not a passive 1-to-1 mapping of signals into the brain but already a highly sophisticated processing in which the ‘raw signals’ of the sensors (which already do some processing of their own) are ‘transformed’ into more complex signals which the human actor perceives as ‘features’, ‘properties’, ‘figures’, ‘patterns’ etc., usually called ‘phenomena’. All together they constitute the ‘phenomenal space’. In ‘naive thinking’ this phenomenal space is taken ‘directly as the external world’. During life a human actor can learn (this need not happen!) that the ‘phenomenal space’ is a ‘derived space’, triggered by an ‘assumed outside world’ whose properties ‘cause’ the sensors to react in a certain way. But the ‘actual nature’ of the outside world is not really known. Let us call the unknown outside world of properties ‘phenomenal space 1’ and the derived phenomenal space inside the body-brain ‘phenomenal space 2’.

TIMELY ORDERING

Due to the availability of the phenomenal space 2 the different human actors can try to ‘explore’ the ‘unknown assumed real world’ based on the available phenomena.

If one takes a wider look at the working of the brain of a human actor, one can detect that the brain’s processing of the phenomenal space uses additional mechanisms:

  1. The phenomenal space is organized in ‘time slices’ of a certain fixed duration. The ‘content’ of a time slice during the time window (t,t’) will be ‘overwritten’ during the next time slice (t’,t”) by those phenomena which are then ‘actual’, which then constitute the NOW. The phenomena from the preceding time window (t,t’) can become ‘stored’ in other parts of the brain usually called ‘memory’.
  2. The ‘storing’ of phenomena in the parts of the brain called ‘memory’ happens in a highly sophisticated way, enabling ‘abstract structures’ with an ‘interface’ for ‘concrete properties’ typical of the phenomenal space, which can become associated with other ‘content’ of the memory.
  3. It is an astonishing ability of the memory to enable an ‘ordering’ of memory contents related to situations as having occurred ‘before’ or ‘after’ some other situation. Therefore the ‘content of the memory’ can represent collections of ‘stored NOWs’, which can be ‘ordered’ into a ‘sequence of NOWs’; thereby the ‘dimension of time’ appears as a ‘framing property’ of ‘remembered phenomena’.
  4. Based on this capability to organize remembered phenomena in ‘sequences of states’ representing a so-called ‘timely order’, the brain can ‘operate’ on such sequences in various ways. It can e.g. ‘compare’ two states in such a sequence as to whether they are ‘the same’ or ‘different’. A difference points to a ‘change’ in the phenomenal space. Longer sequences, even including changes, can perhaps show up as ‘repetitions’ of ‘earlier’ sequences. Such ‘repeating sequences’ can represent a ‘pattern’ pointing to some ‘hidden factors’ responsible for it (a toy sketch follows this list).
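
As an illustration of points 3 and 4, a toy sketch in Python (all data invented): stored NOWs as an ordered list of property sets, with a simple change detection and repetition test.

```python
# A toy sketch (all data invented): remembered NOWs as an ordered list of
# property sets. Comparing consecutive states reveals 'changes'; a recurring
# subsequence hints at a 'pattern' behind the observations.
nows = [
    {"light on"}, {"light on", "door open"}, {"door open"},
    {"light on"}, {"light on", "door open"}, {"door open"},
]

def changes(seq):
    """(appeared, disappeared) between consecutive remembered states."""
    return [(b - a, a - b) for a, b in zip(seq, seq[1:])]

def repeats(seq, n):
    """True if the first n states recur immediately afterwards."""
    return seq[:n] == seq[n:2 * n]

print(changes(nows))
print(repeats(nows, 3))   # True: the same 3-state sequence occurs twice
```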

formal representations [1]

Basic outline of human actor as part of an external world with an internal phenomenal space 2, including a memory and the capability to elaborate cognitive meta-levels using the dimension of time. There is a limited exchange medium between different brains realized by language communication. Formal models are an instrument to represent recognized timely sequences of sets of properties with typical changes.

Based on a rather sophisticated internal processing structure, every human actor has the capability to compose language descriptions which, with the aid of sets of language expressions, can ‘represent’ different kinds of local situations. Every expression can represent some ‘meaning’ which is encoded in every human actor in an individual manner. Such a language encoding can partially become ‘standardized’ by shared language learning in typical everyday living situations. To the extent that language encodings (the assumed meanings) are shared between different human actors, they can use this common meaning space to communicate their experience.

Based on the built-in property of abstract knowledge to have an interface to ‘more concrete’ meanings, which finally can be related to ‘concrete perceptual phenomena’ available in the sensual perceptions, every human actor can ‘check’ whether an actual meaning seems to have an ‘actual correspondence’ to some properties in the ‘real environment’. If this phenomenal setting in phenomenal space 2, with a correspondence to the sensual perceptions, is encoded in a language expression E, then one usually says that the ‘meaning of E’ is true; otherwise not.

Because the perceptual interface to an assumed real world is common to all human actors they can ‘synchronize’ their perceptions by sharing the related encoded language expressions.

If a group of human actors sharing a real situation agrees about a ‘set of language expressions’ in the sense that each expression is assumed to be ‘true’, then one can assume that every expression ‘represents’ some encoded meanings which are related to the shared empirical situation, and therefore the expressions represent ‘properties of the assumed real world’. Such ‘meaning constructions’ can be further ‘supported’ by the usage of ‘standardized procedures’ called ‘measurement procedures’.

Under this assumption one can infer that a ‘change in the realm of real-world properties’ has to be encoded in a ‘new language expression’ associated with the ‘new real-world properties’ and included in the set of expressions describing the actual situation. At the same time it can happen that an expression of the actual set of expressions becomes ‘outdated’ because the properties this expression encoded are gone. Thus the overall ‘dynamics of a set of expressions representing an actual set of real-world properties’ can be realized as follows:

  1. Agree on a first set of expressions as a ‘true’ description of a given set of real-world properties.
  2. After an agreed period of time one has to check whether (i) the encoded meaning of an expression is gone, or (ii) a new real-world property has appeared which seems ‘important’ but is not yet encoded in a language expression of the set. Depending on this check, either (i) delete those expressions which are no longer ‘true’, or (ii) introduce new expressions for the new real-world properties.

In a strictly ‘observational approach’ the human actors are only observing the course of events after some (regular or spontaneous) time span, making their observations (‘measurements’) and comparing these observations with their last ‘true description’ of the actual situation. Following this pattern of behavior they can deduce from the series of true descriptions <D1, D2, …, Dn>, for every pair of descriptions (Di,Di+1), a ‘difference description’ as a ‘rule’ in the following way: (i) IF x is the subset of expressions in Di+1 which are not yet members of the set of expressions in Di, THEN ‘add’ these expressions to the set of expressions in Di. (ii) IF y is the subset of expressions in Di which are no longer members of the set of expressions in Di+1, THEN ‘delete’ these expressions from the set of expressions in Di. (iii) Construct a ‘condition set’ of expressions as a subset of Di which has to be fulfilled to apply (i) and (ii).

Doing this for every pair of descriptions, one gets a set of ‘change rules’ X which can be used to ‘generate’, starting with the first description D0, all the follow-up descriptions simply by ‘applying a change rule Xi‘ to the last generated description.
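
A minimal sketch in Python of this observational approach, assuming descriptions are finite sets of expressions represented as strings; the ‘condition set’ of step (iii) is here simply the whole description Di, the simplest possible choice.

```python
# Descriptions D1..Dn as sets of (true) expressions; all data invented.
D = [
    {"rain", "street wet"},
    {"street wet"},
    {"sun", "street dry"},
]

# (i)-(iii): for each pair (Di, Di+1) record condition, additions, deletions.
rules = []
for d_now, d_next in zip(D, D[1:]):
    rules.append({
        "cond": d_now,            # condition set: here the whole description Di
        "plus": d_next - d_now,   # expressions to add
        "minus": d_now - d_next,  # expressions to delete
    })

# Regenerate the whole series starting from D[0] by applying matching rules.
state, series = D[0], [D[0]]
for r in rules:
    if r["cond"] <= state:        # a rule applies if its condition is a subset
        state = (state - r["minus"]) | r["plus"]
        series.append(state)

assert series == D                # the rules reproduce the observed series
```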

This first, purely observational approach works, but every change rule Xi in this set of change rules X can be very ‘singular’, pointing to a true singularity in the mathematical sense, because there is no ‘common rule’ to predict this singularity.

It would be desirable to ‘dig into the possible hidden factors’ which are responsible for the observed changes, because these would allow one to ‘reduce the number’ of individual change rules in X. But for such a ‘rule compression’ there exists from the outset no usable knowledge. Such a reduction will only become possible if a certain amount of research work is done and the hidden factors are hopefully discovered.

All the change rules which could be found through such observational processes can in the future be re-used to explore possible outcomes for selected situations.

COMMENTS

[1] For the final format of this section I have got important suggestions from René Thom by reading the introduction of his book “Structural Stability and Morphogenesis: An Outline of a General Theory of Models” (1972, 1989). See my review post HERE : https://www.uffmm.org/2022/08/22/rene-thom-structural-stability-and-morphogenesis-an-outline-of-a-general-theory-of-models-original-french-edition-1972-updated-by-the-author-and-translated-into-english-by-d-h-fowler-1989/

POPPER and EMPIRICAL THEORY. A conceptual Experiment


eJournal: uffmm.org
ISSN 2567-6458, March 12, 2022 – March 16, 2022, 11:20 h
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

BLOG-CONTEXT

This post is part of the Philosophy of Science theme which is part of the uffmm blog.

PREFACE

In a preceding post I have outlined the concept of an empirical theory based on a text from Popper 1971. In his article Popper points to a minimal structure of what he calls an empirical theory. A closer investigation of his texts reveals many questions which should be clarified for a more concrete application of his concept of an empirical theory.

In this post I attempt to elaborate the concept of an empirical theory more concretely, from a theoretical point of view as well as from an application point of view.

A Minimal Concept of an Empirical Theory

The figure shows the process of (i) observing phenomena, (ii) representing these in expressions of some language L, (iii) elaborating conjectures as hypothetical relations between different observations, (iv) using an inference concept to deduce some forecasts, and (v) comparing these forecasts with those observations which are possible in an assumed situation.

Empirical Basis

As a starting point as well as a reference for testing, Popper assumes an ’empirical basis’. The question arises what this means.

In the texts from Popper examined so far this is not well described. Thus in this text some ‘assumptions/hypotheses’ will be formulated to describe a framework which should be able to ‘explain’ what an empirical basis is and how it works.

Experts

Those who usually build theories are scientists, experts. For a general concept of an ’empirical theory’ it is assumed here that every citizen is a ‘natural expert’.

Environment

Natural experts are living in ‘natural environments’ as part of the planet earth, as part of the solar system, as part of the whole universe.

Language

Experts ‘cooperate’ by using some ‘common language’. Here the ‘English language’ is used; many hundreds of other languages are possible.

Shared Goal (Changes, Time, Measuring, Successive States)

For cooperation it is necessary to have a ‘shared goal’. A ‘goal’ is an ‘idea’ about a possible state in the ‘future’ which is ‘somehow different’ from the given actual situation. Such a future state can be approached by some ‘process’, a series of possible ‘states’, which usually are characterized by ‘changes’ manifested as ‘differences’ between successive states. The concept of a ‘process’, a ‘sequence of states’, implies some concept of ‘time’, and time needs a concept of ‘measuring time’. ‘Measuring’ basically means to ‘compare something to be measured’ (the target) with ‘some given standard’ (the measuring unit). Thus to measure the height of a body one can compare it with some object called a ‘meter’ and then state that the target (the height of the body) is 1.8 times as large as the given standard (the meter object).

In the case of time it was customary for many thousands of years to use the ‘cycles of the sun’ to define the concept (‘unit’) of a ‘day’ and a ‘night’. Based on this one could ‘count’ the days as one day, two days, etc., and one could introduce further units like a ‘week’ by defining ‘one week compares to seven days’, or ‘one month compares to 30 days’, etc. This reveals that one needs some more concepts like ‘counting’, and with this, implicitly, the concept of a ‘number’ (like ‘1’, ‘2’, …, ’12’, …). Later the measuring of time was delegated to ‘time machines’ (called ‘clocks’) producing ‘time units’ mechanically, and then one could be ‘more precise’. But having more than one clock generates the need to ‘synchronize’ different clocks at different locations. This challenge continues until today. Having a time machine called a ‘clock’ one can define a ‘state’ only by relating the state to an ‘agreed time window’ = (t1,t2), which allows the description of states in a successive timely order: the state in the time window (t1,t2) is ‘before’ the time window (t2,t3). Then one can try to describe the properties of a given natural environment correlated with a certain time window, e.g. saying that the ‘observed’ height of a body in time window w1 was 1.8 m, and in a later time window w6 the height was still 1.8 m. In this case no change could be observed. If one had observed 1.9 m at w6, then a difference occurs when comparing the two successive states.
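
A toy sketch in Python of ‘measuring as comparing with a standard’ and of states tied to agreed time windows (all values invented):

```python
# 'Measuring' = comparing a target with an agreed standard (the unit).
METER = 1.0                  # the agreed standard object
DAY = 1.0                    # the agreed time unit
WEEK = 7 * DAY               # derived unit: 'one week compares to seven days'

def measure(target, standard):
    """The result of a measurement is the ratio of target to standard."""
    return target / standard

print(measure(1.8 * METER, METER))   # 1.8, i.e. '1.8 meters'
print(measure(3 * WEEK, DAY))        # 21.0 days

# Properties observed in successive time windows; a difference means change.
height = {"w1": 1.8, "w6": 1.9}
print(round(height["w6"] - height["w1"], 2))   # 0.1 -> a change occurred
```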

Example: A County

Here we will assume as an example of a natural environment a ‘county’ in Germany called ‘Main-Kinzig-Kreis’ (‘Kreis’ = ‘county’), abbreviated ‘MKK’. We are interested in the ‘number of citizens’ living in this county during a certain time window, here the year 2018 = (1 January 2018, 31 December 2018). According to the statistical office of the state of Hessen, to which the MKK county belongs, the number of citizens in the MKK during 2018 was ‘418.950’ (cf. [2]).

Observing the Number of Citizens

One can ask in which sense the number ‘418.950’ can be understood as an ‘observation statement’. If we understand ‘observation’ as the everyday expression for ‘measuring’, then we are looking for a ‘procedure’ which allows us to ‘produce’ this number ‘418.950’ associated with the unit ‘number of citizens during a year’. As everybody can immediately realize, no single person can simply observe all citizens of that county. To ‘count’ all citizens in the county one would have to ‘travel’ to all places in the county where citizens are living and count every person. Such travelling would need some time; it could easily take more than 40 years working 24 hours a day. Thus this procedure would not work. A different approach could be to find citizens in each of the 24 cities in the MKK [1] to help in this counting procedure. Managed well, with some ‘quality’ ensured for the counting, this could perhaps work. An interesting experiment. Here we ‘believe’ the number of citizens delivered by the statistical office of the state of Hessen [2], while keeping some reservation about how ‘good’ this number really is. Thus our ‘observation statement’ would be: “In the year 2018, 418.950 citizens have been counted in the MKK (according to the information of the statistical office of the state of Hessen).” This observation statement lacks a complete account of the procedure by which this counting really happened.

Concrete and Abstract Words

There are interesting details in this observation statement. We notice words like ‘citizen’ and ‘MKK’. To talk about ‘citizens’ is not to talk about some objects in the direct environment. What we can directly observe are concrete bodies which we have learned to ‘classify’ as ‘humans’, enriched for example with ‘properties’ like ‘man’, ‘woman’, ‘child’, ‘elderly person’, ‘neighbor’ and the like. But to classify someone as a ‘citizen’ requires knowledge about some official procedure of ‘registering as a citizen’ at a municipal administration, recorded in some certified document. Thus the word ‘citizen’ has a ‘meaning’ which needs some ‘concrete procedure to get the needed information’. ‘Citizen’ is therefore not a ‘simple word’ but a ‘more abstract word’ with regard to its associated meaning. The same holds for the word ‘MKK’, short for ‘Main-Kinzig-Kreis’. At first glance ‘MKK’ appears as a ‘name’ for some entity. But this entity cannot be directly observed either. One component of the ‘meaning’ of the name ‘MKK’ is a ‘real geographical region’ whose exact geographic extension has been ‘measured’ by official institutions and marked on an ‘official map’ of the state of Hessen. This region is associated with an official document of the state of Hessen stating that this geographical region has to be understood as a ‘county’ with the name MKK. There exist more official documents defining what is meant by the word ‘county’. Thus the word ‘MKK’ has a rather complex meaning which is not easy to understand and to check for being ‘true’. The author of this post lives in the MKK, and he would not be able to tell all the details of the complete meaning of the name ‘MKK’.

First Lessons Learned

From these first considerations one can learn that we as citizens are living in a natural environment where we use observation statements whose words have potentially rather complex meanings, which to ‘check’ requires a serious amount of clarification.

Conjectures – Hypotheses

Changes

The above text shows that ‘observations as such’ show nothing of interest. Different numbers of citizens in different years carry no ‘message’. But as soon as one arranges the years in a ‘time line’ according to some ‘time model’ the scene changes: if the numbers of two consecutive years are ‘different’, then this ‘difference in numbers’ can be interpreted as a ‘change’ in the environment, but only if one ‘assumes’ that the observed phenomena (the numbers of counted citizens) are associated with some real entities (the citizens) whose ‘quantity’ is ‘represented’ in these numbers.[5]

And again, the ‘difference between consecutive numbers’ in a time line cannot be observed or measured directly. It is a ‘second-order property’ derived from given measurements in time. Such a 2nd-order property presupposes a relationship between different observations: the differences ‘show up’ in the expressions (here numbers), but they are connected back, in the light of the agreed ‘meaning’, to some ‘real entities’ with the property ‘overall quantity’ which can change in the ‘real setting’ of these real entities called ‘citizens’.

In the example of the MKK the statistical office of the state of Hessen computed a difference between two consecutive years which has been represented as a ‘growth factor’ of 0.4%. This means that the number of citizens of the year 2018 will increase until the year 2019 as follows: number-citizens(2019) = number-citizens(2018) + (number-citizens(2018) * growth-factor), i.e. number-citizens(2019) = 418.950 + (418.950 * 0.004) = 418.950 + 1.675,8 = 420.625,8.

Applying change repeatedly

If one could assume that the ‘growth rate’ stays constant through time, then one could apply the growth rate again and again to the actual number of citizens in the MKK every year. This yields the following simple table:

Year    Number        Growth Rate
2018    418.950,00    0,0040
2019    420.625,80
2020    422.308,30
2021    423.997,54
2022    425.693,53
2023    427.396,30

Table: Simplified description of the increase of the number of citizens in the Main-Kinzig county in Germany with an assumed growth rate of 0.4% per year.
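
A minimal Python sketch reproducing the table above under the constant-growth assumption (the printout uses English number formatting, while the table uses German conventions):

```python
# Repeatedly apply the assumed constant growth rate of 0.4% per year.
citizens = 418_950.0
rate = 0.004

for year in range(2018, 2024):
    print(year, f"{citizens:,.2f}")
    citizens += citizens * rate    # next year's number
```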

As we know from reality an assumption of a fixed growth rate for complex dynamic systems is not very probable.

Theory

Continuing the previous considerations, one has to ask how the layout of a ‘complete empirical theory’ would look.

As I commented in the preceding post about Popper’s 1971 article about ‘objective knowledge’, there exists today no single accepted framework for a formalized empirical theory. Therefore I will stay here with a ‘bottom-up’ approach, using elements taken from everyday reasoning.

What we have until now is the following:

  1. Before the beginning of a theory-building process one needs a group of experts, being part of a natural environment and using the same language, who share a common goal which they want to enable.
  2. The natural environment is assumed by the experts to be a ‘process’ of consecutive states in time. The ‘granularity’ of the process depends on the used ‘time model’.
  3. As a starting point they collect a set of statements talking about those aspects of a ‘selected state’ at some time t which they are interested in.
  4. This set of statements describes a set of ‘observable properties’ of the selected state which is understood as a ‘subset’ of the properties of the natural environment.
  5. Every statement is understood by the experts as being ‘true’ in the sense that the ‘known meaning’ of a statement has an ‘observable counterpart’ in the situation which can be ‘confirmed’ by each expert.
  6. For each pair of consecutive states it holds that their sets of statements can be ‘equal’ or can show ‘differences’.
  7. A ‘difference’ between sets of statements can be interpreted as pointing to a ‘change in the real environment’.[5]
  8. Observed differences can be described by special statements called ‘change statements’ or simply ‘rules’.
  9. A change statement has the format: IF a set of statements ST* is a subset of the statements ST of a given state S, THEN, with probability p, a set of statements ST+ will be added to the actual state S and a set of statements ST- will be removed from the statements ST of S. This results in a new succeeding state S* with the representing statements ST – (ST-) + (ST+), depending on the assumed probability p (see the sketch after this list).
  10. The list of change statements is an ‘open set’, according to the assumption that an actual state is only a ‘subset’ of the real environment.
  11. Until now we have an assumed state S, an assumed goal V, and an open set of change statements X.
  12. Applying change statements to a given state S will generate a new state S*. The application of a subset X’ of the open set of change statements X to a given state S will here be called ‘generating a new state by a procedure’. Such a state-generating procedure can be understood as an ‘inference’ (as in logic) or as a ‘simulation’ (as in engineering).[6]
  13. To write this in a more condensed format we can introduce some signs, S,V ⊩∑,X S’, saying: if I have some state S and a goal V, then the simulator ∑ will, according to the change statements X, generate a new state S’. In such a setting the newly generated state S’ can be understood as a ‘theorem’ which has been derived from the set of statements in the state S which are assumed to be ‘true’. And because the derived new state is assumed to happen in some ‘future’, ‘after’ the ‘actual state S’, this derived state can also be understood as a ‘forecast’.
  14. Because the experts can change all parts ‘at will’ at any time, such a ‘natural empirical theory’ is an ‘open entity’ living in an ongoing ‘communication process’.
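
A minimal Python sketch of a change statement as defined in item 9 of this list, assuming statements are plain strings and using a random draw for the probability p:

```python
import random

def apply_change(S, cond, plus, minus, p, rng=random):
    """Fire the rule if its condition set is a subset of S, with probability p."""
    if cond <= S and rng.random() < p:
        return (S - minus) | plus
    return S

S = {"it is raining", "the street is wet"}
rule = dict(cond={"it is raining"},
            plus={"the sky is clearing"},
            minus={"it is raining"},
            p=0.8)
print(apply_change(S, **rule))   # with probability 0.8 the rain statement
                                 # is removed and the clearing statement added
```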
Second Lessons Learned

It is interesting to know that from the set of statements in state S, which are assumed to be empirically true, together with some change statements X, whose proposed changes are also assumed to be ‘true’ and which have some probability P in the domain [0,1], one can forecast a set of statements in the state S* which shall be true, with a certainty depending on the preceding probability P and on the overall uncertainty of the whole natural environment.

Confirmation – Non-Confirmation

A Theory with Forecasts

Having reached the formulation of an ordinary empirical theory T with the ingredients <S,V,X,⊩> and the derivation concept S,V ⊩∑,X S’, it is possible to generate theorems as forecasts. A forecast here is not a single statement st* but a whole state S* consisting of a finite set of statements ST* which ‘designate’, according to the ‘agreed meaning’, a set of ‘intended properties’, which in turn need a set of ‘occurring empirical properties’ that can be observed by the experts. These observations are usually associated with ‘agreed procedures of measurement’, which generate as results ‘observation statements’/ ‘measurement statements’.

Within Time

Experts who cooperate in ‘building’ an ordinary empirical theory are themselves part of a process in time. Thus, making observations in the time window (t1,t2), they have a state S describing some aspects of the world at ‘that time’ (t1,t2). When they then derive a forecast S* with their theory, this forecast describes, with some probability P, a ‘possible state of the natural environment’ which is assumed to happen in the ‘future’. The precision of the predicted time at which the forecasted statements in S* should happen depends on the assumptions in S.

To ‘check’ the ‘validity’ of such a forecast it is necessary that the overall natural process reaches a ‘point in time’ (or a time window) indicated by the used ‘time model’, where the ‘actual point in time’ is measured by an agreed time machine (a mechanical clock). Because there is no observable time without a time machine, the classification of a certain situation S* as being ‘now’ at the predicted point of time depends completely on the used time machine.[7]

Given this, the following can happen: according to the used theory, a certain set of statements ST* is predicted to be ‘true’, with some probability, either ‘at some time in the future’, or in the time window (t1,t2), or at a certain point in time t*.

Validating Forecasts

If one of these cases ‘happens’, then the experts have the statements ST* of their forecast and a real situation in their natural environment which enables observations ‘Obs’ that are ‘translated’ into appropriate ‘observation statements’ STObs. The experts know, for their predicted statements ST*, a learned agreed meaning M* as the intended properties of ST*. The experts have also learned how to relate the intended meaning M* to the meaning MObs of the observation statements STObs. If the observed meaning MObs ‘agrees sufficiently well’ with the intended meaning M*, then the experts would agree in a statement that the intended meaning M* is ‘fulfilled’/ ‘satisfied’/ ‘confirmed’ by the observed meaning MObs. If not, then it would be stated that it is ‘not fulfilled’/ ‘not satisfied’/ ‘not confirmed’.

The ‘sufficient fulfillment’ of the intended meaning M* of a set of statements ST* is usually translated into a statement like “The statements ST* are ‘true’”. The case of ‘no fulfillment’ is unclear: it can be interpreted as ‘being false’ or as ‘being unclear’, with no clear case of ‘being true’ and no clear case of ‘being false’.

Forecasting the Number of Citizens

In the simple example used here we have the MKK county with an observed number of 418.950 citizens in 2018. The simple theory used a change statement with a growth factor of 0.4% per year. This resulted in the forecast of 420.625 citizens for the year 2019.

If the new counting of the number of citizens in the year 2019 yielded 420.625, then there would be a perfect match, which could be interpreted as a ‘confirmation’ saying that the forecasted statement and the observed statement are ‘equal’, and that therefore the theory seems to match the natural environment through time. One could even say that the theory is ‘true for the observed time’. Nothing would follow from this for the unknown future. Thus the ‘truth’ of the theory is not an ‘absolute’ truth but a truth ‘within defined limits’.

We know from experience that forecasting the number of citizens for some region (here a county) is usually not as clear-cut as this example suggests.

This begins with the process of counting. Because it is very expensive to count the citizens of all cities of a county, this happens only about every 20 years. In between, the statistical office applies the method of ‘forward projection’.[9] The state statistical office collects every year, electronically, the numbers of ‘births’, ‘deaths’, ‘outflow’ and ‘inflow’ from the individual cities and modifies with these numbers the last real census. In the case of the state of Hessen this was the year 2011. The next census in Germany will happen in May 2022.[10] For such a census the data will be collected directly from the registration offices of the cities, supported by a control survey of 10% of the population.

Because there are data from the statistical office of the state of Hessen for June 2021 [8:p.9] saying that the MKK county had 421 936 citizens on 30 June 2021, we can compare this number with the theory’s forecast for the year 2021 of 423 997. The numbers differ: the theory’s forecast is ‘higher’ than the observed number. What does this mean?

Purely arithmetically the forecast is ‘wrong’: the assumed growth factor is too large. If one ‘adjusts’ it in a simplified linear way to ‘0.24%’, then the theory yields a forecast for 2021 of 421 973 (observed: 421 936), but then the forecast for 2019 would be 419 955 (instead of 420 625).
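
As a cross-check, a minimal Python sketch: the post adjusts the factor ‘in a simplified linear way’, while the computation below fits a compound rate to the 2021 observation, so the numbers differ slightly.

```python
n2018, n2021 = 418_950, 421_936
rate = (n2021 / n2018) ** (1 / 3) - 1    # three years of compound growth
print(f"{rate:.4%}")                     # about 0.24% per year

for year in (2019, 2020, 2021):
    print(year, round(n2018 * (1 + rate) ** (year - 2018)))
# 2021 reproduces the observed 421 936 exactly (by construction);
# 2019 comes out near 419 943, close to the 419 955 of the linear adjustment.
```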

This shows at least the following aspects:

  1. The empirical observations as such can vary ‘a little bit’. One has to clarify which degree of ‘variance’ is due to the method of measurement; this variance should be taken into account in the evaluation of a theoretical forecast.
  2. As mentioned by the statistical office [9], there are four ‘factors’ which influence the final number of citizens in a region: ‘births’, ‘deaths’, ‘outflow’ and ‘inflow’. These factors can change over time. Under ‘normal conditions’ the birth rate and the death rate are rather ‘stable’, but in the case of an epidemic or even a war this can change a lot. Outflow and inflow are very dynamic, depending on many factors. Thus the growth factor can be influenced a lot, and these factors are difficult to forecast.
Third Lessons Learned

Evaluating the ‘relatedness’ of some forecast F of an empirical theory T to the observations O in a given real natural environment is not a ‘clear-cut’ case. The ‘precision’ of such a relatedness depends on many factors, each of which has some ‘fuzziness’. Nevertheless, as experience shows, it can work in a limited way. And this ‘limited way’ is the maximum we can get. The most helpful contribution of an ‘ordinary empirical theory’ seems to be the forecast of ‘what will happen if we accept a certain set of assumptions’. Used within the process of the experts, such forecasts can help to derive ‘informed guesses’ for planning.

Forecast

The next post will show how this concept of an ordinary empirical theory can be used by applying the oksimo paradigm to a concrete case. See HERE.

Comments

[1] Cities of the MKK-county: 24, see: https://www.wegweiser-kommune.de/kommunen/main-kinzig-kreis-lk

[2] Forecast for the development of the number of citizens in the MKK starting with 2018. See: https://statistik.hessen.de/zahlen-fakten/bevoelkerung-gebiet-haushalte-familien/bevoelkerung/tabellen

[3] Karl Popper, “A World of Propensities” (1988) and “Towards an Evolutionary Theory of Knowledge” (1989), in: Karl Popper, “A World of Propensities”, Thoemmes Press, Bristol (1990, repr. 1995)

[4] Karl Popper, “All Life is Problem Solving”, originally a lecture given in German in 1991, first published (in German) as “Alles Leben ist Problemlösen” (1994), then in the book “All Life is Problem Solving”, Routledge, Taylor & Francis Group, London – New York, 1999

[5] This points to the concept of ‘propensity’ which the late Popper has discussed in the papers [3] and [4].

[6] This concept of a ‘generator’ or an ‘inference’ is reminiscent of the general concept, found in Popper and in mainstream philosophy, of a logical derivation, where a ‘set of logical rules’ defines a ‘derivation concept’ which allows the ‘derivation/ inference’ of a statement s* as a ‘theorem’ from a set of statements S assumed to be true.

[7] The clock-based time is in the real world correlated with certain constellations of the real universe, but this — as a whole — is ‘changing’!

[8] Hessisches Statistisches Landesamt, “Die Bevölkerung der hessischen
Gemeinden am 30. Juni 2021. Fortschreibungsergebnisse Basis Zensus 09. Mai 2011″, Okt. 2021, Wiesbaden, URL: https://statistik.hessen.de/sites/statistik.hessen.de/files/AI2_AII_AIII_AV_21-1hj.pdf

[9] Method of forward projection used by the statistical office of the State of Hessen (translated from German): “Population: The population figures are forward-projection results based on the population figures determined in the 2011 census. They are determined according to a nationally uniform projection method by evaluating electronically transmitted data on births and deaths from the registry offices as well as on arrivals and departures from the registration authorities. Persons are assigned to the population of a municipality according to the main-residence principle (population at the place of the sole or main residence).” ([8:p.2])

[10] Statistical Office state of Hessen, Next census 2022: https://statistik.hessen.de/zahlen-fakten/zensus/zensus-2022/zensus-2022-kurz-erklaert

REVIEWING TARSKI’s SEMANTIC and MODEL CONCEPT. 85 Years Later …

eJournal: uffmm.org, ISSN 2567-6458,
August 8, 2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

85 Years Later

The two papers of Tarski which I discuss here were published in 1936. I had already read these papers many years ago, but at that time I could not really work with them. Formally they seemed to be ’correct’, but in the light of my ’intuition’ the message appeared to me somehow ’weird’, not really in conformance with my experience of how knowledge and language work in the real world. But at that time I was not able to explain my intuition to myself sufficiently. Nevertheless, I kept these papers, and some more texts of Tarski, in my bookshelves for an unknown future when my understanding would eventually change…
This happened during the last days.

review-tarski-semantics-models-v1-printed


AAI THEORY V2 – A Philosophical Framework

eJournal: uffmm.org,
ISSN 2567-6458, February 22, 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: February 23, 2019 (continued the text)

Last change: February 24, 2019 (extended the text)

CONTEXT

In the overview of the AAI paradigm version 2 you can find this section  dealing with the philosophical perspective of the AAI paradigm. Enjoy reading (or not, then send a comment :-)).

THE DAILY LIFE PERSPECTIVE

The perspective of philosophy is rooted in the everyday life perspective. With our body we occur in a space with other bodies and objects; different features and properties are associated with the objects, as well as different kinds of relations and changes from one state to another.

From the empirical sciences we have learned to see more details of the everyday life with regard to detailed structures of matter and biological life, with regard to the long history of the actual world, with regard to many interesting dynamics within the objects, within biological systems, as part of earth, the solar system and much more.

A certain aspect of the empirical view of the world is the fact, that some biological systems called ‘homo sapiens’, which emerged only some 300.000 years ago in Africa, show a special property usually called ‘consciousness’ combined with the ability to ‘communicate by symbolic languages’.

Figure 1: General setting of the homo sapiens species (simplified)

As we know today the consciousness is associated with the brain, which in turn is embedded in the body, which  is further embedded in an environment.

Thus those ‘things’ about which we are ‘conscious’ are not ‘directly’ the objects and events of the surrounding real world but ‘constructions of the brain’ based on actual external and internal sensor inputs as well as on already collected ‘knowledge’. To qualify the ‘conscious things’ as ‘different’ from the assumed ‘real things’ ‘out there’, it is common to speak of these brain-generated virtual things either as ‘qualia’ or, more often, as ‘phenomena’, which are different from the assumed possible real things somewhere ‘out there’.

PHILOSOPHY AS FIRST PERSON VIEW

‘Philosophy’ has many facets.  One enters the scene if we are taking the insight into the general virtual character of our primary knowledge to be the primary and irreducible perspective of knowledge.  Every other more special kind of knowledge is necessarily a subspace of this primary phenomenological knowledge.

There is already from the beginning a fundamental distinction possible in the realm of conscious phenomena (PH): there are phenomena which can be ‘generated’ by the consciousness ‘itself’, mostly called ‘by will’, and those which occur and disappear without a direct influence of the consciousness, which are in a certain basic sense ‘given’ and ‘independent’, appearing and disappearing ‘on their own’. It is common to call these independent phenomena ’empirical phenomena’; they represent a true subset of all phenomena: PH_emp ⊂ PH. Attention: these ’empirical phenomena’ are still ‘phenomena’, virtual entities generated by the brain inside the brain, not directly controllable ‘by will’.

There is a further basic distinction which differentiates the empirical phenomena into those PH_emp_bdy which are controlled by some processes in the body (being tired, being hungry, having pain, …) and those PH_emp_ext which are controlled by objects and events in the environment beyond the body (light, sounds, temperature, surfaces of objects, …). These two subsets of empirical phenomena are disjoint: PH_emp_bdy ∩ PH_emp_ext = ∅. Because phenomena usually occur associated with typical other phenomena, there are ‘clusters’/ ‘patterns’ of phenomena which ‘represent’ possible events or states.

Modern empirical science has ‘refined’ the concept of an empirical phenomenon by introducing ‘standard objects’ which can be used to ‘compare’ some empirical phenomenon with such a standard object. Thus even when the perceptions of two different observers possibly differ somewhat with regard to a certain empirical phenomenon, the additional comparison with an ’empirical standard object’, which is the ‘same’ for both observers, enhances the quality and improves the precision of the perception of the empirical phenomena.

From these considerations we can derive the following informal definitions:

  1. Something is ‘empirical‘ if it is the ‘real counterpart’ of a phenomenon which can be observed by other persons in my environment too.
  2. Something is ‘standardized empirical‘ if it is empirical and can additionally be associated with a before introduced empirical standard object.
  3. Something is ‘weak empirical‘ if it is the ‘real counterpart’ of a phenomenon in my body which other persons cannot observe directly but which can be assumed to be causally correlated with the phenomenon.
  4. Something is ‘cognitive‘ if it is the counterpart of a phenomenon which is not empirical in one of the meanings (1) – (3).

It is a common task within philosophy to analyze the space of phenomena with regard to its structure as well as its dynamics. Until today there exists no completely accepted theory for this subject. This indicates that it seems to be a ‘hard’ task.

BRIDGING THE GAP BETWEEN BRAINS

As one can see in figure 1, a brain in a body is completely disconnected from the brain in another body. There is a real, deep ‘gap’ which has to be overcome if two brains want to ‘coordinate’ their ‘planned actions’.

Luckily the emergence of homo sapiens with the new extended property of ‘consciousness’ was accompanied by another exciting property, the ability to ‘talk’. This ability enabled the creation of symbolic languages which can help two disconnected brains to have some exchange.

But ‘language’ does not consist of sounds or a ‘sequence of sounds’ only; the special power of a language is the further property that sequences of sounds can be associated with ‘something else’ which serves as the ‘meaning’ of these sounds. Thus we can use sounds to ‘talk about’ other things like objects, events, properties etc.

The single brain ‘knows’ about the relationship between some sounds and ‘something else’ because the brain is able to ‘generate relations’ between brain structures for sounds and brain structures for something else. These relations are real connections in the brain. Therefore sounds can be related to ‘something else’, and certain objects, events etc. can become related to certain sounds. But these ‘meaning relations’ can only ‘bridge the gap’ to another brain if both brains use the same ‘mapping’, the same ‘encoding’. This is only possible if the two brains with their bodies share a real-world situation RW_S where the perceptions of both brains are associated with the same parts of the real world between both bodies. If this is the case, the perceptions P(RW_S) can become somehow ‘synchronized’ by the shared part of the real world, which in turn is transformed into brain structures, P(RW_S) —> B_S, which represent in the brain the stimulating aspects of the real world. These brain structures B_S can then be associated with some sound structures B_A, written as a relation MEANING(B_S, B_A). Such a relation realizes an encoding which can be used for communication. Communication uses sound sequences, exchanged between brains via the body and the air of an environment, as ‘expressions’ which can be recognized as part of a learned encoding, enabling the receiving brain to identify a possible meaning candidate.
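
A toy sketch in Python (all mappings invented): the MEANING relation as a dictionary from sound expressions to internal structures B_S; communication succeeds only if both brains use the same encoding.

```python
meaning_a = {"apple": "FRUIT-ROUND-RED", "run": "FAST-LEG-MOTION"}  # brain A
meaning_b = {"apple": "FRUIT-ROUND-RED", "run": "FAST-LEG-MOTION"}  # brain B

def understood(expression, speaker, listener):
    """The listener recovers the speaker's meaning only if the encodings agree."""
    return speaker.get(expression) == listener.get(expression)

print(understood("apple", meaning_a, meaning_b))  # True: shared encoding
meaning_b["apple"] = "TREE-PART"                  # the encodings diverge ...
print(understood("apple", meaning_a, meaning_b))  # False: the gap remains
```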

DIFFERENT MODES TO EXPRESS MEANING

Following the evolution of communication one can distinguish four important modes of expressing meaning, which will be used in this AAI paradigm.

VISUAL ENCODING

A direct way to express the internal meaning structures of a brain is to use a ‘visual code’ which represents, by some kind of drawing, the visual shapes of objects in space and some attributes of these shapes which are common to all people who can ‘see’. Thus a picture, and then a sequence of pictures like a comic or a storyboard, can communicate simple ideas of situations, participating objects, persons and animals, showing changes in the arrangement of the shapes in space.

Figure 2: Pictorial expressions representing aspects of the visual and the auditory sense modes

Even with a simple visual code one can generate many sequences of situations which all together can ‘tell a story’. The basic elements are a presupposed ‘space’ with possible ‘objects’ in this space with different positions, sizes, relations and properties. One can even enhance these visual shapes with written expressions of  a spoken language. The sequence of the pictures represents additionally some ‘timely order’. ‘Changes’ can be encoded by ‘differences’ between consecutive pictures.

FROM SPOKEN TO WRITTEN LANGUAGE EXPRESSIONS

Later in the evolution of language, much later, homo sapiens learned to translate the spoken language L_s into a written format L_w, using signs for parts of words or even whole words. The possible meanings of these written expressions were no longer directly ‘visible’. The meaning was now only available to those people who had learned how these written expressions are associated with intended meanings encoded in the heads of all language participants. Thus merely hearing or reading a language expression tells the reader either ‘nothing’, or some ‘possible meanings’, or a ‘definite meaning’.

Figure 3: A written textual version in parallel to a pictorial version

If one has only the written expressions, then one has to ‘know’ with which ‘meaning in the brain’ the expressions have to be associated. What is very special about written expressions compared to pictorial expressions is the fact that the elements of pictorial expressions are always very ‘concrete’ visual objects, while written expressions are ‘general’ expressions allowing many different concrete interpretations. Thus the expression ‘person’ can be associated with many thousands of different concrete objects; the same holds for the expressions ‘road’, ‘moving’, ‘before’ and so on. Written expressions are like ‘manufacturing instructions’ for searching possible meanings and configuring these meanings into a ‘reasonable’ complex whole. And because written expressions are in general rather ‘abstract’/ ‘general’, allowing numerous possible concrete realizations, they are very ‘economic’: they use minimal expressions to build many complex meanings. Nevertheless, daily experience with spoken and written expressions shows that they are continuously candidates for false interpretations.

FORMAL MATHEMATICAL WRITTEN EXPRESSIONS

Besides the written expressions of everyday languages one can observe, later in the history of written languages, the steady development of specialized versions called ‘formal languages’ L_f with many different domains of application. Here I am focusing on the formal written languages used in mathematics, together with some pictorial elements to ‘visualize’ the intended ‘meaning’ of these formal mathematical expressions.

Fig. 4: Properties of an acyclic directed graph with nodes (vertices) and edges (directed edges = arrows)

One prominent concept in mathematics is the concept of a ‘graph’. In the basic version there are only some ‘nodes’ (also called vertices) and some ‘edges’ connecting the nodes. Formally one can represent these edges as ‘pairs of nodes’. If N represents the set of nodes, then N × N represents the set of all pairs of these nodes.

In a more specialized version the edges are ‘directed’ (like a ‘one-way road’) and can also be ‘looped back’ to a node occurring ‘earlier’ in the graph. If such back-looping arrows occur, a graph is called a ‘cyclic graph’.
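
A minimal Python sketch, representing a directed graph as a set of node pairs (a subset of N × N) and testing with a depth-first search whether some arrow ‘loops back’, i.e. whether the graph is cyclic:

```python
nodes = {"n1", "n2", "n3"}
edges = {("n1", "n2"), ("n2", "n3"), ("n3", "n1")}   # the last arrow loops back

def is_cyclic(nodes, edges):
    visited, on_path = set(), set()

    def dfs(n):
        visited.add(n)
        on_path.add(n)
        for a, b in edges:
            if a == n and (b in on_path or (b not in visited and dfs(b))):
                return True
        on_path.discard(n)
        return False

    return any(dfs(n) for n in nodes if n not in visited)

print(is_cyclic(nodes, edges))   # True; remove ("n3", "n1") to make it acyclic
```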

Fig.5: Directed cyclic graph extended to represent ‘states of affairs’

If one wants to use such a graph to describe some ‘states of affairs’ with their possible ‘changes’, one can ‘interpret’ a ‘node’ as a state of affairs and an arrow as a change which turns one state of affairs S into a new one S’ which is minimally different from the old one.

As a state of affairs I understand here a ‘situation’ embedded in some ‘context’ presupposing some common ‘space’. The possible ‘changes’ represented by arrows presuppose some dimension of ‘time’. Thus if a node n’ follows a node n as indicated by an arrow, then the state of affairs represented by the node n’ is to be interpreted as following the state of affairs represented by the node n with regard to the presupposed time T, i.e. ‘later’, or n < n’ with ‘<‘ as a symbol for a temporal ordering relation.

Fig.6: Example of a state of affairs with a 2-dimensional space configured as a grid with a black and a white token

The space can be any kind of space. If one assumes as an example a 2-dimensional space configured as a grid (as shown in figure 6) with two tokens at certain positions, one can introduce a language to describe the ‘facts’ which constitute the state of affairs. In this example one needs ‘names for objects’, ‘properties of objects’ as well as ‘relations between objects’. A possible finite set of facts for situation 1 could be the following:

  1. TOKEN(T1), BLACK(T1), POSITION(T1,1,1)
  2. TOKEN(T2), WHITE(T2), POSITION(T2,2,1)
  3. NEIGHBOR(T1,T2)
  4. CELL(C1), POSITION(C1,1,2), FREE(C1)

‘T1’, ‘T2’, as well as ‘C1’ are names of objects, ‘TOKEN’, ‘BLACK’ etc. are names of properties, and ‘NEIGHBOR’ is a relation between objects. This results in the equation:

S1 = {TOKEN(T1), BLACK(T1), POSITION(T1,1,1), TOKEN(T2), WHITE(T2), POSITION(T2,2,1), NEIGHBOR(T1,T2), CELL(C1), POSITION(C1,1,2), FREE(C1)}

These facts describe the situation S1. If it is important to describe objects ‘external to the situation’ as factors which can cause some changes, then one can describe these objects as a set of facts in a separate ‘context’. In this example this could be two players who can move the black and white tokens and thereby cause a change of the situation. What belongs to the situation and what belongs to a context is somewhat arbitrary. If one describes the agriculture of some region one usually would not count the planets and the atmosphere as part of this region, but one knows that e.g. the sun, in combination with the atmosphere, can severely influence the situation.
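The state S1 itself can be encoded directly in a small program. A sketch in Python (the tuple encoding of facts is only one possible choice made for this illustration):

  # State S1 as a set of facts; each fact is a tuple (PREDICATE, argument, ...).
  S1 = {
      ("TOKEN", "T1"), ("BLACK", "T1"), ("POSITION", "T1", 1, 1),
      ("TOKEN", "T2"), ("WHITE", "T2"), ("POSITION", "T2", 2, 1),
      ("NEIGHBOR", "T1", "T2"),
      ("CELL", "C1"), ("POSITION", "C1", 1, 2), ("FREE", "C1"),
  }

  # A fact holds in a state if and only if it is a member of the state set.
  print(("BLACK", "T1") in S1)  # True
  print(("WHITE", "T1") in S1)  # False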

Fig.7: Change of a state of affairs given as a state which will be enhanced by a new object

Let us stay with a state of affairs consisting only of a situation without a context. Such a state of affairs is simply a ‘state’. In the example shown in figure 6 I assume a ‘change’ caused by the insertion of a new black token at position (2,2). Written in the language of facts L_fact we get:

  1. TOKEN(T3), BLACK(T3), POSITION(T3,2,2), NEIGHBOR(T3,T2)

Thus the new state S2 is generated out of the old state S1 by unifying S1 with the set of new facts: S2 = S1 ∪ {TOKEN(T3), BLACK(T3), POSITION(T3,2,2), NEIGHBOR(T3,T2)}. All the other facts of S1 remain ‘valid’. In a more general manner one can introduce a change expression with the following format:

<S1, S2, add(S1,{TOKEN(T3), BLACK(T3), POSITION(T3,2,2), NEIGHBOR(T3,T2)})>

This can be read as follows: The follow-up state S2 is generated out of the state S1 by adding to the state S1 the set of facts { … }.

This layout of a change expression can also be used if some facts have to be modified or removed from a state. If for instance for some reason the white token should be removed from the situation, one could write:

<S1, S2, subtract(S1,{TOKEN(T2), WHITE(T2), POSITION(T2,2,1)})>

Another notation for this is S2 = S1 – {TOKEN(T2), WHITE(T2), POSITION(T2,2,1)}.

The resulting state S2 would then look like:

S2 = {TOKEN(T1), BLACK(T1), POSITION(T1,1,1), CELL(C1), POSITION(C1,1,2), FREE(C1)}

And a combination of subtraction of facts and addition of facts would read as follows:

<S1, S2, subtract(S1,{TOKEN(T2), WHITE(T2), POSITION(T2,2,1)}), add(S1,{TOKEN(T3), BLACK(T3), POSITION(T3,2,2)})>

This would result in the final state S2:

S2 = {TOKEN(T1), BLACK(T1), POSITION(T1,1,1), CELL(C1), POSITION(C1,1,2), FREE(C1), TOKEN(T3), BLACK(T3), POSITION(T3,2,2)}
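Such change expressions can be executed mechanically. A minimal sketch in Python (reusing the tuple encoding of facts from the sketch above; the function name is chosen freely):

  # Apply a change expression <S, S', subtract(S,DEL), add(S,ADD)> to a state.
  def apply_change(state, subtract=frozenset(), add=frozenset()):
      """Return the follow-up state: first remove the DEL facts, then insert the ADD facts."""
      return (state - set(subtract)) | set(add)

  S1 = {("TOKEN", "T1"), ("BLACK", "T1"), ("POSITION", "T1", 1, 1),
        ("TOKEN", "T2"), ("WHITE", "T2"), ("POSITION", "T2", 2, 1)}

  # Remove the white token T2 and insert a new black token T3 at position (2,2).
  S2 = apply_change(
      S1,
      subtract={("TOKEN", "T2"), ("WHITE", "T2"), ("POSITION", "T2", 2, 1)},
      add={("TOKEN", "T3"), ("BLACK", "T3"), ("POSITION", "T3", 2, 2)},
  )
  print(S2)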

These simple examples demonstrate another fact: while facts about objects and their properties are independent of each other, relational facts depend on the state of their object facts. The relation of neighborhood, e.g., depends on the participating neighbors. If, as in the example above, the token T2 disappears, then the relation ‘NEIGHBOR(T1,T2)’ no longer holds. This points to a hierarchy of dependencies, with the ‘basic facts’ at the ‘root’ of a situation and all the other ‘higher’ facts depending on the basic facts. Thus ‘higher order’ facts should be asserted only for the actual state and have to be ‘re-computed’ anew for every follow-up state.
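Such a re-computation can be sketched as a function which derives all relational facts anew from the basic facts of the current state. In this Python sketch the neighborhood rule (tokens on horizontally or vertically adjacent cells count as neighbors) is only an assumption made for the example:

  # Recompute the derived NEIGHBOR facts from the basic TOKEN and POSITION facts.
  def derive_neighbors(state):
      positions = {f[1]: (f[2], f[3]) for f in state
                   if f[0] == "POSITION" and ("TOKEN", f[1]) in state}
      derived = set()
      for a, (xa, ya) in positions.items():
          for b, (xb, yb) in positions.items():
              if a != b and abs(xa - xb) + abs(ya - yb) == 1:  # adjacent grid cells
                  derived.add(("NEIGHBOR", a, b))
      return derived

  # After every change: keep the basic facts, recompute the higher-order facts.
  def refresh(state):
      basic = {f for f in state if f[0] != "NEIGHBOR"}
      return basic | derive_neighbors(basic)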

If one were to specify a context for state S1 saying that there are two players, and one allows each player actions like ‘move’, ‘insert’ or ‘delete’, then one could make the change from state S1 to state S2 more precise. Assume the following facts for the context:

  1. PLAYER(PB1), PLAYER(PW1), HAS-THE-TURN(PB1)

In that case one could enhance the change statement in the following way:

<S1, S2, PB1, insert(TOKEN(T3,2,2)), add(S1,{TOKEN(T3), BLACK(T3), POSITION(T3,2,2)})>

This reads as follows: given state S1, the player PB1 inserts a black token at position (2,2); this yields the new state S2.

With or without a specified context, but with regard to a set of possible change statements, there is usually more than one option for what can be changed. Some of the main types of changes are the following (a small sketch follows the list):

  1. RANDOM
  2. NOT RANDOM, which can be specified as follows:
    1. With PROBABILITIES (classical, quantum probability, …)
    2. DETERMINISTIC
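These types of change selection can be illustrated with a few lines of Python (a sketch; the change options and the weights are invented for the illustration):

  import random

  options = ["insert T3 at (2,2)", "move T2 to (1,2)", "do nothing"]

  # RANDOM: every option is equally likely.
  change = random.choice(options)

  # NOT RANDOM, with probabilities: the options are weighted, e.g. 0.7 / 0.2 / 0.1.
  change = random.choices(options, weights=[0.7, 0.2, 0.1], k=1)[0]

  # DETERMINISTIC: a fixed rule always selects the same option in a given state.
  def deterministic_rule(state):
      return options[0]  # e.g. the rules of the game prescribe exactly this change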

Furthermore, if the causing object is an actor which can adapt structurally or even learn locally, then this actor can appear in some time period like a deterministic system, in different collected time periods as an ‘oscillating system’ with different behavior, or even as a random system with changing probabilities. This makes the forecasting of systems with adaptive and/or learning components rather difficult.

Another aspect results from the fact that there can be states with either one actor which can cause more than one action in parallel, or with multiple actors which can act simultaneously. In both cases the resulting total change may have to be ‘filtered’ through some additional rules telling what is ‘possible’ in a state and what is not. Thus if in the example of figure 6 both players want to insert a token at position (2,2) simultaneously, then either the rules of the game forbid such a simultaneous action, or, as in a computer game, simultaneous actions are allowed but the ‘geometry of a 2-dimensional space’ does not allow two different tokens at the same position.

Another aspect of change is the dimension of time. If the time dimension is not explicitly specified, then a change from some state S_i to a state S_j only marks the follow-up state S_j as ‘later’. There is no specific ‘metric’ of time. If instead a certain ‘clock’ is specified, then all changes have to be aligned with this ‘overall clock’. One can then specify at which ‘point of time t’ a change will begin and at which point of time t’ it will end. If more than one change is specified, then these different changes can have different timings.

THIRD PERSON VIEW

Up until now the point of view for describing a state and the possible changes of states has been the so-called 3rd-person view: what a person can perceive if it is part of a situation and is looking into the situation. It is explicitly assumed that such a person can perceive only the ‘surface’ of objects, including all kinds of actors. Thus if the driver of a car steers the car in a certain direction, then the ‘observing person’ can see what happens, but cannot ‘look into’ the driver to see ‘why’ he is steering this way or ‘what he is planning next’.

A 3rd-person view is assumed to be the ‘normal mode of observation’ and it is the normal mode of empirical science.

Nevertheless there are situations where one wants to ‘understand’ a bit more about ‘what is going on in a system’. Thus a biologist can be interested in understanding which mechanisms ‘inside a plant’ are responsible for the growth of the plant or for some kinds of plant dysfunctions. There are similar cases for understanding the behavior of animals and humans. For instance it is an interesting question what kinds of ‘processes’ are available in an animal to ‘navigate’ in the environment across distances. Even if the biologist can look ‘into the body’, even ‘into the brain’, the cells as such do not tell a sufficient story. One has to understand the ‘functions’ which are enabled by the billions of cells; these functions are complex relations associated with certain ‘structures’ and certain ‘signals’. For this it is necessary to construct an explicit formal (mathematical) model/ theory representing all the necessary signals and relations which can be used to ‘explain’ the observable behavior, and which ‘explains’ how the billions of cells enable such a behavior.

In a simpler, ‘relaxed’ kind of modeling one would not take into account the properties and behavior of the ‘real cells’ but would limit the scope to building a formal model which suffices to explain the observable behavior.

This kind of approach, setting up models of possible ‘internal’ (as such hidden) processes of an actor, can extend the 3rd-person view substantially. These models are called in this text ‘actor models (AM)’.

HIDDEN WORLD PROCESSES

In this text all reported 3rd-person observations are called an ‘actor story’, independently of whether they are given in a pictorial or a textual mode.

As has been pointed out such actor stories are somewhat ‘limited’ in what they can describe.

It is possible to extend such an actor story (AS)  by several actor models (AM).

An actor story defines the situations in which an actor can occur. This  includes all kinds of stimuli which can trigger the possible senses of the actor as well as all kinds of actions an actor can apply to a situation.

The actor model of such an actor has to enable the actor to handle all these assumed stimuli as well as all these actions in the expected way.

While the actor story can be checked as to whether it describes a process in an empirically ‘sound’ way, the actor models are either ‘purely theoretical’ but ‘behaviorally sound’, or they are also empirically sound with regard to the body of a biological or a technological system.

A serious challenge is the occurrence of adaptive and/or locally learning systems. While the actor story is a finite description of possible states and changes, adaptive and/or locally learning systems can change their behavior while ‘living’ in the actor story. These changes in behavior cannot be completely ‘foreseen’!

COGNITIVE EXPERT PROCESSES

According to the preceding considerations a homo sapiens as a biological system possesses, besides many other properties, at least consciousness and the ability to talk and thereby to communicate with symbolic languages.

Looking at the basic modes of an actor story (AS) one can infer some basic concepts inherently present in the communication.

Without having an explicit model of the internal processes of a homo sapiens system, one can infer some basic properties from the communicative acts:

  1. Speaker and hearer presuppose a space within which objects with properties can occur.
  2. Changes can happen which presuppose some temporal ordering.
  3. There is a distinction between concrete things and abstract concepts which correspond to many concrete things.
  4. There is an implicit hierarchy of concepts starting with concrete objects at the ‘root level’, given as occurrences in a concrete situation. Concepts of ‘higher levels’ refer to concepts of lower levels.
  5. There are different kinds of relations between objects on different conceptual levels.
  6. The usage of language expressions presupposes structures which can be associated with the expressions as their ‘meanings’. The mapping between expressions and their meanings has to be learned by each actor separately, but in cooperation with all the other actors with which the actor wants to share his meanings.
  7. It is assumed that all the processes which enable the generation of concepts, concept hierarchies, relations, meaning relations etc. are unconscious! In consciousness one can use parts of the unconscious structures and processes only under strictly limited conditions.
  8. To ‘learn’ dedicated matters and to be ‘critical’ about the quality of what one is learning requires some discipline, some learning methods, and a ‘learning-friendly’ environment. There is no guaranteed method of success.
  9. There are lots of unconscious processes which can influence understanding, learning, planning, decisions etc. and which until today have not been sufficiently cleared up.


AAI THEORY V2 – MEASURING USABILITY

eJournal: uffmm.org
ISSN 2567-6458, 6.February 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

An overview of the enhanced AAI theory version 2 can be found here. In this post we talk about the tenth chapter, dealing with Measuring Usability.

MEASURING  USABILITY

As has been delineated in the post “Usability and Usefulness”, statements about the quality of the usability of some assisting actor are based on some kind of measurement: mapping some target (here the interactions of an executive actor with some assistive actor) into some predefined norm (e.g. ‘number of errors’, ‘time needed for completion’, …). These remarks are here embedded in a larger perspective following Dumas and Fox (2008).

Overview of Usability Testing following the article of Dumas & Fox (2008), with some new AAI specific terminology

From the three main types of usability testing with regard to the position in the life-cycle of a system, we focus here primarily on usability testing as part of the analysis phase, where the developers want to get direct feedback on the concepts embedded in an actor story. Depending on this feedback the actor story and its related models can be modified, and this can result in a modified exploratory mock-up for a new test. The challenge is not to be ‘complete’ in finding ‘disturbing’ factors during an interaction, but to increase the probability of detecting possible disturbing factors by confronting the symbolically represented concepts of the actor story with a sample of real-world actors. Experiments suggest that 5-10 test persons are sufficient to detect the most severe disturbing factors of the concepts.

Usability testing procedure according to Lauesen (2005), adapted to the AAI paradigm

A good description of usability testing can be found in the book by Lauesen (2005), especially chapters 1 and 13. From this one can infer the following basic schema for a usability test:

  1. One needs 5-10 test persons whose input-output profile (AAR) comes close to the profile (TAR) required by the actor story.
  2. One needs a mock-up of the assistive actor; this mock-up should correspond ‘sufficiently well’ to the input-output profile (TAR) required by the actor story. In the simplest case one has a ‘paper model’ whose sheets can be changed on demand.
  3. One needs a facilitator who receives the test person, introduces the test person to the task (orally and/or by a short document of less than a page), and then accompanies the test without further interacting with the test person until the end of the test. The end is reached either by completing the task or by reaching the end of a predefined duration.
  4. After the test person has finished the test, a debriefing happens by interrogating the test person about his/her subjective feelings about the test. Because interviews are always rather fuzzy and not very reliable, one should keep this interrogation simple, short, and tied to concrete points. One strategy could be to ask the test person first about the general feeling: was it ‘very good’, ‘good’, ‘OK’, ‘undefined’, ‘not OK’, ‘bad’, ‘very bad’ (+3 … 0 … -3)? Once a feeling is stated, one can ask which kinds of circumstances caused it.
  5. During the test at least two observers observe the behavior of the test person. The observers use as their ‘norm’ the actor story, which tells what ‘should happen in the ideal case’. If a test person deviates from the actor story this is noted as a ‘deviation of kind X’, and it counts as an error. Because an actor story in the mathematical format represents a graph, it is simple to quantify the behavior of the test person with regard to how many nodes of a solution path have been positively passed. This gives a count for the percentage of how much has been done (see the sketch after this list). Thus the observers can deliver data about at least the ‘percentage of task completion’, the ‘number (and kind) of errors by deviations’, and the ‘processing time’. The advantage of having the actor story as a norm is that all observers will use the same ‘observation categories’.
  6. From the debriefing one gets data about the ‘good/bad’ feelings on a scale, and some hints as to what could have caused the reported feelings.
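The counting described in point 5 can be made explicit with a small sketch in Python (the node names and the observed run are invented for the illustration):

  # The ideal solution path from the actor story (the 'norm') and an observed test run.
  ideal_path = ["q_start", "q1", "q2", "q3", "q_goal"]
  observed = ["q_start", "q1", "qX", "q2", "q3"]  # 'qX' deviates from the actor story

  passed = [n for n in ideal_path if n in observed]
  task_completion = len(passed) / len(ideal_path)            # percentage of task completion
  deviations = [n for n in observed if n not in ideal_path]  # each deviation counts as an error

  print(f"completion: {task_completion:.0%}, errors: {len(deviations)}")  # completion: 80%, errors: 1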

STANDARDS – CIF (Common Industry Format)

There are many standards around describing different aspects of usability testing. Although standards can help in practice, from the point of view of research standards are not only good: they can hinder creative alternative approaches. Nevertheless I myself look at standards to check for possible ‘references’. One standard I am using very often is the “Common Industry Format (CIF)” for usability reporting. It has been an ISO standard (ISO/IEC 25062:2006) since 2006. CIF describes a method for reporting the findings of usability tests that collect quantitative measurements of user performance. CIF does not describe how to carry out a usability test, but it does require that the test include measurements of the application’s effectiveness and efficiency as well as a measure of the users’ satisfaction. These are the three elements that define the concept of usability.

Applied to the AAI paradigm these terms fit well.

Effectiveness in CIF targets the accuracy and completeness with which users achieve their goal. Because the actor story in AAI is represented as a graph whose individual paths represent ways to approach a defined goal, one can measure the accuracy directly by comparing the ‘observed path’ in a test with the ‘intended ideal path’ in the actor story. In the same way one can compute the completeness by comparing the observed path with the intended ideal path of the actor story.

Efficiency in CIF covers the resources expended to achieve the goals. A simple and direct measure is the time needed.

Users’ satisfaction in CIF means ‘freedom from discomfort’ and ‘positive attitudes towards the use of the product’. These are ‘subjective feelings’ which cannot be observed directly. Only ‘indirect’ measures are possible, based on interrogations (or interactions with certain tasks), which are inherently fuzzy and not very reliable. One possible way to interrogate is mentioned above.

Because the term usability in CIF is defined by the aforementioned terms of effectiveness, efficiency and users’ satisfaction, which in turn can be measured in many different ways, the meaning of ‘usability’ remains a bit vague.

DYNAMIC ACTORS – CHANGING CONTEXTS

With regard to the AAI paradigm one further has to mention that the possibility of adaptive, learning systems embedded in dynamic, changing environments calls for a new type of usability testing. Because learning actors change with every exercise, one should run a test several times to observe how the dynamic learning rates of an actor develop in time. In such a dynamic framework a system would only be ‘badly usable’ if the learning curves of the actors cannot reach a certain threshold after a defined ‘typical learning time’. And, moreover, there could be additional effects occurring only in long-term usage and observation, which cannot be measured in a single test.

REFERENCES

  • ISO/IEC 25062:2006(E)
  • Joseph S. Dumas and Jean E. Fox. Usability testing: Current practice and future directions. Chapter 57, pp. 1129-1149, in J.A. Jacko and A. Sears (eds.), The Human-Computer Interaction Handbook. Fundamentals, Evolving Technologies, and Emerging Applications. 2nd edition, 2008
  • S. Lauesen. User Interface Design. A Software Engineering Perspective. Pearson – Addison Wesley, London et al., 2005

AAI THEORY V2 – USABILITY AND USEFULNESS

eJournal: uffmm.org
ISSN 2567-6458, 4.February 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

REMARK (5.May 2019)

This text  has to be reviewed again on account of the new aspect of gaming as  discussed in the post Engineering and Society.

CONTEXT

An overview of the enhanced AAI theory version 2 can be found here. In this post we talk about the sixth chapter, dealing with usability and usefulness.

USABILITY AND USEFULNESS

In the AAI paradigm the concept of usability is seen as a sub-topic of the broader concept of usefulness. Furthermore, usefulness as well as usability are understood as measurements comparing some target with some presupposed norm.

Example: If someone wants to buy a product A whose price fits the available budget, and this product A shows only an average usability, then product A is probably ‘more useful’ for the buyer than another product B which does not fit the budget although it has a better usability. A conflict can arise if the weaker usability of product A causes, during the usage of product A, ‘bad effects’ on the user, which in turn produce additional costs, raising the original ‘nice price’ to a degree where product A finally becomes ‘more costly’ than product B.

Therefore the concept of usefulness is defined independently of the concept of usability and depends completely on the person or company searching for the solution of a problem. The concept of usability depends directly on the real structure of an actor, a biological or a non-biological one. Thus, independently of the definition of the actual usefulness, the given structure of an actor implies certain capabilities with regard to input, output as well as internal processing. Therefore, if some X seems to be highly useful for someone, and getting X requires a certain actor story to be realized with certain actors, then it can matter whether this process includes a ‘good usability’ for the participating actors or not.

In the AAI paradigm both concepts, usefulness as well as usability, are analyzed to provide a chance to check the contributions of both concepts over some predefined duration of usage. This allows the analysis of the sustainability of the wanted usefulness with usability as a parameter. Even more parameters can be included in the evaluation of the actor story to enhance the scope of sustainability. Depending on the definition of the concept of resilience, one can interpret the concept of sustainability used in this AAI paradigm as compatible with the resilience concept too.

MEASUREMENT

To speak about ‘usefulness’, ‘usability’, ‘sustainability’ (or ‘resilience’) requires some kind of scale of values with an ordering relation R allowing one to state for some values x, y whether R(x,y), R(y,x) or EQUAL(x,y). The values used in the scale have to be generated by some defined process P which is understood as a measurement process M, which basically compares some target X with some predefined norm N and gives as a result a pair (v,N), telling a number v associated with the applied norm N. Written: M : X x N —> V x N.
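Written as a small sketch in Python (the chosen norm ‘number of errors’ is just one possible instance of N):

  # M : X x N ---> V x N  -- measure a target against a norm and return the pair (v, N).
  def measure(target_events, norm="number of errors"):
      value = sum(1 for e in target_events if e == "error")
      return (value, norm)

  v, n = measure(["ok", "error", "ok", "error"])
  print(v, n)  # 2 'number of errors'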

A measurement procedure M must be transparent and repeatable in the sense that a repeated application of the measurement procedure M will generate the same results as before. Associated with the measurement procedure there can be many additional parameters like ‘location’, ‘time’, ‘temperature’, ‘humidity’, ‘used technologies’, etc.

Because there are targets X which are not static, it can be a problem when and how often one has to measure these targets to get some reliable value. And this problem becomes even worse if the target includes adaptive systems which change constantly, as in the case of biological systems.

All biological systems have some degree of learnability. Thus, if a human actor acts as part of an actor story, the human actor will learn every time he works through the process. Making errors during his first run of the process therefore does not imply that he will repeat these errors the next time. Usually one can observe a learning curve associated with n-many runs which shows, mostly, a decrease in errors, a decrease in processing time, and, in general, a change of all parameters which can be measured. Thus a certain actor story can receive a good usability value after a defined number of usages. But there are other, subjective parameters like satisfaction, being excited, being interested and the like which can change in the opposite direction, because being well adapted to the process can be boring, which in turn can lead to less concentration with many different negative consequences.
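A sketch of such a learning-curve check in Python (the error counts, the threshold and the ‘typical learning time’ are invented for the illustration):

  # Errors observed in successive runs of the same actor story by one actor.
  errors_per_run = [9, 6, 4, 3, 2, 2, 1]
  threshold = 2      # acceptable error level after the 'typical learning time'
  learning_time = 5  # number of runs granted for learning

  usable = all(e <= threshold for e in errors_per_run[learning_time:])
  print("usable" if usable else "badly usable")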


AAI THEORY V2 – EPISTEMOLOGY OF THE AAI-EXPERTS

eJournal: uffmm.org,
ISSN 2567-6458, 26.Januar 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

An overview of the enhanced AAI theory version 2 can be found here. In this post we talk about the fourth chapter, dealing with the epistemology of actors within an AAI analysis process.

EPISTEMOLOGY AND THE EMPIRICAL SCIENCES

Epistemology is a sub-discipline of general philosophy. While a special discipline in empirical science is defined by a certain sub-set of the real world RW and by empirical measurement methods generating empirical data which can be interpreted by a formalized theory, philosophy is not restricted to a sub-field of the real world. This is important because an empirical discipline has no methods to define itself. Chemistry, e.g., can define by which kinds of measurement it gains empirical data, and it can offer different kinds of formal theories to interpret these data, including inferences to forecast certain reactions given certain configurations of matter; but chemistry is not able to explain how a chemist thinks, how the language works which a chemist uses, etc. Thus empirical science presupposes a general framework of bodies, sensors, brains, languages etc. to be able to do a very specialized, but as such highly important, job. One can then define ‘philosophy’ as that kind of activity which tries to clarify all these conditions which are necessary to do science, as well as how cognition works in the general case.

Given this, one can imagine that philosophy is in principle a nearly ‘infinite’ task. To avoid getting lost in this conceptual infinity it is recommended to start with concrete processes of communication which are oriented towards generating those kinds of texts which can be shown to be ‘related to parts of the empirical world’ in a decidable way. Such texts are here called ’empirically sound’ or ’empirically true’. It is to be supposed that there will be texts for which it seems clear that they are empirically sound, while others will appear ‘fuzzy’ with regard to such a criterion, and others again will appear without any direct relation to empirical soundness.

In the empirical sciences one uses so-called empirical measurement procedures as benchmarks to decide whether one has empirical data or not, and it is commonly assumed that every ‘normal observer’ can use these data like every other ‘normal observer’. But because individual, single data have nearly no meaning on their own, one needs relations, sets of relations (models) and even complete theories to integrate the data into a context which allows some interpretation and some inferences for forecasting. But these relations, models, or theories cannot be inferred directly from the real world. They have to be created by the observers as ‘working hypotheses’ which can fit the data or not. And these constructions are grounded in highly complex cognitive processes which follow their own built-in rules and which are mostly not conscious. ‘Cognitive processes’ in biological systems, especially in human persons, are completely generated by a brain and constitute therefore a ‘virtual world’ of their own. This cognitive virtual world is not the result of a 1-to-1 mapping from the real world into the brain states. This becomes important at the moment where the brain maps this virtual cognitive world into some symbolic language L. While the symbols of a language (sounds or written signs or …) as such have no meaning, the brain enables a ‘coding’, a ‘mapping’ from symbolic expressions into different states of the brain. In the ‘light’ of such encodings the symbolic expressions have some meaning. Besides the fact that different observers can have different encodings, it is always an open question whether the encoded meaning of the virtual cognitive space has something to do with some part of the empirical reality. Empirical data generated by empirical measurement procedures can help to coordinate the virtual cognitive states of different observers with each other, but this coordination is not an automatic process. Empirically sound language expressions are difficult to get and are therefore of high value for the survival of mankind. To generate empirically sound formal theories is even more demanding, and until today there exists no commonly accepted concept of the right format of an empirically sound theory. In an era which calls itself ‘scientific’ this is a very strange fact.

EPISTEMOLOGY OF THE AAI-EXPERTS

Applying these general considerations to the AAI experts trying to construct an actor story which describes at least one possible path from a start state to a goal state, one can pick up the different languages the AAI experts are using and ask under which conditions these languages have some ‘meaning’, and under which conditions these meanings can be called ’empirically sound’.

In this book three different ‘modes’ of an actor story will be distinguished:

  1. A textual mode using some ordinary everyday language, thus using spoken language (stored in an audio file) or written language as a text.
  2. A pictorial mode using a ‘language of pictures’, possibly enhanced by fragments of texts.
  3. A mathematical mode using graphical presentations of ‘graphs’ enhanced by symbolic expressions (text), or symbolic expressions only.

For every mode it has to be shown how an AAI expert can generate an actor story out of the virtual cognitive world of his brain, and how it is possible to decide the empirical soundness of the actor story.


ADVANCED AAI-THEORY

eJournal: uffmm.org,
ISSN 2567-6458, 21.Januar 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

You can find a new version of this post here.

CONTEXT

The last official update of the AAI theory dates back to Oct 2, 2018. Since that time many new thoughts have emerged and have been configured for further extensions and improvements. Here I try to give an overview of all the currently known aspects of the expanded AAI theory as a possible guide for the further elaboration of the main text.

CLARIFYING THE PROBLEM

  1. Generally it is assumed that the AAI theory is embedded in a general systems engineering approach starting with the clarification of a problem.
  2. Two cases will be distinguished:
    1. A stakeholder is associated with a certain domain of affairs with some prominent aspect/ parameter P, and the stakeholder wants to clarify whether P poses some ‘problem’ in this domain. This presupposes some explicit ‘expectations’ E of how it should be, and some ‘findings’ x pointing to the fact that P is ‘sufficiently different’ from some expected value y > x. If the stakeholder judges this difference to be ‘important’, then P matching x will be classified as a problem, which will be documented in a ‘problem document D_p’. One can interpret this analysis as a ‘measurement’ M, written as M(P,E) = x with x < y.
    2. Given a problem document D_p, a stakeholder invites some experts to find a ‘solution’ which transfers the old ‘problem P’ into a ‘configuration S’ which at least ‘minimizes the problem P’. Thus there must exist some ‘measurements’ of the given problem P with regard to certain ‘expectations E’ functioning as a ‘norm’, M(P,E) = x, some measurements of the new configuration S with regard to the same expectations E, M(S,E) = y, and a metric which allows the judgment y > x.
  3. From this follows that already in the beginning of the analysis of a possible solution one has to refer to some measurement process M, otherwise there exists no problem P.

CHECK OF FRAMING CONDITIONS

  1. The definition of a problem P presupposes a domain of affairs which has to be characterized in at least two respects:
    1. A minimal description of an environment ENV of the problem P and
    2. a list of so-called non-functional requirements (NFRs).
  2. Within the environment it must be possible to identify at least one task T to be realized from some start state to some end state.
  3. Additionally it must be possible to identify at least one executing actor A_exec doing this task and at least one assisting actor A_ass supporting the executing actor in fulfilling the task.
  4. For the  following analysis of a possible solution one can distinguish two strategies:
    1. Top-down: There exists a group of experts EXPs which will analyze a possible solution, will test these, and then will propose these as a solution for others.
    2. Bottom-up: There exists a group of experts EXPs too but additionally there exists a group of customers CTMs which will be guided by the experts to use their own experience to find a possible solution.

ACTOR STORY (AS)

  1. The goal of an actor story (AS) is a full specification of all identified necessary tasks T which lead from a start state q* to a goal state q+, including all possible and necessary changes between the different states.
  2. A state is here considered a finite set of facts (F) which are structured as expressions from some language L distinguishing names of objects (like ‘d1’, ‘u1’, …) as well as properties of objects (like ‘being open’, ‘being green’, …) or relations between objects (like ‘the user stands before the door’). There can also be a ‘negation’ like ‘the door is not open’. Thus a collection of facts like ‘There is a door D1’ and ‘The door D1 is open’ can represent a state.
  3. Changes from one state q to a successor state q’ are described by naming the object whose action deletes previous facts or creates new facts.
  4. In this approach at least three different modes of an actor story will be distinguished:
    1. A pictorial mode generating a Pictorial Actor Story (PAS). In a pictorial mode the drawings represent the main objects with their properties and relations in an explicit visual way (like a Comic Strip).
    2. A textual mode generating a Textual Actor Story (TAS): In a textual mode a text in some everyday language (e.g. in English) describes the states and changes in plain English. Because in the case of a written text the meaning of the symbols is hidden in the heads of the writers it can be of help to parallelize the written text with the pictorial mode.
    3. A mathematical mode generating a Mathematical Actor Story (MAS): in the mathematical mode the pictorial and the textual modes are translated into sets of formal expressions forming a graph whose nodes are sets of facts and whose edges are labeled with change-expressions.

TASK INDUCED ACTOR-REQUIREMENTS (TAR)

If an actor story AS is completed, then one can infer from this story all the requirements which are directed at the executing as well as the assistive actors of the story. These requirements are targeting the needed input- as well as output-behavior of the actors from a 3rd person point of view (e.g. what kinds of perception are required, what kinds of motor reactions, etc.).

ACTOR INDUCED ACTOR-REQUIREMENTS (AAR)

Depending on the kinds of actors planned for the real work (biological systems, animals or humans; machines, different kinds of robots), one has to analyze the required internal structures of the actors needed to enable the required perceptions and responses. This has to be done from a 1st person point of view.

ACTOR MODELS (AMs)

Based on the AARs one has to construct explicit actor models which are fulfilling the requirements.

USABILITY TESTING (UTST)

Using the actor story as a ‘norm’ for the measurement, one has to organize a ‘usability test’ in the way that a real executing test actor having the required profile has to use a real assisting actor in the context of the specified actor story. Placed in the start state of the actor story, the executing test actor has to show that and how he will reach the defined goal state of the actor story. For this he has to use a real assistive actor, which usually is an experimental device (a mock-up) which allows the test of the story.

Because an executive actor is usually a ‘learning actor’, one has to repeat the usability test n times to see whether the learning curve approaches a minimum. In addition to such objective tests one should also arrange an interview to get some judgments about the subjective states of the test persons.

SIMULATION

With increasing complexity of an actor story AS it becomes important to build a simulator (SIM) which can take as input the start state of the actor story together with all possible changes. The simulator can then compute, beginning with the start state, all possible successor states. In the interactive mode participating actors will explicitly be asked to interact with the simulator.

Having a simulator, one can use it as part of a usability test to mimic the behavior of an assistive actor. This mode can also be used for training new executive actors.
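A minimal simulator of this kind can be sketched in Python (states are sets of facts as in the actor story; the single change rule is invented for the illustration):

  # A change rule: if its condition holds in a state, it yields a successor state.
  def rule_insert_t3(state):
      if ("FREE", "C22") in state:
          return (state - {("FREE", "C22")}) | {("TOKEN", "T3"), ("POSITION", "T3", 2, 2)}
      return None  # rule not applicable

  def simulate(start, rules, steps=10):
      """Compute a sequence of successor states, beginning with the start state."""
      state, history = frozenset(start), [frozenset(start)]
      for _ in range(steps):
          successors = [frozenset(s) for s in (r(state) for r in rules) if s is not None]
          if not successors:
              break               # no rule applicable: the simulation ends
          state = successors[0]   # non-interactive mode: simply take the first option
          history.append(state)
      return history

  for s in simulate({("FREE", "C22")}, [rule_insert_t3]):
      print(sorted(s))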

A TOP-DOWN ACTOR STORY

The elaboration of an actor story will usually be realized in a top-down style: some AAI experts will develop the actor story based on their experience and will only ask for some test persons once they have elaborated everything far enough to define some tests.

A BOTTOM-UP ACTOR STORY

In a bottom-up style the AAI experts collaborate from the beginning with a group of common users from the application domain. To do this they will (i) extract the knowledge which is distributed among the different users, then (ii) start some modeling from these different facts to (iii) enable some basic simulations. This simple simulation (iv) will be enhanced to an interactive simulation which allows serious gaming, either (iv.a) to test the model or (iv.b) to enable the users to learn the space of possible states. The test case will (v) generate some data which can be used to evaluate the model with regard to pre-defined goals. Depending on these findings (vi) one can try to improve the model further.

THE COGNITIVE SPACE

To be able to construct executive as well as assistive actors which are close to the way human persons communicate, one has to set up actor models which are as close as possible to the human style of cognition. This requires the analysis of phenomenal experience as well as of psychological behavior, as well as the analysis of the needed neuro-physiological structures.

STATE DYNAMICS

To model in an actor story the possible changes from one given state to another (or to many successor states), one possibly needs, besides explicit deterministic changes, different kinds of random rules together with adaptive ones, or decision-based behavior depending on a whole network of changing parameters.

QUANTUM THEORY (QT). Basic elements

eJournal: uffmm.org, ISSN 2567-6458, 2.January 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This is a continuation of the post WHY QT FOR AAI?, explaining the motivation for looking at quantum theory (QT) in the case of the AAI paradigm. After approaching QT from a philosophy of science perspective (see the post QUANTUM THEORY (QT). BASIC PROPERTIES), giving a ‘bird’s view’ of the relationship between a QT and the presupposed ‘real world’, and digging a bit into the first-person view inside an observer, we are here interested in the formal machinery of QT. For this we follow Griffiths in his chapter 1.

QT BASIC ELEMENTS

MEASUREMENT

  1. The starting point of a quantum theory QT are ‘phenomena‘, which “lack any description in classical physics”, the kind of thing “which human beings cannot observe directly”. To measure such phenomena one needs highly sophisticated machines, which poses the problem that the interpretation of possible ‘measurement data’ in terms of a quantum theory depends highly on the understanding of the workings of the used measurement apparatus. (cf. p.8)
  2. This problem is well known in philosophy of science: (i) one wants to build a new theory T. (ii) For this theory one needs appropriate measurement data MD. (iii) The measurement as such needs a well-defined procedure including different kinds of pre-defined objects and artifacts. The description of the procedure including the artifacts (which can be machines) is a theory of its own, called a measurement theory T*. (iv) Thus one needs a theory T* to enable a new theory T.
  3. In the case of QT one has the special case that QT itself has to be part of the measurement theory T*, i.e. QT ⊆ T*. But, as Griffiths points out, the measurement problem in QT is even deeper; it is not only the conceptual dependency of QT on its measurement theory T*: in the case of QT the measurement apparatus directly interacts with the target objects of QT, because the measurement apparatus is itself part of the atomic and sub-atomic world which is the target. (cf. p.8) This has led to including the measurement as ‘stochastic time development’ explicitly in QT. (cf. p.8) In his book Griffiths follows the strategy of dealing with the ‘collapse of the wave function’ on the theoretical level, because it does not take place “in the experimental physicist’s laboratory”. (cf. p.9)
  4. As a consequence of these considerations Griffiths develops the fundamental principles in the chapters 2-16 without making any reference to measurement.

PRE-KNOWLEDGE

  1. Besides the special problem of measurement in quantum mechanics there is the general problem of measurement for every kind of empirical discipline, which requires a perception of the real world guided by a scientific bias called ‘scientific knowledge’! Without theoretical pre-knowledge no scientific observation is possible. A scientific observation needs a pre-theory T* defining the measurement procedure as well as the pre-defined standard object as well as, possibly, an ‘appropriate’ measurement device. Furthermore, to be able to talk about some measurement data as ‘data related to an object of QT’, one needs additionally a sufficient ‘pre-knowledge’ of such an object, which enables the observer to decide whether the measured data are to be classified as ‘related to the object of QT’. The most convenient way to enable this is to have already a proposal for a QT as the ‘knowledge guide’ for how one ‘should look’ at the measured data.

QT STATES

  1. Related to the phenomena of quantum mechanics the phenomena are in QT according to Griffiths understood as ‘particles‘ whose ‘state‘ is given by a ‘complex-valued wave function ψ(x)‘, and the collection of all possible wave functions is assumed to be a ‘complex linear vector space‘ with an ‘inner product’, known as a ‘Hilbert space‘. “Two wave functions φ(x) and ψ(x) represent ‘distinct physical states’ … if and only if they are ‘orthogonal’ in the sense that their ‘inner product is zero’. Otherwise φ(x) and ψ(x) represent incompatible states of the quantum system …” .(p.2)
  2. “A quantum property … corresponds to a subspace of the quantum Hilbert space or the projector onto this subspace.” (p.2)
  3. A sample space of mutually exclusive possibilities is a decomposition of the identity as a sum of mutually commuting projectors. One and only one of these projectors can be a correct description of a quantum system at a given time. (cf. p.3)
  4. Quantum sample spaces can be mutually incompatible. (cf. p.3)
  5. “In … quantum mechanics [a physical variable] is represented by a Hermitian operator.… a real-valued function defined on a particular sample space, or decomposition of the identity … a quantum system can be said to have a value … of a physical variable represented by the operator F if and only if the quantum wave function is in an eigenstate of F … . Two physical variables whose operators do not commute correspond to incompatible sample spaces… “.(cf. p.3)
  6. “Both classical and quantum mechanics have dynamical laws which enable one to say something about the future (or past) state of a physical system if its state is known at a particular time. … the quantum … dynamical law … is the (time-dependent) Schrödinger equation. Given some wave function ψ_0 at a time t_0 , integration of this equation leads to a unique wave function ψ_t at any other time t. At two times t and t’ these uniquely defined wave functions are related by a … time development operator T(t’ , t) on the Hilbert space. Consequently we say that integrating the Schrödinger equation leads to unitary time development.” (p.3)
  7. “Quantum mechanics also allows for a stochastic or probabilistic time development … . In order to describe this in a systematic way, one needs the concept of a quantum history … a sequence of quantum events (wave functions or sub-spaces of the Hilbert space) at successive times. A collection of mutually … exclusive histories forms a sample space or family of histories, where each history is associated with a projector on a history Hilbert space. The successive events of a history are, in general, not related to one another through the Schrödinger equation. However, the Schrödinger equation, or … the time development operators T(t’ , t), can be used to assign probabilities to the different histories belonging to a particular family.” (p.3f)

HILBERT SPACE: FINITE AND INFINITE

  1. “The wave functions for even such a simple system as a quantum particle in one dimension form an infinite-dimensional Hilbert space … [but] one does not have to learn functional analysis in order to understand the basic principles of quantum theory. The majority of the illustrations used in Chs. 2–16 are toy models with a finite-dimensional Hilbert space to which the usual rules of linear algebra apply without any qualification, and for these models there are no mathematical subtleties to add to the conceptual difficulties of quantum theory … Nevertheless, they provide many useful insights into general quantum principles.”. (p.4f)

CALCULUS AND PROBABILITY

  1. Griffiths (2003) makes considerable use of toy models with a simple discretized time dependence … To obtain … unitary time development, one only needs to solve a simple difference equation, and this can be done in closed form on the back of an envelope. (cf. p.5f)
  2. “Probability theory plays an important role in discussions of the time development of quantum systems. … when using toy models the simplest version of probability theory, based on a finite discrete sample space, is perfectly adequate.” (p.6)
  3. “The basic concepts of probability theory are the same in quantum mechanics as in other branches of physics; one does not need a new “quantum probability”. What distinguishes quantum from classical physics is the issue of choosing a suitable sample space with its associated event algebra. … in any single quantum sample space the ordinary rules for probabilistic reasoning are valid. ” (p.6)

QUANTUM REASONING

  1. The important difference compared to classical mechanics is the fact that “an initial quantum state does not single out a particular framework, or sample space of stochastic histories, much less determine which history in the framework will actually occur.” (p.7) There are multiple incompatible frameworks possible, and using the ordinary rules of propositional logic presupposes applying them to a single framework. Therefore it is important to understand how to choose an appropriate framework. (cf. p.7)

NEXT

These are the basic ingredients which Griffiths mentions in chapter 1 of his book (2003). In the following, these ingredients have to be understood far enough that it becomes clear how to relate them to the idea of a possible history of states (cf. chapters 8ff), where the future of a successor state in a sequence of temporally separated states is described by some probability.

REFERENCES

  • R.B. Griffiths. Consistent Quantum Theory. Cambridge University Press, New York, 2003


BACKGROUND INFORMATION 27.Dec.2018: The AAI-paradigm and Quantum Logic. The Limits of Classic Probability

eJournal: uffmm.org, ISSN 2567-6458
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last Corrections: 30.Dec.2018

CONTEXT

This is a continuation of the post about QL Basic Concepts Part 1. The general topic here is the analysis of properties of human behavior, actually narrowed down to statistical properties. From the different possible theories applicable to statistical properties of behavior, the one called CPT (classical probability theory) is selected here for a short examination.

SUMMARY

An analysis of classical probability theory shows that the empirical application of this theory is limited to static sets of events and probabilities. In the case of biological systems, which are adaptive with regard to structure and cognition, this does not work. This raises the question whether a quantum probability theory approach works or not.

THE CPT IDEA

  1. Before we look at the case of quantum probability theory (QLPT), let us examine the case of a classical probability theory (CPT) a little bit more.
  2. Generally one has to distinguish the symbolic formal representation of a theory T and some domain of application D distinct from the symbolic representation.
  3. In principle the domain of application D can be nearly anything, very often again another symbolic representation. But in the case of empirical applications we assume usually some subset of ’empirical events’ E of the ’empirical (real) world’ W.
  4. For the following let us assume (for a while) that this is the case, that D is a subset of the empirical world W.
  5. Talking about ‘events in an empirical real world’ presupposes that there exists a ‘procedure of measurement‘ using a ‘previously defined standard object‘ and a ‘symbolic representation of the measurement results‘.
  6. Furthermore one has to assume a community of ‘observers’ which have minimal capabilities to ‘observe’: making ‘distinctions between different results’, establishing some ‘ordering of successions (before – after)’, ‘attaching symbols according to some rules’ to measurement results, and ‘translating measurement results’ into more abstract concepts and relations.
  7. Thus to speak about empirical results assumes a finite set of symbolic representations of those events which represent a ‘state in the real world’, which can have a ‘predecessor state before’ and, possibly, a ‘successor state after’ the ‘actual’ state. The ‘quality’ of these measurement representations depends on the quality of the measurement procedure as well as on the quality of the cognitive capabilities of the participating observers.
  8. In the classical probability theory T_cpt as described by Kolmogorov (1932) it is assumed that there is a set E of ‘elementary events’. The set E is assumed to be ‘complete’ with regard to all possible events. The probability P comes into play with a mapping from E into the set of positive real numbers R+, written P: E —> R+, with the assumption that all the individual elements e_i of E have an individual probability P(e_i) obeying the rule P(e_1) + P(e_2) + … + P(e_n) = 1.
  9. In the formal theory T_cpt it is not explained ‘how’ the probabilities are realized in the concrete case. In the ‘real world’ we have to identify some ‘generators of events’ G, otherwise we do not know whether an event e belongs to a ‘set of probability events’.
  10. Kolmogorov (1932) speaks about a necessary generator as a ‘set of conditions’ which ‘allows of any number of repetitions’, such that ‘a set of events can take place as a result of the establishment of the conditions’. (cf. p.3) And he mentions explicitly the case that different variants of the a priori assumed possible events can take place as a set A. And then he speaks of this set A also as an event which has taken place! (cf. p.4)
  11. If one looks at the case of the ‘set A’, one has to clarify that this ‘set A’ is not an ordinary set of set theory, because in a set every member occurs only once. Instead ‘A’ represents a ‘sequence of events out of the basic set E’. A sequence is in set theory an ‘ordered set’, where some set (e.g. E) is mapped into an initial segment of the natural numbers Nat; in this case the set A contains ‘pairs from E x Nat|n’ with a restriction of the set Nat to some n. The ‘range’ of the set A then has ‘distinguished elements’, whereas the ‘domain’ can have ‘identical elements’. Kolmogorov addresses this problem with the remark that the set A can be ‘defined in any way’. (cf. p.4) Thus to assume the set A as a set of pairs from the Cartesian product E x Nat|n, with the natural numbers taken from an initial segment of the natural numbers, is compatible with the remark of Kolmogorov and with the empirical situation.
  12. For a possible observer it follows that he must be able to distinguish different states <s1, s2, …, sm> following each other in the real world, where in every state there is an event e_i from the set of a priori possible events E. The observer can ‘count’ the occurrences of a certain event e_i and thus, after n repetitions, will get for every event e_i a number of occurrences m_i, with m_i/n giving the measured empirical probability of the event e_i.
  13. Example 1: Tossing a coin with ‘head (H)’ or ‘tail (T)’ we have theoretically the probability 1/2 for each event. A possible outcome could be (with ‘H’ := 0, ‘T’ := 1): <(0,1), (0,2), (0,3), (1,4), (0,5)>. Thus we have m_H = 4, m_T = 1, giving us m_H/n = 4/5 and m_T/n = 1/5. The sum yields m_H/n + m_T/n = 1, but as one can see the individual empirical probabilities are not in accordance with the theory, which requires 1/2 for each. Kolmogorov remarks in his text that if the number of repetitions n is large enough, then the values of the empirically measured probability will approach the theoretically defined values. In a simple experiment with a random number generator simulating the tossing of the coin n = 10000 times I got the numbers m_Head = 4978 and m_Tail = 5022, which gives the empirical probabilities m_Head/10000 = 0.4978 and m_Tail/10000 = 0.5022 (see the sketch after this list).
  14. This example demonstrates that, while the theoretical term ‘probability’ is a simple number, the empirical counterpart of the theoretical term is either a simple occurrence of a certain event without any meaning as such, or an empirically observed sequence of events which can reveal, by counting and division, a property which can be used as the empirical probability of this event, generated by a ‘set of conditions’ which allows the observed number of repetitions. Thus we have (i) a ‘generator‘ enabling the events out of E, (ii) a ‘measurement‘ giving us a measurement result as part of an observation, (iii) the symbolic encoding of the measurement result, (iv) the ‘counting‘ of the symbolic encoding as an ‘occurrence‘, (v) the counting of the overall repetitions, and (vi) a ‘mathematical division operation‘ to get the empirical probability.
  15. Example 1 demonstrates the case of having one generator (‘tossing a coin’). We know of other examples where people use two or more coins ‘at the same time’! In this case the set of a priori possible events E occurs ‘n times in parallel’: E x … x E = E^n. While for every coin only one of the possible basic events can occur in one state, there can be n-many such events in parallel, giving an assembly of n-many events, each out of E. If we keep the values E = {‘H’, ‘T’} and take n = 2, then we have four different basic configurations, each with probability 1/4. If we define more ‘abstract’ events like ‘both the same’ (like ‘0,0’, ‘1,1’) or ‘both different’ (like ‘0,1’, ‘1,0’), then we have new types of complex events with different probabilities, here 1/2 each, as the sketch below shows. Thus the case of n-many generators in parallel allows new types of complex events.
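The probabilities of the basic configurations and of the complex events ‘both the same’ / ‘both different’ from point 15 can be checked by a small enumeration; the following Python lines are only an illustrative sketch:

```python
from itertools import product

E = (0, 1)  # H := 0, T := 1

# All basic configurations of two coins in parallel: E x E
configs = list(product(E, repeat=2))   # [(0,0), (0,1), (1,0), (1,1)]

# Each basic configuration has probability 1/4.
p_basic = {c: 1 / len(configs) for c in configs}

# Complex events defined over the basic configurations:
p_same = sum(p for c, p in p_basic.items() if c[0] == c[1])  # (0,0), (1,1)
p_diff = sum(p for c, p in p_basic.items() if c[0] != c[1])  # (0,1), (1,0)

print(p_basic)          # every basic configuration: 0.25
print(p_same, p_diff)   # both complex events: 0.5 0.5
```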
  16. Following this line of thinking one could consider cases like (E^n)^n or even repeated applications of the Cartesian product operation. Thus, in the case of (E^n)^n, one can think of different gamblers, each having n-many dice in a cup and tossing these n dice simultaneously.
  17. Thus we have something like the following structure for an empirical theory of classical probability: CPT(T) iff T = <G,E,X,n,S,P*>, with ‘G’ as the set of generators producing events out of E according to the layout of the set X in a static (deterministic) manner. Here the set E is the set of basic events. The set X is a ‘typified set’ constructed out of the set E with t-many applications of the Cartesian product operation, starting with E, then E^n1, then (E^n1)^n2, … . ‘n’ denotes the number of repetitions, which determines the length of a sequence ‘S’. ‘P*’ represents the ’empirical probability’ which approaches the theoretical probability P as n becomes ‘big’. P* is realized as a tuple of tuples according to the layout of the set X, where each element in the range of a tuple represents the ‘number of occurrences’ of a certain event out of X.
  18. Example: If there is a set E = {0,1} with the layout X = (E^2)^2, then we have two groups with two generators each: <<G1,G2>,<G3,G4>>. Every generator G_i produces events out of E. In one state i this could look like <<0,0>,<1,0>>. As part of a sequence S this would look like S = <…, (<<0,0>,<1,0>>, i), …>, telling that in the i-th state of S there is an occurrence of events as shown. The empirical probability function P* has a corresponding layout P* = <<m1,m2>,<m3,m4>>, with the m_j as ‘counters’ which count the occurrences of the different types of events as m_j = <c_e1, …, c_er>. In the example there are two different types of events {0,1}, which requires two counters c_0 and c_1; thus we would have m_j = <c_0,c_1>, which induces for this example the global counter structure: P* = <<<c_0,c_1>,<c_0,c_1>>,<<c_0,c_1>,<c_0,c_1>>>. If the generators are all the same, then the set of basic events E is the same, and in theory the theoretical probability function P: E —> R+ would induce the same global values for all generators (a small sketch follows below). But in the empirical case, if the theoretical probability function P is not known, then one has to count, and below the ‘magic big n’ the values of the counters of the empirical probability function can differ.
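The global counter structure of point 18 could, for instance, be realized as nested lists of counters; the following Python sketch is one possible rendering, under the assumption that all four generators are fair:

```python
import random

E = (0, 1)                 # set of basic events
GROUPS, PER_GROUP = 2, 2   # layout X = (E^2)^2: <<G1,G2>,<G3,G4>>
n = 10000                  # number of repetitions

# Global counter structure P*: one counter per basic event, per generator,
# arranged according to the layout of X.
P_star = [[[0, 0] for _ in range(PER_GROUP)] for _ in range(GROUPS)]

for _ in range(n):                    # one state of the sequence S per iteration
    for g in range(GROUPS):
        for j in range(PER_GROUP):
            e = random.choice(E)      # generator G_i produces an event out of E
            P_star[g][j][e] += 1      # counter c_e for this generator

# Empirical probabilities: counter values divided by n.
probs = [[[c / n for c in gen] for gen in group] for group in P_star]
print(probs)  # all values near 0.5 if the generators are all the same
```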
  19. This format of the empirical classical probability theory CPT can handle the case of ‘different generators‘ which produce events out of the same basic set E but with different probabilities, which can be detected by counting with the empirical probability function P*. A prominent case of different probabilities with the same set of events is the manipulation of generators (a coin, a die, a roulette wheel, …) to deceive other people.
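A manipulated generator in the sense of point 19 can be modelled by shifting the weight of one basic event; the weight 0.7 below is an arbitrary assumption for illustration:

```python
import random

def generator(p_head: float) -> int:
    """A generator over E = {H := 0, T := 1}; p_head = 0.5 is fair,
    any other value models a manipulated coin."""
    return 0 if random.random() < p_head else 1

def empirical_probability(p_head: float, n: int = 10000) -> float:
    """Count the occurrences of H over n repetitions and return m_H/n."""
    m_head = sum(1 for _ in range(n) if generator(p_head) == 0)
    return m_head / n

print(empirical_probability(0.5))  # fair coin: near 0.5
print(empirical_probability(0.7))  # manipulated coin: near 0.7 -- P* reveals the deception
```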
  20. In the examples mentioned so far the probabilities of the basic events as well as of the complex events can differ between generators, but they are nevertheless ‘static’, not changing. Looking at generators like ‘tossing a coin’ or ‘tossing a die’ this seems to be sound. But what if we look at other types of generators like ‘biological systems’ which have to ‘decide’ which of the possible options of acting they ‘choose’? If the set of possible actions A is static, then the probability of selecting one action a out of A will usually depend on some ‘inner states’ IS of the biological system. These inner states IS need at least the following two components: (i) an internal ‘representation of the possible actions’ IS_A as well as (ii) a finite set of ‘preferences’ IS_Pref. Depending on the preferences the biological system will select an action IS_a out of IS_A and then it can generate an action a out of A.
  21. If biological systems as generators have a ‘static’ (‘deterministic’) set of preferences IS_Pref, then they will act like the fixed generators for ‘tossing a coin’ or ‘tossing a die’. In this case nothing will change. But, as we know from the empirical world, biological systems are in general ‘adaptive’ systems, which enables two kinds of adaptation: (i) ‘structural‘ adaptation as in biological evolution and (ii) ‘cognitive‘ adaptation as with higher organisms having a neural system with a brain. In these systems (example: Homo sapiens) the set of preferences IS_Pref can change in time, as can the internal ‘representation of the possible actions’ IS_A. These changes cause a shift in the probabilities of the events manifested in the realized actions!
  22. If we allow possible changes in the terms ‘G’ and ‘E’ towards ‘G+’ and ‘E+’, then we no longer have a ‘classical’ probability theory CPT. This new type of probability theory we can call ‘non-classical’ probability theory NCPT. A short notation could be: NCPT(T) iff T = <G+,E+,X,n,S,P*>, where ‘G+’ represents an adaptive biological system with changing representations of possible actions A* as well as changing preferences IS_Pref+ (a simple sketch follows below). The interesting question is whether a quantum logic approach QLPT is a possible realization of such a non-classical probability theory. While it is known that the QLPT works for physical matter, it is an open question whether it works for biological systems too.
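How changing preferences IS_Pref shift the empirical probabilities P* can be sketched as follows; the simple reinforcement rule (increasing the weight of the action chosen last) is my assumption for illustration, not a claim about real biological systems:

```python
import random

# Inner state of an adaptive generator G+:
# IS_A: representation of the possible actions, IS_Pref: preferences (weights).
IS_A = ['a1', 'a2']
IS_Pref = {'a1': 1.0, 'a2': 1.0}

def select_action() -> str:
    """Select an action out of IS_A with probability proportional to IS_Pref."""
    actions, weights = zip(*IS_Pref.items())
    return random.choices(actions, weights=weights)[0]

def adapt(chosen: str) -> None:
    """Cognitive adaptation: reinforce the chosen action (assumed update rule)."""
    IS_Pref[chosen] += 0.1

counts = {'a1': 0, 'a2': 0}
for _ in range(10000):
    a = select_action()
    adapt(a)            # the preferences -- and thus the probabilities -- drift
    counts[a] += 1

print({a: m / 10000 for a, m in counts.items()})
# Unlike a static generator, repeated runs of this adaptive generator can
# drift towards very different empirical probabilities P*.
```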
  23. REMARK: Switching from static generators to adaptive generators induces the need to include the environment of the adaptive generators, since ‘adaptation’ is generally a capacity to deal better with non-static environments.

See continuation here.