Category Archives: empirical theory

ChatGPT4: How do you define the concept ‘empirical theory’?

Last Change: July 16, 2024

Inquirer: Gerd Doeben-Henisch

Email: info@uffmm.org

CONTEXT

This post is part of the Dialogues with chatGPT4.

How do you define the concept ‘empirical theory’?

SUMMARY

At the end of the dialogue the inquirer stated: I agree with your conclusion. But this conclusion is not a sufficient answer to my question. Somehow you are stating that it is a “significant challenge” without having a clear answer today. Is this right?

chatGPT4: Yes, that’s a fair assessment. Integrating different formalisms across various scientific disciplines remains a significant challenge and an area of ongoing research and development. While there are strategies and approaches that can aid in this integration, as mentioned earlier, there is no universally applicable solution that works seamlessly across all fields and for all types of problems.

Protocol of the Dialogue

QUESTION: How do you define the concept ‘empirical theory’?

The concept of an “empirical theory” generally refers to a theory that is grounded in and supported by observations and data derived from real-world experiences, experiments, or measurements, rather than purely theoretical or abstract reasoning. Here are some key aspects that define an empirical theory:

1. **Data-Driven**: An empirical theory is based on data collected through observations, experiments, or other forms of empirical research. The theory must be testable and measurable against the natural world.

2. **Predictive Power**: It typically provides a framework that not only explains current observations but also predicts future occurrences or behaviors. This predictive aspect is crucial for the validation and utility of the theory.

3. **Falsifiability**: A core principle in empirical theories is that they must be falsifiable. This means there must be some conceivable observation or experiment that could refute the theory if it were not true, making it subject to potential disproof.

4. **Replicability**: The results that support an empirical theory should be replicable by independent studies. This is important to ensure that the theory is robust and not the result of experimental errors or biases.

5. **Revision and Refinement**: Empirical theories are not static; they evolve over time as new data become available and as old assumptions are tested. Theories may be refined or discarded depending on how well they continue to match observations.

Empirical theories are fundamental in sciences such as physics, chemistry, biology, and social sciences, where theories must be grounded in observable phenomena and subject to experimental validation.

QUESTION: You mention ‘Predictive Power’ as one key property of an empirical theory. What are the formalisms within a theory that enable ‘predictive power’?

The concept of “predictive power” in an empirical theory is central to its utility and credibility. Predictive power allows a theory to forecast future events or phenomena based on existing data and principles. Several formalisms within a theory enable this predictive capability (minimal code sketches for some of them follow the list):

1. **Mathematical Models**: Many scientific theories rely on mathematical models that describe relationships between variables quantitatively. These models often take the form of equations that can be used to make predictions. For example, Newton’s laws of motion use differential equations to predict the future position and velocity of moving objects.

2. **Statistical Models**: These models use statistical methods to infer and predict outcomes based on data. Regression analysis, for example, can predict a dependent variable based on one or more independent variables. Machine learning algorithms also fall into this category, where models are trained on historical data to predict outcomes from new data.

3. **Hypotheses and Propositions**: Theories often include specific hypotheses that are testable statements derived from general principles of the theory. These hypotheses can be used to predict outcomes in specific situations, which can then be tested through experiments or observations.

4. **Simulations**: In some fields, particularly in complex systems like climate science or economics, simulations play a crucial role in prediction. These simulations are typically run on computational models that incorporate many variables and their interactions to forecast future states.

5. **Algorithmic Rules**: Especially in theories related to computer science or systems theory, algorithmic rules that specify sequences of actions or decisions based on input conditions can predict outcomes of processes or the behavior of systems.

6. **Causal Inference**: Identifying causal relationships within data allows theories to predict the effects of changing one variable on another. This is fundamental in fields like epidemiology and economics, where understanding cause and effect is crucial for making predictions.
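
To make the first of these formalisms concrete, here is a minimal Python sketch (the scenario and all numbers are invented, for illustration only): it integrates Newton's second law for a freely falling body with simple Euler steps and thereby predicts a future velocity and position.

```python
# Minimal sketch of formalism 1 (mathematical models): predicting the
# future velocity and height of a falling body from Newton's second
# law, integrated with simple Euler steps. All numbers are invented.

def predict_fall(v0: float, y0: float, t_end: float, dt: float = 0.001):
    """Return predicted (velocity, height) after t_end seconds."""
    g = 9.81            # gravitational acceleration in m/s^2
    v, y, t = v0, y0, 0.0
    while t < t_end:
        v -= g * dt     # dv/dt = -g
        y += v * dt     # dy/dt = v
        t += dt
    return v, y

v, y = predict_fall(v0=0.0, y0=100.0, t_end=2.0)
print(f"predicted after 2 s: v = {v:.2f} m/s, y = {y:.2f} m")
```

Formalism 2 can be sketched just as briefly (again with invented data): an ordinary least-squares line is fitted to a handful of observations and then used to predict the dependent variable for a new input.

```python
# Minimal sketch of formalism 2 (statistical models): ordinary least
# squares fit of y = a*x + b to invented data, then prediction.

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]           # roughly y = 2x

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)  # slope estimate
b = mean_y - a * mean_x                   # intercept estimate

x_new = 6.0
print(f"prediction for x = {x_new}: y = {a * x_new + b:.2f}")
```

And formalism 6 can be hinted at with a toy structural causal model (the structural equation is an invented assumption): setting X by intervention and comparing outcomes estimates the effect of changing one variable on another.

```python
# Minimal sketch of formalism 6 (causal inference): a toy structural
# causal model with the structural equation Y := 2*X + noise. Setting
# X by intervention and comparing outcomes estimates the causal effect.

import random

def sample_y(x: float) -> float:
    """Structural equation for Y given an intervention on X."""
    return 2.0 * x + random.gauss(0.0, 0.1)

random.seed(0)
trials = 10_000
effect = sum(sample_y(1.0) - sample_y(0.0) for _ in range(trials)) / trials
print(f"estimated effect of X on Y: {effect:.2f} (true value: 2.0)")
```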

These formalisms are not only instrumental in predicting future events but also in confirming the validity of the theories themselves. By successfully predicting outcomes, theories validate their underlying assumptions and enhance their credibility and acceptance in the scientific community.

QUESTION: You mention six different formalisms that enable predictions within an empirical theory. But these six formalisms are as such formally not compatible. Does this mean that we are practicing six different kinds of theories which are ‘incompatible’?

The presence of various formalisms within empirical theories, each suitable for different types of data, phenomena, or disciplines, does not necessarily imply that these theories are “incompatible” in the broader sense of scientific inquiry. Instead, it reflects the diverse nature of the world and the complexity of phenomena that science seeks to understand and predict. Here’s how these differences can be viewed:

1. **Complementarity**: Different formalisms often address different aspects of phenomena or work best under different conditions. For instance, statistical models may be best for systems where direct causal relationships are difficult to measure due to noise or complexity, while causal inference is preferred in situations where the relationship between variables is clear and measurable. These approaches are complementary rather than incompatible.

2. **Interdisciplinary Integration**: In many fields, insights from multiple formalisms are integrated to form a more comprehensive understanding. For example, in climate science, mathematical models of physical processes are combined with statistical models derived from historical data to make more accurate predictions about future climate conditions.

3. **Scalability and Applicability**: Some formalisms are more applicable at certain scales or in certain contexts. For instance, algorithmic rules might be most applicable in computer science and certain areas of engineering, while mathematical models are often used in physics and chemistry across various scales.

4. **Evolution of Theories**: Theories can evolve over time as new tools and formalisms are developed. This evolution can lead to the merging of previously distinct approaches or the refinement of existing theories to incorporate new findings.

5. **Methodological Pluralism**: The diversity of formalisms is a strength that allows researchers to tackle complex questions from multiple angles. Methodological pluralism acknowledges that no single approach can capture all aspects of complex realities, and different methods can provide different insights that are valid within their own scopes.

In conclusion, rather than viewing different formalisms as leading to incompatible theories, it’s more productive to see them as different tools in a toolkit, each useful for certain types of problems. The integration of these tools across disciplines and their ongoing development is part of what drives scientific progress.

QUESTION: That ‘different formalisms’ can be seen as ‘each useful for certain types of problems’ is OK. But this does not answer the basic question of how to ‘integrate different formalisms’. Do you know any kind of strategy which solves the problem on a general formal level?

Integrating different formalisms from diverse scientific disciplines is a complex challenge that often requires a multi-faceted approach. Here are some strategies that can help address this issue on a more formal level:

### 1. **Unified Modeling Frameworks**

One approach to integrating different formalisms is to use unified modeling frameworks that can accommodate various types of models. For example, systems theory and network theory provide frameworks that can model physical, biological, social, and information systems. These frameworks allow different types of data and relationships to be incorporated into a cohesive model. Examples include agent-based modeling and system dynamics, which can integrate both qualitative and quantitative data.
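
As a minimal illustration of the agent-based idea just mentioned (a sketch only; the agents, rules, and numbers are invented and do not come from any specific framework): each agent carries a quantitative state and a qualitative state, and both are updated within one common simulation loop, which is what lets such frameworks integrate both kinds of data.

```python
# Minimal sketch of an agent-based model integrating a quantitative
# state (wealth) and a qualitative state (mood) in one simulation loop.
# All rules and numbers are invented for illustration.

import random

random.seed(1)
agents = [{"wealth": 100.0, "mood": "neutral"} for _ in range(50)]

def step(agents):
    for agent in agents:
        agent["wealth"] *= random.uniform(0.95, 1.06)      # quantitative rule
        agent["mood"] = ("happy" if agent["wealth"] > 100  # qualitative rule
                         else "unhappy")

for _ in range(10):
    step(agents)

happy = sum(a["mood"] == "happy" for a in agents)
print(f"after 10 steps: {happy}/{len(agents)} agents are happy")
```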

### 2. **Meta-Analysis and Systematic Reviews**

In areas like medicine and social sciences, meta-analysis and systematic reviews are used to synthesize results from multiple studies that may use different methodologies and formalisms. This approach provides a way to quantitatively combine data from different sources to arrive at more general conclusions.
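
The quantitative core of such a synthesis can be shown in a few lines (a minimal fixed-effect meta-analysis; the effect sizes and variances are invented for illustration): the effect reported by each study is weighted by the inverse of its variance, so that more precise studies count more.

```python
# Minimal sketch of a fixed-effect meta-analysis: pooling effect sizes
# from several studies by inverse-variance weighting.
# Effect sizes and variances are invented for illustration.

effects = [0.30, 0.55, 0.40]     # effect size reported by each study
variances = [0.04, 0.09, 0.02]   # sampling variance of each estimate

weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
print(f"pooled effect size: {pooled:.2f}")
```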

### 3. **Interdisciplinary Collaboration**

Active collaboration across disciplines can facilitate the integration of different formalisms by leveraging the expertise of specialists in each area. This might involve joint research projects, interdisciplinary teams, and cross-disciplinary training programs that encourage a deeper understanding and adaptation of diverse methodologies.

### 4. **Mathematical and Statistical Bridging Methods**

Techniques such as Bayesian statistics, Monte Carlo methods, and other computational approaches can serve as bridges between different formalisms. These methods allow for the combination of data and predictions from different types of models, even when these models are based on different foundational assumptions.
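
One simple instance of such a bridge is Bayesian model averaging, sketched minimally below (the two models, the observation, and the Gaussian error model are all invented assumptions): predictions from two differently founded models are combined, weighted by how well each explained a past observation.

```python
# Minimal sketch of Bayesian model averaging: combining the predictions
# of two models, weighted by their likelihood on a past observation.
# All numbers and the Gaussian error model are invented assumptions.

import math

def gaussian_likelihood(observed: float, predicted: float, sigma: float = 1.0):
    return math.exp(-((observed - predicted) ** 2) / (2 * sigma ** 2))

past_observation = 10.0
model_a_past, model_b_past = 9.5, 12.0   # what each model predicted before
model_a_next, model_b_next = 11.0, 13.5  # what each model predicts next

w_a = gaussian_likelihood(past_observation, model_a_past)
w_b = gaussian_likelihood(past_observation, model_b_past)
combined = (w_a * model_a_next + w_b * model_b_next) / (w_a + w_b)
print(f"combined prediction: {combined:.2f}")
```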

### 5. **Semantic Integration**

This involves developing a common vocabulary and set of concepts that can be shared across disciplines. Ontologies and semantic web technologies are useful in this context, as they help define relationships between different concepts and facilitate data integration across diverse datasets.
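
A deliberately tiny sketch of this idea (the vocabulary and records are invented; real ontologies and semantic web technologies are far richer): a shared mapping translates the differing terms of two datasets onto common concepts so that their records can be merged.

```python
# Minimal sketch of semantic integration: a small shared "ontology"
# maps the differing terms of two datasets onto common concepts.
# All terms and records are invented for illustration.

ontology = {
    "heart attack": "myocardial_infarction",
    "MI": "myocardial_infarction",
    "high blood pressure": "hypertension",
    "HTN": "hypertension",
}

clinical_notes = [{"diagnosis": "heart attack"}, {"diagnosis": "HTN"}]
registry_rows = [{"code": "MI"}, {"code": "high blood pressure"}]

normalized = ([ontology[r["diagnosis"]] for r in clinical_notes]
              + [ontology[r["code"]] for r in registry_rows])
print(sorted(set(normalized)))   # the two datasets now share one vocabulary
```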

### 6. **Hybrid Models**

Creating hybrid models that incorporate elements from different formalisms can be effective, especially in engineering and environmental science. For example, combining physical laws with empirical data within a computational model can yield predictions that are more accurate and robust than those obtained from a single formalism.
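
A minimal sketch of the hybrid idea (the measurements are invented and the correction is deliberately naive; real hybrid models are far more elaborate): the physical law of free fall supplies the structure of the prediction, and an empirical correction factor estimated from data adjusts it.

```python
# Minimal sketch of a hybrid model: the physical law s = 0.5*g*t^2 is
# combined with an empirical correction factor estimated from
# (invented) measurements that fall slightly below the ideal law.

fall_times = [1.0, 2.0, 3.0]        # invented measurement times in s
measured_dist = [4.7, 18.9, 42.0]   # invented measured distances in m

g = 9.81
# Empirical part: average ratio of measured to ideal distance.
ratios = [d / (0.5 * g * t**2) for t, d in zip(fall_times, measured_dist)]
correction = sum(ratios) / len(ratios)

t_new = 4.0
prediction = correction * 0.5 * g * t_new**2   # physics times correction
print(f"hybrid prediction at t = {t_new} s: {prediction:.1f} m")
```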

### 7. **Philosophical and Methodological Reconciliation**

Addressing underlying philosophical and methodological differences is crucial for integrating formalisms. This might involve developing new methodologies that acknowledge and bridge epistemological differences, such as combining qualitative and quantitative research methods.

### Conclusion

While integrating different formalisms poses significant challenges, these strategies offer pathways to build more comprehensive and universally applicable theories. The process is inherently iterative and requires ongoing dialogue between different scientific domains. This integration is essential for tackling complex, multifaceted problems that no single discipline can solve alone.

QUESTION: I agree with your conclusion. But this conclusion is not a sufficient answer to my question. Somehow you are stating that it is a “significant challenge” without having a clear answer today. Is this right?

Yes, that’s a fair assessment. Integrating different formalisms across various scientific disciplines remains a significant challenge and an area of ongoing research and development. While there are strategies and approaches that can aid in this integration, as mentioned earlier, there is no universally applicable solution that works seamlessly across all fields and for all types of problems.

The complexity of integrating formalisms arises from several factors:

1. **Epistemological Differences**: Different disciplines often operate under fundamentally different assumptions and theories about the nature of knowledge and reality. These differences can make it challenging to create a common framework that is accepted and useful across disciplines.

2. **Methodological Diversity**: Each field has developed its methodologies and tools that are optimized for its specific types of data and problems. Bridging these methodological gaps requires not just technical solutions but often a rethinking of foundational principles.

3. **Data Incompatibility**: Diverse types of data, from quantitative measurements to qualitative observations, can be difficult to integrate due to differences in scale, units, and interpretations.

4. **Communication Barriers**: Effective interdisciplinary collaboration, which is crucial for integrating formalisms, is often hindered by differences in terminology, priorities, and research cultures.

The strategies I mentioned (unified modeling frameworks, interdisciplinary collaboration, hybrid models, etc.) represent efforts to address these challenges, but they are more about mitigating the difficulties rather than completely solving them. Progress is being made, especially with the advent of advanced computational techniques and a growing emphasis on interdisciplinary research, but fully integrating diverse scientific formalisms into a coherent, universally applicable framework remains an aspirational goal rather than a current reality.

Thus, while significant strides can be and have been made in specific contexts or projects, the broader challenge persists and is a dynamic field of academic inquiry and practical application.

ANSWER of INQUIRER: Thanks for this dialogue 🙂

You’re welcome! I’m glad we could discuss this topic together. If you have any more questions or need further clarification on anything else, feel free to ask. Happy thinking! 😊

REVIEW: Keith E. Stanovich, Richard F. West, Maggie E. Toplak, "The Rationality Quotient: Toward a Test of Rational Thinking", MIT Press, 2016

(Last change: Nov 1, 2023)

CONTEXT

This text belongs to the overall theme REVIEWS.

In recent months I have been engaged with the topic of text-generating algorithms and their possible impact on scientific discourse (some first notes on this discussion can be found here (https://www.uffmm.org/2023/08/24/homo-sapiens-empirical-and-sustained-empirical-theories-emotions-and-machines-a-sketch/)). In this context it is important to clarify the role and structure of human actors as well as the concept of Intelligence. Meanwhile I have abandoned the word Intelligence completely, because its inflationary use in today's mainstream pulverizes any remaining meaning. Even within a single discipline, like psychology, one can find many different concepts. In this context I have read the book by Stanovich et al. as a prominent example of the use of the concept of intelligence, there combined with the concept of rationality, which is no less vague.

Introduction

The book “The Rationality Quotient” from 2016 does not represent the beginning of a discourse but is rather a kind of summary of a long-lasting discourse with many prior publications. This makes the book interesting, but also difficult to read at first, because nearly every page uses theoretical terms which are assumed to be known to the reader, and other publications are cited without sufficient explanation of why exactly these publications are important. This is no argument against the book, but it sheds some light on the burden of the reader, who has to learn a lot to understand the text.

A text with the character of a summary of its subject is good, because it has a consolidated view of the subject, which enables a kind of clarity that is typical for such an elaborated point of view.

The goal of the following review is not to give a complete account of every detail of this book, but only to present the main thesis and then to analyze the methods used and the applied epistemological framework.

Main Thesis of the Book

The review starts with the basic assumptions and the main thesis.

FIGURE 1 : The beginning. Note: the number ‘2015’ has to be corrected to ‘2016’.

FIGURE 2 : First outline of cognition. Note: the number ‘2015’ has to be corrected to ‘2016’.

As mentioned in the introduction, you will not find in the book a real overview of the history of psychological research dealing with the concept of Intelligence, nor an overview of the historical discourse on the concept of Rationality, although the latter concept also has a rich tradition in Philosophy. Thus, somehow you have to know it already.

There are some clear warnings with regard to the fuzziness of the concept of rationality (p.3) as well as of the concept of intelligence (p.15). From the point of view of Philosophy of Science it could be interesting to know which circumstances cause such fuzziness, but this is not a topic of the book. The book talks within its own selected conceptual paradigm. Facing the dilemma of which intelligence paradigm to use, the book has decided to work with the Cattell-Horn-Carroll (CHC) paradigm, which some call a theory. [1]

Right from the beginning it is explained that the discussion of Intelligence lacks a clear explanation of a full human model of cognition (p.15) and that intelligence tests therefore mostly measure only parts of human cognitive functions (p.21).

Thus let us have a more detailed look at the scenario.

[1] For a first overview of the Cattell–Horn–Carroll theory see: https://en.wikipedia.org/wiki/Cattell%E2%80%93Horn%E2%80%93Carroll_theory

Which point of View?

The book starts with a first characterization of the concept of Rationality from a point of view which is not really clear. From different remarks one gets some hints pointing to modern Cognitive Science (4,6), to Decision Theory (4), and to Probability Calculus (9), but a clear description is missing.

And it is declared right from the beginning that the main aim of the book is the construction of a rational thinking test (4), because for the authors the established intelligence tests, later narrowed to the Cattell-Horn-Carroll (CHC) type of intelligence test (16), are too narrow in what they measure (15, 16, 21).

Related to the term Rationality the book states some requirements which the term should fulfill (e.g. ‘Rationality as a continuum’ (4), ‘empirically based’ (4), ‘operationally grounded’ (4), a ‘strong definition’ (5), a ‘normative one’ (5), a ‘normative model of optimum judgment’ (5)), but it remains more or less open what these requirements imply and which tacit assumptions have to be fulfilled for this to work.

The two requirements ‘empirically based’ and ‘operationally grounded’ point in the direction of a tacitly assumed concept of an empirical theory, but exactly this concept, especially in association with the term cognitive science, isn't really clear today.

Because the authors make a lot of statements on the next pages which claim to be taken seriously, it seems important for the discussion in this review to clarify the conditions for the ‘meaning of language expressions’ and for their being classified as ‘true’.

If we assume, tentatively, that the authors take a scientific theory to be primarily a text whose expressions have a meaning that can transparently be associated with empirical facts, and that an expression so grounded is classified as true, then we have characterized a normal text of the kind used in everyday life to communicate meanings which can be demonstrated to be true.

Is there a difference between such a ‘normal text’ and a ‘scientific theory’? And, especially here, where the context should be a scientific theory within the discipline of cognitive science: what distinguishes a normal text from a ‘scientific theory within cognitive science’?

Because the authors do not explain their conceptual framework called cognitive science, we resort here to a most general characterization [2,3], which tells us that cognitive science is not a single discipline but an interdisciplinary field of study drawing on many different disciplines. It has not yet reached a state where all the methods and terms used are embedded in one coherent general framework. Thus the relationship between the conceptual frameworks used is mostly fuzzy and unclear. From this it follows directly that the relationship of the different terms to each other, e.g. ‘underlying preferences’ and ‘well ordered’, is within such a blurred context rather unclear.

Even the simple characterization of an expression as ‘having an empirical meaning’ is unclear: what are the kinds of empirical subjects and the terms to be used? According to the list of involved disciplines, linguistics [4], psychology [5], and neuroscience [6], among others, are mentioned. But each of these disciplines is itself today a broad, unintegrated field of methods dealing with a multifaceted subject.

Using an Auxiliary Construction as a Minimal Point of Reference

Instead of becoming paralyzed by these one-and-all characterizations of the individual disciplines, one can try to step back and take a look at basic assumptions about empirical perspectives.

If we take a group of Human Observers who are to investigate these subjects, we could make the following assumptions:

  1. Empirical Linguistics is dealing with languages, spoken as well as written by human persons, within certain environments, and these can be observed as empirical entities.
  2. Empirical Psychology is dealing with the behavior of human persons (a kind of biological systems) within certain environments, and these can be observed.
  3. Empirical Neuroscience is dealing with the brain as part of a body which is located in some environment, and this all can be observed.

The empirical observations of certain kinds of empirical phenomena can be used to define more abstract concepts, relations, and processes. These more abstract concepts, relations, and processes have as such no empirical meaning! They constitute a formal framework which has to be correlated with empirical facts in order to acquire empirical meaning. As is known from philosophy of science [7], the combination of empirical concepts within a formal framework of abstract terms can enable ‘abstract meanings’ which, by logical inference, can produce statements that are not empirically true at the moment of stating them, because the ‘real future’ has not yet happened. And on account of the generality of abstract terms, compared to the finiteness and concreteness of empirical facts, it can happen that the inferred statements never become true. Therefore the mere usage of abstract terms within a text called a scientific theory does not guarantee valid empirical statements.

And in general one has to state that a coherent scientific theory encompassing, e.g., linguistics, psychology, and neuroscience does not yet exist.

To speak of cognitive science as if it represented a clearly defined, coherent discipline therefore seems misleading.

This raises questions about the project of constructing a coherent rational thinking test (CART).

[2] See ‘cognitive science’ in wikipedia: https://en.wikipedia.org/wiki/Cognitive_science

[3] See also ‘cognitive science’ in the Stanford Encyclopedia of Philosophy: https://plato.stanford.edu/entries/cognitive-science/

[4] See ‘linguistics’ in wikipedia: https://en.wikipedia.org/wiki/Linguistics

[5] See ‘psychology’ in wikipedia: https://en.wikipedia.org/wiki/Psychology

[6] See ‘neuroscience’ in wikipedia: https://en.wikipedia.org/wiki/Neuroscience

[7] See ‘philosophy of science’ in wikipedia: https://en.wikipedia.org/wiki/Philosophy_of_science

‘CART’ TEST FRAMEWORK – A Reconstruction from the Point of View of Philosophy of Science

Before digging deeper into the theory I try to understand the intended outcome of this theory as a point of reference. The following figure 3 gives some hints.

FIGURE 3 : Outline of the Test Framework based on the Appendix in Stanovich et al. 2016. This outline is a reconstruction by the author of this review.

It seems important to distinguish at least three main parts of the whole scientific endeavor:

  1. The group of scientists who have decided to work on a certain problem.
  2. The generated scientific theory as a text.
  3. The description of a CART test, which specifies a procedure for how the abstract terms of the theory can be associated with real facts.

From the group of scientists (Stanovich et al.) we know that they understand themselves as cognitive scientists (without a clear characterization of what this means concretely).

The intended scientific theory as a text is here assumed to be realized in the book, which is here the subject of a review.

The description of a CART Test is here taken from the appendix of the book.

To understand the theory it is interesting to see that in the real test the test system (assumed here to be a human person) has to read (and hear?) an instruction on how to proceed with a task form, and then the test system (a human person) has to process the test form in the way it has understood the instructions and the test form as given.

The result is a completed test form.

And it is then this completed test form which will be rated according to the assumed CART theory.

This complete paradigm raises a whole bunch of questions, which to answer here in full is out of the scope of this review.

Mix-Up of Abstract Terms

Because the Test Scenario presupposes a CART theory, and within this theory some kind of model of the intended test users, it can be helpful to have a closer look at this assumed CART model, which is located in a person.

FIGURE 4 : General outline of the logic behind CART according to Stanovich et al. (2016).

The presented cognitive architecture shall provide a framework for the CART (Comprehensive Assessment of Rational Thinking), whereby this framework includes a model. The model is not only assumed to contextualize and classify heuristics and tasks, but it also presents Rationality in a way that allows one to deduce the mental characteristics included in rationality (cf. 37).

Because the term Rationality is not an individual empirical fact but an abstract term of a conceptual framework, this term has as such no meaning. The meaning of this abstract term has to be established by relations to other abstract terms which are themselves sufficiently related to concrete empirical statements. And these relations between abstract terms and empirical facts (represented as language expressions) have to be represented in such a manner that it is transparent how the measured facts are related to the abstract terms.

Here Stanovich et al. use another abstract term, Mind, which is associated with characteristics called mental characteristics: Reflective mind, Algorithmic Level, and Mindware.

And then the text tells us that Rationality presents mental characteristics. What does this mean? Is rationality different from the mind, which has some characteristics that rationality can present by somehow using the mind, or is rationality nevertheless part of the mind and manifests itself in these mental characteristics? But what kind of meaning could it be for an abstract term like rationality to be part of the mind? Without an explicit model associated with the term Mind, which arranges the other abstract term Rationality within this model, there exists no meaning which can be used here.

These considerations are the effect of a text which uses different abstract terms in a rather unclear way. In a scientific theory this should not be the case.

Measuring Degrees of Rationality

At the beginning of chapter 4, Stanovich et al. look back to chapter 1. There they build up a chain of arguments which illustrates their general perspective (cf. 63):

  1. Rationality has degrees.
  2. These degrees of rationality can be measured.
  3. Measurement is realized by experimental methods of cognitive science.
  4. The measuring is based on the observable behavior of people.
  5. The observable behavior can manifest whether the individual actor (a human person) follows assumed preferences related to an assumed axiom of choice.
  6. Observable behavior which is classified as manifesting assumed internal preferences according to an assumed internal axiom of choice can show descriptive and procedural invariance.
  7. Based on this deduced descriptive and procedural invariance, it can further be inferred that these actors behave as if they are maximizing utility.
  8. It is difficult to assess utility maximization directly.
  9. It is much easier to assess whether one of the axioms of rational choice is being violated.

These statements characterize the Logic of the CART according to Stanovich et al. (cf. 64).

A major point in this argumentation is the assumption that observable behavior is such that one can deduce from its properties those attributes which point (i) to an internal model of an axiom of choice, (ii) to internal processes which manifest the effects of this internal model, and (iii) to certain characteristics of these internal processes which allow the deduction of the property of maximizing utility or not.

These are very strong assumptions.

If one further takes into account the explanations from pages 7f about the required properties of an abstract term ‘axiom of choice’ (cf. figure 1), then these assumptions appear to be very demanding.

Can the necessary meaning be extracted from observable behavior in a way which is clear enough by empirical standards to establish that this behavior shows property A and not property B?

As we know from the description of the CART in the appendix of the book (cf. figure 3), the real behavior assumed for a CART consists of (i) the reading (or hearing?) of an instruction communicated in ordinary English, then (ii) a behavior derived from the understanding of the instruction, which (iii) manifests itself in reading a form with a text and filling out this form at predefined positions in a required language.

This procedure is quite common throughout psychology and similar disciplines. But it is well known that the understanding of language instructions is very error-prone. Furthermore, the presentation of a task as a text is inevitably highly biased and, with regard to understanding, also very error-prone (this is a reason why purely text-based tests are rather useless in usability testing).

The point is that the empirical basis is not given as a protocol of observations of language-free behavior, but of behavior which is almost completely embedded in the understanding and handling of texts. This points to the underlying processes of text understanding, which are completely internal to the actor. There exists no prewired connection between the observable strings of signs constituting a text and the possible meaning which can be organized by the individual processes of text understanding.

Stopping Here

Having reached this point of reading and trying to understand, I decided to stop here: too many questions on all levels of a scientific discourse, and the relationships between the main concepts and terms, appear in the book of Stanovich et al. not to be clear enough. I therefore feel confirmed in my working hypothesis from the beginning: the concept of intelligence is today far too vague, too ambiguous, to contain any useful kernel of meaning any more. And concepts like Rationality and Mind (and many others) seem to do no better.

Chatting with chatGPT4

Since April 2023 I have been checking the ability of chatGPT4 to contribute to a philosophical and scientific discourse. The working hypothesis is that chatGPT4 is good at summarizing the common concepts used in public texts, but that it is not capable of critical evaluations, of really new creative ideas, and certainly not of a systematic analysis of the methods and frameworks used, their interrelations, their truth-conditions, and much more. Nevertheless, it is a good ‘common sense check’. Until now I couldn't learn anything new from these chats.

If you have read this review with all its details and open questions, you will perhaps be a little disappointed by the answers from chatGPT4. But keep calm: it is a bit helpful.

Protocol with chatGPT4

Collective human-machine intelligence and text generation. A transdisciplinary analysis.

Author: Gerd Doeben-Henisch

Email: info@uffmm.org

Time: Sept 25, 2023 – Oct 3, 2023

Translation: This text is a translation from the German version into English with the aid of the software deepL.com as well as chatGPT4, moderated by the author. The styles of the two translators differ. The author is not good enough to judge which translator is ‘better’.

CONTEXT

This text is the outcome of a conference held at the Technical University of Darmstadt (Germany) with the title: Discourses of disruptive digital technologies using the example of AI text generators ( https://zevedi.de/en/topics/ki-text-2/ ). A German version of this article will appear in a book from de Gruyter as open access at the beginning of 2024.

Collective human-machine intelligence and text generation. A transdisciplinary analysis.

Abstract

Based on the conference theme "AI – Text and Validity. How do AI text generators change scientific discourse?" as well as the special topic "Collective human-machine intelligence using the example of text generation", the possible interaction between text generators and scientific discourse is played out in a transdisciplinary analysis. For this purpose, the concept of scientific discourse is specified case by case, using the text types empirical theory as well as sustained empirical theory, in such a way that the role of human and machine actors in these discourses can be sufficiently specified. The result shows a very clear limitation of current text generators compared to the requirements of scientific discourse. This leads to further fundamental analyses, using the example of the dimension of time with the phenomenon of the qualitatively new, and using the example of the foundations of decision-making with the problem of the inherent bias of the modern scientific disciplines. A solution to this inherent bias, as well as to the factual disconnectedness of the many individual disciplines, is located in a new service of transdisciplinary integration, achieved by re-activating the philosophy of science as a genuine part of philosophy. This leaves open the question of whether a supervision of the individual sciences by philosophy could be a viable path. Finally, the borderline case of a world in which humans no longer have a human counterpart is pointed out.


STARTING POINT

This text takes its starting point from the conference topic “AI – Text and Validity. How do AI text generators change scientific discourses?” and adds to this topic the perspective of a Collective Human-Machine Intelligence using the example of text generation. The concepts of text and validity, AI text generators, scientific discourse, and collective human-machine intelligence that are invoked in this constellation represent different fields of meaning that cannot automatically be interpreted as elements of a common conceptual framework.

TRANSDISCIPLINARY

In order to let the mentioned terms appear as elements in a common conceptual framework, a meta-level is needed from which one can talk about these terms and their possible relations to each other. This approach is usually located in the philosophy of science, which can take as its subject not only single terms or whole propositions, but even whole theories that are compared or possibly even united. The term transdisciplinary [1], which is often used today, is understood here in this philosophy-of-science sense as an approach in which the integration of different concepts is achieved by introducing appropriate meta-levels. Such a meta-level ultimately always represents a structure in which all important elements and relations can gather.

[1] Jürgen Mittelstraß paraphrases the possible meaning of the term transdisciplinarity as a “research and knowledge principle … that becomes effective wherever a solely technical or disciplinary definition of problem situations and problem solutions is not possible…”. Article Methodological Transdisciplinarity, in LIFIS ONLINE, www.leibniz-institut.de, ISSN 1864-6972, p.1 (first published in: Technology Assessment – Theory and Practice No.2, 14.Jg., June 2005, 18-23). In his text Mittelstrass distinguishes transdisciplinarity from the disciplinary and from the interdisciplinary. However, he uses only a general characterization of transdisciplinarity as a research guiding principle and scientific form of organization. He leaves the concrete conceptual formulation of transdisciplinarity open. This is different in the present text: here the transdisciplinary theme is projected down to the concreteness of the related terms and – as is usual in philosophy of science (and meta-logic) – realized by means of the construct of meta-levels.

SETTING UP A STRUCTURE

Here the notion of scientific discourse is assumed as a basic situation in which different actors can be involved. The main types of actors considered here are humans, who represent a part of the biological systems on planet Earth as a kind of Homo sapiens, and text generators, which represent a technical product consisting of a combination of software and hardware.

It is assumed that humans perceive their environment and themselves in a species-typical way, that they can process and store what they perceive internally, that they can recall what they have stored to a limited extent in a species-typical way, and that they can change it in a species-typical way, so that internal structures can emerge that are available for action and communication. All these elements are attributed to human cognition. They are working partially consciously, but largely unconsciously. Cognition also includes the subsystem language, which represents a structure that on the one hand is largely species-typically fixed, but on the other hand can be flexibly mapped to different elements of cognition.

In the terminology of semiotics [2], the language system represents a symbolic level, and those elements of cognition onto which the symbolic structures are mapped form correlates of meaning; these, however, represent a meaning only insofar as they occur in a mapping relation, also called a meaning relation. A cognitive element as such does not constitute meaning in the linguistic sense. In addition to cognition, there are a variety of emotional factors that can influence both cognitive processes and the process of decision-making. The latter in turn can influence thought processes as well as action processes, consciously as well as unconsciously. The exact meaning of these listed structural elements is revealed in a process model [3] complementary to this structure.

[2] See, for example, Winfried Nöth: Handbuch der Semiotik. 2nd, completely revised edition. Metzler, Stuttgart/Weimar, 2000

[3] Such a process model is presented here only in partial aspects.

SYMBOLIC COMMUNICATION SUB-PROCESS

What is important for human actors is that they can interact in the context of symbolic communication with the help of both spoken and written language. Here it is assumed, simplistically, that spoken language can be mapped sufficiently accurately into written language, which in the standard case is called text. It should be noted that texts only represent meaning if the text producers involved, as well as the text recipients, have a meaning function that is sufficiently similar.
For texts by human text producers it is generally true that, with respect to concrete situations, statements as part of texts can be qualified under agreed conditions as now matching the situation (true) or as now not matching the situation (false). However, a now-true can become a now-not-true in the next moment, and vice versa.

This dynamic fact reflects that the punctual occurrence or non-occurrence of a statement has to be distinguished from the structural occurrence/non-occurrence of a statement, which speaks about occurrence/non-occurrence in context. The latter refers to relations which become apparent only indirectly, in the context of a multitude of individual events, if one considers chains of events over many points in time. Finally, one must also consider that the correlates of meaning are primarily located within the human biological system. Meaning correlates are not automatically true as such, but only if there is an active correspondence between a remembered/thought/imagined meaning correlate and an active perceptual element, where an intersubjective fact must correspond to the perceptual element. Just because someone talks about a rabbit and the recipient understands what a rabbit is, this does not mean that there is also a real rabbit which the recipient can perceive.

TEXT-GENERATORS

When distinguishing between the two different types of actors, here biological systems of the type Homo sapiens and there technical systems of the type text generator, a first fundamental asymmetry immediately strikes the eye: so-called text generators are entities invented and built by humans; furthermore, it is humans who use them; and the essential material used by text generators is texts, which are considered human cultural property, created and used by humans for a variety of discourse types, here restricted to scientific discourse.


In the case of text generators, let us first note that we are dealing with machines that have input and output, a minimal learning capability, and that can process text-like objects.
Insofar as text generators can process text-like objects as input and produce them again as output, an exchange of texts between humans and text generators can in principle take place.

At the current state of development (September 2023), text generators do not yet have an independent real-world perception within the scope of their input, and the entire text generator system does not yet have processes of the kind that enable species-typical cognition in humans. Furthermore, a text generator does not yet have a meaning function of the kind given with humans.

From this fact it follows automatically that text generators cannot decide about the punctual or structural correctness/incorrectness of the statements of a text. In general, they do not have their own assignment of meaning as humans do. Texts generated by text generators only have a meaning if a human recipient automatically assigns a meaning to the text on the basis of his species-typical meaning relation, because this is the learned behavior of a human. In fact, the text generator itself has never assigned any meaning to the generated text. Casually one could say that a technical text generator works like a parasite: it collects texts that humans have generated, rearranges them combinatorially according to formal criteria for the output, and in the receiving human a meaning event is then automatically triggered by the text, a meaning which exists nowhere in the text generator itself.
Whether this very restricted form of text generation is in any sense detrimental or advantageous for the type of scientific discourse (with texts) is to be examined in the further course.

SCIENTIFIC DISCOURSE

There is no clear definition for the term scientific discourse. This is not surprising, since an unambiguous definition presupposes a fully specified conceptual framework within which terms such as discourse and scientific can be clearly delimited. However, in the case of a scientific enterprise with a global reach, broken down into countless individual disciplines, this does not seem to be the case at present (Sept 2023). For the further procedure we will therefore fall back on core ideas of the discussion in philosophy of science since the 20th century [4], and we will introduce working hypotheses on the concepts of an empirical theory as well as a sustainable empirical theory, so that a working hypothesis on the concept of scientific discourse with a minimal sharpness becomes possible.

[4] A good place to start may be: F. Suppe, Editor. The Structure of Scientific Theories. University of Illinois Press, Urbana, 2 edition, 1979.

EMPIRICAL THEORY

The following assumptions are made for the notion of an empirical theory:

  1. an empirical theory is basically a text, written in a language that all participants understand.
  2. one part of the theory contains a description of an initial situation, the statements of which can be qualified by the theory users as now matching (true) or now not matching (false).
  3. another part of the theory contains a text that lists all changes that, to the knowledge of the participants, occur in the context of the initial situation and can change parts of the initial situation.
  4. changes in the initial situation are expressed by replacing certain statements of the initial situation with other statements. The resulting new text replaces the previous text.
  5. through the possibility of generating new initial situations, there is the possibility of making predictions (forming expectations) by applying rules of change to an applicable initial situation several times (at least once) in succession. The texts generated in each case induce in the minds of the participants, on the basis of the available meaning functions, the idea of a situation which, should it occur, is required to be qualified as now matching intersubjective reality. In the case of occurrence, the perceived situation must correspond to the conception in the mind. Whether such a correspondence can be established, after how long a time and to what extent, is fundamentally open at the moment the prediction is made (the formation of an expectation) (autonomy of the object!). (A minimal computational sketch of points 1-5 follows this list.)
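
To make this notion of an empirical theory more tangible, here is a minimal computational sketch of points 1-5 (a toy formalization, with invented statements and rules of change; an illustration, not the definition itself): an initial situation is a set of statements, a rule of change replaces statements by other statements, and predictions arise from applying rules several times in succession.

```python
# Minimal sketch of points 1-5: an empirical theory as (a) an initial
# situation given as a set of statements and (b) rules of change that
# replace statements. Prediction = repeated rule application.
# All statements and rules are invented for illustration.

initial_situation = {"the sky is clear", "the ground is dry"}

# Each rule: (statements to remove, statements to add), applicable
# when all statements to remove are present in the situation.
rules = [
    ({"the sky is clear"}, {"clouds are forming"}),
    ({"clouds are forming"}, {"it is raining"}),
    ({"the ground is dry", "it is raining"},
     {"the ground is wet", "it is raining"}),
]

def apply_rules(situation: set) -> set:
    """Apply the first applicable rule of change, yielding a new situation."""
    for removed, added in rules:
        if removed <= situation:                  # trigger statements present
            return (situation - removed) | added  # replace statements (point 4)
    return situation

situation = initial_situation
for step in range(3):                             # point 5: repeated application
    situation = apply_rules(situation)
    print(f"predicted situation after step {step + 1}: {sorted(situation)}")
```

Each printed text is a prediction in the sense of point 5: whether it now matches intersubjective reality remains open until the predicted situation does or does not occur.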

SCIENTIFIC DISCOURSE NEW

Even before the introduction of the notion of a sustainable empirical theory, the notion of an empirical theory just introduced allows for initial statements about what a scientific discourse could or should be.
It is assumed here that the participants in a scientific discourse exchange their opinions about the world in the form of texts, all of which meet the requirements of an empirical theory.

WITH HUMANS

Insofar as the participants in the discourse are human beings, we know that they have basic species-typical abilities to generate subjective meanings by means of language, which can be decided as now true or now not true by reference to a real situation.

WITH TEXT GENERATORS

Insofar as the discourse participants are text generators, which lack these elementary skills (meaning, reality reference), they drop out as independent actors in a scientific discourse.
To illustrate this, here is a self-assessment of chatGPT4, a current text generator:

ChatGPT

Yes, your summary of our previous dialogue is accurate based on the points you’ve listed:

  1. I cannot read a book in the same way a human does, meaning I don’t interpret or comprehend in the same manner.
  2. My knowledge is up to September 2021, which means that I might not be aware of events, advancements, or publications after that point.
  3. My “learning” is different from human learning. I don’t learn from new experiences or information after my last update. Instead, I generate responses based on patterns in the data I was trained on.
  4. I don’t cite individual sources in my responses. I provide information based on the patterns recognized during my training.
  5. Responses I give can vary slightly even if the same question is posed multiple times.
  6. I can’t directly attribute my statements to specific, individual sources.
  7. The specifics about my training data and the processes that determine my responses are not transparently provided in each interaction.

You’ve captured the essence of our prior conversation well. If you have further questions or need clarifications on any of these points, please let me know. [5]

[5] This text is from a chat experiment conducted by the author with chatGPT4 on August 31, 2023. The citation can be found in: Gerd Doeben-Henisch, CONTINUE EXPERIMENT WITH CHATGPT4: Science, Empirical Science, Philosophy of Science, uffmm, Integrating Engineering and the Human Factor, eJournal uffmm.org ISSN 2567-6458,
8/31/2023 in [https://www.uffmm.org/2023/08/31/continue-experiment-with-chatgpt4-science-empirical-science-philosophy-of-science/ ] (accessed 9/27/2023).

The question then arises whether (current) text generators, despite their severely limited capabilities, could nevertheless contribute to scientific discourse, and what this contribution would mean for human participants. Since text generators fail the hard scientific criteria (decidable reality reference, reproducible predictive behavior, separation of sources), a possible contribution can only be assumed within human behavior: since humans can understand and empirically verify texts, they would in principle be able to rudimentarily classify a text from a text generator within their considerations.

For hard theory work these texts would not be usable. But due to their literary-associative character across a very large body of texts, the texts of text generators could, in the positive case, at least introduce thoughts into the discourse as stimulators, via the detour of human understanding, prompting the human user to examine these additional aspects to see whether they might be important for the actual theory building after all. In this way the text generators would not participate independently in the scientific discourse, but they would indirectly support the knowledge process of the human actors as aids to them.[6]

[6] A detailed illustration of this associative role of a text generator can also be found in (Doeben-Henisch, 2023) on the example of the term philosophy of science and on the question of the role of philosophy of science.

CHALLENGE DECISION

The application of an empirical theory can, in the positive case, enable an expanded picture of everyday experience by bringing before one's eyes, related to an initial situation, possible continuations (possible futures).
For people who have to shape their own individual processes in their respective everyday life, however, it is usually not enough to know only what one can do. Rather, everyday life requires deciding in each case which continuation to choose, given the many possible continuations. In order to be able to assert themselves in everyday life with as little effort as possible and with – at least imagined – as little risk as possible, people have adopted well-rehearsed behavior patterns for as many everyday situations as possible, which they follow spontaneously without questioning them anew each time. These well-rehearsed behavior patterns include decisions that have been made. Nevertheless, there are always situations in which the ingrained automatisms have to be interrupted in order to consciously clarify the question for which of several possibilities one wants to decide.

The example of an individual decision-maker can also be directly applied to the behavior of larger groups. Normally, even more individual factors play a role here, all of which have to be integrated in order to reach a decision. However, the characteristic feature of a decision situation remains the same: whatever knowledge one may have at the time of decision, when alternatives are available, one has to decide for one of many alternatives without any further, additional knowledge at this point. Empirical science cannot help here [7]: it is an indisputable basic ability of humans to be able to decide.

So far, however, it remains hidden in the darkness of our not knowing ourselves what ultimately leads us to decide for one option and not for another. Whether and to what extent the various cultural patterns of decision-making aids, in the form of religious, moral, ethical or similar formats, actually play or have played a helpful role in projecting a successful future appears more unclear than ever.[8]

[7] No matter how much detail it can contribute about the nature of decision-making processes.

[8] This topic is taken up again in the following in a different context and embedded there in a different solution context.

SUSTAINABLE EMPIRICAL THEORY

Through the newly flared-up discussion about sustainability in the context of the United Nations, the question of prioritizing action relevant to survival has received a specific global impulse. The multitude of aspects that arise in this discourse context [9] is difficult, if not impossible, to classify into an overarching, consistent conceptual framework.

[9] For an example see the 17 development goals: [https://unric.org/de/17ziele/] (Accessed: September 27, 2023)

A rough classification of development goals into resource-oriented and actor-oriented can help to make an underlying asymmetry visible: a resource problem only exists if there are biological systems on this planet that require a certain configuration of resources (an ecosystem) for their physical existence. Since the physical resources that can be found on planet Earth are quantitatively limited, it is possible in principle to determine through thought and science under what conditions the available physical resources, given a prevailing behavior, are insufficient. Added to this is the factor that biological systems, by their very existence, also actively alter the resources that can be found.

So, if there should be a resource problem, it is exclusively because the behavior of the biological systems has led to such a biologically caused shortage. Resources as such are neither too much, nor too little, nor good, nor bad. If one accepts that the behavior of biological systems in the case of the species Homo sapiens can be controlled by internal states, then the resource problem is primarily a cognitive and emotional problem: Do we know enough? Do we want the right thing? And these questions point to motivations beyond what is currently knowable. Is there a dark spot in the human self-image here?

On the one hand, this questioning refers to the driving forces behind a current decision beyond the possibilities of the empirical sciences (trans-empirical, meta-physical, ...), but on the other hand it also points to the center, the core, of human competence. This motivates extending the notion of an empirical theory to the notion of a sustainable empirical theory. This does not automatically solve the question of the inner mechanism of a value decision, but it classifies the problem systematically. The problem thus has an official place. The following formulation is suggested as a characterization of the concept of a sustainable empirical theory:

  1. a sustainable empirical theory contains an empirical theory as its core.
    1. besides the parts of initial situation, rules of change, and application of rules of change, a sustainable theory also contains a text with a list of situations which are considered desirable for a possible future (goals, visions, ...).
    2. under the condition of given goals, each current situation can minimally be compared with the available goals, thereby indicating the degree of goal achievement (a minimal computational sketch follows this list).
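
Point 1.2 can be given a minimal computational sketch as well (the statements are invented and the measure is deliberately naive; it only illustrates the idea of a degree of goal achievement): the degree is taken as the fraction of goal statements that hold in the current situation.

```python
# Minimal sketch of point 1.2: the degree of goal achievement as the
# fraction of goal statements holding in the current situation.
# Statements are invented for illustration.

def goal_achievement(situation: set, goals: set) -> float:
    """Return the fraction of goal statements present in the situation."""
    if not goals:
        return 1.0
    return len(goals & situation) / len(goals)

current = {"the ground is wet", "it is raining"}
goals = {"the ground is wet", "the sky is clear"}
print(f"degree of goal achievement: {goal_achievement(current, goals):.0%}")
```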

Stating desired goals says nothing about how realistic or promising it is to pursue them. It only expresses that the authors of the theory know these goals and consider them optimal at the time of theory creation. [10] The irrationality of chosen goals is in this way officially included in the domain of thought of the theory creators, and this facilitates the extension of the rational into the irrational without already having a real solution. Nobody can exclude that the phenomenon of bringing forth something new, or of preferring a certain point of view over others, can be understood further and better in the future.

[10] Something can only be classified as optimal if it can be placed within an overarching framework, which allows for positioning on a scale. This refers to a minimal cognitive model as an expression of rationality. However, the decision itself takes place outside of such a rational model; in this sense, the decision as an independent process is pre-rational.

EXTENDED SCIENTIFIC DISCOURSE

If one accepts the concept of a sustainable empirical theory, then one can extend the concept of a scientific discourse in such a way that not only texts representing empirical theories can be introduced, but also texts representing sustainable empirical theories with their own goals. Here too, one can ask whether the current text generators (September 2023) can make a constructive contribution. Insofar as a sustainable empirical theory contains an empirical theory as its hard core, the preceding observations on the limitations of text generators apply. In the creative part of the development of an empirical theory, they can contribute text fragments through their associative-combinatorial character, based on a very large number of documents, which may inspire the active human theory authors to expand their view. But what about that part which manifests itself in the selection of possible goals? At this point, one must realize that it is not about arbitrary formulations, but about formulations that represent possible solutions within a systematic framework; this implies knowledge of relevant and verifiable meaning structures that could be taken into account in the context of symbolic patterns. Text generators fundamentally lack these abilities. But it is, again, not to be excluded that their associative-combinatorial character, based on a very large number of documents, can still provide one or another suggestion.

Looking back at humanity’s history of knowledge, research, and technology, it appears that the great advances were each triggered by something really new, that is, by something that had never existed before in this form. The praise for Big Data, as often heard today, represents, colloquially speaking, exactly the opposite: the burial of the new by cementing the old.[11]

[11] A prominent example of the naive fixation on the old as a standard for what is right can be seen, for example, in the book by Seth Stephens-Davidowitz, Don’t Trust Your Gut. Using Data Instead of Instinct To Make Better Choices, London – Oxford New York et al., 2022.

EXISTENTIALLY NEW THROUGH TIME

The concept of an empirical theory inherently contains the element of change, and the extended concept of a sustainable empirical theory adds, to the fundamental concept of change, the aspect of a possible goal. A possible goal itself is not a change, but it presupposes the reality of changes! The concept of change does not result from any objects but is the result of a brain performance through which a current present is transformed into a partially memorable state (memory contents) by forming time slices in the context of perception processes, largely unconsciously. These produced memory contents have different abstract structures, are networked differently with each other, and are assessed in different ways. In addition, the brain automatically compares current perceptions with such stored contents and immediately reports when a current perception has changed compared to the last perception contents. In this way, the phenomenon of change is a fundamental cognitive achievement of the brain, which thus makes something like a feeling of time available in the form of a fundamental process structure. The weight of this property in the context of evolution can hardly be overestimated, since time as such is in no way perceptible.[12]

[12] The modern invention of machines that can generate periodic signals (oscillators, clocks) has been successfully integrated into people’s everyday lives. However, the artificially (technically) producible time has nothing to do with the fundamental change found in reality. Technical time is a tool that we humans have invented to somehow structure the otherwise amorphous mass of the phenomenon stream. Since the amorphous mass itself shows structures that manifest obviously for all, namely repeating change cycles (e.g., sunrise and sunset, moonrise and moonset, seasons, …), a correlation of technical time models and natural time phenomena suggested itself. From the resulting correlations, however, one should not conclude that the amorphous mass of the world phenomenon stream actually behaves according to our technical time model. Einstein’s theory of relativity at least makes us aware that there can be various (or only one?) asymmetries between technical time and the world phenomenon stream.


Assuming this fundamental sense of time in humans, one can in principle recognize whether a current phenomenon, compared to all preceding phenomena, is somehow similar or markedly different, and in this sense indicates something qualitatively new.[13]

[13] Ultimately, an individual human only has its individual memory contents available for comparison, while a collective of people can in principle consult the set of all records. However, as is known, only a minimal fraction of the experiential reality is symbolically transformed.

If one presupposes the concept of directed time when designating something as qualitatively new, such a new event is assigned an information value in Shannon’s sense, a value for the phenomenon itself in terms of linguistic meaning, and possibly also a value in the cognitive area: relative to a spanned knowledge space, the occurrence of a qualitatively new event can significantly strengthen a theoretical assumption. In that case, the cognitive relevance may possibly mutate into a sustainable relevance if the assumption marks a real action option that could be important for further progress. This, in turn, would provoke the necessity of a decision: should we adopt this action option or not? Humans can accomplish the finding of qualitatively new things; they are designed for it by evolution. But what about text generators?
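For the record, the Shannon framing invoked above can be written down directly (a standard formula, not specific to this text): the information value of an event e with probability P(e) is its surprisal,

    I(e) = -log2 P(e)

so that a qualitatively new, and therefore improbable, event carries a high information value, while a fully expected event with P(e) = 1 carries none.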

Text generators so far do not have a sense of time comparable to that of humans. Their starting point would be texts that are different, in such a way that there is at least one text that is the most recent on the timeline and describes real events in the real world of phenomena. Since a text generator (as of September 2023) does not yet have the ability to classify texts regarding their applicability/non-applicability in the real world, its use would normally end here. Assuming that there are people who manually perform this classification for a text generator [14] (which would greatly limit the number of possible texts), then a text generator could search the surface of these texts for similar patterns and, relative to them, for those that cannot be compared. Assuming that the text generator would find a set of non-comparable patterns in acceptable time despite a massive combinatorial explosion, the problem of semantic qualification would arise again: which of these patterns can be classified as an indication of something qualitatively new? Again, humans would have to become active.
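How such a surface comparison of patterns could look can be hinted at with a minimal sketch in Python (a toy illustration under strong assumptions: texts are compared only at the character level, and the threshold is arbitrary): each text is reduced to its set of character trigrams, and a text whose best Jaccard similarity to all earlier texts stays below the threshold is flagged as a candidate for something ‘qualitatively new’, with all the semantic caveats named above.

    def trigrams(text: str) -> set:
        """Reduce a text to its set of character trigrams (a crude surface pattern)."""
        t = text.lower()
        return {t[i:i + 3] for i in range(len(t) - 2)}

    def novelty_candidates(texts: list, threshold: float = 0.2) -> list:
        """Flag texts whose best similarity to all earlier texts stays below the threshold."""
        candidates = []
        for i, text in enumerate(texts):
            current = trigrams(text)
            best = 0.0
            for earlier in texts[:i]:
                other = trigrams(earlier)
                union = current | other
                if union:
                    best = max(best, len(current & other) / len(union))
            if i > 0 and best < threshold:
                candidates.append(text)
        return candidates

    print(novelty_candidates(["the sun rises", "the sun sets", "quantum foam bubbles"]))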

[14] Such support of machines by humans in the field of so-called intelligent algorithms has often been applied (and is still being applied today, see: [https://www.mturk.com/] (Accessed: September 27, 2023)), and is known to be very prone to errors.

As before, the verdict is mixed: left to itself, a text generator will not be able to solve this task, but in cooperation with humans, it may possibly provide important auxiliary services, which could ultimately be of existential importance to humans in search of something qualitatively new despite all limitations.

THE IMMANENT PREJUDICE OF THE SCIENCES

A prejudice is, as is well known, the assessment of a situation as an instance of a certain pattern which the judging person assumes applies, even though there are numerous indications that the assumed applicability is empirically false. Because everyday life permanently requires us to make decisions, humans have acquired, through their evolutionary development, the fundamental ability to make many of their everyday decisions largely automatically. This offers many advantages, but can also lead to conflicts.

In this context, Daniel Kahneman introduced in his book [15] the two terms System 1 and System 2 for a human actor. In his concept of a human actor, these terms describe two behavioral complexes that can be distinguished by some of their properties.[16] System 1 is set by the overall system of the human actor and is characterized by the fact that the actor can respond largely automatically to the requirements of everyday life. The human actor has automatic answers to certain stimuli from his environment without having to think much about them. In case of conflicts within System 1, or from the perspective of System 2, which in conscious mode exercises some control over the appropriateness of System 1 reactions in a certain situation, System 2 becomes active. System 2 does not have automatic answers ready but has to work out an answer to a given situation laboriously, step by step. However, there is also the phenomenon that complex processes which must be carried out frequently can be automated to a certain extent (bicycling, swimming, playing a musical instrument, learning a language, doing mental arithmetic, …). All these processes are based on preceding decisions that encompass different forms of preferences. As long as these automated processes are appropriate in the light of a certain rational model, everything seems to be fine. But if the corresponding model is distorted in any sense, then one would say that these models carry a prejudice.

[15] Daniel Kahneman, Thinking, Fast and Slow, Penguin Books Random House, UK, 2012 (first published 2011)

[16] See Chapter 1 in Part 1 of (Kahneman, 2012, pages 19-30).

In addition to the countless examples that Kahneman himself cites in his book to show the susceptibility of System 1 to such prejudices, it should be pointed out here that Kahneman’s model itself (and many similar models) can carry a prejudice of a considerably more fundamental nature. The division of the behavioral space of a human actor into a System 1 and a System 2, as Kahneman does, obviously has great potential to classify many everyday events. But what about all the everyday phenomena that fit neither the scheme of System 1 nor the scheme of System 2?

In the case of making a decision, the System 1 model says that people automatically call up and execute an available answer, if one is available. Only in the case of conflict, under the control of System 2, can there be lengthy operations that lead to other, new answers.

In the case of decisions, however, it is not just about reacting at all, but there is also the problem of choosing between known possibilities or even finding something new because the known old is unsatisfactory.

Established scientific disciplines have their specific perspectives and methods that carve out areas of everyday life as their subject area. Phenomena that do not fit into this predefined space do not occur for the relevant discipline: they are excluded for methodical reasons. In the area of decision-making, and thus of typical human structures, there are quite a few areas that have so far not found official entry into a scientific discipline. At any given point in time there are, ultimately, many large areas of phenomena that really exist but are methodically absent from the view of the individual sciences. For a scientific investigation of the real world, this means that the sciences, due to their immanent exclusions, are burdened with a massive reservation against the empirical world. For the task of selecting suitable sustainable goals within the framework of a sustainable science, this structurally conditioned fact can be fatal. Loosely formulated: under the banner of actual science, a central principle of science, the openness to all phenomena, is simply excluded, so as not to have to change the existing structure.

For this question of a meta-reflection on science itself, text generators are again only reduced to possible abstract text delivery services under the direction of humans.

SUPERVISION BY PHILOSOPHY

The just-described fatal dilemma of all modern sciences is to be taken seriously, since without an efficient science, sustainable reflection on the present and future cannot be realized in the long term. If one agrees that the fatal bias of science is caused by the fact that each discipline works intensively within its disciplinary boundaries but does not systematically organize communication and reflection beyond its own boundaries, with a view to other disciplines, as a meta-reflection, then the question must be answered whether and how this deficit can be overcome.

There is only one known answer to this question: starting from those guiding concepts that are constitutive for the individual disciplines, one must search for a conceptual framework within which these guiding concepts can meaningfully interact, both in their own right and in interaction with the guiding concepts of other disciplines.

This is genuinely the task of philosophy, concretized by the example of the philosophy of science. However, this would mean that each individual science would have to use a sufficiently large part of its capacities to make the idea of the one science in maximum diversity available in a real process.

For the hard conceptual work hinted at here, text generators will hardly be able to play a central role.

COLLECTIVE INTELLIGENCE

Since so far no individual science has a concept of intelligence that goes beyond the single discipline, it makes little sense at first glance to apply the term intelligence to collectives. However, looking at the cultural achievements of humanity as a whole, and here not least with a view to the language used, it is undeniable that a description of the performance of an individual person, his individual performance, is incomplete without reference to the whole.

So, if one tries to assign an overarching meaning to the letter combination ‘intelligence’, one will not be able to avoid deciphering this phenomenon of the human collective, in the form of complex everyday processes within a no less complex dynamic world, at least to the extent that one can identify some corresponding empirical something for the letter combination ‘intelligence’ with which a comprehensible meaning could be constituted.

Of course, this term should be scalable for all biological systems, and one would have to have a comprehensible procedure that allows the various technical systems to be related to this collective intelligence term in such a way that direct performance comparisons between biological and technical systems would be possible.[17]

[17] The often quoted and popular Turing Test (see: Alan M. Turing, Computing Machinery and Intelligence, in: Mind, Volume LIX, No. 236, 1950, 433–460, [doi:10.1093/mind/LIX.236.433] (Accessed: Sept 29, 2023)) in no way meets the methodological requirements that one would have to adhere to if one actually wanted to come to a qualified performance comparison between humans and machines. Nevertheless, the basic idea of Turing in his meta-logical text from 1936, published in 1937 (see: A. M. Turing, On Computable Numbers, with an Application to the Entscheidungsproblem, in: Proceedings of the London Mathematical Society, Volume s2-42, No. 1, 1937, 230–265, [doi:10.1112/plms/s2-42.1.230] (Accessed: Sept 29, 2023)) seems to be a promising starting point: in trying to present an alternative formulation of Kurt Gödel’s (1931) proof on the undecidability of arithmetic, he carries out a meta-logical proof, and in this context Turing introduces the concept of a machine that was later called the Universal Turing Machine.

Already in this proof approach it can be seen how Turing transforms the phenomenon of a human bookkeeper (a person carrying out calculations step by step) on a meta-level into a theoretical concept, by means of which he can then meta-logically examine the behavior of this bookkeeper in a specific behavioral space. His meta-logical proof not only confirmed Gödel’s meta-logical proof, but also indirectly indicates how ultimately any phenomenal complexes can be formalized on a meta-level in such a way that one can then argue with them in a formally demanding manner.
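What such a formalization looks like can be hinted at with a minimal sketch in Python (a toy machine, not Turing’s original construction; all names are hypothetical): the human calculator becomes a finite table of rules that, given a state and the symbol under the read head, writes a symbol, moves the head, and changes the state.

    # Minimal sketch: a Turing-style machine that inverts a string of bits and halts.
    rules = {
        ("start", "1"): ("0", 1, "start"),  # read 1: write 0, move right
        ("start", "0"): ("1", 1, "start"),  # read 0: write 1, move right
        ("start", " "): (" ", 0, "halt"),   # read blank: halt
    }

    def run(tape: list, state: str = "start", head: int = 0) -> str:
        while state != "halt":
            if head == len(tape):
                tape.append(" ")            # extend the tape with a blank on demand
            write, move, state = rules[(state, tape[head])]
            tape[head] = write
            head += move
        return "".join(tape).strip()

    print(run(list("1011")))  # '0100' : every bit inverted, then halt on the blank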

CONCLUSION STRUCTURALLY

The idea of philosophical supervision of the individual sciences, with the goal of a concrete integration of all disciplines into an overall conceptual structure, seems, based on the previous considerations, to be fundamentally possible from a philosophy of science perspective. From today’s point of view, the specific phenomena claimed by individual disciplines should no longer be a fundamental obstacle for a modern theory concept. This would clarify the basics of the concept of Collective Intelligence, and it would surely be possible to identify more clearly the interactions between human collective intelligence and interactive machines. Subsequently, the probability would increase that the supporting machines could be further optimized so that they could also help with more demanding tasks.

CONCLUSION SUBJECTIVELY

Attempting to characterize the interactive role of text generators in a human-driven scientific discourse, assuming a certain model of science, appears to be somewhat clear from a transdisciplinary (and thus structural) perspective. However, such scientific discourse represents only a sub-space of the general human discourse space. In the latter, the reception of texts from the perspective of humans inevitably also has a subjective side [18]: people are used to suspecting a human author behind a text. With the appearance of technical aids, texts have increasingly become products that contain more and more formulations which are not written down by a human author alone, but by the technical aids themselves, mediated by a human author. With the appearance of text generators, the proportion of technically generated formulations increases extremely, up to the case that ultimately the entire text is the direct output of a technical aid. It becomes difficult or impossible to recognize to what extent one can still speak of a controlling human share here. The human author thus disappears behind the text; the sign reality does not prevent the human reader from an existential projection of his inner world onto a potential human author, but this projection threatens to lose itself, or actually loses itself, given the real absence of a human author, in the face of a chimeric human counterpart. What happens in a world where people no longer have human counterparts?

[18] There is an excellent analysis on this topic by Hannes Bajohr titled “Artifizielle und postartifizielle Texte. Über Literatur und Künstliche Intelligenz” (Artificial and Post-Artificial Texts: On Literature and Artificial Intelligence). It was the Walter-Höllerer-Lecture 2022, delivered on December 8, 2022, at the Technical University of Berlin. The lecture can be accessed here [ https://hannesbajohr.de/wp-content/uploads/2022/12/Hoellerer-Vorlesung-2022.pdf ] (Accessed: September 29, 2023). The reference to this lecture was provided to me by Jennifer Becker.

Homo Sapiens: empirical and sustained-empirical theories, emotions, and machines. A sketch

Author: Gerd Doeben-Henisch

Email: info@uffmm.org

Aug 24, 2023 — Aug 29, 2023 (10:48h CET)

Attention: This text has been translated from a German source by using the software deepL for nearly 97 – 99% of the text! The diagrams of the German version have been left out.

CONTEXT

This text represents the outline of a talk given at the conference “AI – Text and Validity. How do AI text generators change scientific discourse?” (August 25/26, 2023, TU Darmstadt). [1] A publication of all lectures is planned by the publisher Walter de Gruyter by the end of 2023/beginning of 2024. This publication will be announced here then.

Start of the Lecture

Dear Auditorium,

This conference entitled “AI – Text and Validity. How do AI text generators change scientific discourses?” is centrally devoted to scientific discourses and the possible influence of AI text generators on these. However, the hot core ultimately remains the phenomenon of text itself, its validity.

In this conference many different views are presented that are possible on this topic.

TRANSDISCIPLINARY

My contribution to the topic tries to define the role of the so-called AI text generators by embedding the properties of ‘AI text generators’ in a ‘structural conceptual framework’ within a ‘transdisciplinary view’. This helps to highlight the specifics of scientific discourses. This can then further result in better ‘criteria for an extended assessment’ of AI text generators in their role for scientific discourses.

An additional aspect is the question of the structure of ‘collective intelligence’ using humans as an example, and how this can possibly unite with an ‘artificial intelligence’ in the context of scientific discourses.

‘Transdisciplinary’ in this context means to span a ‘meta-level’ from which it should be possible to describe today’s ‘diversity of text productions’ in a way that is expressive enough to distinguish ‘AI-based’ text production from ‘human’ text production.

HUMAN TEXT GENERATION

The formulation ‘scientific discourse’ is a special case of the more general concept ‘human text generation’.

This change of perspective is meta-theoretically necessary, since at first sight it is not the ‘text as such’ that decides about ‘validity and non-validity’, but the ‘actors’ who ‘produce and understand texts’. And with the occurrence of ‘different kinds of actors’ – here ‘humans’, there ‘machines’ – one cannot avoid addressing exactly those differences – if there are any – that play a weighty role in the ‘validity of texts’.

TEXT CAPABLE MACHINES

With the distinction between two different kinds of actors – here ‘humans’, there ‘machines’ – a first ‘fundamental asymmetry’ immediately strikes the eye: so-called ‘AI text generators’ are entities that have been ‘invented’ and ‘built’ by humans; it is furthermore humans who ‘use’ them, and the essential material used by so-called AI generators consists again of ‘texts’ that are considered a ‘human cultural property’.

In the case of so-called ‘AI-text-generators’, we shall first state only this much, that we are dealing with ‘machines’, which have ‘input’ and ‘output’, plus a minimal ‘learning ability’, and whose input and output can process ‘text-like objects’.

BIOLOGICAL — NON-BIOLOGICAL

On the meta-level, then, we are assumed to have, on the one hand, such actors which are minimally ‘text-capable machines’ – completely human products – and, on the other hand, actors we call ‘humans’. Humans, as a ‘homo-sapiens population’, belong to the set of ‘biological systems’, while ‘text-capable machines’ belong to the set of ‘non-biological systems’.

BLANK INTELLIGENCE TERM

The transformation of the term ‘AI text generator’ into the term ‘text-capable machine’ undertaken here is intended to additionally illustrate that the widespread use of the term ‘AI’ for ‘artificial intelligence’ is rather misleading. So far, there exists no general concept of ‘intelligence’ in any scientific discipline that can be applied and accepted beyond individual disciplines. There is no real justification for the almost inflationary use of the term AI today other than that the term has been so drained of meaning that it can be used anytime, anywhere, without saying anything wrong. Something that has no meaning can be neither ‘true’ nor ‘false’.

PREREQUISITES FOR TEXT GENERATION

If now the homo-sapiens population is identified as the original actor for ‘text generation’ and ‘text comprehension’, it shall now first be examined which are ‘those special characteristics’ that enable a homo-sapiens population to generate and comprehend texts and to ‘use them successfully in the everyday life process’.

VALIDITY

A connecting point for the investigation of the special characteristics of a homo-sapiens text generation and a text understanding is the term ‘validity’, which occurs in the conference topic.

In the primary arena of biological life, in everyday processes, in everyday life, the ‘validity’ of a text has to do with ‘being correct’, being ‘applicable’. If a text is not planned from the beginning with a ‘fictional character’, but with a ‘reference to everyday events’ which everyone can ‘check’ in the context of his ‘perception of the world’, then ‘validity in everyday life’ has to do with the fact that the ‘correctness of a text’ can be checked. If the ‘statement of a text’ is ‘applicable’ in everyday life, if it is ‘correct’, then one also says that this statement is ‘valid’; one grants it ‘validity’; one also calls it ‘true’. Against this background, one might be inclined to continue and say: ‘if’ the statement of a text ‘does not apply’, then it has ‘no validity’; simplified to the formulation that the statement is ‘not true’ or simply ‘false’.

In ‘real everyday life’, however, the world is rarely ‘black’ and ‘white’: it is not uncommon that we are confronted with texts to which we are inclined to ascribe ‘a possible validity’ because of their ‘learned meaning’, although it may not be at all clear whether there is – or will be – a situation in everyday life in which the statement of the text actually applies. In such a case, the validity would then be ‘indeterminate’; the statement would be ‘neither true nor false’.
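The three statuses described here (applicable, shown not to apply, currently undecidable) can be modeled as a three-valued logic; a minimal sketch in Python (one standard modeling choice, in the spirit of Kleene, not a claim taken from this text):

    from enum import Enum

    class Validity(Enum):
        TRUE = "applicable in everyday life"
        FALSE = "shown not to apply"
        INDETERMINATE = "currently neither true nor false"

    def classify(statement: str, checked_facts: dict) -> Validity:
        """Anything not yet checked against everyday life stays indeterminate."""
        if statement not in checked_facts:
            return Validity.INDETERMINATE
        return Validity.TRUE if checked_facts[statement] else Validity.FALSE

    facts = {"water boils at 100 C at sea level": True}
    print(classify("water boils at 100 C at sea level", facts).name)  # TRUE
    print(classify("there will be a storm next month", facts).name)   # INDETERMINATE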

ASYMMETRY: APPLICABLE – NOT APPLICABLE

One can recognize a certain asymmetry here: The ‘applicability’ of a statement, its actual validity, is comparatively clear. The ‘not being applicable’, i.e. a ‘merely possible’ validity, on the other hand, is difficult to decide.

With this phenomenon of the ‘current non-decidability’ of a statement we touch both the problem of the ‘meaning’ of a statement — how far is at all clear what is meant? — as well as the problem of the ‘unfinishedness of our everyday life’, better known as ‘future’: whether a ‘current present’ continues as such, whether exactly like this, or whether completely different, depends on how we understand and estimate ‘future’ in general; what some take for granted as a possible future, can be simply ‘nonsense’ for others.

MEANING

This tension between ‘currently decidable’ and ‘currently not yet decidable’ additionally clarifies an ‘autonomous’ aspect of the phenomenon of meaning: if a certain knowledge has been formed in the brain and has been made usable as ‘meaning’ for a ‘language system’, then this ‘associated’ meaning gains its own ‘reality’ for the scope of knowledge: it is not the ‘reality beyond the brain’, but the ‘reality of one’s own thinking’, whereby this reality of thinking ‘seen from outside’ has something like ‘being virtual’.

If one wants to talk about this ‘special reality of meaning’ in the context of the ‘whole system’, then one has to resort to far-reaching assumptions in order to be able to install a ‘conceptual framework’ on the meta-level which is able to sufficiently describe the structure and function of meaning. For this, the following components are minimally assumed (‘knowledge’, ‘language’ as well as ‘meaning relation’):

KNOWLEDGE: There is the totality of ‘knowledge’ that ‘builds up’ in the homo-sapiens actor in the course of time in the brain: both due to continuous interactions of the ‘brain’ with the ‘environment of the body’, as well as due to interactions ‘with the body itself’, as well as due to interactions ‘of the brain with itself’.

LANGUAGE: To be distinguished from knowledge is the dynamic system of ‘potential means of expression’, here simplistically called ‘language’, which can unfold over time in interaction with ‘knowledge’.

MEANING RELATIONSHIP: Finally, there is the dynamic ‘meaning relation’, an interaction mechanism that can link any knowledge elements to any language means of expression at any time.

Each of these mentioned components ‘knowledge’, ‘language’ as well as ‘meaning relation’ is extremely complex; no less complex is their interaction.
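A drastically simplified structural sketch in Python may make the interplay of the three components graspable (an illustration only; knowledge elements and expressions are reduced to plain strings, and all names are hypothetical):

    # Drastically simplified: knowledge and language as growing sets,
    # the meaning relation as a dynamic many-to-many mapping between them.
    knowledge: set = {"perceived-tree-17", "remembered-river-3"}
    language: set = {"tree", "river"}
    meaning: set = set()  # pairs (expression, knowledge element)

    def link(expression: str, knowledge_element: str) -> None:
        """The meaning relation can link any expression to any knowledge element at any time."""
        language.add(expression)
        knowledge.add(knowledge_element)
        meaning.add((expression, knowledge_element))

    link("tree", "perceived-tree-17")
    link("tree", "remembered-tree-9")  # one expression, several knowledge elements
    print(sorted(k for e, k in meaning if e == "tree"))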

FUTURE AND EMOTIONS

In addition to the phenomenon of meaning, it also became apparent in the phenomenon of being applicable that the decision of being applicable also depends on an ‘available everyday situation’ in which a current correspondence can be ‘concretely shown’ or not.

If, in addition to a ‘conceivable meaning’ in the mind, we do not currently have any everyday situation that sufficiently corresponds to this meaning in the mind, then there are always two possibilities: We can give the ‘status of a possible future’ to this imagined construct despite the lack of reality reference, or not.

If we would decide to assign the status of a possible future to a ‘meaning in the head’, then there arise usually two requirements: (i) Can it be made sufficiently plausible in the light of the available knowledge that the ‘imagined possible situation’ can be ‘transformed into a new real situation’ in the ‘foreseeable future’ starting from the current real situation? And (ii) Are there ‘sustainable reasons’ why one should ‘want and affirm’ this possible future?

The first requirement calls for a powerful ‘science’ that sheds light on whether it can work at all. The second demand goes beyond this and brings the seemingly ‘irrational’ aspect of ’emotionality’ into play under the garb of ‘sustainability’: it is not simply about ‘knowledge as such’, it is also not only about a ‘so-called sustainable knowledge’ that is supposed to contribute to supporting the survival of life on planet Earth — and beyond –, it is rather also about ‘finding something good, affirming something, and then also wanting to decide it’. These last aspects are so far rather located beyond ‘rationality’; they are assigned to the diffuse area of ’emotions’; which is strange, since any form of ‘usual rationality’ is exactly based on these ’emotions’.[2]

SCIENTIFIC DISCOURSE AND EVERYDAY SITUATIONS

In the context of ‘rationality’ and ’emotionality’ just indicated, it is not uninteresting that in the conference topic ‘scientific discourse’ is thematized as a point of reference to clarify the status of text-capable machines.

To what extent can a ‘scientific discourse’ serve as a reference point for a successful text at all?

For this purpose it can help to be aware of the fact that life on this planet earth takes place at every moment in an inconceivably large amount of ‘everyday situations’, which all take place simultaneously. Each ‘everyday situation’ represents a ‘present’ for the actors. And in the heads of the actors there is an individually different knowledge about how a present ‘can change’ or will change in a possible future.

This ‘knowledge in the heads’ of the actors involved can generally be ‘transformed into texts’ which in different ways ‘linguistically represent’ some of the aspects of everyday life.

The crucial point is that it is not enough for everyone to produce a text ‘for himself’ alone, quite ‘individually’, but that everyone must produce a ‘common text’ together ‘with everyone else’ who is also affected by the everyday situation. A ‘collective’ performance is required.

Nor is it a question of ‘any’ text, but one that is such that it allows for the ‘generation of possible continuations in the future’, that is, what is traditionally expected of a ‘scientific text’.

From the extensive discussion — since the times of Aristotle — of what ‘scientific’ should mean, what a ‘theory’ is, what an ’empirical theory’ should be, I sketch what I call here the ‘minimal concept of an empirical theory’.

  1. The starting point is a ‘group of people’ (the ‘authors’) who want to create a ‘common text’.
  2. This text is supposed to have the property that it allows ‘justifiable predictions’ for possible ‘future situations’, to which then ‘sometime’ in the future a ‘validity can be assigned’.
  3. The authors are able to agree on a ‘starting situation’ which they transform by means of a ‘common language’ into a ‘source text’ [A].
  4. It is agreed that this initial text may contain only ‘such linguistic expressions’ which can be shown to be ‘true’ ‘in the initial situation’.
  5. In another text, the authors compile a set of ‘rules of change’ [V] that put into words ‘forms of change’ for a given situation.
  6. Also in this case it is considered as agreed that only ‘such rules of change’ may be written down, of which all authors know that they have proved to be ‘true’ in ‘preceding everyday situations’.
  7. The text with the rules of change V is on a ‘meta-level’ compared to the text A about the initial situation, which is on an ‘object-level’ relative to the text V.
  8. The ‘interaction’ between the text V with the change rules and the text A with the initial situation is described in a separate ‘application text’ [F]: Here it is described when and how one may apply a change rule (in V) to a source text A and how this changes the ‘source text A’ to a ‘subsequent text A*’.
  9. The application text F is thus on the next higher meta-level relative to the two texts A and V, and its application can change the source text A.
  10. The moment a new subsequent text A* exists, the subsequent text A* becomes the new initial text A.
  11. If the new initial text A is such that a change rule from V can be applied again, then the generation of a new subsequent text A* is repeated.
  12. This ‘repeatability’ of the application can lead to the generation of many subsequent texts <A*1, …, A*n>.
  13. A series of many subsequent texts <A*1, …, A*n> is usually called a ‘simulation’ (see the sketch after this list).
  14. Depending on the nature of the source text A and the nature of the change rules in V, it may be that possible simulations ‘can go quite differently’. The set of possible scientific simulations thus represents ‘future’ not as a single, definite course, but as an ‘arbitrarily large set of possible courses’.
  15. The factors on which different courses depend are manifold. One factor is the authors themselves. Every author is, after all, with his corporeality entirely part of that very empirical world which is to be described in a scientific theory. And, as is well known, any human actor can change his mind at any moment. He can literally, in the next moment, do exactly the opposite of what he thought before. And thus the world is already no longer the same as previously assumed in the scientific description.
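As an illustration of points 3 to 15, here is a minimal sketch in Python (an illustration of the text triple A, V, F, not the author’s formalism; all concrete statements and names are hypothetical): a situation A is a set of statements taken as ‘true’, each rule in V is a condition/effect pair, and F is the loop that decides when and how a rule transforms A into a subsequent text A*.

    # A: the initial situation as a set of statements assumed 'true' in it.
    A = {"field is sown"}

    # V: rules of change as (condition, additions, removals), each 'proved true' before.
    V = [
        ({"field is sown"}, {"grain is growing"}, {"field is sown"}),
        ({"grain is growing"}, {"grain is harvested"}, {"grain is growing"}),
    ]

    # F: the application text, saying when and how a rule in V turns A into A*.
    def apply_rules(A: set, V: list, max_steps: int = 10) -> list:
        trace = [set(A)]
        for _ in range(max_steps):
            for condition, additions, removals in V:
                if condition <= A:                  # the rule is applicable to A
                    A = (A - removals) | additions  # A becomes the subsequent text A*
                    trace.append(set(A))
                    break
            else:
                break                               # no rule applicable: the run ends
        return trace

    for step in apply_rules(A, V):                  # the series <A*1, …, A*n>
        print(step)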

Even this simple example shows that the emotionality of ‘finding good, wanting, and deciding’ lies ahead of the rationality of scientific theories. This continues in the so-called ‘sustainability discussion’.

SUSTAINABLE EMPIRICAL THEORY

With the ‘minimal concept of an empirical theory (ET)’ just introduced, a ‘minimal concept of a sustainable empirical theory (NET)’ can also be introduced directly.

While an empirical theory can span an arbitrarily large space of grounded simulations that make visible the space of many possible futures, everyday actors are left with the question of what, out of all this, they want to have as ‘their future’. In the present we experience the situation that mankind gives the impression of having agreed to destroy the life beyond the human population more and more sustainably, with the expected effect of ‘self-destruction’.

However, this self-destruction effect, which can be predicted in outline, is only one variant in the space of possible futures. Empirical science can indicate it in outline. To distinguish this variant before others, to accept it as ‘good’, to ‘want’ it, to ‘decide’ for this variant, lies in that so far hardly explored area of emotionality as root of all rationality.[2]

If everyday actors have decided in favor of a certain rationally illuminated variant of a possible future, then they can evaluate at any time, with a suitable ‘evaluation procedure (EVAL)’, how many ‘percent (%) of the properties of the target state Z’ have been achieved so far, provided that the favored target state has been transformed into a suitable text Z.

In other words, the moment we have transformed everyday scenarios into a rationally tangible state via suitable texts, things take on a certain clarity and thereby become — in a sense — simple. That we make such transformations and on which aspects of a real or possible state we then focus is, however, antecedent to text-based rationality as an emotional dimension.[2]

MAN-MACHINE

After these preliminary considerations, the final question is whether and how the main question of this conference, “How do AI text generators change scientific discourse?” can be answered in any way?

My previous remarks have attempted to show what it means for humans to collectively generate texts that meet the criteria for scientific discourse that also meets the requirements for empirical or even sustained empirical theories.

In doing so, it becomes apparent that both in the generation of a collective scientific text and in its application in everyday life, a close interrelation with both the shared experiential world and the dynamic knowledge and meaning components in each actor play a role.

The aspect of ‘validity’ is part of a dynamic world reference whose assessment as ‘true’ is constantly in flux; while one actor may tend to say “Yes, can be true”, another actor may just tend to the opposite. While some may tend to favor possible future option X, others may prefer future option Y. Rational arguments are absent; emotions speak. While one group has just decided to ‘believe’ and ‘implement’ plan Z, the others turn away, reject plan Z, and do something completely different.

This unsteady, uncertain character of future-interpretation and future-action accompanies the Homo Sapiens population from the very beginning. The not understood emotional complex constantly accompanies everyday life like a shadow.

Where and how can ‘text-enabled machines’ make a constructive contribution in this situation?

Assuming that there is a source text A, a change text V and an instruction F, today’s algorithms could calculate all possible simulations faster than humans could.

Assuming that there is also a target text Z, today’s algorithms could also compute an evaluation of the relationship between a current situation as A and the target text Z.

In other words: if an empirical or a sustainable-empirical theory would be formulated with its necessary texts, then a present algorithm could automatically compute all possible simulations and the degree of target fulfillment faster than any human alone.
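Extending the simulation sketch given after the theory list above, the evaluation procedure EVAL could be added as one small function (again a hypothetical illustration, with the target text Z as a set of desired statements): the degree of target fulfillment is the percentage of Z already contained in the current situation A.

    def EVAL(A: set, Z: set) -> float:
        """Degree of target fulfillment: percent of the target text Z realized in A."""
        return 100.0 * len(A & Z) / len(Z) if Z else 100.0

    Z = {"grain is harvested", "soil is regenerated"}
    print(EVAL({"grain is harvested"}, Z))  # 50.0 : half of the target properties reached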

But what about (i) the elaboration of a theory, or (ii) the pre-rational decision for a certain empirical or even sustainable-empirical theory?

A clear answer to both questions seems hardly possible to me at the present time, since we humans still understand too little how we ourselves collectively form, select, check, compare and also reject theories in everyday life.

My working hypothesis on the subject is that we will very much need machines capable of learning in order to fulfill the task of developing useful sustainable empirical theories for our common everyday life in the future. But when this will happen in reality, and to what extent, seems largely unclear to me at this point in time.[2]

COMMENTS

[1] https://zevedi.de/en/topics/ki-text-2/

[2] Talking about ’emotions’ in the sense of ‘factors in us’ that move us to go from the state ‘before the text’ to the state ‘written text’ hints at very many aspects. In a small exploratory text “State Change from Non-Writing to Writing. Working with chatGPT4 in parallel” ( https://www.uffmm.org/2023/08/28/state-change-from-non-writing-to-writing-working-with-chatgpt4-in-parallel/ ) the author has tried to address some of these aspects. While writing, it becomes clear that very many ‘individually subjective’ aspects play a role here, which of course do not appear ‘isolated’ but always flash up a reference to concrete contexts linked to the topic. Nevertheless, it is not the ‘objective context’ that forms the core statement, but the ‘individually subjective’ component that appears in the process of ‘putting into words’. This individually subjective component is tentatively used here as a criterion for ‘authentic texts’ in comparison to ‘automated texts’ like those that can be generated by all kinds of bots. In order to make this difference more tangible, the author decided to create an ‘automated text’ on the same topic at the same time as the quoted authentic text. For this purpose he used chatGPT4 from openAI. This is the beginning of a philosophical-literary experiment, perhaps to make the possible difference more visible in this way. For purely theoretical reasons, it is clear that chatGPT4 can never generate ‘authentic texts’ in origin, unless it uses an authentic text as a template that it can modify. But then this would be a clear ‘fake document’. To prevent such an abuse, the author writes the authentic text first and then asks chatGPT4 to write something about the given topic, without chatGPT4 knowing the authentic text, because it has not yet found its way into the database of chatGPT4 via the Internet.

THINKING: everyday – philosophical – empirical theoretical (sketch)

(First: June 9, 2023 – Last change: June 10, 2023)

Comment: This post is a translation from a German text in my blog ‘cognitiveagent.org’ with the aid of the deepL software

CONTEXT

The current phase of my thinking continues to revolve around the question how the various states of knowledge relate to each other: the many individual scientific disciplines drift side by side; philosophy continues to claim supremacy, but cannot really locate itself convincingly; and everyday thinking continues to run its course unperturbed with the conviction that ‘everything is clear’, that you just have to look at it ‘as it is’. Then the different ‘religious views’ come around the corner with a very high demand and a simultaneous prohibition not to look too closely. … and much more.

INTENTION

In the following text three fundamental ways of looking at our present world are outlined and at the same time they are put in relation to each other. Some hitherto unanswered questions can possibly be answered better, but many new questions arise as well. When ‘old patterns of thinking’ are suspended, many (most? all?) of the hitherto familiar patterns of thinking have to be readjusted. All of a sudden they are simply ‘wrong’ or strongly ‘in need of repair’.

Unfortunately it is only a ‘sketch’.[1]

THOUGHTS IN EVERYDAY

FIG. 1: In everyday thinking, every human being (a ‘homo sapiens’ (HS)) assumes that what he knows of a ‘real world’ is what he ‘perceives’. That this real world with its properties exists, he is – more or less – ‘aware’ of; there is no need to discuss it specially. That which ‘is’, ‘is’.

… much could be said …

PHILOSOPHICAL THINKING

FIG. 2: Philosophical thinking starts where one notices that the ‘real world’ is not perceived by all people in ‘the same way’ and even less ‘imagined’ in the same way. Some people have ‘their ideas’ about the real world that are strikingly ‘different’ from other people’s ideas, and yet they insist that the world is exactly as they imagine it. From this observation in everyday life, many new questions can arise. The answers to these questions are as manifold as there were and are people who gave or still give themselves to these philosophical questions.

… famous examples: Plato’s allegory of the cave suggests that the contents of our consciousness are perhaps not ‘the things themselves’ but only the ‘shadows’ of what is ultimately ‘true’ … Descartes‘ famous ‘cogito ergo sum’ brings into play the aspect that the contents of consciousness also say something about himself who ‘consciously perceives’ such contents …. the ‘existence of the contents’ presupposes his ‘existence as thinker’, without which the existence of the contents would not be possible at all …what does this tell us? … Kant’s famous ‘thing in itself’ (‘Ding an sich’) can be referred to the insight that the concrete, fleeting perceptions can never directly show the ‘world as such’ in its ‘generality’. This lies ‘somewhere behind’, hard to grasp, actually not graspable at all? ….

… many things could be said …

EMPIRICAL-THEORETICAL THINKING

FIG. 3: The concept of an ’empirical theory’ developed very late in the documented history of man on this planet. On the one hand philosophically inspired, on the other hand independent of the widespread forms of philosophy, but very strongly influenced by logical and mathematical thinking, the new ’empirical-theoretical’ thinking settled exactly at this breaking point between ‘everyday thinking’ and ‘theological’ as well as ‘strongly metaphysical philosophical’ thinking. The fact that people could make statements about the world ‘with the tone of utter conviction’, although it was not possible to show ‘common experiences of the real world’ that ‘corresponded’ with the expressed statements, inspired individual people to investigate the ‘experiential (empirical) world’ in such a way that everyone else could have the ‘same experiences’ with ‘the same procedure’. These ‘transparent procedures’ were ‘repeatable’, and such procedures became what was later called an ’empirical experiment’ or then, one step further, a ‘measurement’. In ‘measuring’ one compares the ‘result’ of a certain experimental procedure with a ‘previously defined standard object’ (‘kilogram’, ‘meter’, …).

This procedure led to the fact that – at least the experimenters – ‘learned’ that our knowledge about the ‘real world’ breaks down into two components: there is the ‘general knowledge’ what our language can articulate, with terms that do not automatically have to have something to do with the ‘experiential world’, and such terms that can be associated with experimental experiences, and in such a way that other people, if they engage in the experimental procedure, can also repeat and thereby confirm these experiences. A rough distinction between these two kinds of linguistic expressions might be ‘fictive’ expressions with unexplained claims to experience, and ’empirical’ expressions with confirmed claims to experience.

Since the beginning of the new empirical-theoretical way of thinking in the 17th century, it took at least 300 years until the concept of an ’empirical theory’ was consolidated to such an extent that it became a defining paradigm in many areas of science. However, many methodological questions remained controversial or even ‘unsolved’.

DATA and THEORY

For many centuries, the ‘misuse of everyday language’ for enabling ’empirically unverifiable statements’ was directly blamed on this everyday language, and the whole everyday language was discredited as a ‘source of untruths’. A liberation from this ‘monster of everyday language’ was increasingly sought in formal artificial languages and then in modern axiomatized mathematics, which had entered into a close alliance with modern formal logic (from the end of the 19th century). The expression systems of modern formal logic or of modern formal mathematics had as such (almost) no ‘intrinsic meaning’. Meanings had to be introduced explicitly on a case-by-case basis. A ‘formal mathematical theory’ could be formulated in such a way that it allowed ‘logical inferences’ even without ‘explicit assignment’ of an ‘external meaning’, which allowed certain formal expressions to be called ‘formally true’ or ‘formally false’.

This seemed very ‘reassuring’ at first sight: mathematics as such is not a place of ‘false’ or ‘foisted’ truths.

The intensive use of formal theories in connection with experience-based experiments, however, then gradually made clear that a single measured value as such does not actually have any ‘meaning’ either: what is it supposed to ‘mean’ that at a certain ‘time’ at a certain ‘place’ one establishes an ‘experienceable state’ with certain ‘properties’, ideally comparable to a previously agreed ‘standard object’? ‘Expansions’ of bodies can change, ‘weight’ and ‘temperature’ as well. Everything can change in the world of experience, fast, slow, … so what can a single isolated measured value say?

It dawned on some – not only the experience-based researchers, but also some philosophers – that single measured values only get a ‘meaning’, a possible ‘sense’, if one can at least establish ‘relations’ between single measured values: relations ‘in time’ (before – after), relations at/in place (higher – lower, next to each other, …), ‘interrelated quantities’ (objects – areas, …); and that, furthermore, the different ‘relations’ themselves again need a ‘conceptual context’ (single – quantity, interactions, causal – non-causal, …).

Finally, it became clear that single measured values needed ‘class terms’, so that they could be classified somehow: abstract terms like ‘tree’, ‘plant’, ‘cloud’, ‘river’, ‘fish’ etc. became ‘collection points’, where one could deliver ‘single observations’. With this, hundreds and hundreds of single values could then be used, for example, to characterize the abstract term ‘tree’ or ‘plant’ etc.

This distinction into ‘single, concrete’ and ‘abstract, general’ turns out to be fundamental. It also made clear that the classification of the world by means of such abstract terms is ultimately ‘arbitrary’: both ‘which terms’ one chooses is arbitrary, and the assignment of individual experiential data to abstract terms is not unambiguously settled in advance. The process of assigning individual experiential data to particular terms within a ‘process in time’ is itself strongly ‘hypothetical’ and itself in turn part of other ‘relations’ which can provide additional ‘criteria’ as to whether date X is more likely to belong to term A or more likely to belong to term B (biology is full of such classification problems).

Furthermore, it became apparent that mathematics, which comes across as so ‘innocent’, can by no means be regarded as ‘innocent’ on closer examination. The broad discussion of philosophy of science in the 20th century brought up many ‘artifacts’ which can at least easily ‘corrupt’ the description of a dynamic world of experience.

Thus it is part of formal mathematical theories that they can operate with so-called ‘universal or particular statements’. Mathematically it is important that one can talk about ‘all’ elements of a domain/set; otherwise talking becomes meaningless. If one now chooses a formal mathematical system as the conceptual framework for a theory which describes ’empirical facts’ in such a way that inferences become possible which are ‘true’ in the sense of the theory, and thus become ‘predictions’ asserting that a certain fact will occur either ‘absolutely’ or with a certain probability X greater than 50%, then two different worlds unite: the fragmentary individual statements about the world of experience become embedded in ‘all-statements’ which in principle say more than empirical data can provide.

At this point it becomes visible that mathematics, which appears to be so ‘neutral’, does exactly the same job as ‘everyday language’ with its ‘abstract concepts’: the abstract concepts of everyday language always go beyond the individual case (otherwise we could not say anything at all in the end), but just by this they allow considerations and planning, as we appreciate them so much in mathematical theories.
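The asymmetry hinted at here can be made explicit in a small sketch in Python (standard falsification logic in the spirit of Popper; the example data and names are hypothetical): a finite list of observations can refute an all-statement, but can never verify it.

    def check_universal(claim, observations: list) -> str:
        """A finite data set can refute an all-statement, but never prove it."""
        for case in observations:
            if not claim(case):
                return "falsified by a counterexample"
        return "not falsified so far (but not verified: the all-statement exceeds the data)"

    all_swans_white = lambda swan: swan["color"] == "white"
    data = [{"color": "white"}, {"color": "white"}]
    print(check_universal(all_swans_white, data))                         # not falsified so far
    print(check_universal(all_swans_white, data + [{"color": "black"}]))  # falsified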

Empirical theories in the format of formal mathematical theories have the further problem that they as such have (almost) no meanings of their own. If one wants to relate the formal expressions to the world of experience, then one has to explicitly ‘construct a meaning’ (with the help of everyday language!) for each abstract concept of the formal theory (or also for each formal relation or also for each formal operator) by establishing a ‘mapping’/an ‘assignment’ between the abstract constructs and certain provable facts of experience. What may sound so simple here at first sight has turned out to be an almost unsolvable problem in the course of the last 100 years. Now it does not follow that one should not do it at all; but it does draw attention to the fact that the choice of a formal mathematical theory need not automatically be a good solution.

… many things could still be said …

INFERENCE and TRUTH

A formal mathematical theory can derive certain statements as formally ‘true’ or ‘false’ from certain ‘assumptions’. This is possible because there are two basic assumptions: (i) all formal expressions have an ‘abstract truth value’, being ‘abstractly true’ or just ‘abstractly not true’; (ii) there is a so-called ‘formal notion of inference’ which determines whether and how one can ‘infer’ other formal expressions from a given ‘set of formal expressions’ with agreed abstract truth values and a well-defined ‘form’. This ‘derivation’ consists of ‘operations over the signs of the formal expressions’. The formal expressions are here ‘objects’ of the notion of inference, which is located one ‘level higher’, on a ‘meta-level 1’. The notion of inference is in this respect a ‘formal theory’ of its own, which speaks about certain ‘objects of a deeper level’ in the same way as the abstract terms of a theory (or of everyday language) speak about concrete facts of experience. The interaction of the notion of inference (at meta-level 1) and the formal expressions as objects presupposes its own ‘interpretive relation’ (ultimately a kind of ‘mapping’), which in turn is located at yet another level, meta-level 2. This interpretive relation uses both the formal expressions (with their truth values!) and the notion of inference as ‘objects’ in order to install an interpretive relation between them. Normally, this meta-level 2 is handled by everyday language, and the implicit interpretive relation is located ‘in the minds of mathematicians (actually, in the minds of logicians)’, who assume that their ‘practice of inference’ provides enough experiential data to ‘understand’ the ‘content of the meaning relation’.
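What ‘operations over the signs of the formal expressions’ amounts to can be hinted at with a minimal sketch in Python (propositional forward chaining with modus ponens only; a toy illustration, not the full formal apparatus): the derivation manipulates expressions purely by their form, without any appeal to their meaning.

    def derive(facts: set, implications: list) -> set:
        """Forward chaining with modus ponens: from 'p' and (p -> q), infer 'q'.
        Pure sign manipulation: nothing here 'knows' what p or q mean."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for antecedent, consequent in implications:
                if antecedent in derived and consequent not in derived:
                    derived.add(consequent)
                    changed = True
        return derived

    rules = [("it rains", "the street is wet"), ("the street is wet", "the street is slippery")]
    print(derive({"it rains"}, rules))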

It was Kurt Gödel [2] who in 1930/31 tried to formalize the ‘intuitive procedure’ of meta-proofs itself (by means of the famous Gödelization) and thus made the meta-level 3 in turn a new ‘object’ that can be discussed explicitly. Following Gödel’s proof, there were further attempts to formulate this meta-level 3 in different ways or even to formalize a meta-level 4. But these approaches have so far remained without a clear philosophical result.

It seems to be clear only that the ability of the human brain to open again and again new meta-levels, in order to analyze and discuss with it previously formulated facts, is in principle unlimited (only limited by the finiteness of the brain, its energy supply, the time, and similar material factors).

An interesting special question is whether the formal inference concept of formal mathematics, applied to experiential facts of a dynamic empirical world, is appropriate to the specific ‘world dynamics’ at all. For the area of the ‘apparently material structures’ of the universe, modern physics has located multiple phenomena which simply elude classical concepts. A ‘matter’ which is at the same time ‘energy’ tends to be no longer classically describable, and quantum physics is – despite all ‘modernity’ – in the end still a ‘classical way of thinking’ within the framework of a formal mathematics which, by its very approach, does not possess many properties that nevertheless belong to the experienceable world.

This limitation of formal-mathematical physical thinking shows up especially blatantly in the example of those phenomena which we call ‘life’. The experience-based phenomena that we associate with ‘living (= biological) systems’ are, at first sight, completely material structures; however, they have dynamic properties that say more about the ‘energy’ that gives rise to them than about the materiality by means of which they are realized. In this respect, implicit energy is the real ‘information content’ of living systems, which are ‘radically free’ systems in their basic structure, since energy appears as ‘unbounded’. The unmistakable tendency of living systems to ‘enable ever more complexity’ out of themselves and to integrate it contradicts all known physical principles. ‘Entropy’ is often used as an argument to relativize this form of ‘biological self-dynamics’ by reference to a simple ‘upper bound’ as a ‘limitation’, but this reference does not completely nullify the original phenomenon of the ‘living’.

It becomes especially exciting if one dares to ask the question of ‘truth’ at this point. Suppose one locates the meaning of the term ‘truth’ first of all in the situation in which a biological system (here the human being) can establish a certain ‘correspondence’ between its abstract concepts and those concrete knowledge structures within its thinking which can be related to properties of an experiential world through a process of interaction, not only as a single individual but together with other individuals. Then any abstract system of expression (called ‘language’) has a ‘true relation to reality’ only to the extent that there are biological systems that can establish such relations.

These references further depend on the structure of perception and the structure of thought of these systems; these in turn depend on the nature of bodies as the context of brains; and bodies in turn depend both on the material structure and dynamics of the environment and on the everyday social processes that largely determine what a member of a society can experience, learn, work, plan, and do. Whatever an individual can or could do, society either amplifies or ‘freezes’ the individual’s potential. ‘Truth’ exists under these conditions as a ‘free-moving parameter’ that is significantly affected by the particular process environment.

Talk of ‘cultural diversity’ can be a dangerous ‘trivialization’ of the massive suppression of ‘alternative processes of learning and action’ that are ‘withdrawn’ from a society because it ‘locks itself in’. Ignorance tends not to be a good advisor. However, knowledge as such does not guarantee ‘right’ action either. The ‘process of freedom’ on planet Earth is a ‘galactic experiment’, the seriousness and extent of which is hardly seen so far.

COMMENTS

[1] References are omitted here. Many hundreds of texts would have to be mentioned. No sketch can do that.

[2] See for the ‘incompleteness theorems’ of Kurt Gödel (1930, published 1931): https://en.wikipedia.org/wiki/Kurt_G%C3%B6del#Incompleteness_theorems

GRAMMAR FOR SUSTAINABLE DEVELOPMENT. Sketch


eJournal: uffmm.org
ISSN 2567-6458, 23.February 2023 – 23.February 2023, 13:23h
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

This text is a translation from a German source, aided by the automatic translation program ‘www.DeepL.com/Translator’ (free version).

CONTEXT

This text is part of the Philosophy of Science theme of the uffmm.org blog.

Motivation

The following text is a confluence of ideas that have been driving me for many months. Parts of it can be found as texts in all three blogs (Citizen Science 2.0 for Sustainable Development, Integrated Engineering and the Human Factor (this blog), Philosophy Now. In Search for a new Human Paradigm). The choice of the word ‘grammar’ [1] for the following text is rather unusual, but seems to me to reflect the character of the reflections well.

Sustainability for populations

The concept of sustainable development is considered here in the context of ‘biological populations’. Such populations are dynamic entities with many ‘complex properties’. For the analysis of the ‘sustainability’ of such populations, one aspect seems ‘fundamental’ for a proper understanding: whether and how the members of a population, the actors, are interconnected or not.

An ‘unconnected’ set

If the ‘actors’ of a ‘population’ stand in no direct ‘interaction’ with each other, then their ‘acting’ is also isolated from each other. Over a wide area they will probably not ‘get in each other’s way’; in a narrow area they could easily hinder each other or even fight each other, up to the point of mutual destruction.

It should be noted that even such unconnected actors must have minimal ‘knowledge’ about themselves and their environment, as well as minimal ‘emotions’, in order to live at all.

Without direct interaction an unconnected population will, as a population, die out relatively quickly.

A ‘connected’ set

A ‘connected set’ exists if the actors of a population have a sufficient number of direct interactions through which they can ‘coordinate’ their knowledge about themselves and the world, as well as their emotions, to such an extent that they become capable of ‘coordinated action’. In this way the single, individual actions combine, through their possible effects, into a ‘common (= social) action’ which can achieve more than each actor would have been able to achieve individually.

The ‘emotions’ involved must be such that they do not ‘delimit/exclude’ but rather ‘include/recognize’.

The ‘knowledge’ involved must not be ‘static’ or ‘unrealistic’, but rather ‘open’, ‘learning’, and ‘realistic’.

The ‘survival’ of a connected population is basically possible if the most important ‘factors’ of survival are sufficiently fulfilled.
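To make the distinction between an ‘unconnected’ and a ‘connected’ set more tangible, here is a minimal sketch in Python. It is an interpretive assumption of this sketch, not a formalism from the text, that ‘connected’ can be read as graph connectivity over direct interactions; all names and data are invented for illustration.

```python
# Minimal sketch (interpretive assumption, not the author's formalism):
# a population as a graph of actors; 'connected' is read here as
# reachability of every actor from every other via direct interactions.

from collections import defaultdict

def is_connected(actors: set, interactions: list) -> bool:
    """True if every actor can reach every other actor through
    chains of direct interactions (graph connectivity)."""
    if not actors:
        return True
    neighbors = defaultdict(set)
    for a, b in interactions:          # each interaction links two actors
        neighbors[a].add(b)
        neighbors[b].add(a)
    seen, stack = set(), [next(iter(actors))]
    while stack:                       # simple depth-first traversal
        actor = stack.pop()
        if actor not in seen:
            seen.add(actor)
            stack.extend(neighbors[actor] - seen)
    return seen == set(actors)

population = {"a1", "a2", "a3"}
print(is_connected(population, []))                            # False: 'unconnected' set
print(is_connected(population, [("a1", "a2"), ("a2", "a3")]))  # True: 'connected' set
```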

Transitions from – to

The ‘transition’ from an ‘unconnected’ to a ‘connected’ state of a population is not inevitable. The primary motive may simply be the ‘will to survive’ (an emotion), together with the growing ‘insight’ (= knowledge) that survival is only possible with ‘minimal cooperation’. An individual, however, can live as a ‘loner’ for the duration of his life, because he does not have to experience his individual death as a sufficient reason to ally with others. A population as such, however, can only survive if a sufficient number of individuals survive while interacting minimally with each other. The history of life on planet Earth suggests the working hypothesis that for 3.5 billion years biological populations (including the human population) have always contained enough members to counter the ‘self-destructive tendencies’ of individuals with a ‘constructive tendency’.

The emergence and the maintenance of a ‘connected population’ needs a minimum of ‘suitable knowledge’ and ‘suitable emotions’ to succeed.

It is a permanent challenge for all biological populations to shape their own emotions in such a way that they tend not to exclude and despise, but rather to include and recognize. Similarly, knowledge must be suitable for acquiring a realistic picture of oneself, of others, and of the environment, so that the behavior in question is ‘factually appropriate’ and more likely to lead to ‘success’.

As the history of the human population shows, both the ‘shaping of emotions’ and the ‘shaping of powerful knowledge’ are usually largely underestimated and poorly organized, or not organized at all. The necessary ‘effort’ is shied away from, and the necessary ‘duration’ of such processes is underestimated. Within knowledge there is additionally the general problem that the ‘short time spans’ of an individual life are an obstacle to recognizing and shaping processes that require larger time spans (this concerns almost all ‘important’ processes).

We must also note that ‘connected states’ of populations can collapse again at any time if those behaviors that make them possible are weakened or disappear altogether. Connections in the realm of biological populations are largely ‘undetermined’! They are based on complex processes within and between the individual actors. Whole societies can ‘topple overnight’ if an event destroys ‘trust in the connection’. Without trust no connection is possible. The emergence and the passing away of trust should therefore be part of the basic concerns of every society in a state of interconnectedness.

Political rules of the game

‘Politics’ encompasses the totality of arrangements by which the members of a human population agree to organize jointly binding decision-making processes.[2] On a rough scale one could place two extremes: (i) on the one hand a population with a ‘democratic system’ [3], and (ii) on the other hand a population with a maximally undemocratic system.[4]

As already noted for ‘connected systems’ in general: the success of democratic systems is in no way predetermined. Enabling and sustaining them requires the full commitment of all participants, ‘out of their own conviction’.

Basic reality ‘corporeality’

Biological populations are fundamentally characterized by a ‘corporeality’ which is determined through and through by the ‘regularities’ of known material structures. In their ‘complex formations’ biological systems also manifest ‘complex properties’ which cannot simply be derived from their ‘individual parts’; yet the identifiable ‘material components’ of their ‘body’, together with their many ‘functional connections’, are fundamentally subject to a multiplicity of ‘laws’ which are simply ‘given’. To ‘change’ these is possible, if at all, only under certain limited conditions.

All biological actors consist of ‘biological cells’, which are the same for all of them. In this respect human actors are part of the total development of (biological) life on planet Earth. The totality of (biological) life is also called the ‘biome’, and the total habitat of a biome is called the ‘biosphere’. [5] The population of homo sapiens is only a vanishingly small part of the biome, but with its typical way of life it claims ever larger parts of the biosphere for itself at the expense of all other life forms.

(Biological) life has been taking place on planet Earth for about 3.5 billion years.[6] Earth, as part of the solar system [7], has had a very eventful history and shows strong dynamics up to the present day, which can and do have a direct impact on the living conditions of biological life (continental plate displacement, earthquakes, volcanic eruptions, magnetic field displacement, ocean currents, climate, …).

Biological systems generally require a continuous intake of material substances (with energy potentials) to enable their own metabolic processes, and they also excrete substances. Human populations need certain amounts of ‘food’, ‘water’, ‘dwellings’, ‘storage facilities’, ‘means of transport’, ‘energy’, … ‘raw materials’, … ‘production processes’, ‘exchange processes’ … As the sheer size of a population grows, the required material quantities (and likewise the wastes) multiply to orders of magnitude that can destroy the functioning of the biosphere.

Predictive knowledge

If a connected population does not want to leave possible future states to pure chance, then it needs ‘knowledge’ suitable for constructing ‘predictions’ (‘prognoses’) of a possible future (or even of many ‘variants of the future’) from knowledge about the present and the past.

In the history of homo sapiens so far, there is only one form of knowledge that has demonstrably proven suitable for resilient, sustainable forecasts: the knowledge form of the empirical sciences. [8] This form of knowledge is by no means perfect, but a better alternative is not actually known. At its core, ‘empirical knowledge’ comprises the following elements: (i) a description of a baseline situation that is assumed to be ‘empirically true’; (ii) a set of ‘descriptions of change processes’ which one has been able to formulate over time, and of which one knows that it is ‘highly probable’ that the described changes will occur again and again under known conditions; (iii) an ‘inference concept’ which describes how the known descriptions of change processes are to be applied to the description of a ‘given current situation’ in such a way that the description of the current situation is modified into a ‘modified description’ of a new situation, one which can be regarded as a ‘highly probable continuation’ of the current situation in the future. [9]
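To illustrate the three elements (i) to (iii), here is a minimal sketch in Python. The encoding chosen here (situations as sets of everyday-language statements, change processes as rules with a condition and add/remove effects) is purely an assumption of this sketch, not the author's method; the clock example from comment [9] serves as data.

```python
# Minimal sketch of an empirical theory's core: (i) a start description,
# (ii) descriptions of change processes, (iii) an inference concept.
# The encoding (sets of statements, condition/add/remove rules) is an
# illustrative assumption, not the author's formalism.

from dataclasses import dataclass

@dataclass
class ChangeRule:
    condition: set      # statements that must hold in the current situation
    add: set            # statements the change process adds
    remove: set         # statements the change process removes

def infer(situation: set, rules: list) -> set:
    """(iii) Apply every applicable change rule once, yielding a
    'highly probable continuation' of the described situation."""
    successor = set(situation)
    for rule in rules:
        if rule.condition <= situation:    # rule applies to this situation
            successor -= rule.remove
            successor |= rule.add
    return successor

start = {"The clock shows 11:04h"}                        # (i) start description
rules = [ChangeRule(condition={"The clock shows 11:04h"},  # (ii) change process
                    add={"The clock shows 11:05h"},
                    remove={"The clock shows 11:04h"})]

print(infer(start, rules))   # {'The clock shows 11:05h'} -- a forecast
```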

The just sketched ‘basic idea’ of an empirical theory with predictive capability can be realized concretely in many ways. To investigate and describe this is the task of the ‘philosophy of science’. However, the vagueness found in dealing with the notion of an ‘empirical theory’ is also found in the understanding of what is meant by ‘philosophy of science’.[9]

In the present text the view is taken that the ‘basic concept’ of an empirical theory can be fully realized in normal everyday action using everyday language. This concept of a ‘General Empirical Theory’ can then be extended by any special languages, methods, and sub-theories as needed. In this way the hitherto unsolved problem of integrating the many different individual empirical disciplines could be solved almost by itself.[10]

Sustainable knowledge

In the normal case an empirical theory can, at best, generate forecasts which have a certain empirically based probability. In ‘complex situations’ such a prognosis can comprise many ‘variants’: A, B, …, Z. Which of these variants is ‘better’ or ‘worse’ in the light of some assumed criterion cannot be determined by the empirical theory itself. Here the ‘producers’ and the ‘users’ of the theory are called upon: do they have any ‘preferences’ as to why, for example, variant ‘B’ should be preferred to variant ‘C’: “Bicycle, subway, car or plane?”, “Genetic engineering or not?”, “Pesticides or not?”, “Nuclear energy or not?”, “Uncontrolled fishing or not?” …
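As a toy illustration of this division of labor, the following sketch separates what the theory delivers (variants with an empirically based probability) from what only the producers and users can supply (a preference criterion). All numbers, names, and the multiplicative weighting are invented assumptions of this sketch.

```python
# Illustrative sketch: the theory supplies variants with probabilities;
# the 'better/worse' ranking needs user-supplied preferences. Every
# number and the weighting scheme here are invented for illustration.

variants = {            # forecast variants with empirically based probabilities
    "bicycle": 0.90,
    "subway":  0.80,
    "car":     0.95,
    "plane":   0.99,
}

preferences = {         # user-supplied criterion, e.g. 'low ecological burden'
    "bicycle": 1.0,
    "subway":  0.8,
    "car":     0.3,
    "plane":   0.1,
}

def rank(variants: dict, preferences: dict) -> list:
    """Combine theory-given probability with user-given preference.
    The combination itself is a value decision, not an empirical one."""
    return sorted(variants, key=lambda v: variants[v] * preferences[v],
                  reverse=True)

print(rank(variants, preferences))   # ['bicycle', 'subway', 'car', 'plane']
```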

The ‘evaluation criteria’ to be applied themselves require ‘explicit knowledge’ for estimating a possible ‘benefit’; at the same time, the concept of ‘benefit’ is anchored in the feeling and wanting of human actors: Why exactly do I want something? Why does something ‘feel good’? …

Current discussions worldwide show that the arsenal of ‘evaluation criteria’ and their implementation offer anything but a clear picture.

COMMENTS

[1] For the typical use of the term ‘grammar’ see the English Wikipedia: https://en.wikipedia.org/wiki/Grammar. In the text here in the blog I transfer this concept of ‘language’ to that ‘complex process’ in which the population of the life form ‘homo sapiens’ tries to achieve an ‘overall state’ on planet earth that allows a ‘maximally good future’ for as much ‘life’ as possible (with humans as a sub-population). A ‘grammar of sustainability’ presupposes a certain set of basic conditions, factors, which ‘interact’ with each other in a dynamic process, in order to realize as many states as possible in a ‘sequence of states’, which enable as good a life as possible for as many as possible.

[2] For the typical usage of the term politics, see the English Wikipedia: https://en.wikipedia.org/wiki/Politics . This meaning is also assumed in the present text here.

[3] A very insightful project of empirical research into the state and development of ‘democracies’ on planet Earth is the V-Dem Institute: https://www.v-dem.net/

[4] Of course, one could also choose completely different basic concepts for such a scale. However, the concept of a ‘democratic system’ (with all its weaknesses) seems to me the ‘most suitable’ system in light of the requirements of sustainable development; at the same time it makes the highest demands of all systems on all those involved. That ‘democracy-like’ systems formed at all in the course of history borders almost on a miracle. The further development of such democracy-like systems fluctuates constantly between preservation and decay. Positively, one could say that the constant struggle for preservation is a kind of ‘training’ for enabling sustainable development.

[5]  For typical uses of the terms ‘biome’ and ‘biosphere’, see the corresponding entries in the English Wikipedia: ‘biome’: https://en.wikipedia.org/wiki/Biome, ‘biosphere’: https://en.wikipedia.org/wiki/Biosphere

[6] Some basic data for planet Earth: https://en.wikipedia.org/wiki/Earth

[7] Some basic data for the solar system: https://en.wikipedia.org/wiki/Solar_System

[8] If you search for the term ‘Empirical Science’ you will be disappointed: the English Wikipedia (like the German version) has no such entry. You have to settle either for the term ‘Science’ ( https://en.wikipedia.org/wiki/Science ) or for the term ‘Empiricism’ (https://en.wikipedia.org/wiki/Empiricism), but neither covers the general properties of an empirical theory.

[9] If you have a clock with hour and minute hands which currently shows 11:04h, and you know from everyday experience that the minute hand advances by one stroke every minute, then you can conclude with fairly high probability that the minute hand will advance by one stroke ‘very soon’. The initial description ‘The clock shows 11:04h’ would then be changed into the new description ‘The clock shows 11:05h’. Before the ‘11:05h event’ the statement ‘The clock shows 11:05h’ has the status of a ‘forecast’.

[10] A single discipline (physics, chemistry, biology, psychology, …) cannot conceptually grasp ‘the whole’ ‘out of itself’; nor does it have to. The various attempts to ‘reduce’ any single discipline to another (physics is especially popular here) have all failed so far. Without a suitable ‘meta-theory’ no single discipline can free itself from its specialization. The concept of a ‘General Empirical Theory’ is such a meta-theory, and it fits into the concept of modern philosophical thinking.

COMMON SCIENCE as Sustainable Applied Empirical Theory, besides ENGINEERING, in a SOCIETY

eJournal: uffmm.org
ISSN 2567-6458, 19.June 2022 – 30.December 2022
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text is part of the Philosophy of Science theme of the uffmm.org blog.

This is work in progress:

  1. The whole text shows a dynamic, which induces many changes. Difficult to plan ‘in advance’.
  2. Perhaps, some time, it will look like a ‘book’, at least ‘for a moment’.
  3. I have started a ‘book project’ in parallel. This was motivated by the need to provide potential users of our new oksimo.R software with a coherent explanation of how the oksimo.R software, when used, generates an empirical theory in the format of a screenplay. The primary source of the book is in German and will be translated step by step here in the uffmm.blog.

INTRODUCTION

In a rather foundational paper about the idea of how one can generalize ‘systems engineering’ [*1] into the art of ‘theory engineering’ [1], a new conceptual framework has been outlined for a ‘sustainable applied empirical theory (SAET)’. Part of this new framework is the idea that the classical recourse to groups of special experts (mostly ‘engineers’ in engineering) is too restrictive in light of the new requirement of being sustainable: sustainability is primarily based on ‘diversity’, combined with the ‘ability to predict’, from this diversity, probable future states which keep life alive. The aspect of diversity induces the challenge of seeing every citizen as a ‘natural expert’, because nobody can know in advance, from some non-existing absolute point of truth, which knowledge is really important. History shows that the ‘mainstream’ is usually biased to a large degree [*1b].

With this assumption that every citizen is a ‘natural expert’, science turns into a ‘general science’ in which all citizens are ‘natural members’ of science. I will call this more general concept of science ‘sustainable citizen science (SCS)’ or ‘Citizen Science 2.0 (CS2)’. The important point here is that a sustainable citizen science is not necessarily an ‘arbitrary’ process. While the requirement of ‘diversity’ relates to possible contents, ideas, experiments, and the like, it follows from the other requirement of ‘predictability’, of being able to make some useful ‘forecasts’, that the given knowledge has to be in a format which allows, in a transparent way, the construction of consequences that ‘derive’ from the ‘given’ knowledge and enable ‘new’ knowledge. This ability of forecasting has often been understood as the business of ‘logic’, providing an ‘inference concept’ given by ‘rules of deduction’ and a ‘practical pattern (on the meta-level)’ which defines how these rules have to be applied to satisfy the inference concept. But looking at real life, at everyday life or at modern engineering and economy, one can learn that ‘forecasting’ is a complex process including much more than only cognitive structures nicely fitting into some formulas. For this more realistic forecasting concept we will use here the wording ‘common logic’, and for the cognitive adventure where common logic is applied we will use the wording ‘common science’. ‘Common science’ is structurally not different from ‘usual science’, but it has a substantially wider scope and uses the whole of mankind as ‘experts’.
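To pin down the narrow ‘logic’ concept that ‘common logic’ is contrasted with here, the following minimal sketch in Python shows one deduction rule (modus ponens) plus the meta-level pattern that applies it exhaustively. The string encoding of propositions is an assumption made only for this illustration.

```python
# Minimal sketch of the classical 'inference concept': a single deduction
# rule (modus ponens) and a meta-level pattern that applies it until
# nothing new follows. Propositions are plain strings, e.g. 'A -> B'.

def modus_ponens_closure(knowledge: set) -> set:
    """From 'A' and 'A -> B' derive 'B'; repeat to a fixed point."""
    derived = set(knowledge)
    changed = True
    while changed:
        changed = False
        for stmt in list(derived):
            if "->" in stmt:
                antecedent, consequent = (s.strip() for s in stmt.split("->", 1))
                if antecedent in derived and consequent not in derived:
                    derived.add(consequent)
                    changed = True
    return derived

print(modus_ponens_closure({"rain", "rain -> wet streets"}))
# {'rain', 'rain -> wet streets', 'wet streets'}
```

Useful as this machinery is, it covers only the ‘cognitive structures nicely fitting into some formulas’; the wider process meant by ‘common logic’ above is not captured by it.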

The following chapters/sections try to illustrate this common-science view by visiting different special views, all of which are only ‘parts of a whole’: a whole which we can ‘feel’ in every moment but which we cannot yet completely grasp with our theoretical concepts.

CONTENT

  1. Language (Main message: “Ordinary language is the ‘meta-language’ for every special language. This can be used as a ‘hint’ at something really great: the mystery of the ‘self-creating’ power of ordinary language, which for most people is unknown although it happens every moment.”)
  2. Concrete Abstract Statements (Main message: “… you will probably detect that nearly all words of a language are ‘abstract words’ activating ‘abstract meanings’. … If you cannot provide … ‘concrete situations’, the intended meaning of your abstract words will stay ‘unclear’: they can mean ‘nothing or all’, depending on the decoding of the hearer.”)
  3. True False Undefined (Main message: “… it reveals that ‘empirical (observational) evidence’ is not necessarily an automatism: it presupposes appropriate meaning spaces embedded in sets of preferences which are ‘observation friendly’.”)
  4. Beyond Now (Main message: “With the aid of … sequences revealing possible changes, the NOW is turned into a ‘moment’ embedded in a ‘process’ which is becoming the more important reality. The NOW is something, but the PROCESS is more.”)
  5. Playing with the Future (Main message: “In this sense ‘language’ seems to be the master tool for every brain to mediate its dynamic meaning structures with symbolic fixed points (= words, expressions) which as such do not change, while the meaning is ‘free to change’ in any direction. And this ‘built-in dynamics’ represents an ‘internal potential’ for uncountably many possible states which could perhaps become ‘true’ in some ‘future state’. Thus ‘future’ can begin in these potentials, and thinking is the ‘playground’ for possible futures. (but see [18])”)
  6. Forecasting – Prediction: What? (This chapter explains the cognitive machinery behind forecasting/predictions, how groups of human actors can elaborate shared descriptions, and how it is possible to start with sequences of singularities to build up a growing picture of the empirical world, which appears as a radically infinite and indeterministic space.)
  7. !!! From here all the following chapters have to be re-written !!!
  8. THE LOGIC OF EVERYDAY THINKING. Lets try an Example (Will probably be re-written too)
  9. Boolean Logic (Explains what boolean logic is, how it enables the working of programmable machines, but that it is of nearly no help for the ‘heart’ of forecasting.)
  10. … more re-writing will probably happen …
  11. Everyday Language: German Example
  12. Everyday Language: English
  13. Natural Logic
  14. Predicate Logic
  15. True Statements
  16. Formal Logic Inference: Preserving Truth
  17. Ordinary Language Inference: Preserving and Creating Truth
  18. Hidden Ontologies: Cognitively Real and Empirically Real
  19. AN INFERENCE IS NOT AUTOMATICALLY A FORECAST
  20. EMPIRICAL THEORY
  21. Side Trip to Wikipedia
  22. SUSTAINABLE EMPIRICAL THEORY
  23. CITIZEN SCIENCE 2.0
  24. … ???

COMMENTS

wkp-en := English Wikipedia

/* Often people argue against the usage of the Wikipedia encyclopedia as not ‘scientific’, because the ‘content’ of an entry in this encyclopedia can ‘change’. This presupposes the ‘classical view’ of scientific texts as ‘stable’, which further presupposes that such a ‘stable text’ describes some ‘stable subject matter’. But this view of ‘steadiness’ as the major property of ‘true descriptions’ does not correspond to real scientific texts! The reality of empirical science, even in special disciplines like ‘physics’, is ‘change’. Looking at Aristotle’s view of nature, at Galileo Galilei, at Newton, at Einstein, and many others, you will not find a ‘single steady picture’ of nature and science, and physics is only a very simple strand of science compared to the life sciences and many others. Thus Wikipedia is a real scientific encyclopedia, giving you the breadth of world knowledge with all its strengths and limits at once. For another, more general argument, see In Favour of Wikipedia */

[*1] Meaning operator ‘…’ : In this text (and in nearly all other texts of this author) the ‘inverted comma’ is used quite heavily. In everyday language this is not common. In some special languages (the theory of formal languages, programming languages, meta-logic) the inverted comma is used in particular ways. In this text, which is primarily a philosophical text, the inverted comma is used as a ‘meta-language operator’ to alert the reader that the ‘meaning’ of the enclosed word is ‘text specific’: in everyday language a speaker uses a word and tacitly assumes that his ‘intended meaning’ will be understood by the hearer of his utterance ‘as it is’. And the speaker will adhere to this assumption until some hearer signals that her understanding is different. That such a difference is signaled is quite normal, because the ‘meaning’ associated with a language expression can be diverse, and the decision as to which of these multiple possible meanings is the ‘intended one’ in a certain context is often a bit ‘arbitrary’. Thus it can be (but need not be) a meta-language strategy to signal to the hearer (or here: the reader) that a certain expression in a communication is ‘intended’ with a special meaning which perhaps is not the commonly assumed one. Nevertheless, because the ‘common meaning’ is not a ‘clear and sharp subject’, a ‘meaning operator’ with inverted commas also does not have a very sharp meaning. But in the ‘game of language’ it is more than nothing 🙂

[*1b] That the mainstream ‘is biased’ is not an accident, not a ‘strange state’, not a ‘failure’; it is the ‘normal state’, based on the deeper structure of how human actors are ‘built’ and ‘genetically’ and ‘culturally’ ‘programmed’. Thus the challenge to ‘survive’ as part of the ‘whole biosphere’ is not the ‘partial task’ of solving a single problem, but in some sense the problem of how to ‘shape the whole biosphere’ in a way which enables life in the universe even beyond the point where the sun turns into a ‘red giant’, whereby life will become impossible on planet Earth (some billion years ahead) [22]. A remarkable text supporting this ‘complex view of sustainability’ can be found in Clark and Harley, summarized at the end of the text. [23]

[*2] The meaning of the expression ‘normal’ is comparable to a wicked problem. In a certain sense we act in our everyday world ‘as if there exists some standard’ for what is assumed to be ‘normal’. Look for instance at houses and buildings: to a certain degree the parts of a house have a ‘standard format’ assuming ‘normal people’. The whole traffic system and most parts of our ‘daily life’ follow certain ‘standards’ which make ‘planning’ possible. But there exists a certain percentage of human persons who are ‘different’ compared to these introduced standards. We say that they have a ‘handicap’ compared to this assumed ‘standard’, but this so-called ‘standard’ is neither 100% true nor is the ‘given real world’ in its properties a ‘100% subject’. We have learned that the ‘properties of the real world’ are distributed in a rather ‘statistical manner’, with different probabilities of occurrence. To ‘find our way’ among these varying occurrences we try to ‘mark’ the main occurrences as ‘normal’ in order to enable a basic structure for expectations and planning. Thus, if the expression ‘normal’ is used in this text, it refers to the ‘most common occurrences’.

[*3] Thus we have here a ‘threefold structure’ embracing ‘perception events, memory events, and expression events’. Perception events represent ‘concrete events’; memory events represent all kinds of abstract events but they all have a ‘handle’ which maps to subsets of concrete events; expression events are parts of an abstract language system, which as such is dynamically mapped onto the abstract events. The main source for our knowledge about perceptions, memory and expressions is experimental psychology enhanced by many other disciplines.

[*4] Characterizing language expressions by meaning – the fate of any grammar: the sentence “… ‘words’ (= expressions) of a language which can activate such abstract meanings are understood as ‘abstract words’, ‘general words’, ‘category words’ or the like.” points to a deep property of every ordinary language which represents the real power of language and at the same time its great weakness: expressions as such have no meaning. Hundreds, thousands, millions of words arranged in ‘texts’ and ‘documents’ can show some ‘statistical patterns’, and as such these patterns can give some hint as to which expressions occur ‘how often’ and in ‘which combinations’, but they can never give a clue to the associated meaning(s). For more than three thousand years humans have tried to describe ordinary language in a more systematic way called ‘grammar’. Due to this radical gap between ‘expressions’ as ‘observable empirical facts’ and ‘meaning constructs’ hidden inside the brain, it has always been a difficult job to ‘classify’ expressions as representing a certain ‘type’ of expression like ‘nouns’, ‘predicates’, ‘adjectives’, ‘definite articles’ and the like. Without recourse to the assumed associated meaning such a classification is not possible; and on account of the fuzziness of every meaning, ‘sharp definitions’ of such ‘word classes’ have never been and are still not possible. One of the last big projects (perhaps the biggest ever) for a complete systematic grammar of a language was the grammar project of the ‘Akademie der Wissenschaften der DDR’ (‘Academy of Sciences of the GDR’) from 1981 with the title “Grundzüge einer deutschen Grammatik” (“Basic Features of a German Grammar”). A huge team of scientists worked together using many modern methods. But in the preface you can read that many important properties of the language are still not sufficiently describable and explainable. See: Karl Erich Heidolph, Walter Flämig, Wolfgang Motsch et al.: Grundzüge einer deutschen Grammatik. Akademie, Berlin 1981, 1028 pages.

[*5] Differing opinions about a given situation, manifested in uttered expressions, are a very common phenomenon in everyday communication. In some sense this is ‘natural’ and should pose no substantial problem: the riddle of ‘being different’ can in principle be ‘solved’. But as one can experience, people’s ability to deal with the occurrence of differing opinions is often quite weak. Culture as a whole suffers from this.

[1] Gerd Doeben-Henisch, 2022, From SYSTEMS Engineering to THEORY Engineering, see: https://www.uffmm.org/2022/05/26/from-systems-engineering-to-theory-engineering/ (Remark: At the time of citation this post was not yet finished, because there are other posts ‘corresponding’ with that post which are also not yet finished. Knowledge is a dynamic network of interwoven views …).

[1d] ‘Usual science’ is the game of science without a sustainable format as in Citizen Science 2.0.

[2] Science, see e.g. wkp-en: https://en.wikipedia.org/wiki/Science

Citation = “Science is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe.” [1][2]

Citation = “New knowledge in science is advanced by research from scientists who are motivated by curiosity about the world and a desire to solve problems.[27][28] Contemporary scientific research is highly collaborative and is usually done by teams in academic and research institutions,[29] government agencies, and companies.[30][31] The practical impact of their work has led to the emergence of science policies that seek to influence the scientific enterprise by prioritizing the ethical and moral development of commercial products, armaments, health care, public infrastructure, and environmental protection.”

[2b] History of science in wkp-en: https://en.wikipedia.org/wiki/History_of_science#Scientific_Revolution_and_birth_of_New_Science

[3] Theory, see wkp-en: https://en.wikipedia.org/wiki/Theory#:~:text=A%20theory%20is%20a%20rational,or%20no%20discipline%20at%20all.

Citation = “A theory is a rational type of abstract thinking about a phenomenon, or the results of such thinking. The process of contemplative and rational thinking is often associated with such processes as observational study or research. Theories may be scientific, belong to a non-scientific discipline, or no discipline at all. Depending on the context, a theory’s assertions might, for example, include generalized explanations of how nature works. The word has its roots in ancient Greek, but in modern use it has taken on several related meanings.”

[4] Scientific theory, see: wkp-en: https://en.wikipedia.org/wiki/Scientific_theory

Citation = “In modern science, the term “theory” refers to scientific theories, a well-confirmed type of explanation of nature, made in a way consistent with the scientific method, and fulfilling the criteria required by modern science. Such theories are described in such a way that scientific tests should be able to provide empirical support for it, or empirical contradiction (“falsify“) of it. Scientific theories are the most reliable, rigorous, and comprehensive form of scientific knowledge,[1] in contrast to more common uses of the word “theory” that imply that something is unproven or speculative (which in formal terms is better characterized by the word hypothesis).[2] Scientific theories are distinguished from hypotheses, which are individual empirically testable conjectures, and from scientific laws, which are descriptive accounts of the way nature behaves under certain conditions.”

[4b] Empiricism in wkp-en: https://en.wikipedia.org/wiki/Empiricism

[4c] Scientific method in wkp-en: https://en.wikipedia.org/wiki/Scientific_method

Citation = “The scientific method is an empirical method of acquiring knowledge that has characterized the development of science since at least the 17th century (with notable practitioners in previous centuries). It involves careful observation, applying rigorous skepticism about what is observed, given that cognitive assumptions can distort how one interprets the observation. It involves formulating hypotheses, via induction, based on such observations; experimental and measurement-based statistical testing of deductions drawn from the hypotheses; and refinement (or elimination) of the hypotheses based on the experimental findings. These are principles of the scientific method, as distinguished from a definitive series of steps applicable to all scientific enterprises.[1][2][3]” [4c]

and

Citation = “The purpose of an experiment is to determine whether observations agree with or conflict with the expectations deduced from a hypothesis.[6] Experiments can take place anywhere from a garage to a remote mountaintop to CERN’s Large Hadron Collider. There are difficulties in a formulaic statement of method, however. Though the scientific method is often presented as a fixed sequence of steps, it represents rather a set of general principles.[7] Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always in the same order.[8][9]”

[5] Gerd Doeben-Henisch, “Is Mathematics a Fake? No! Discussing N.Bourbaki, Theory of Sets (1968) – Introduction”, 2022, https://www.uffmm.org/2022/06/06/n-bourbaki-theory-of-sets-1968-introduction/

[6] Logic, see wkp-en: https://en.wikipedia.org/wiki/Logic

[7] W. C. Kneale, The Development of Logic, Oxford University Press (1962)

[8] Set theory, in wkp-en: https://en.wikipedia.org/wiki/Set_theory

[9] N.Bourbaki, Theory of Sets , 1968, with a chapter about structures, see: https://en.wikipedia.org/wiki/%C3%89l%C3%A9ments_de_math%C3%A9matique

[10] = [5]

[11] Ludwig Josef Johann Wittgenstein ( 1889 – 1951): https://en.wikipedia.org/wiki/Ludwig_Wittgenstein

[12] Ludwig Wittgenstein, 1953: Philosophische Untersuchungen [PU], 1953: Philosophical Investigations [PI], translated by G. E. M. Anscombe /* For more details see: https://en.wikipedia.org/wiki/Philosophical_Investigations */

[13] Wikipedia EN, Speech acts: https://en.wikipedia.org/wiki/Speech_act

[14] While the world view constructed in a brain is ‘virtual’ compared to the ‘real world’ outside the brain (where the body outside the brain also functions as ‘real world’ in relation to the brain), the ‘virtual world’ in the brain functions for the brain mostly ‘as if it were the real world’. Only under certain conditions can the brain register a ‘difference’ between the triggering outside real world and its ‘virtual substitute’: you want to use your bicycle ‘as usual’ and then suddenly have to notice that it is not at the place where it ‘should be’. …

[15] Propositional Calculus, see wkp-en: https://en.wikipedia.org/wiki/Propositional_calculus#:~:text=Propositional%20calculus%20is%20a%20branch,of%20arguments%20based%20on%20them.

[16] Boolean algebra, see wkp-en: https://en.wikipedia.org/wiki/Boolean_algebra

[17] Boolean (or propositional) logic: As one can see in the mentioned articles of the English Wikipedia, the term ‘boolean logic’ is not common. The more logic-oriented authors prefer the term ‘propositional calculus’ [15] and the more math-oriented authors prefer the term ‘boolean algebra’ [16]. In the view of this author the general perspective is that of ‘language use’ with ‘logical inference’ as the leading idea. Therefore the main topic is ‘logic’, in the case of propositional logic reduced to a simple calculus whose similarity with ‘normal language’ is largely ‘reduced’ to a play with abstract names and operators. Recommended: the historical comments in [15].

[18] Clearly, thinking alone cannot necessarily induce a possible state which, along the time line, will become a ‘real state’. There are numerous factors ‘outside’ individual thinking which are ‘driving forces’ pushing real states to change. But thinking can in principle synchronize with other individual thinking and, in some cases, can get a ‘grip’ on real factors causing real changes.

[19] This kind of knowledge is not delivered by brain science alone but primarily by experimental (cognitive) psychology, which examines observable behavior and ‘interprets’ it with functional models within an empirical theory.

[20] Predicate Logic or First-Order Logic or … see: wkp-en: https://en.wikipedia.org/wiki/First-order_logic#:~:text=First%2Dorder%20logic%E2%80%94also%20known,%2C%20linguistics%2C%20and%20computer%20science.

[21] Gerd Doeben-Henisch, In Favour of Wikipedia, https://www.uffmm.org/2022/07/31/in-favour-of-wikipedia/, 31 July 2022

[22] The sun, see wkp-ed https://en.wikipedia.org/wiki/Sun (accessed 8 Aug 2022)

[23] By Clark, William C., and Alicia G. Harley – https://doi.org/10.1146/annurev-environ-012420-043621, Clark, William C., and Alicia G. Harley. 2020. “Sustainability Science: Toward a Synthesis.” Annual Review of Environment and Resources 45 (1): 331–86, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=109026069

[24] Sustainability in wkp-en: https://en.wikipedia.org/wiki/Sustainability#Dimensions_of_sustainability

[25] Sustainable Development in wkp-en: https://en.wikipedia.org/wiki/Sustainable_development

[26] Marope, P.T.M; Chakroun, B.; Holmes, K.P. (2015). Unleashing the Potential: Transforming Technical and Vocational Education and Training (PDF). UNESCO. pp. 9, 23, 25–26. ISBN 978-92-3-100091-1.

[27] SDG 4 in wkp-en: https://en.wikipedia.org/wiki/Sustainable_Development_Goal_4

[28] Thomas Rid, Rise of the Machines. A Cybernetic History, W.W.Norton & Company, 2016, New York – London

[29] Doeben-Henisch, G., 2006, Reducing Negative Complexity by a Semiotic System In: Gudwin, R., & Queiroz, J., (Eds). Semiotics and Intelligent Systems Development. Hershey et al: Idea Group Publishing, 2006, pp.330-342

[30] Döben-Henisch, G.,  Reinforcing the global heartbeat: Introducing the planet earth simulator project, In M. Faßler & C. Terkowsky (Eds.), URBAN FICTIONS. Die Zukunft des Städtischen. München, Germany: Wilhelm Fink Verlag, 2006, pp.251-263

[] The idea that individual disciplines are not good enough for the ‘whole of knowledge’ is expressed in a clear way in a video of the theoretical physicist and philosopher Carlo Rovelli: Carlo Rovelli on physics and philosophy, June 1, 2022, video from the Perimeter Institute for Theoretical Physics. Theoretical physicist, philosopher, and international bestselling author Carlo Rovelli joins Lauren and Colin for a conversation about the quest for quantum gravity, the importance of unlearning outdated ideas, and a very unique way to get out of a speeding ticket.

[] By Azote for Stockholm Resilience Centre, Stockholm University – https://www.stockholmresilience.org/research/research-news/2016-06-14-how-food-connects-all-the-sdgs.html, CC BY 4.0, https://commons.wikimedia.org/w/index.php?curid=112497386

[] Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) in wkp-en, URL: https://en.wikipedia.org/wiki/Intergovernmental_Science-Policy_Platform_on_Biodiversity_and_Ecosystem_Services

[] IPBES (2019): Global assessment report on biodiversity and ecosystem services of the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services. E. S. Brondizio, J. Settele, S. Díaz, and H. T. Ngo (editors). IPBES secretariat, Bonn, Germany. 1148 pages. https://doi.org/10.5281/zenodo.3831673

[] Michaelis, L. & Lorek, S. (2004). “Consumption and the Environment in Europe: Trends and Futures.” Danish Environmental Protection Agency. Environmental Project No. 904.

[] Pezzey, John C. V.; Toman, Michael A. (2002). “The Economics of Sustainability: A Review of Journal Articles” (PDF). Archived from the original (PDF) on 8 April 2014. Retrieved 8 April 2014.

[] World Business Council for Sustainable Development (WBCSD)  in wkp-en: https://en.wikipedia.org/wiki/World_Business_Council_for_Sustainable_Development

[] Sierra Club in wkp-en: https://en.wikipedia.org/wiki/Sierra_Club

[] Herbert Bruderer, Where is the Cradle of the Computer?, June 20, 2022, URL: https://cacm.acm.org/blogs/blog-cacm/262034-where-is-the-cradle-of-the-computer/fulltext (accessed: July 20, 2022)

[] UN Secretary-General; World Commission on Environment and Development, 1987, Report of the World Commission on Environment and Development: note / by the Secretary-General, https://digitallibrary.un.org/record/139811 (accessed: July 20, 2022) (a more readable format: https://sustainabledevelopment.un.org/content/documents/5987our-common-future.pdf )

/* Comment: Gro Harlem Brundtland (Norway) has been the main coordinator of this document */

[] Chaudhuri, S., et al., Neurosymbolic programming. Foundations and Trends in Programming Languages 7, 158-243 (2021).

[] Noam Chomsky, “A Review of B. F. Skinner’s Verbal Behavior”, in Language, 35, No. 1 (1959), 26-58.(Online: https://chomsky.info/1967____/, accessed: July 21, 2022)

[] Churchman, C. West (December 1967). “Wicked Problems”, Management Science 14 (4): B-141–B-146. doi:10.1287/mnsc.14.4.B141.

[-] Yen-Chia Hsu, Illah Nourbakhsh, “When Human-Computer Interaction Meets Community Citizen Science“,Communications of the ACM, February 2020, Vol. 63 No. 2, Pages 31-34, 10.1145/3376892, https://cacm.acm.org/magazines/2020/2/242344-when-human-computer-interaction-meets-community-citizen-science/fulltext

[] Yen-Chia Hsu, Ting-Hao ‘Kenneth’ Huang, Himanshu Verma, Andrea Mauri, Illah Nourbakhsh, Alessandro Bozzon, Empowering local communities using artificial intelligence, DOI:https://doi.org/10.1016/j.patter.2022.100449, CellPress, Patterns, VOLUME 3, ISSUE 3, 100449, MARCH 11, 2022

[] Nello Cristianini, Teresa Scantamburlo, James Ladyman, The social turn of artificial intelligence, in: AI & SOCIETY, https://doi.org/10.1007/s00146-021-01289-8

[] Carl DiSalvo, Phoebe Sengers, and Hrönn Brynjarsdóttir, Mapping the landscape of sustainable hci, In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’10, page 1975–1984, New York, NY, USA, 2010. Association for Computing Machinery.

[] Claude Draude, Christian Gruhl, Gerrit Hornung, Jonathan Kropf, Jörn Lamla, Jan Marco Leimeister, Bernhard Sick, Gerd Stumme, Social Machines, in: Informatik Spektrum, https://doi.org/10.1007/s00287-021-01421-4

[] EU: High-Level Expert Group on AI (AI HLEG), A definition of AI: Main capabilities and scientific disciplines, European Commission communications published on 25 April 2018 (COM(2018) 237 final), 7 December 2018 (COM(2018) 795 final) and 8 April 2019 (COM(2019) 168 final). For our definition of Artificial Intelligence (AI), please refer to our document published on 8 April 2019: https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=56341

[] EU: High-Level Expert Group on AI (AI HLEG), Policy and investment recommendations for trustworthy Artificial Intelligence, 2019, https://digital-strategy.ec.europa.eu/en/library/policy-and-investment-recommendations-trustworthy-artificial-intelligence

[] European Union. Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation); http://eur-lex.europa.eu/eli/reg/2016/679/oj (effective from 25 May 2018) [accessed: 26.2.2022]

[] C.S. Holling. Resilience and stability of ecological systems. Annual Review of Ecology and Systematics, 4(1):1–23, 1973

[] John P. van Gigch. 1991. System Design Modeling and Metamodeling. Springer US. DOI:https://doi.org/10.1007/978-1-4899-0676-2

[] Gudwin, R.R. (2002), Semiotic Synthesis and Semionic Networks, S.E.E.D. Journal (Semiotics, Energy, Evolution, Development), Volume 2, No.2, pp.55-83.

[] Gudwin, R.R. (2003), On a Computational Model of the Peircean Semiosis, IEEE KIMAS 2003 Proceedings

[] J.A. Jacko and A. Sears, Eds., The Human-Computer Interaction Handbook. Fundamentals, Evolving Technologies, and emerging Applications. 1st edition, 2003.

[] LeCun, Y., Bengio, Y., & Hinton, G. Deep learning. Nature 521, 436-444 (2015).

[] Lenat, D. What AI can learn from Romeo & Juliet. Forbes (2019)

[] Pierre Lévy, Collective Intelligence. mankind’s emerging world in cyberspace, Perseus books, Cambridge (M A), 1997 (translated from the French Edition 1994 by Robert Bonnono)

[] Lexikon der Nachhaltigkeit, ‘Starke Nachhaltigkeit‘, https://www.nachhaltigkeit.info/artikel/schwache_vs_starke_nachhaltigkeit_1687.htm (accessed: July 21, 2022)

[] Michael L. Littman, Ifeoma Ajunwa, Guy Berger, Craig Boutilier, Morgan Currie, Finale Doshi-Velez, Gillian Hadfield, Michael C. Horowitz, Charles Isbell, Hiroaki Kitano, Karen Levy, Terah Lyons, Melanie Mitchell, Julie Shah, Steven Sloman, Shannon Vallor, and Toby Walsh. “Gathering Strength, Gathering Storms: The One Hundred Year Study on Artificial Intelligence (AI100) 2021 Study Panel Report.” Stanford University, Stanford, CA, September 2021. Doc: http://ai100.stanford.edu/2021-report.

[] Markus Luczak-Roesch, Kieron O’Hara, Ramine Tinati, Nigel Shadbolt, Socio-technical Computation, CSCW’15 Companion, March 14–18, 2015, Vancouver, BC, Canada, ACM 978-1-4503-2946-0/15/03, http://dx.doi.org/10.1145/2685553.2698991

[] Marcus, G.F., et al. Overregularization in language acquisition. Monographs of the Society for Research in Child Development 57 (1998).

[] Gary Marcus and Ernest Davis, Rebooting AI, published by Pantheon, Sep 10, 2019, 288 pages

[] Gary Marcus, Deep Learning Is Hitting a Wall. What would it take for artificial intelligence to make real progress, March 10, 2022, URL: https://nautil.us/deep-learning-is-hitting-a-wall-14467/ (accessed: July 20, 2022)

[] Kathryn Merrick. Value systems for developmental cognitive robotics: A survey. Cognitive Systems Research, 41:38 – 55, 2017

[]  Illah Reza Nourbakhsh and Jennifer Keating, AI and Humanity, MIT Press, 2020 /* An examination of the implications for society of rapidly advancing artificial intelligence systems, combining a humanities perspective with technical analysis; includes exercises and discussion questions. */

[] Olazaran, M. , A sociological history of the neural network controversy. Advances in Computers 37, 335-425 (1993).

[] Friedrich August Hayek (1945), The use of knowledge in society. The American Economic Review 35, 4 (1945), 519–530

[] Karl Popper, “A World of Propensities”, in: Karl Popper, “A World of Propensities”, Thoemmes Press, Bristol (lecture 1988, slightly expanded reprint 1990, repr. 1995)

[] Karl Popper, “Towards an Evolutionary Theory of Knowledge”, in: Karl Popper, “A World of Propensities”, Thoemmes Press, Bristol (lecture 1989, printed 1990, repr. 1995)

[] Karl Popper, “All Life is Problem Solving”, article, originally a lecture given in German in 1991, first published in the German book “Alles Leben ist Problemlösen” (1994), then in the English book “All Life is Problem Solving”, 1999, Routledge, Taylor & Francis Group, London – New York

[] Rittel, Horst W.J.; Webber, Melvin M. (1973). “Dilemmas in a General Theory of Planning” (PDF). Policy Sciences 4 (2): 155–169. doi:10.1007/bf01405730, S2CID 18634229. Archived from the original (PDF) on 30 September 2007. [Reprinted in Cross, N., ed. (1984). Developments in Design Methodology. Chichester, England: John Wiley & Sons. pp. 135–144.]

[] Ritchey, Tom (2013) [2005]. “Wicked Problems: Modelling Social Messes with Morphological Analysis”, Acta Morphologica Generalis 2 (1). ISSN 2001-2241. Retrieved 7 October 2017.

[] Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 4th US ed., 2021, URL: http://aima.cs.berkeley.edu/index.html (accessed: July 20, 2022)

[] A. Sears and J.A. Jacko, Eds., The Human-Computer Interaction Handbook. Fundamentals, Evolving Technologies, and emerging Applications. 2nd edition, 2008.

[] Skaburskis, Andrejs (19 December 2008). “The origin of ‘wicked problems’”, Planning Theory & Practice 9 (2): 277-280. doi:10.1080/14649350802041654. At the end of Rittel’s presentation, West Churchman responded with that pensive but expressive movement of voice that some may well remember, ‘Hmm, those sound like “wicked problems.”’

[] Tonkinwise, Cameron (4 April 2015). “Design for Transitions – from and to what?”, Academia.edu. Retrieved 9 November 2017.

[] Thoppilan, R., et al. LaMDA: Language models for dialog applications. arXiv 2201.08239 (2022).

[] Wurm, Daniel; Zielinski, Oliver; Lübben, Neeske; Jansen, Maike; Ramesohl, Stephan (2021): Wege in eine ökologische Machine Economy: Wir brauchen eine ‘Grüne Governance der Machine Economy’, um das Zusammenspiel von Internet of Things, Künstlicher Intelligenz und Distributed Ledger Technology ökologisch zu gestalten, Wuppertal Report, No. 22, Wuppertal Institut für Klima, Umwelt, Energie, Wuppertal, https://doi.org/10.48506/opus-7828

[] Aimee van Wynsberghe, Sustainable AI: AI for sustainability and the sustainability of AI, in: AI and Ethics (2021) 1:213–218, see: https://doi.org/10.1007/s43681

[-] Sarah West, Rachel Pateman, 2017, “How could citizen science support the Sustainable Development Goals?“, SEI Stockholm Environment Institut , 2017, see: https://mediamanager.sei.org/documents/Publications/SEI-2017-PB-citizen-science-sdgs.pdf

[] R. I. Damper (2000), Editorial for the special issue on ‘Emergent Properties of Complex Systems’: Emergence and levels of abstraction. International Journal of Systems Science 31, 7 (2000), 811–818. DOI:https://doi.org/10.1080/002077200406543

[] Gerd Doeben-Henisch (2004), The Planet Earth Simulator Project – A Case Study in Computational Semiotics, IEEE AFRICON 2004, pp.417 – 422

[] Boder, A. (2006), “Collective intelligence: a keystone in knowledge management”, Journal of Knowledge Management, Vol. 10 No. 1, pp. 81-93. https://doi.org/10.1108/13673270610650120

[] Wikipedia, ‘Weak and strong sustainability’, https://en.wikipedia.org/wiki/Weak_and_strong_sustainability (accessed: July 21, 2022)

[] Florence Maraninchi, Let us Not Put All Our Eggs in One Basket. Towards new research directions in computer Science, CACM Communications of the ACM, September 2022, Vol.65, No.9, pp.35-37, https://dl.acm.org/doi/10.1145/3528088

[] AYA H. KIMURA and ABBY KINCHY, “Citizen Science: Probing the Virtues and Contexts of Participatory Research”, Engaging Science, Technology, and Society 2 (2016), 331-361, DOI:10.17351/ests2016.099

[] Eric Bonabeau (2009), Decisions 2.0: The power of collective intelligence. MIT Sloan Management Review 50, 2 (Winter 2009), 45-52.

[] Jim Giles (2005), Internet encyclopaedias go head to head. Nature 438, 7070 (Dec. 2005), 900–901. DOI:https://doi.org/10.1038/438900a

[] T. Bosse, C. M. Jonker, M. C. Schut, and J. Treur (2006), Collective representational content for shared extended mind. Cognitive Systems Research 7, 2-3 (2006), pp.151-174, DOI:https://doi.org/10.1016/j.cogsys.2005.11.007

[] Romina Cachia, Ramón Compañó, and Olivier Da Costa (2007), Grasping the potential of online social networks for foresight. Technological Forecasting and Social Change 74, 8 (2007), pp.1179-1203. DOI:https://doi.org/10.1016/j.techfore.2007.05.006

[] Tom Gruber (2008), Collective knowledge systems: Where the social web meets the semantic web. Web Semantics: Science, Services and Agents on the World Wide Web 6, 1 (2008), 4–13. DOI:https://doi.org/10.1016/j.websem.2007.11.011

[] Luca Iandoli, Mark Klein, and Giuseppe Zollo (2009), Enabling on-line deliberation and collective decision-making through large-scale argumentation. International Journal of Decision Support System Technology 1, 1 (Jan. 2009), 69–92. DOI:https://doi.org/10.4018/jdsst.2009010105

[] Shuangling Luo, Haoxiang Xia, Taketoshi Yoshida, and Zhongtuo Wang (2009), Toward collective intelligence of online communities: A primitive conceptual model. Journal of Systems Science and Systems Engineering 18, 2 (01 June 2009), 203–221. DOI:https://doi.org/10.1007/s11518-009-5095-0

[] Dawn G. Gregg (2010), Designing for collective intelligence. Communications of the ACM 53, 4 (April 2010), 134–138. DOI:https://doi.org/10.1145/1721654.1721691

[] Rolf Pfeifer, Jan Henrik Sieg, Thierry Bücheler, and Rudolf Marcel Füchslin. 2010. Crowdsourcing, open innovation and collective intelligence in the scientific method: A research agenda and operational framework. (2010). DOI:https://doi.org/10.21256/zhaw-4094

[] Martijn C. Schut. 2010. On model design for simulation of collective intelligence. Information Sciences 180, 1 (2010), 132–155. DOI:https://doi.org/10.1016/j.ins.2009.08.006 Special Issue on Collective Intelligence

[] Dimitrios J. Vergados, Ioanna Lykourentzou, and Epaminondas Kapetanios (2010), A resource allocation framework for collective intelligence system engineering. In Proceedings of the International Conference on Management of Emergent Digital EcoSystems (MEDES’10). ACM, New York, NY, 182–188. DOI:https://doi.org/10.1145/1936254.1936285

[] Anita Williams Woolley, Christopher F. Chabris, Alex Pentland, Nada Hashmi, and Thomas W. Malone (2010), Evidence for a collective intelligence factor in the performance of human groups. Science 330, 6004 (2010), 686–688. DOI:https://doi.org/10.1126/science.1193147

[] Michael A. Woodley and Edward Bell (2011), Is collective intelligence (mostly) the General Factor of Personality? A comment on Woolley, Chabris, Pentland, Hashmi and Malone (2010). Intelligence 39, 2 (2011), 79–81. DOI:https://doi.org/10.1016/j.intell.2011.01.004

[] Joshua Introne, Robert Laubacher, Gary Olson, and Thomas Malone (2011), The climate CoLab: Large scale model-based collaborative planning. In Proceedings of the 2011 International Conference on Collaboration Technologies and Systems (CTS’11). 40–47. DOI:https://doi.org/10.1109/CTS.2011.5928663

[] Miguel de Castro Neto and Ana Espírito Santo (2012), Emerging collective intelligence business models. In MCIS 2012 Proceedings. Mediterranean Conference on Information Systems. https://aisel.aisnet.org/mcis2012/14

[] Peng Liu, Zhizhong Li (2012), Task complexity: A review and conceptualization framework, International Journal of Industrial Ergonomics 42 (2012), pp. 553 – 568

[] Sean Wise, Robert A. Paton, and Thomas Gegenhuber. (2012), Value co-creation through collective intelligence in the public sector: A review of US and European initiatives. VINE 42, 2 (2012), 251–276. DOI:https://doi.org/10.1108/03055721211227273

[] Antonietta Grasso and Gregorio Convertino (2012), Collective intelligence in organizations: Tools and studies. Computer Supported Cooperative Work (CSCW) 21, 4 (01 Oct 2012), 357–369. DOI:https://doi.org/10.1007/s10606-012-9165-3

[] Sandro Georgi and Reinhard Jung (2012), Collective intelligence model: How to describe collective intelligence. In Advances in Intelligent and Soft Computing. Vol. 113. Springer, 53–64. DOI:https://doi.org/10.1007/978-3-642-25321-8_5

[] H. Santos, L. Ayres, C. Caminha, and V. Furtado (2012), Open government and citizen participation in law enforcement via crowd mapping. IEEE Intelligent Systems 27 (2012), 63–69. DOI:https://doi.org/10.1109/MIS.2012.80

[] Jörg Schatzmann & René Schäfer & Frederik Eichelbaum (2013), Foresight 2.0 – Definition, overview & evaluation, Eur J Futures Res (2013) 1:15
DOI 10.1007/s40309-013-0015-4

[] Sylvia Ann Hewlett, Melinda Marshall, and Laura Sherbin (2013), How diversity can drive innovation. Harvard Business Review 91, 12 (2013), 30–30

[] Tony Diggle (2013), Water: How collective intelligence initiatives can address this challenge. Foresight 15, 5 (2013), 342–353. DOI:https://doi.org/10.1108/FS-05-2012-0032

[] Hélène Landemore and Jon Elster. 2012. Collective Wisdom: Principles and Mechanisms. Cambridge University Press. DOI:https://doi.org/10.1017/CBO9780511846427

[] Jerome C. Glenn (2013), Collective intelligence and an application by the millennium project. World Futures Review 5, 3 (2013), 235–243. DOI:https://doi.org/10.1177/1946756713497331

[] Detlef Schoder, Peter A. Gloor, and Panagiotis Takis Metaxas (2013), Social media and collective intelligence—Ongoing and future research streams. KI – Künstliche Intelligenz 27, 1 (1 Feb. 2013), 9–15. DOI:https://doi.org/10.1007/s13218-012-0228-x

[] V. Singh, G. Singh, and S. Pande (2013), Emergence, self-organization and collective intelligence—Modeling the dynamics of complex collectives in social and organizational settings. In 2013 UKSim 15th International Conference on Computer Modelling and Simulation. 182–189. DOI:https://doi.org/10.1109/UKSim.2013.77

[] A. Kornrumpf and U. Baumöl (2014), A design science approach to collective intelligence systems. In 2014 47th Hawaii International Conference on System Sciences. 361–370. DOI:https://doi.org/10.1109/HICSS.2014.53

[] Michael A. Peters and Richard Heraud. 2015. Toward a political theory of social innovation: Collective intelligence and the co-creation of social goods. 3, 3 (2015), 7–23. https://researchcommons.waikato.ac.nz/handle/10289/9569

[] Juho Salminen. 2015. The Role of Collective Intelligence in Crowdsourcing Innovation. PhD dissertation. Lappeenranta University of Technology

[] Aelita Skarzauskiene and Monika Maciuliene (2015), Modelling the index of collective intelligence in online community projects. In International Conference on Cyber Warfare and Security. Academic Conferences International Limited, 313

[] AYA H. KIMURA and ABBY KINCHY (2016), Citizen Science: Probing the Virtues and Contexts of Participatory Research, Engaging Science, Technology, and Society 2 (2016), 331-361, DOI:10.17351/ests2016.099

[] Philip Tetlow, Dinesh Garg, Leigh Chase, Mark Mattingley-Scott, Nicholas Bronn, Kugendran Naidoo†, Emil Reinert (2022), Towards a Semantic Information Theory (Introducing Quantum Corollas), arXiv:2201.05478v1 [cs.IT] 14 Jan 2022, 28 pages

[] Melanie Mitchell, What Does It Mean to Align AI With Human Values?, quanta magazin, Quantized Columns, 19.Devember 2022, https://www.quantamagazine.org/what-does-it-mean-to-align-ai-with-human-values-20221213#

Comment by Gerd Doeben-Henisch:

[] Nick Bostrom, Superintelligence. Paths, Dangers, Strategies, Oxford University Press, Oxford (UK), 1st edition, 2014

[] Scott Aaronson, Reform AI Alignment, Update: 22.November 2022, https://scottaaronson.blog/?p=6821

[] Andrew Y. Ng, Stuart J. Russell, Algorithms for Inverse Reinforcement Learning, in: ICML ’00: Proceedings of the Seventeenth International Conference on Machine Learning, June 2000, pp. 663–670

[] Pat Langley (ed.), ICML ’00: Proceedings of the Seventeenth International Conference on Machine Learning, Morgan Kaufmann Publishers Inc., San Francisco, CA, USA; conference held 29 June – 2 July 2000

[] Daniel S. Brown, Wonjoon Goo, Prabhat Nagarajan, Scott Niekum (2019), Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations, Proceedings of the 36th International Conference on Machine Learning, Long Beach, California, PMLR 97, 2019: https://arxiv.org/pdf/1904.06387.pdf

In the abstract you can read: “A critical flaw of existing inverse reinforcement learning (IRL) methods is their inability to significantly outperform the demonstrator. This is because IRL typically seeks a reward function that makes the demonstrator appear near-optimal, rather than inferring the underlying intentions of the demonstrator that may have been poorly executed in practice. In this paper, we introduce a novel reward-learning-from-observation algorithm, Trajectory-ranked Reward EXtrapolation (T-REX), that extrapolates beyond a set of (approximately) ranked demonstrations in order to infer high-quality reward functions from a set of potentially poor demonstrations. When combined with deep reinforcement learning, T-REX outperforms state-of-the-art imitation learning and IRL methods on multiple Atari and MuJoCo benchmark tasks and achieves performance that is often more than twice the performance of the best demonstration. We also demonstrate that T-REX is robust to ranking noise and can accurately extrapolate intention by simply watching a learner noisily improve at a task over time.”

[] Paul Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, Dario Amodei, (2017), Deep reinforcement learning from human preferences, https://arxiv.org/abs/1706.03741

In the abstract you can read: “For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent’s interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any that have been previously learned from human feedback.”

[] Melanie Mitchell (2021), Abstraction and Analogy-Making in Artificial Intelligence, https://arxiv.org/pdf/2102.10717.pdf

In the abstract you can read: “Conceptual abstraction and analogy-making are key abilities underlying humans’ abilities to learn, reason, and robustly adapt their knowledge to new domains. Despite of a long history of research on constructing AI systems with these abilities, no current AI system is anywhere close to a capability of forming humanlike abstractions or analogies. This paper reviews the advantages and limitations of several approaches toward this goal, including symbolic methods, deep learning, and probabilistic program induction. The paper concludes with several proposals for designing challenge tasks and evaluation measures in order to make quantifiable and generalizable progress.”

[] Melanie Mitchell (2021), Why AI is Harder Than We Think, https://arxiv.org/pdf/2104.12871.pdf

In the abstract you can read: “Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment (“AI spring”) and periods of disappointment, loss of confidence, and reduced funding (“AI winter”). Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this paper I describe four fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I conclude by discussing the open questions spurred by these fallacies, including the age-old challenge of imbuing machines with humanlike common sense.”

[] Stuart Russell (2019), Human Compatible: AI and the Problem of Control, Penguin Books, Allen Lane; 1st edition (October 8, 2019)

In the preface you can read: “This book is about the past, present, and future of our attempt to understand and create intelligence. This matters, not because AI is rapidly becoming a pervasive aspect of the present but because it is the dominant technology of the future. The world’s great powers are waking up to this fact, and the world’s largest corporations have known it for some time. We cannot predict exactly how the technology will develop or on what timeline. Nevertheless, we must plan for the possibility that machines will far exceed the human capacity for decision making in the real world. What then? Everything civilization has to offer is the product of our intelligence; gaining access to considerably greater intelligence would be the biggest event in human history. The purpose of the book is to explain why it might be the last event in human history and how to make sure that it is not.”

[] David Adkins, Bilal Alsallakh, Adeel Cheema, Narine Kokhlikyan, Emily McReynolds, Pushkar Mishra, Chavez Procope, Jeremy Sawruk, Erin Wang, Polina Zvyagina, (2022), Method Cards for Prescriptive Machine-Learning Transparency, 2022 IEEE/ACM 1st International Conference on AI Engineering – Software Engineering for AI (CAIN), CAIN’22, May 16–24, 2022, Pittsburgh, PA, USA, pp. 90 – 100, Association for Computing Machinery, ACM ISBN 978-1-4503-9275-4/22/05, New York, NY, USA, https://doi.org/10.1145/3522664.3528600

In the abstract you can read: “Specialized documentation techniques have been developed to communicate key facts about machine-learning (ML) systems and the datasets and models they rely on. Techniques such as Datasheets, AI FactSheets, and Model Cards have taken a mainly descriptive approach, providing various details about the system components. While the above information is essential for product developers and external experts to assess whether the ML system meets their requirements, other stakeholders might find it less actionable. In particular, ML engineers need guidance on how to mitigate potential shortcomings in order to fix bugs or improve the system’s performance. We propose a documentation artifact that aims to provide such guidance in a prescriptive way. Our proposal, called Method Cards, aims to increase the transparency and reproducibility of ML systems by allowing stakeholders to reproduce the models, understand the rationale behind their designs, and introduce adaptations in an informed way. We showcase our proposal with an example in small object detection, and demonstrate how Method Cards can communicate key considerations that help increase the transparency and reproducibility of the detection model. We further highlight avenues for improving the user experience of ML engineers based on Method Cards.”

[] John H. Miller (2022), Ex Machina: Coevolving Machines and the Origins of the Social Universe, The SFI Press Scholars Series, 410 pages, Paperback ISBN: 978-1947864429, DOI: 10.37911/9781947864429

In the announcement of the book you can read: “If we could rewind the tape of the Earth’s deep history back to the beginning and start the world anew—would social behavior arise yet again? While the study of origins is foundational to many scientific fields, such as physics and biology, it has rarely been pursued in the social sciences. Yet knowledge of something’s origins often gives us new insights into the present. In Ex Machina, John H. Miller introduces a methodology for exploring systems of adaptive, interacting, choice-making agents, and uses this approach to identify conditions sufficient for the emergence of social behavior. Miller combines ideas from biology, computation, game theory, and the social sciences to evolve a set of interacting automata from asocial to social behavior. Readers will learn how systems of simple adaptive agents—seemingly locked into an asocial morass—can be rapidly transformed into a bountiful social world driven only by a series of small evolutionary changes. Such unexpected revolutions by evolution may provide an important clue to the emergence of social life.”

[] Stefani A. Crabtree (2022), Global Environmental Change, https://doi.org/10.1016/j.gloenvcha.2022.102597

In the abstract you can read: “Analyzing the spatial and temporal properties of information flow with a multi-century perspective could illuminate the sustainability of human resource-use strategies. This paper uses historical and archaeological datasets to assess how spatial, temporal, cognitive, and cultural limitations impact the generation and flow of information about ecosystems within past societies, and thus lead to tradeoffs in sustainable practices. While it is well understood that conflicting priorities can inhibit successful outcomes, case studies from Eastern Polynesia, the North Atlantic, and the American Southwest suggest that imperfect information can also be a major impediment to sustainability. We formally develop a conceptual model of Environmental Information Flow and Perception (EnIFPe) to examine the scale of information flow to a society and the quality of the information needed to promote sustainable coupled natural-human systems. In our case studies, we assess key aspects of information flow by focusing on food web relationships and nutrient flows in socio-ecological systems, as well as the life cycles, population dynamics, and seasonal rhythms of organisms, the patterns and timing of species’ migration, and the trajectories of human-induced environmental change. We argue that the spatial and temporal dimensions of human environments shape society’s ability to wield information, while acknowledging that varied cultural factors also focus a society’s ability to act on such information. Our analyses demonstrate the analytical importance of completed experiments from the past, and their utility for contemporary debates concerning managing imperfect information and addressing conflicting priorities in modern environmental management and resource use.”



AN EMPIRICAL THEORY AS A DEVELOPMENT PROCESS

eJournal: uffmm.org
ISSN 2567-6458, 2.April 2022 – 3.April 2022
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

BLOG-CONTEXT

This post is part of the Philosophy of Science theme which is part of the uffmm blog.

PREFACE

In a preceding post I have illustrated how one can apply the concept of an empirical theory — highly inspired by Karl Popper — to an everyday problem: a county and its demographic problem(s). In this post I would like to develop this idea a little further.

AN EMPIRICAL THEORY AS A DEVELOPMENT PROCESS

The figure shows a simplified outline of the idea of an empirical theory being realized during a development process based on the interactions and the communication of citizens as ‘natural experts’.

CITIZENs – natural experts

As a starting point we assume citizens, understood as our ‘natural experts’, being members of a democratic society with political parties, a freely elected parliament which can create helpful laws for societal life, and authorities serving the needs of the citizens.

SYMBOLIC DESCRIPTIONS

To coordinate their actions through sufficient communication, the citizens produce symbolic descriptions to make public how they see the ‘given situation’, which kinds of ‘future states’ (‘goals’) they want to achieve, and a list of ‘actions’ which can ‘change/ transform’ the given situation stepwise into the envisioned future state.

LEVELS OF ABSTRACTIONS

Using an everyday language — possibly enriched with some mathematical expressions — one can talk about our world of experience on different levels of abstraction. To get a rather wide scope one starts with the most abstract concepts, and then breaks these abstract concepts down further and further with concrete properties/ features until these concrete expressions ‘touch real experience’. In most cases it can be helpful not to describe everything in one description, but to partition ‘the whole’ into several more concrete descriptions which capture the main points. Afterwards it should be possible to ‘unify’ these more concrete descriptions into one large picture showing how all of them ‘work together’.

LOGICAL INFERENCE BY SIMULATION

A very useful property of empirical theories is the possibility to derive from given assumptions and assumed rules of inference possible consequences which are ‘true’ if the assumptions and the rules of inference are ‘true’.

The descriptions outlined above are seen in this post as texts which satisfy the requirements of an empirical theory, such that the ‘simulator’ is able to derive from these assumptions all possible ‘true’ consequences, provided these assumptions are assumed to be ‘true’. In particular, the simulator delivers not just one single consequence but a whole ‘sequence of consequences’ following each other in time.

PURE WWW KNOWLEDGE SPACE

This simple outline describes the application format of the oksimo software, which is understood here as a kind of ‘theory machine’ for everybody.

It is assumed that a symbolic description is given as a pure text file or as an HTML page somewhere in the World Wide Web [WWW].

The simulator, realized as an oksimo program, can load such a file and run a simulation. The output will be sent back as an HTML page.

No special database is needed inside the oksimo application. All oksimo-related HTML pages located by a citizen somewhere in the WWW constitute a ‘global public knowledge space’ accessible by everybody.

DISTRIBUTED OKSIMO INSTANCES

An oksimo server behind the address ‘oksimo.com’ can produce for a simulation demand a ‘simulator instance’ running one simulation. Many simulations can run in parallel. A simulation can also be connected in real time to Internet-of-Things [IoT] instances to receive empirical data to be used in the simulation. In ‘interactive mode’ an oksimo simulation furthermore allows the participation of ‘actors’ which function as ‘dynamic rule instances’: they receive input from the simulated given situation and can respond ‘on their own’. This turns a simulation into an ‘open process’ like those we encounter in ‘everyday real processes’. An ‘actor’ need not necessarily be a ‘human’ actor; it can also be a ‘non-human’ actor. Furthermore it is possible to establish a ‘simulation meta-level’: because a simulation as a whole represents a ‘full theory’, one can feed this whole theory to an ‘artificial intelligence algorithm’ which does not run only one simulation but checks the space of ‘all possible simulations’ and thereby identifies those sub-spaces which are — according to the defined goals — ‘zones of special interest’.
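To make the role of ‘actors as dynamic rule instances’ a little more tangible, here is a minimal conceptual sketch in Python. All names in it (Situation, Rule, Actor, interactive_step) are assumptions chosen for this illustration; they are not the actual oksimo API.

```python
# Minimal conceptual sketch of one interactive simulation step.
# All names here are illustrative assumptions, not the actual oksimo API.
from typing import Callable, Iterable

Situation = set[str]                      # a state, given as a set of statements
Rule = Callable[[Situation], Situation]   # a static change statement
Actor = Callable[[Situation], Situation]  # reads the situation, returns responses

def interactive_step(situation: Situation,
                     rules: Iterable[Rule],
                     actors: Iterable[Actor]) -> Situation:
    """One step: first apply the static change rules, then let each actor respond."""
    new_situation = set(situation)
    for rule in rules:
        new_situation = rule(new_situation)
    for actor in actors:
        # each actor works as a 'dynamic rule instance' adding its own statements
        new_situation |= actor(new_situation)
    return new_situation
```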

POPPER and EMPIRICAL THEORY. A conceptual Experiment


eJournal: uffmm.org
ISSN 2567-6458, 12.March 2022 – 16.March 2022, 11:20 h
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

BLOG-CONTEXT

This post is part of the Philosophy of Science theme which is part of the uffmm blog.

PREFACE

In a preceding post I have outlined the concept of an empirical theory based on a text from Popper 1971. In his article Popper points to a minimal structure of what he calls an empirical theory. A closer investigation of his texts reveals many questions which should be clarified for a more concrete application of his concept of an empirical theory.

In this post it will be attempted to elaborate the concept of an empirical theory more concretely from a theoretical point of view as well as from an application point of view.

A Minimal Concept of an Empirical Theory

The figure shows the process of (i) observing phenomena, (ii) representing these in expressions of some language L, (iii) elaborating conjectures as hypothetical relations between different observations, (iv) using an inference concept to deduce some forecasts, and (v) comparing these forecasts with those observations which are possible in an assumed situation.

Empirical Basis

As a starting point, as well as a reference for testing, Popper assumes an ’empirical basis’. The question arises what this means.

In the texts of Popper examined so far this is not well described. Thus in this text some ‘assumptions/ hypotheses’ will be formulated to describe a framework which should be able to ‘explain’ what an empirical basis is and how it works.

Experts

Those who usually build theories are scientists, are experts. For a general concept of an ’empirical theory’ it is assumed here that every citizen is a ‘natural expert’.

Environment

Natural experts are living in ‘natural environments’ as part of the planet earth, as part of the solar system, as part of the whole universe.

Language

Experts ‘cooperate’ by using some ‘common language’. Here the ‘English language’ is used; many hundreds of other languages are possible.

Shared Goal (Changes, Time, Measuring, Successive States)

For cooperation it is necessary to have a ‘shared goal’. A ‘goal’ is an ‘idea’ about a possible state in the ‘future’ which is ‘somehow different’ from the given actual situation. Such a future state can be approached by some ‘process’, a series of possible ‘states’, which usually are characterized by ‘changes’ manifested as ‘differences’ between successive states. The concept of a ‘process’, a ‘sequence of states’, implies some concept of ‘time’, and time needs a concept of ‘measuring time’. ‘Measuring’ basically means to ‘compare something to be measured’ (the target) with ‘some given standard’ (the measuring unit). Thus, to measure the height of a body one can compare it with some object called a ‘meter’ and then state that the target (the height of the body) is 1.8 times as large as the given standard (the meter object). In the case of time it was customary for many thousands of years to use the ‘cycles of the sun’ to define the concept (‘unit’) of a ‘day’ and a ‘night’. Based on this one could ‘count’ the days as one day, two days, etc., and one could introduce further units like a ‘week’ by defining ‘one week compares to seven days’, or ‘one month compares to 30 days’, etc. This reveals that one needs some more concepts like ‘counting’, and, implicitly associated with this, the concept of a ‘number’ (like ‘1’, ‘2’, …, ’12’, …). Later the measuring of time was delegated to ‘time machines’ (called ‘clocks’) producing ‘time units’ mechanically, and then one could be ‘more precise’. But having more than one clock generates the need for ‘synchronizing’ different clocks at different locations. This challenge continues until today. Having a time machine called a ‘clock’, one can define a ‘state’ only by relating the state to an ‘agreed time-window’ = (t1,t2), which allows the description of states in a successive timely order: the state in the time-window (t1,t2) is ‘before’ the time-window (t2,t3). Then one can try to describe the properties of a given natural environment correlated with a certain time-window, e.g. saying that the ‘observed’ height of a body in time-window w1 was 1.8 m and in a later time-window w6 the height was still 1.8 m. In this case no change could be observed. If one had observed 1.9 m at w6, then a difference would occur when comparing the two successive states.

Example: A County

Here we assume as an example of a natural environment a ‘county’ in Germany called ‘Main-Kinzig-Kreis’ (‘Kreis’ = ‘county’), abbreviated ‘MKK’. We are interested in the ‘number of citizens’ living in this county during a certain time-window, here the year 2018 = (1 January 2018, 31 December 2018). According to the statistical office of the state of Hessen, to which the MKK county belongs, the number of citizens in the MKK during 2018 was 418.950. (cf. [2])

Observing the Number of Citizens

One can ask in which sense the number ‘418.950’ can be understood as an ‘observation statement’. If we understand ‘observation’ as the everyday expression for ‘measuring’, then we are looking for a ‘procedure’ which allows us to ‘produce’ this number ‘418.950’ associated with the unit ‘number of citizens during a year’. As everybody can immediately realize, no single person can simply observe all citizens of that county. To ‘count’ all citizens one would have to ‘travel’ to all places in the county where citizens are living and count every person. Such travelling would need time; it could easily take more than 40 years, working 24 hours a day. Thus this procedure would not work. A different approach could be to find citizens in each of the 24 cities of the MKK [1] to help with the counting procedure. Managed well, and with some ‘quality’ control of the counting, this could perhaps work; an interesting experiment. Here we ‘believe’ the number of citizens delivered by the statistical office of the state of Hessen [2], while keeping some reservation regarding the question how ‘good’ this number really is. Thus our ‘observation statement’ would be: “In the year 2018, 418.950 citizens have been counted in the MKK (according to the information of the statistical office of the state of Hessen).” This observation statement lacks a complete account of the procedure by which this counting really happened.

Concrete and Abstract Words

There are interesting details in this observation statement. We notice words like ‘citizen’ and ‘MKK’. To talk about ‘citizens’ is not to talk about some objects in the direct environment. What we can directly observe are concrete bodies which we have learned to ‘classify’ as ‘humans’, enriched for example with ‘properties’ like ‘man’, ‘woman’, ‘child’, ‘elderly person’, ‘neighbor’ and the like. But to classify someone as a ‘citizen’ requires knowledge about some official procedure of ‘registering as a citizen’ at a municipal administration, recorded in some certified document. Thus the word ‘citizen’ has a ‘meaning’ which needs a ‘concrete procedure to get the needed information’. ‘Citizen’ is therefore not a ‘simple word’ but a ‘more abstract word’ with regard to its associated meaning. The same holds for the word ‘MKK’, short for ‘Main-Kinzig-Kreis’. At first glance ‘MKK’ appears as a ‘name’ for some entity. But this entity cannot be directly observed either. One component of the ‘meaning’ of the name ‘MKK’ is a ‘real geographical region’ whose exact geographic extension has been ‘measured’ by official institutions and marked on an ‘official map’ of the state of Hessen. This region is associated with an official document of the state of Hessen stating that this geographical region has to be understood as a ‘county’ with the name MKK. There exist further official documents defining what is meant by the word ‘county’. Thus the word ‘MKK’ has a rather complex meaning which is not easy to understand and to check for being ‘true’. The author of this post lives in the MKK, and he would not be able to tell all the details of the complete meaning of the name ‘MKK’.

First Lessons Learned

Thus one can learn from these first considerations that we as citizens are living in a natural environment where we use observation statements whose words have potentially rather complex meanings, which require a serious amount of clarification to ‘check’.

Conjectures – Hypotheses

Changes

The above text shows that ‘observations as such’ show nothing of interest. Different numbers of citizens in different years carry no ‘message’. But as soon as one arranges the years in a ‘time line’ according to some ‘time model’ the scene changes: if the numbers of two consecutive years are ‘different’, then this ‘difference in numbers’ can be interpreted as a ‘change’ in the environment, but only if one ‘assumes’ that the observed phenomena (the numbers of counted citizens) are associated with some real entities (the citizens) whose ‘quantity’ is ‘represented’ in these numbers.[5]

And again, the ‘difference between consecutive numbers’ in a time line cannot be observed or measured directly. It is a ‘second-order property’ derived from given measurements in time. Such a 2nd-order property presupposes a relationship between different observations: it ‘shows up’ in the expressions (here: numbers), but these are connected back, in the light of the agreed ‘meaning’, to some ‘real entities’ with the property ‘overall quantity’, which can change in the ‘real setting’ of these real entities called ‘citizens’.

In the example of the MKK the statistical office of the state of Hessen computed a difference between two consecutive years which has been represented as a ‘growth factor’ of 0,4%. This means that the number of citizens of the year 2018 will increase until the year 2019 as follows: number-citizens(2019) = number-citizens(2018) + (number-citizens(2018) × growth factor), i.e. number-citizens(2019) = 418.950 + (418.950 × 0,004) = 418.950 + 1.675,8 = 420.625,8.

Applying change repeatedly

If one could assume that the ‘growth rate’ stays constant through time, then one could apply the growth rate again and again to the actual number of citizens in the MKK every year. This would yield the following simple table:

Year    Number        Growth Rate
2018    418.950,00    0,0040
2019    420.625,80
2020    422.308,30
2021    423.997,54
2022    425.693,53
2023    427.396,30
Table: Simplified description of the increase of the number of citizens in the Main-Kinzig county in Germany with an assumed growth rate of 0,4% per year.
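The table can be reproduced with a few lines of Python. This is a minimal sketch for illustration only; the helper name project_population is an assumption of this post, not part of the oksimo software.

```python
# Minimal sketch: repeatedly apply a constant growth rate to a start value.
def project_population(start: float, growth_rate: float, years: int) -> list[float]:
    """Return the population for the start year and each of the following years."""
    values = [start]
    for _ in range(years):
        values.append(values[-1] * (1 + growth_rate))
    return values

for year, n in zip(range(2018, 2024), project_population(418_950.0, 0.004, 5)):
    print(f"{year}: {n:.2f}")
# 2018: 418950.00, 2019: 420625.80, 2020: 422308.30, 2021: 423997.54, ...
```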

As we know from reality, the assumption of a fixed growth rate for complex dynamic systems is not very probable.

Theory

Continuing the previous considerations one has to ask how the layout of a ‘complete empirical theory’ would look.

As I commented in the preceding post about Popper’s 1971 article on ‘objective knowledge’, there exists today no single accepted framework for a formalized empirical theory. Therefore I will stay here with a ‘bottom-up’ approach, using elements taken from everyday reasoning.

What we have until now is the following:

  1. Before the beginning of a theory-building process one needs a group of experts who are part of a natural environment, use the same language, and share a common goal which they want to realize.
  2. The natural environment is assumed by the experts to be a ‘process’ of consecutive states in time. The ‘granularity’ of the process depends on the used ‘time model’.
  3. As a starting point they collect a set of statements talking about those aspects of a ‘selected state’ at some time t which they are interested in.
  4. This set of statements describes a set of ‘observable properties’ of the selected state which is understood as a ‘subset’ of the properties of the natural environment.
  5. Every statement is understood by the experts as being ‘true’ in the sense that the ‘known meaning’ of a statement has an ‘observable counterpart’ in the situation, which can be ‘confirmed’ by each expert.
  6. For each pair of consecutive states it holds that their sets of statements can be ‘equal’ or can show ‘differences’.
  7. A ‘difference’ between sets of statements can be interpreted as pointing to a ‘change in the real environment’.[5]
  8. Observed differences can be described by special statements called ‘change statements’ or simply ‘rules’.
  9. A change statement has the format: IF a set of statements ST* is a subset of the statements ST of a given state S, THEN, with probability p, a set of statements ST+ will be added to the actual state S and a set of statements ST- will be removed from the statements ST of S. This results in a new succeeding state S* with the representing statements ST – (ST-) + (ST+), depending on the assumed probability p.
  10. The list of change statements is an ‘open set’, in accordance with the assumption that an actual state is only a ‘subset’ of the real environment.
  11. Until now we have an assumed state S, an assumed goal V, and an open set of change statements X.
  12. Applying change statements to a given state S will generate a new state S*. Thus the application of a subset X’ of the open set of change statements X to a given state S will here be called ‘generating a new state by a procedure’. Such a state-generating procedure can be understood as an ‘inference’ (like in logic) or as a ‘simulation’ (like in engineering).[6] A minimal code sketch of such a procedure follows after this list.
  13. To write this in a more condensed format we can introduce the notation S,V ⊩∑,X S‘, saying: if I have some state S and a goal V, then the simulator ∑ will, according to the change statements X, generate a new state S’. In such a setting the newly generated state S’ can be understood as a ‘theorem’ which has been derived from the set of statements of the state S which are assumed to be ‘true’. And because the derived new state is assumed to happen in some ‘future’, ‘after’ the ‘actual state S’, this derived state can also be understood as a ‘forecast’.
  14. Because the experts can change all parts ‘at will’ at any time, such a ‘natural empirical theory’ is an ‘open entity’ living in an ongoing ‘communication process’.
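As announced in item 12, the following minimal Python sketch shows one way such a state-generating procedure could look. It assumes that states are finite sets of statement strings and that a change statement is a tuple (ST*, ST+, ST-, p); these representations are illustrative assumptions, not a fixed formalism.

```python
# Minimal sketch of applying change statements (item 9) to a state (item 12).
# A state is a set of statement strings; a rule is a tuple (ST*, ST+, ST-, p).
import random

def apply_rules(state: set[str], rules, rng=random.random) -> set[str]:
    """Generate one successor state S* from S by firing all applicable rules."""
    new_state = set(state)
    for condition, additions, removals, p in rules:
        # IF ST* is a subset of ST, THEN with probability p remove ST- and add ST+
        if condition <= state and rng() < p:
            new_state -= removals
            new_state |= additions
    return new_state

# Hypothetical toy run in the spirit of the MKK example above:
state = {"year is 2018", "number of citizens is 418950"}
rules = [({"year is 2018"},
          {"year is 2019", "number of citizens is 420626"},
          {"year is 2018", "number of citizens is 418950"},
          1.0)]
print(apply_rules(state, rules))
```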
Second Lessons Learned

It is interesting to note that from the set of statements of state S, which are assumed to be empirically true, together with some change statements X, whose proposed changes are also assumed to be ‘true’ and which have some probability P in the domain [0,1], one can forecast a set of statements of the state S* which shall be true with a certainty that depends on the preceding probability P and the overall uncertainty of the whole natural environment.

Confirmation – Non-Confirmation

A Theory with Forecasts

Having reached the formulation of an ordinary empirical theory T with the ingredients <S,V,X,⊩> and the derivation concept S,V ⊩∑,X S‘, it is possible to generate theorems as forecasts. A forecast here is not a single statement st* but a whole state S* consisting of a finite set of statements ST* which ‘designate’, according to the ‘agreed meaning’, a set of ‘intended properties’. These need a set of ‘occurring empirical properties’ which can be observed by the experts. Such observations are usually associated with ‘agreed procedures of measurement’, which generate as results ‘observation statements’/ ‘measurement statements’.

Within Time

Experts who cooperate in ‘building’ an ordinary empirical theory are themselves part of a process in time. Making observations in the time-window (t1,t2), they have a state S describing some aspects of the world at ‘that time’ (t1,t2). When they then derive a forecast S* with their theory, this forecast describes — with some probability P — a ‘possible state of the natural environment’ which is assumed to happen in the ‘future’. The precision of the predicted time at which the forecasted statements of S* should happen depends on the assumptions in S.

To ‘check’ the ‘validity’ of such a forecast it is necessary that the overall natural process reaches a ‘point in time’ — or a time-window — indicated by the used ‘time model’, where the ‘actual point in time’ is measured by an agreed time machine (a mechanical clock). Because there is no observable time without a time machine, the classification of a certain situation S* as being ‘now’ at the predicted point of time depends completely on the used time machine.[7]

Given this, the following can happen: according to the used theory, a certain set of statements ST* is predicted to be ‘true’ — with some probability — either ‘at some time in the future’, or in the time-window (t1,t2), or at a certain point in time t*.

Validating Forecasts

If one of these cases ‘happens’, then the experts have the statements ST* of their forecast and a real situation in their natural environment which enables observations ‘Obs’, which are ‘translated’ into appropriate ‘observation statements’ STObs. The experts know for their predicted statements ST* a learned, agreed meaning M* as the intended properties of ST*. The experts have also learned how to relate the intended meaning M* to the meaning MObs of the observation statements STObs. If the observed meaning MObs ‘agrees sufficiently well’ with the intended meaning M*, then the experts would agree in a statement that the intended meaning M* is ‘fulfilled’/ ‘satisfied’/ ‘confirmed’ by the observed meaning MObs. If not, it would be stated that it is ‘not fulfilled’/ ‘not satisfied’/ ‘not confirmed’.

The ‘sufficient fulfillment’ of the intended meaning M* of a set of statements ST* is usually translated into a statement like “The statements ST* are ‘true’”. The case of ‘no fulfillment’ is less clear: it can be interpreted as ‘being false’ or as ‘being undefined’: no clear case of ‘being true’ and no clear case of ‘being false’.
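One crude way to picture ‘agrees sufficiently well’ is as a similarity threshold between the intended properties M* and the observed properties MObs. The following toy sketch is only an assumption for illustration (properties as string sets, a Jaccard-style overlap as similarity measure); it is not a proposal for how the comparison must be done.

```python
# Toy sketch: 'sufficient fulfillment' as a similarity threshold between
# intended properties M* and observed properties MObs (both as string sets).
def confirmed(intended: set[str], observed: set[str], threshold: float = 0.9) -> bool:
    """Jaccard-style overlap as a stand-in for 'agrees sufficiently well'."""
    if not intended and not observed:
        return True
    return len(intended & observed) / len(intended | observed) >= threshold

print(confirmed({"citizens: 420625"}, {"citizens: 420625"}))  # True
print(confirmed({"citizens: 423997"}, {"citizens: 421936"}))  # False
```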

Forecasting the Number of Citizens

In the simple example used here we have the MKK county with an observed number of 418.950 citizens in 2018. The simple theory used a change statement with a growth factor of 0,4% per year. This resulted in a forecast of 420.625 citizens for the year 2019.

If a new count of the number of citizens in the year 2019 yielded 420.625, then there would be a perfect match, which could be interpreted as a ‘confirmation’, saying that the forecasted statement and the observed statement are ‘equal’ and that therefore the theory seems to match the natural environment through time. One could even say that the theory is ‘true for the observed time’. Nothing would follow from this for the unknown future. Thus the ‘truth’ of the theory is not an ‘absolute’ truth but a truth ‘within defined limits’.

We know from experience that forecasting the number of citizens for some region — here a county — is usually not as clear as it has been shown in this example.

This begins with the process of counting. Because it is very expensive to count the citizens of all cities of a county, this happens only about every 20 years. In between, the statistical office applies the method of ‘forward projection’.[9] The state statistical office collects every year, electronically, the numbers of ‘births’, ‘deaths’, ‘outflow’, and ‘inflow’ from the individual cities and modifies with these numbers the last real census. In the case of the state of Hessen this was the year 2011. The next census in Germany will happen in May 2022.[10] For such a census the data will be collected directly from the registration offices of the cities, supported by a control survey of 10% of the population.

Because there are data from the statistical office of the state of Hessen for June 2021 [8:p.9] stating that the MKK county had 421.936 citizens on 30 June 2021, we can compare this number with the theory forecast for the year 2021 of 423.997. This shows a difference in the numbers: the theory forecast is ‘higher’ than the observed number. What does this mean?

Purely arithmetically the forecast is ‘wrong’: the assumed growth factor is too large. If one ‘adjusted’ it in a simplified linear way to 0,24%, then the theory would yield a forecast for 2021 of 421.973 (observed: 421.936), but the forecast for 2019 would then be 419.955 (instead of 420.625).
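This adjustment can be re-checked with the small project_population sketch shown above (again a hypothetical helper, used here only to make the arithmetic reproducible):

```python
# Re-check the adjusted growth rate of 0,24% with the earlier sketch.
for year, n in zip(range(2018, 2022), project_population(418_950.0, 0.0024, 3)):
    print(f"{year}: {n:.2f}")
# 2018: 418950.00
# 2019: 419955.48
# 2020: 420963.37
# 2021: 421973.68
```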

This shows at least the following aspects:

  1. The empirical observations as such can vary ‘a little bit’. One has to clarify which degree of ‘variance’ is due to the method of measurement; this variance should be taken into account in the evaluation of a theoretical forecast.
  2. As mentioned by the statistical office [9], there are four ‘factors’ which influence the final number of citizens in a region: ‘births’, ‘deaths’, ‘outflow’, and ‘inflow’. These factors can change over time. Under ‘normal conditions’ the birth rate and the death rate are rather ‘stable’, but in the case of an epidemic or even a war this can change a lot. Outflow and inflow are very dynamic, depending on many factors. Thus they can influence the growth factor strongly and are difficult to forecast.
Third Lessons Learned

Evaluating the ‘relatedness’ of some forecast F of an empirical theory T to the observations O in a given real natural environment is not a ‘clear-cut’ case. The ‘precision’ of such a relatedness depends on many factors, each of which has some ‘fuzziness’. Nevertheless, as experience shows, it can work in a limited way. And this ‘limited way’ is the maximum we can get. The most helpful contribution of an ‘ordinary empirical theory’ seems to be the forecast of ‘what will happen if we have a certain set of assumptions’. Used in the process of the experts, such a forecast can help to obtain ‘informed guesses’ for planning.

Forecast

The next post will show how this concept of an ordinary empirical theory can be used by applying the oksimo paradigm to a concrete case. See HERE.

Comments

[1] Cities of the MKK-county: 24, see: https://www.wegweiser-kommune.de/kommunen/main-kinzig-kreis-lk

[2] Forecast of the development of the number of citizens in the MKK starting with 2018, see: https://statistik.hessen.de/zahlen-fakten/bevoelkerung-gebiet-haushalte-familien/bevoelkerung/tabellen

[3] Karl Popper, „A World of Propensities“,(1988) and „Towards an Evolutionary Theory of Knowledge“, (1989) in: Karl Popper, „A World of Propensities“, Thoemmes Press, Bristol, (1990, repr. 1995)

[4] Karl Popper, „All Life is Problem Solving“, originally a lecture given in German in 1991, first published (in German) as „Alles Leben ist Problemlösen“ (1994), then in the book „All Life is Problem Solving“, Routledge, Taylor & Francis Group, London – New York, 1999

[5] This points to the concept of ‘propensity’ which the late Popper has discussed in the papers [3] and [4].

[6] This concept of a ‘generator’ or an ‘inference’ recalls the general concept of Popper and of mainstream philosophy of a logical derivation concept, where a ‘set of logical rules’ defines a ‘derivation concept’ which allows the ‘derivation/ inference’ of a statement s* as a ‘theorem’ from an assumed set of statements S assumed to be true.

[7] The clock-based time is in the real world correlated with certain constellations of the real universe, but this — as a whole — is ‘changing’!

[8] Hessisches Statistisches Landesamt, “Die Bevölkerung der hessischen Gemeinden am 30. Juni 2021. Fortschreibungsergebnisse Basis Zensus 09. Mai 2011”, Okt. 2021, Wiesbaden, URL: https://statistik.hessen.de/sites/statistik.hessen.de/files/AI2_AII_AIII_AV_21-1hj.pdf

[9] Method of forward projection used by the statistical office of the state of Hessen (translated from [8:p.2]): “Population: The population figures are projection results based on the population counts determined in the 2011 census. They are updated according to a nationally uniform projection method by evaluating the data on births and deaths transmitted electronically by the registry offices, as well as the arrivals and departures reported by the registration authorities. Persons are assigned to the population of a municipality according to the main-residence principle (population at the place of the sole or main residence).”

[10] Statistical Office state of Hessen, Next census 2022: https://statistik.hessen.de/zahlen-fakten/zensus/zensus-2022/zensus-2022-kurz-erklaert

POPPER – Objective Knowledge (1971). Summary, Comments, How to Develop it Further


eJournal: uffmm.org
ISSN 2567-6458, 07.March 2022 – 12.March 2022, 10:55 h
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

BLOG-CONTEXT

This post is part of the Philosophy of Science theme which is part of the uffmm blog.

PREFACE

In this post a short summary of Popper’s view of an empirical theory is outlined as he describes it in his article “Conjectural Knowledge: My Solution of the Problem of Induction” from 1971.[1] The view of Popper will be commented on, and the relationship to the oksimo paradigm of the author will be outlined.

Empirical Theory according to Popper in a Nutshell

Figure: Popper’s concept from 1971 of an empirical theory, compressed in a nutshell. Graphic by Gerd Doeben-Henisch, based on the article, using Popper’s summarizing ideas on pages 29-31.

POPPER’S POSITION 1971

In this article from 1971 Popper discusses several positions. Finally he offers the following ‘demarcation’ between only two cases: ‘Pseudo Science’ and ‘Empirical Science’. (See p.29) This triggers the question of how it is possible to declare something an ‘objective empirical theory’ without claiming to have some ‘absolute truth’.

Although Popper denies having any kind of absolute truth, he will “not give up the search for truth”, which finally leads to a “true explanatory theory”. (cf. p.29) “Truth” plays the “role of a regulative idea”. (cf. p.30) Thus, according to Popper, one can “guess for truth”, and some of the hypotheses “may well be true”. (cf. p.30)

In Popper’s view, ‘observation’ finally shows up as that behaviour which enables the production of ‘statements’ as the ’empirical basis’ for all arguments. (cf. p.30) Empirical statements are a ‘function of the used language’. (cf. p.31)

This dimension of language leads Popper to the concept of ‘deductive logic’, which describes formal mechanisms to derive from a set of statements — which are assumed to be true — those statements which are ‘true’ by logical deduction alone. If statements are ‘logically false’, then this can be used to classify the set of assumed statements as ‘logically not consistent’. (cf. p.31)

COMMENTS ON POPPER’S 1971-POSITION 50 YEARS LATER

The preceding outline of Popper’s position reveals a minimalist account of the ingredients of an ‘objective empirical theory’. But we, the readers of these ideas, are living 50 years later; our minds are shaped differently. The author of this text thinks that Popper is basically ‘true’, although there are some points in Popper’s argument which deserve comments.

Subjective – Absolute

Popper is moving between two boundaries: one boundary is so-called ‘subjective belief’, which can support any idea and thereby can include pure nonsense; the other boundary is ‘absolute truth’, which would be required to hold at all times and at all places, although the ‘known world’ evidently shows steady change.

Empirical Basis

In searching for a possible position between these boundaries which would allow a minimum of ‘rationality’, he is looking for an ’empirical basis’ as a point of reference for a ‘rational theory’. He locates such an empirical basis in ‘observation statements’ which can be used for ‘testing a theory’.

In his view a ‘rational empirical theory’ has to have a ‘set of statements’ (often called the ‘assumptions’ of the theory, or ‘axioms’) which are assumed to ‘describe the observable world’ in such a way that these statements can be ‘confirmed’ or ‘falsified’.

Confirmation – Falsification

A ‘confirmation’ does not imply that the confirmed statement is ‘absolutely true’ (his basic conviction); but one can experience that a confirmed statement can function as a ‘hypothesis/ conjecture’ which ‘works in the actual observation’. This does not exclude that it will perhaps not work in a future test. The pragmatic difference between ‘interesting conjectures’ and those of less interest is that a ‘repeated confirmation’ increases the ‘probability’ that such a confirmation can happen again. An ‘increasing probability’ can induce an ‘increased expectation’. Nevertheless, increased probabilities and associated increased expectations are no substitutes for ‘truth’.

A test which shows ‘no confirmation’ for a logically derived statement from the theory is difficult to interpret:

Case (i): A theory claims that a statement S refers to a proposition A which is ‘true in a certain experiment’, but in the real experiment the observation reveals a proposition B which translates to non-A, which can be interpreted as ‘the opposite of A is the case’ (= is ‘true’). This outcome will be interpreted in the way that the proposition B, interpreted as ‘non-A’, contradicts ‘A’, and this will be interpreted further in the way that the statement S of the theory represents a partial contradiction to the observable world.

Case (ii): A theory claims that a statement S refers to a proposition A which is ‘true in a certain experiment’, but in the real experiment the observation reveals a proposition B ‘being the case’ (= being ‘true’) which is a different proposition, and this outcome cannot be related to the proposition A forecasted by the theory. If the statement ‘cannot be interpreted sufficiently well’, then the situation is neither ‘true’ nor ‘false’; it is ‘undefined’.

Discussion: Case (ii) reveals that there exists an observable (empirical) fact which is not related to a certain ‘logically derived’ statement with proposition A. There can be many circumstances why the observation did not generate the ‘expected proposition A’. If one assumes that the observation is related to an ‘agreed process of generating an outcome M’ which can be ‘repeated at will’ by ‘everybody’, then the observed fact of a ‘proposition B distinct from proposition A’ could be interpreted in the way that the expectation of the theory cannot be reproduced with the agreed procedure M. This leaves open the question whether there could eventually exist another procedure M’ producing an outcome A. For the actors running the procedure M, this case is, with regard to the logically derived statement S about proposition A, ‘unclear’, ‘not defined’: a ‘non-confirmation’. At the same time it is no confirmation either.

Discussion: Case (i) seems — at a first glance — to be clearer in its interpretation. Assume here too that the observation is associated with an agreed procedure M producing the proposition B, which can be interpreted as non-A (B = non-A). If everybody accepts this ‘classification’ of B as ‘non-A’, then for ‘purely logical reasons’ (depending on the assumed concept of logic!) ‘non-A’ contradicts ‘A’. But in the ‘real world’ with ‘real observations’ things are usually not as ‘clear-cut’ as a theory may assume. The observable outcome B of an agreed procedure M can show a broad spectrum of ‘similarities’ with proposition A, varying between 100% and less. Even if one repeats the agreed procedure M several times, it can show a ‘sequence of propositions <B1, B2, …, Bn>’ none of which is exactly 100% similar to proposition A. To speak of a logical contradiction in such a case (the normal case!) is difficult, if not impossible. The idea of Popper-1971 of a possible ‘falsification’ of a theory would then become difficult to interpret. A possible remedy for this situation could be to modify a theory in such a way that it forecasts only statements with a proposition A which is represented as a ‘field of possible instances A = <a1, a2, …, am>’, where every ai represents some kind of variation. In that modified case it would be ‘more probable’ to judge a non-confirmation between A as <a1, a2, …, am> and B as <B1, B2, …, Bn>, because the ‘variability’ of a proposition is taken into account.[3]

Having discussed the case of ‘non-confirmation’ in this modified way leads back to the case of ‘confirmation’: the ‘fuzziness’ of observable facts, even in the context of agreed procedures M of observation which are repeatable by everyone (usually called measurement), requires a broader concept of ‘similarity’ between ‘derived propositions’ and ‘observed propositions’. This has long been a hotly debated point in the philosophy of science (see e.g. [4]). Until now no generally accepted solution of this problem exists.

Thus the clear idea of Popper to give a theory candidate a minimum of rationality by relating the theory in an agreed way to empirical observations becomes, in the ‘dust of reality’, a difficult case. It is interesting that the ‘late Popper’ (1988-1991) modified his view on this subject a little more in the direction of the interpretation of observable events. (cf. [5])

Logic as an Organon

In the discussion of the possible confirmation or falsification of a theory, Popper uses two different perspectives: (i) in a broader sense he talks about the ‘process of justification’ of the theoretical statements with regard to an empirical basis, relying on the ‘regulative idea of truth’; and (ii) in a more specialized sense he talks about ‘deductive logic as an organon of criticism’. These two perspectives demand more clarification.

While the meaning of the concept ‘theory’ is rather vague (statements which have to be confirmed or falsified with respect to observational statements), the concept of ‘deductive logic as an organon’ isn’t really clearer.

Until today we have two big paradigms of logic: (i) ‘classical logic’ inspired by Aristotle (with many variants) and (ii) ‘modern formal logic’ (cf. [6]) in combination with modern mathematics (cf. [7],[8]). Both paradigms represent a whole universe of different variants, whose combination into concrete formal empirical theories shows more than one paradigm. (cf. [4], [8], [10])

As outlined in the figure above, the principal idea of logic in general follows this schema: one has a set of expressions of some language L which are assumed to be classified as ‘true expressions’. According to an agreed procedure of ‘derivation’ one can then derive (deduce, infer, …) other expressions of the language which are classified as ‘true’ if the assumptions hold.[11]
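To make this schema concrete, here is a minimal sketch in Python, assuming expressions given as plain strings and a single agreed derivation rule (modus ponens). It illustrates the general schema only; it is not a claim about any particular logic.

```python
# Minimal sketch of the derivation schema: from expressions assumed 'true'
# and one agreed rule (modus ponens), derive further 'true' expressions.
def derive(assumptions: set[str], implications: set[tuple[str, str]]) -> set[str]:
    """Close the set of assumed-true expressions under modus ponens."""
    derived = set(assumptions)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in implications:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

# If 'A' is assumed 'true' and 'A -> B', 'B -> C' are agreed derivation steps:
print(derive({"A"}, {("A", "B"), ("B", "C")}))  # {'A', 'B', 'C'}
```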

The important point here is that the modern concept of logic does not explain what ‘true’ means, nor is there an explanation of how exactly a procedure looks which enables the classification of an expression as ‘being true’. Logic works with the minimalist assumption that the ‘user of logic’ uses statements which he assumes to be ‘true’, independently of how this classification came into being. This frees the user of logic from the cumbersome process of clarifying the meaning and the existence of something which makes a statement ‘true’; but on the other hand the user of modern logic has no real control over whether his ‘concept of derivation’ makes any sense in a real world from which observation statements are generated claiming to be ’empirically true’, nor whether the relationships between these observational statements are appropriately ‘represented’ by the formal derivation concept. Until today there exists no ‘meta-theory’ which explains the relationship between the derivation concepts of formal logic (there are many such concepts!) and the ‘dynamics of real events’.

Thus, if Popper mentions formal logic as a tool for the handling of assumed-true statements of a theory, it is not really clear whether such a formal logical derivation is appropriate to explain the ‘relationships between assumed-true statements’ without knowing which kind of reality is ‘designated’/ ‘referred to’ by such statements and by their relationships to each other.

(Formalized) Theory and Logic

In his paper Popper does not explain very concretely what he means by a (formalized) theory. Today there exist many different proposals of formalized theories for use as ’empirical theories’, but there is no commonly agreed final ‘template’ of a ‘formal empirical theory’.

Nevertheless we need some minimal conception to be able to discuss some of the properties of a theory more concretely. I will address this problem in another post, accompanied by concrete applications.

COMMENTS

[1] Karl R.Popper, Conjectural Knowledge: My Solution of the Problem of Induction, in: [2], pp.1-31

[2] Karl R.Popper, Objective Knowledge. An Evolutionary Approach, Oxford University Press, London, 1972 (reprint with corrections 1973)

[3] In our everyday use of our ‘normal’ language it is the ‘normal’ case that a statement S like ‘There s a cup on the table’ can be interpreted in many different ways depending which concrete thing (= proposition B of the above examples) called a ‘cup’ or called ‘table’ can be observed.

[4] F. Suppe, Ed., The Structure of Scientific Theories, University of Illinois Press, Urbana, 2nd edition, 1979.

[5] Gerd Doeben-Henisch, 2022,(SPÄTER) POPPER – WISSENSCHAFT – PHILOSOPHIE – OKSIMO-DISKURSRAUM, in: eJournal: Philosophie Jetzt – Menschenbild, ISSN 2365-5062, 22.-23.Februar 2022,
URL: https://www.cognitiveagent.org/2022/02/22/popper-wissenschaft-philosophie-oksimo-paradigma/

[6] William Kneale and Martha Kneale, The development of logic, Oxford University Press, Oxford, 1962 with several corrections and reprints 1986.

[7] Jean Dieudonné, Geschichte der Mathematik 1700-1900, Friedrich Vieweg & Sohn, Braunschweig – Wiesbaden, 1985 (from the French edition “Abrégé d’histoire des mathématiques 1700-1900”, Hermann, Paris, 1978)

[8] Philip J.Davis & Reuben Hersh, The Mathematical Experience, Houghton Mifflin Company, Boston, 1981

[9] Nicolas Bourbaki, Elements of Mathematics. Theory of Sets, Springer-Verlag, Berlin, 1968

[10] Wolfgang Balzer, C. Ulises Moulines, Joseph D. Sneed, An Architectonic for Science. The Structuralist Program, D. Reidel Publ. Company, Dordrecht – Boston – Lancaster – Tokyo, 1987

[11] The usage of the terms ‘expression’, ‘proposition’, and ‘statement’ in this text is as follows: An ‘expression‘ is a string of signs from some alphabet A which is accepted as a ‘well-formed expression’ of some language L. A ‘statement‘ is an utterance of some actor using expressions of the language L to talk ‘about’ some ‘experience’ (from the world of bodies or from his consciousness), which is understood as the ‘meaning‘ of the statement. The relationship between the expressions of the statement and the meaning is located ‘in the actor’ and has been ‘learned’ by interactions with the world and with himself. This hypothetical relationship is here called the ‘meaning function φ’. A ‘proposition‘ is (i) the inner construct of the meaning of a statement (here called ‘intended proposition’) and (ii) that part of the experience which is correlated with the inner construct of the stated meaning (here called ‘occurring proposition’). The special relationship between the intended proposition and the occurring proposition is often expressed as ‘referring to’ or ‘designating’. A statement is said to ‘hold’/ to be ‘true’/ to ‘be the case’ if there exists an occurring proposition which is ‘similar enough’ to the intended proposition of the statement. If such an occurring proposition is lacking, then the designation of the statement is ‘undefined’ or ‘not confirming’ the expectation.

Follow-up Post

For a follow-up post see here.

OKSIMO MEETS POPPER. Popper’s Position

eJournal: uffmm.org
ISSN 2567-6458, 31.March – 31.March  2021
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

This text is part of a philosophy of science  analysis of the case of the oksimo software (oksimo.com). A specification of the oksimo software from an engineering point of view can be found in four consecutive  posts dedicated to the HMI-Analysis for  this software.

POPPERs POSITION IN THE CHAPTERS 1-17

In my reading of the chapters 1-17 of Popper’s The Logic of Scientific Discovery [1] I see the following three main concepts which are interrelated: (i) the concept of a scientific theory, (ii) the point of view of a meta-theory about scientific theories, and (iii) possible empirical interpretations of scientific theories.

Scientific Theory

A scientific theory is, according to Popper, a collection of universal statements AX, accompanied by a concept of logical inference ⊢, which allows the deduction of a certain theorem t if one makes some additional concrete assumptions H.

Example: Theory T1 = <AX1,⊢>

AX1= {Birds can fly}

H1= {Peter is a bird}

⊢: Peter can fly

Because there exists a concrete object which is classified as a bird, and this concrete bird with the name ‘Peter’ can fly, one can infer that the universal statement is verified by this concrete bird. But the question remains open whether all observable concrete objects classifiable as birds can fly.

One could continue with observations of several hundreds of concrete birds, but according to Popper this would not prove the theory T1 completely true. Such a procedure can only support a numerical universality, understood as a conjunction of finitely many observations about concrete birds like ‘Peter can fly’ & ‘Mary can fly’ & … & ‘AH2 can fly’. (cf. p.62)

The only procedure which is applicable to a universal theory, according to Popper, is to falsify the theory by only one observation like ‘Doxy is a bird’ and ‘Doxy cannot fly’. Then one can construct the following inference:

AX1= {Birds can fly}

H2= {Doxy is a bird, Doxy cannot fly}

⊢: ‘Doxy can fly’ & ~’Doxy can fly’

If a statement A can be inferred and simultaneously its negation ~A, then this is called a logical contradiction:

{AX1, H2} ⊢ ‘Doxy can fly’ & ~’Doxy can fly’

In this case the set {AX1, H2} is called inconsistent.

If a set of statements is classified as inconsistent, then you can derive from this set every statement whatsoever. In this case you can no longer distinguish between true and false statements.
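
A minimal sketch of this classical result (ex falso quodlibet), using standard rules of propositional logic rather than Popper’s own notation: from a contradiction A & ~A one can derive any arbitrary statement B:

1. A & ~A (the inconsistent premise)
2. A (from 1, &-elimination)
3. A ∨ B (from 2, ∨-introduction, for any B whatsoever)
4. ~A (from 1, &-elimination)
5. B (from 3 and 4, disjunctive syllogism)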

Thus, while an increase in the number of confirmed observations can only increase the trust in the axioms of a scientific theory T without enabling an absolute proof, a falsification of a theory T can destroy the ability of this theory to distinguish between true and false statements.

Another idea associated with this structure of a scientific theory is that the universal statements using universal concepts are, strictly speaking, speculative ideas which deserve some faith that these concepts will prove themselves every time one tries. (cf. p.33, 63)

Meta Theory, Logic of Scientific Discovery, Philosophy of Science

Talking about scientific theories has at least two aspects: scientific theories as objects and those who talk about these objects.

Those who talk about them are usually Philosophers of Science, which are only a special kind of philosophers, e.g. a person like Popper.

Reading the text of Popper one can identify the following elements which seem to be important to describe scientific theories in a broader framework:

A scientific theory from the point of view of Philosophy of Science represents a structure like the following one (minimal version):

MT=<S, A[μ], E, L, AX, ⊢, ET, E+, E-, true, false, contradiction, inconsistent>

In a shared empirical situation S there are some human actors A as experts producing expressions E of some language L. Based on their built-in adaptive meaning function μ the human actors A can relate properties of the situation S with expressions E of L. Those expressions E which are considered observable and classified as true are called true expressions E+, the others are called false expressions E-. Both sets of expressions are proper subsets of E: E+ ⊂ E and E- ⊂ E. Additionally the experts can define some special set of expressions called axioms AX, universal statements which allow the logical derivation of expressions called the theorems ET of the theory T, which are called logically true. If one combines the set of axioms AX with some set of empirically true expressions E+ as {AX, E+}, then one can logically derive either only expressions which are logically true as well as empirically true, or one can derive logically true expressions which are empirically true and empirically false at the same time; see the example from the paragraph before:

{AX1, H2} ⊢ ‘Doxy can fly’ & ~’Doxy can fly’

Such a case of a logically derived contradiction A and ~A tells us that the set of axioms AX, unified with the empirically true expressions, has become inconsistent: the axioms AX unified with the true empirical expressions can no longer distinguish between true and false expressions.
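
As a minimal illustration, one could encode this inconsistency test in a few lines of Python (the representation of expressions as plain strings and the names negate and is_inconsistent are mine, not part of the MT structure above):

def negate(expr: str) -> str:
    # The negation of an expression is marked by a leading '~'.
    return expr[1:] if expr.startswith("~") else "~" + expr

def is_inconsistent(statements: set) -> bool:
    # A set of statements is inconsistent if it contains some A together with ~A.
    return any(negate(s) in statements for s in statements)

derived = {"Doxy can fly", "~Doxy can fly"}  # derived from {AX1, H2}
print(is_inconsistent(derived))  # True: the unified set is inconsistent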

Popper gives some general requirements for the axioms of a theory (cf. p.71):

  1. Axioms must be free from contradiction.
  2. The axioms must be independent, i.e. they must not contain any axiom deducible from the remaining axioms.
  3. The axioms should be sufficient for the deduction of all statements belonging to the theory which is to be axiomatized.

While requirements (1) and (2) are purely logical and can be proved directly, requirement (3) is different: to know whether the theory covers all statements which are intended by the experts as the subject area presupposes that all aspects of the empirical environment are already known. In the case of truly empirical theories this seems not plausible. Rather we have to assume an open process which generates some hypothetical universal expressions which ideally will not be falsified; but if they are, then the theory has to be adapted to the new insights.

Empirical Interpretation(s)

Popper assumes that the universal statements of scientific theories are linguistic representations, and this means they are systems of signs or symbols. (cf. p.60) Expressions as such have no meaning. Meaning comes into play only if the human actors are using their built-in meaning function and set up a coordinated meaning function which allows all participating experts to map properties of the empirical situation S into the used expressions as E+ (expressions classified as being actually true), or E- (expressions classified as being actually false), or AX (expressions having an abstract meaning space which can become true or false depending on the activated meaning function).

Examples:

  1. Two human actors in a situation S agree about the fact that there is ‘something’ which they classify as a ‘bird’. Thus someone could say ‘There is something which is a bird’ or ‘There is some bird’ or ‘There is a bird’. If there are two somethings which are ‘understood’ as being birds, then they could say ‘There are two birds’, or ‘There is a blue bird’ (if the one has the color ‘blue’) and ‘There is a red bird’, or ‘There are two birds; the one is blue and the other is red’. This shows that human actors can relate their ‘concrete perceptions’ to more abstract concepts and can map these concepts into expressions. According to Popper, in this ‘bottom-up’ way only numerical universal concepts can be constructed. But logically there are only two cases: concrete (one) or abstract (more than one). To say that there is a ‘something’ or to say that there is a ‘bird’ establishes a general concept which is independent of the number of its possible instances.
  2. These concrete somethings, each classified as a ‘bird’, can ‘move’ from one position to another by ‘walking’ or by ‘flying’. While ‘walking’ they change their position connected to the ‘ground’, while during ‘flying’ they ‘go up in the air’. If a human actor throws a stone up in the air, the stone will come back to the ground. A bird which goes up in the air can stay there and move around in the air for a long while. Thus ‘flying’ is different from ‘throwing something’ up in the air.
  3. The expression ‘A bird can fly’, understood as an expression which can be connected to the daily experience of bird-objects moving around in the air, can be empirically interpreted, but only if there exists such a mapping called a meaning function. Without a meaning function the expression ‘A bird can fly’ has no meaning as such.
  4. Other expressions like ‘X can fly’ or ‘A bird can Y’ or ‘Y(X)’ share the same fate: without a meaning function they have no meaning, but associated with a meaning function they can be interpreted. For instance, saying that the form of the expression ‘Y(X)’ shall be interpreted as ‘Predicate(Object)’, and that a possible ‘instance’ for the predicate could be ‘Can Fly’ and for the object ‘a Bird’, we would get ‘Can Fly(a Bird)’, translated as ‘The object ‘a Bird’ has the property ‘can fly”, or shortly ‘A Bird can fly’. This would usually be used as a possible candidate for the daily meaning function which relates this expression to those somethings which can move up in the air (see the sketch after this list).
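
The following small Python sketch illustrates such a meaning function in a toy fashion; the encoding of observed ‘facts’ as tuples and all names (situation, meaning, holds) are illustrative assumptions, not Popper’s:

# A situation S given as a set of observed facts (illustrative encoding).
situation = {("bird", "Peter"), ("can_fly", "Peter")}

# A (fragment of a) meaning function: expressions are mapped onto facts.
meaning = {
    "Peter is a bird": ("bird", "Peter"),
    "Peter can fly": ("can_fly", "Peter"),
}

def holds(statement: str) -> bool:
    # A statement 'holds' if its designated fact occurs in the situation;
    # without a meaning function entry its designation stays undefined (here: False).
    fact = meaning.get(statement)
    return fact is not None and fact in situation

print(holds("Peter can fly"))   # True
print(holds("Peter can swim"))  # False: no meaning function entry
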
Axioms and Empirical Interpretations

The basic idea with a system of axioms AX is, according to Popper, that the axioms as universal expressions represent a system of equations where the general terms can be substituted by certain values. The set of admissible values is different from the set of inadmissible values. The relation between those values which can be substituted for the terms is called satisfaction: the values satisfy the terms with regard to the relations! And Popper introduces the term ‘model‘ for a set of admissible values which can satisfy the equations. (cf. p.72f)

But Popper has difficulties with an axiomatic system interpreted as a system of equations, since it cannot be refuted by the falsification of its consequences; for these too must be analytic. (cf. p.73) His main problem with axioms is that “the concepts which are to be used in the axiomatic system should be universal names, which cannot be defined by empirical indications, pointing, etc. They can be defined if at all only explicitly, with the help of other universal names; otherwise they can only be left undefined. That some universal names should remain undefined is therefore quite unavoidable; and herein lies the difficulty…” (p.74)

On the other hand Popper knows that “…it is usually possible for the primitive concepts of an axiomatic system such as geometry to be correlated with, or interpreted by, the concepts of another system, e.g. physics…. In such cases it may be possible to define the fundamental concepts of the new system with the help of concepts which were originally used in some of the old systems.” (p.75)

But the translation of the expressions of one system (geometry) into the expressions of another system (physics) does not necessarily solve his problem of the non-empirical character of universal terms. Physics, especially, also uses universal or abstract terms which as such have no meaning. To verify or falsify physical theories one has to show how the abstract terms of physics can be related to observable matters which can be decided to be true or not.

Thus the argument goes back to the primary problem of Popper: that universal names cannot be directly interpreted in an empirically decidable way.

As the preceding examples (1) – (4) show, for human actors it is no principal problem to relate any kind of abstract expression to some concrete real matters. The solution to the problem is given by the fact that expressions E of some language L are never used in isolation! The usage of expressions is always bound to human actors using them as part of a language L which consists not only of the set of possible expressions E but also of the built-in meaning function μ which can map expressions into internal structures IS related to perceptions of the surrounding empirical situation S. Although these internal structures are processed internally in highly complex manners and are, as we know today, no 1-to-1 mappings of the surrounding empirical situation S, they are related to S, and therefore every kind of expression, even those with so-called abstract or universal concepts, can be mapped into something real if the human actors agree about such mappings!

Example:

Let us have a look at another example.

Take the system of axioms AX as the following schema: AX = {a+b=c}. This schema as such has no clear meaning. But if the experts interpret it as an operation ‘+’ with some arguments as part of a mathematical theory, then one can construct a simple (partial) model m as follows: m = {<1,2,3>, <2,3,5>}. The values are again given as a set of symbols which as such need not have a meaning, but in common usage they will be interpreted as numbers which can satisfy the general concept of the equation. In this secondary interpretation m becomes a logically true (partial) model for the axioms AX, whose empirical meaning is still unclear.
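
A few lines of Python can make this satisfaction test explicit (the function name satisfies is mine; the arithmetic reading of ‘+’ is the ‘secondary interpretation’ mentioned above):

def satisfies(triple) -> bool:
    # A triple <a,b,c> satisfies the axiom schema a+b=c under the
    # usual arithmetic interpretation of '+' and '='.
    a, b, c = triple
    return a + b == c

m = {(1, 2, 3), (2, 3, 5)}           # the (partial) model from the text
print(all(satisfies(t) for t in m))  # True: m satisfies AX
print(satisfies((2, 2, 5)))          # False: an inadmissible triple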

It is conceivable to use this formalism to describe empirical facts, like the description of a group of humans collecting some objects. Different people are bringing objects; the individual contributions are reported on a sheet of paper, and at the same time they put their objects in some box. Sometimes someone looks into the box and counts the objects. If it has been noted that A brought 1 egg and B brought 2 eggs, then according to the theory there should be 3 eggs in the box. But perhaps only 2 can be found. Then there would be a difference between the logically derived forecast of the theory, 1+2 = 3, and the empirically measured value, 1+2 = 2. If one defined every measurement a+b=c’ with c’ ≠ c as a contradiction of the theoretically given a+b=c, then we would have with ‘1+2 = 3′ & ~’1+2 = 3’ a logically derived contradiction which leads to the inconsistency of the assumed system. But in reality the usual reaction of the counting person would not be to declare the system inconsistent but rather to suggest that some unknown actor has, against the agreed rules, taken one egg from the box. To prove this suggestion he would have to find this unknown actor and show that he has taken the egg … perhaps not a simple task … And what will the next authority do: will the authority believe the suggestion of the counting person, or will the authority blame the counter that he himself has taken the missing egg? But would this make sense? Why should the counter write the notes about how many eggs have been delivered and thereby make a difference visible? …

Thus to interpret some abstract expression with regard to some observable reality is not a principal problem, but it can be unsolvable for purely practical reasons, leaving questions of empirical soundness open.

SOURCES

[1] Karl Popper, The Logic of Scientific Discovery, first published 1935 in German as Logik der Forschung, then 1959 in English by Basic Books, New York (more editions have been published later; I am using the eBook version of Routledge (2002))

HMI Analysis for the CM:MI paradigm. Part 3. Actor Story and Theories

Integrating Engineering and the Human Factor (info@uffmm.org)
eJournal uffmm.org ISSN 2567-6458, March 2, 2021,
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: March 2, 2021 13:59h (Minor corrections)

HISTORY

As described in the uffmm eJournal  the wider context of this software project is an integrated  engineering theory called Distributed Actor-Actor Interaction [DAAI] further extended to the Collective Man-Machine Intelligence [CM:MI] paradigm.  This document is part of the Case Studies section.

HMI ANALYSIS, Part 3: Actor Story and  Theories

Context

This text is preceded by the following texts:

Introduction

Having a vision is that moment where something really new in the whole universe gets an initial status in some real brain, which can enable other neural events which can possibly be translated into bodily events which finally can change the body-external outside world. If this possibility is turned into reality, then the outside world has been changed.

When human persons (groups of homo sapiens specimens) as experts — here acting as stakeholders and intended users in one group but in different roles! — have stated a problem and a vision document, then they have to translate these inevitably more fuzzy than clear ideas into the concrete terms of an everyday world, into something which can really work.

To enable real cooperation the experts have to generate a symbolic description of their vision (called a specification) — using an everyday language, possibly enhanced by special expressions — in a way that it can become clear to the whole group which kind of real events, actions and processes are intended.

In the general case an engineering specification describes concrete forms of entanglements of human persons which enable  these human persons to cooperate   in a real situation. Thereby the translation of  the vision inside the brain  into the everyday body-external reality happens. This is the language of life in the universe.

WRITING A STORY

To elaborate a usable specification can metaphorically be understood  as the writing of a new story: which kinds of actors will do something in certain situations, what kinds of other objects, instruments etc. will be used, what kinds of intrinsic motivations and experiences are pushing individual actors, what are possible outcomes of situations with certain actors, which kind of cooperation is  helpful, and the like. Such a story is  called here  Actor Story [AS].

COULD BE REAL

An Actor Story must be written in such a way that all participating experts can understand the language of the specification so that the content, the meaning of the specification, is either decidably real or can eventually become real. At least the starting point of the story should be classifiable as decidably actually real. What it means to be decidably actually real has to be defined and agreed upon between the participating experts before they start writing the Actor Story.

ACTOR STORY [AS]

An Actor Story assumes that the described reality is classifiable as a set of situations (states), and a situation as part of the Actor Story — abbreviated: situationAS — is understood as a set of expressions of some everyday language. Every expression which is part of a situationAS can be decided to be real (= true) in the understood real situation.

If the understood real situation is changing (by some event), then the describing situationAS has to be changed too; either some expressions have to be removed or have to be added.

Every kind of change in the real situation S* has to be represented in the actor story with the situationAS S symbolically in the format of a change rule:

X: If condition C is satisfied in S, then with probability π add Eplus to S and remove Eminus from S.

or as a formula:

S’_π = S + Eplus – Eminus

This reads as follows: if there is a situationAS S and there is a change rule X, then you can apply this change rule X with probability π to S if the condition of X is satisfied in S. In that case you have to add Eplus to S and remove Eminus from S. The result of these operations is the new (successor) state S’.

The expression C is satisfied in S means that all elements of C are also elements of S, written as C ⊆ S. The expression add Eplus to S means that the set Eplus is unified with the set S, written as Eplus ∪ S (or here: Eplus + S). The expression remove Eminus from S means that the set Eminus is subtracted from the set S, written as S – Eminus.
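
A minimal Python sketch of such a change rule, with states and rule parts as sets of expressions (the function name apply_rule and the example sets are mine):

import random

def apply_rule(S: set, C: set, Eplus: set, Eminus: set, pi: float = 1.0) -> set:
    # If the condition C is satisfied in S (C ⊆ S), then with probability pi
    # return the successor state S' = S + Eplus - Eminus; otherwise S stays.
    if C <= S and random.random() < pi:
        return (S | Eplus) - Eminus
    return S

S = {"A", "B"}
S_next = apply_rule(S, C={"A"}, Eplus={"C"}, Eminus={"B"})
print(S_next)  # {'A', 'C'}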

The concept of applying a change rule X to a given state S resulting in S’ is logically a kind of derivation. Given S and X you will derive the new S’ by applying X. One can write this as S,X ⊢X S’. The ‘meaning’ of the sign ⊢ is explained above.

Because every successor state S’ can become again a given state S onto which change rules X can be applied — written shortly as X(S)=S’, X(S’)=S”, … — the repeated application of change rules X can generate a whole sequence of states, written as SQ(S,X) = <S’, S”, … Sgoal>.

To realize such a derivation in the real world, outside of the thinking of the experts, one needs a machine, a computer — formally an automaton — which can read S and X documents and can then compute the derivation leading to S’. An automaton which does such a job is often called a simulator [SIM], abbreviated here as ∑. We could then write with more information:

S,X ⊢∑ S’

This reads: given a set S of states and a set X of change rules, we can derive by means of an actor story simulator ∑ a successor state S’.
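
A self-contained sketch of such a simulator in Python (rules as (C, Eplus, Eminus) triples; all names and the example rules are illustrative):

def simulate(S, rules, steps):
    # Derive the sequence SQ(S, X) = <S', S'', ...> by repeatedly applying
    # the first change rule whose condition is satisfied in the current state.
    sequence = [S]
    for _ in range(steps):
        for C, Eplus, Eminus in rules:
            if C <= sequence[-1]:                       # condition satisfied?
                sequence.append((sequence[-1] | Eplus) - Eminus)
                break
        else:
            break                                       # no rule applicable: stop
    return sequence

rules = [({"seed"}, {"plant"}, {"seed"}),
         ({"plant"}, {"flower"}, set())]
for state in simulate({"seed"}, rules, steps=2):
    print(state)   # {'seed'} -> {'plant'} -> {'plant', 'flower'}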

A Model M=<S,X>

In this context of a set S and a set of change rules X we can speak of a model M which is defined by these two sets.

A Theory T=<M,⊢∑>

Combining a model M with an actor story simulator ∑ yields a theory T which allows a set of derivations based on the model, written as SQ(S,X,⊢∑) = <S’, S”, … Sgoal>. Every derived final state Sgoal in such a derivation is called a theorem of T.

An Empirical Theory Temp

An empirical theory Temp is possible if there exists a theory T with a group of experts using this theory, where these experts can interpret the expressions used in theory T by their built-in meaning functions in such a way that they can always decide whether the expressions are related to a real situation or not.

Evaluation [ε]

If one generates an Actor Story Theory [TAS], then it can be of practical importance to get some measure of how good this theory is. Because measurement is always an operation of comparison between the subject x to be measured and some agreed standard s, one has to clarify which kind of standard for being good is available. In the general case the only possible source of standards are the experts themselves. In the context of an Actor Story the experts have agreed on some vision [V] which they think to be a better state than a given state S classified as a problem [P]. These assumptions allow a possible evaluation of a given state S in the ‘light’ of an agreed vision V as follows:

ε: V × S —> [0,100]
ε(V,S) = (|V ∩ S| / |V|) · 100%

This reads as follows: the evaluation ε maps the sets V and S onto the number of elements of V which are included in S, converted into a percentage of the total number of elements of V. Thus if no element of V is included in the set S, then 0% of the vision is realized; if all elements are included, then 100%, etc. The more fine-grained the set V is, the more fine-grained the evaluation can be.
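
In Python this evaluation is essentially a one-liner (the function name evaluate is mine; V is assumed to be non-empty):

def evaluate(V: set, S: set) -> float:
    # ε(V,S) = |V ∩ S| / |V| · 100: the percentage of the vision V realized in S.
    return 100.0 * len(V & S) / len(V)

V = {"goal1", "goal2", "goal3", "goal4"}
S = {"goal1", "goal3", "something-else"}
print(evaluate(V, S))  # 50.0 -> half of the vision is realized in state S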

An Evaluated Theory Tε=<M,⊢∑,ε>

If one combines the concept of a theory T with the concept of evaluation ε, then one can use the evaluation in combination with the derivation in the way that every state in a derivation SQ(S,X,⊢∑) = <S’, S”, … Sgoal> will additionally be evaluated; thus one gets sequences of pairs as follows:

SQ(S,X,⊢∑,ε) = <(S’,ε(V,S’)), (S”,ε(V,S”)), …, (Sgoal, ε(V,Sgoal))>

In the ideal case Sgoal is evaluated as 100% ‘good’. In real cases 100% is only an ideal value which usually will only be approximated up to some threshold.

An Evaluated Theory Tε with Algorithmic Intelligence Tε,α=<M,⊢∑,ε,α>

Because every theory defines a so-called problem space, which is here enhanced by some evaluation function, one can add an additional operation α (realized by an algorithm) which can repeat the simulator-based derivations enhanced with the evaluations, in order to identify those sets of theorems which qualify as the best theorems according to some given criteria. This operation α is here called the algorithmic intelligence of an actor story [αAS]. The existence of such an algorithmic intelligence of an actor story [αAS] allows the introduction of another derivation concept:

S,X ⊢∑,ε,α S* ⊆  S’

This reads as follows: given a set S and a set X, an evaluated theory with algorithmic intelligence Tε,α can derive a subset S* of all possible theorems S’, where S* matches certain given criteria within V.
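
As a toy stand-in for the operation α one could enumerate all bounded rule sequences and keep the best-evaluated end states; a brute-force sketch, assuming the set-based states and rules from above (all names are mine, a non-empty vision V, and a real αAS would of course use a smarter search):

from itertools import product

def best_states(S0, rules, V, depth):
    # Try every rule sequence up to length 'depth' and return the states
    # which score best against the vision V, together with their score.
    def ev(S):
        return len(V & S) / len(V)
    best, best_score = [], -1.0
    for seq in product(range(len(rules)), repeat=depth):
        S = S0
        for i in seq:
            C, Eplus, Eminus = rules[i]
            if C <= S:                      # apply the rule only if applicable
                S = (S | Eplus) - Eminus
        score = ev(S)
        if score > best_score:
            best, best_score = [S], score
        elif score == best_score and S not in best:
            best.append(S)
    return best, best_score

rules = [({"a"}, {"b"}, set()), ({"b"}, {"goal"}, set())]
print(best_states({"a"}, rules, V={"goal"}, depth=2))
# ([{'a', 'b', 'goal'}], 1.0) -> S* contains the state realizing the vision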

WHERE WE ARE NOW

As should have become clear by now, the work of HMI analysis is the elaboration of a story which can be done in the format of different kinds of theories, all of which can be simulated and evaluated. Even better, the only language you have to know is your everyday language, your mother tongue (mathematics is understood here as a sub-language of the everyday language, which in some special cases can be of some help). For this theory every human person — of all ages! — can be a valuable colleague to help you understand possible futures better. Because all parts of an actor story theory are plain texts, everybody can read and understand everything. And if different groups of experts have investigated different aspects of a common field, you can merge all texts by only ‘pressing a button’, and you will immediately see how all these texts either work together or show discrepancies. The latter effect is a great opportunity to improve learning and understanding! Together we represent some of the power of life in the universe.

CONTINUATION

See here.

CASE STUDY 1. FROM DAAI to ACA. Transforming HMI into ACA (Applied Cultural Anthropology)

eJournal: uffmm.org
ISSN 2567-6458, 28.July 2020
Email: info@uffmm.org

Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Abstract

The collection of papers in the Case Studies Section deals with the possible applications of the general concept of a GCA Generative Cultural Anthropology to all kinds of cultural processes. The GCA paradigm has been derived from the formalized DAAI Distributed Actor-Actor Interaction theory, which in turn is a development based on the common HMI Human Machine Interaction paradigm reformulated within the Systems Engineering paradigm. The GCA is a very general and strong theory paradigm, but, saying this, it is for most people difficult to understand, because it is highly interdisciplinary, and it needs some formal technical skills, which are not too common. During the work in the last three months it became clear that the original HMI and DAAI approach can also be understood as a case of something which one could call ACA Applied Cultural Anthropology as part of a GCA. The concept of ACA is more or less directly understandable for most people.

case1-daai-aca-v1

REVIEW OF MASLOW (1966) The Psychology of Science, Part II

eJournal: uffmm.org,
ISSN 2567-6458,
8.-21.June 2020
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

CONTEXT

In this review I discuss the ideas of the book The Psychology of Science (1966) by A. Maslow. His book is in a certain sense outstanding because its point of view is in one respect inspired by an artificial borderline between the mainstream view of empirical science and the mainstream view of psychotherapy. In another respect the book discusses a possible integrated view of empirical science with psychotherapy as an integral part. The point of view of the reviewer is the new paradigm of a Generative Cultural Anthropology [GCA]. Part II of this review reports some considerations reflecting the relationship between the point of view of Maslow and the point of view of GCA.

This review is part of the general review section of the uffmm.org blog.

More extended version (21.June 2020): reviews-maslow1966-II-v09

See here (8.Juni 2020): reviews-maslow1966-II-v08

See here (7.June 2020): reviews-maslow1966-II-v07


AAI THEORY V2 –A Philosophical Framework

eJournal: uffmm.org,
ISSN 2567-6458, 22.February 2019
Email: info@uffmm.org
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.de

Last change: 23.February 2019 (continued the text)

Last change: 24.February 2019 (extended the text)

CONTEXT

In the overview of the AAI paradigm version 2 you can find this section  dealing with the philosophical perspective of the AAI paradigm. Enjoy reading (or not, then send a comment :-)).

THE DAILY LIFE PERSPECTIVE

The perspective of Philosophy is rooted in the everyday life perspective. With our body we occur in a space with other bodies and objects; different features and properties are associated with the objects, different kinds of relations, and changes from one state to another.

From the empirical sciences we have learned to see more details of the everyday life with regard to detailed structures of matter and biological life, with regard to the long history of the actual world, with regard to many interesting dynamics within the objects, within biological systems, as part of earth, the solar system and much more.

A certain aspect of the empirical view of the world is the fact that some biological systems called ‘homo sapiens’, which emerged only some 300,000 years ago in Africa, show a special property usually called ‘consciousness’, combined with the ability to ‘communicate by symbolic languages’.

Figure 1: General setting of the homo sapiens species (simplified)

As we know today the consciousness is associated with the brain, which in turn is embedded in the body, which  is further embedded in an environment.

Thus those ‘things’ about which we are ‘conscious’ are not ‘directly’ the objects and events of the surrounding real world but ‘constructions of the brain’ based on actual external and internal sensor inputs as well as on already collected ‘knowledge’. To qualify the ‘conscious things’ as ‘different’ from the assumed ‘real things’ ‘out there’, it is common to speak of these brain-generated virtual things either as ‘qualia’ or — more often — as ‘phenomena’, which are different from the assumed possible real things somewhere ‘out there’.

PHILOSOPHY AS FIRST PERSON VIEW

‘Philosophy’ has many facets. One of them enters the scene if we take the insight into the general virtual character of our primary knowledge as the primary and irreducible perspective of knowledge. Every other, more special kind of knowledge is necessarily a subspace of this primary phenomenological knowledge.

There is already from the beginning a fundamental distinction possible in the realm of conscious phenomena (PH): there are phenomena which can be ‘generated’ by the consciousness ‘itself’ — mostly called ‘by will’ — and those which occur and disappear without a direct influence of the consciousness, which are in a certain basic sense ‘given’ and ‘independent’, appearing and disappearing ‘on their own’. It is common to call these independent phenomena ’empirical phenomena’; they represent a proper subset of all phenomena: PH_emp ⊂ PH. Attention: these ’empirical phenomena’ are still ‘phenomena’, virtual entities generated by the brain inside the brain, not directly controllable ‘by will’.

There is a further basic distinction which differentiates the empirical phenomena into those PH_emp_bdy which are controlled by some processes in the body (being tired, being hungry, having pain, …) and those PH_emp_ext which are controlled by objects and events in the environment beyond the body (light, sounds, temperature, surfaces of objects, …). Both subsets of empirical phenomena are disjoint: PH_emp_bdy ∩ PH_emp_ext = ∅. Because phenomena usually occur associated with typical other phenomena, there are ‘clusters’/ ‘patterns’ of phenomena which ‘represent’ possible events or states.

Modern empirical science has ‘refined’ the concept of an empirical phenomenon by introducing  ‘standard objects’ which can be used to ‘compare’ some empirical phenomenon with such an empirical standard object. Thus even when the perception of two different observers possibly differs somehow with regard to a certain empirical phenomenon, the additional comparison with an ’empirical standard object’ which is the ‘same’ for both observers, enhances the quality, improves the precision of the perception of the empirical phenomena.

From these considerations we can derive the following informal definitions:

  1. Something is ‘empirical‘ if it is the ‘real counterpart’ of a phenomenon which can be observed by other persons in my environment too.
  2. Something is ‘standardized empirical‘ if it is empirical and can additionally be associated with a before introduced empirical standard object.
  3. Something is ‘weak empirical‘ if it is the ‘real counterpart’ of a phenomenon in my body which can potentially be observed by other persons as causally correlated with the phenomenon.
  4. Something is ‘cognitive‘ if it is the counterpart of a phenomenon which is not empirical in one of the meanings (1) – (3).

It is a common task within philosophy to analyze the space of the phenomena with regard to its structure as well as its dynamics. Until today there exists no completely accepted theory for this subject. This indicates that it seems to be a ‘hard’ task.

BRIDGING THE GAP BETWEEN BRAINS

As one can see in figure 1 a brain in a body is completely disconnected from the brain in another body. There is a real, deep ‘gap’ which has to be overcome if the two brains want to ‘coordinate’ their ‘planned actions’.

Luckily the emergence of homo sapiens with the new extended property of ‘consciousness’ was accompanied by another exciting property, the ability to ‘talk’. This ability enabled the creation of symbolic languages which can help two disconnected brains to have some exchange.

But ‘language’ does not consist of sounds or a ‘sequence of sounds’ only; the special power of a language is the further property that sequences of sounds can be associated with ‘something else’ which serves as the ‘meaning’ of these sounds. Thus we can use sounds to ‘talk about’ other things like objects, events, properties etc.

The single brain ‘knows’ about the relationship between some sounds and ‘something else’ because the brain is able to ‘generate relations’ between brain structures for sounds and brain structures for something else. These relations are real connections in the brain. Therefore sounds can be related to ‘something else’, and certain objects, events etc. can become related to certain sounds. But these ‘meaning relations’ can only ‘bridge the gap’ to another brain if both brains are using the same ‘mapping’, the same ‘encoding’. This is only possible if the two brains with their bodies share a real world situation RW_S where the perceptions of both brains are associated with the same parts of the real world between both bodies. If this is the case, the perceptions P(RW_S) can become somehow ‘synchronized’ by the shared part of the real world, which in turn is transformed into brain structures, P(RW_S) —> B_S, which represent in the brain the stimulating aspects of the real world. These brain structures B_S can then be associated with some sound structures B_A, written as a relation MEANING(B_S, B_A). Such a relation realizes an encoding which can be used for communication. Communication uses sound sequences exchanged between brains via the body and the air of an environment as ‘expressions’ which can be recognized as part of a learned encoding, and this enables the receiving brain to identify a possible meaning candidate.

DIFFERENT MODES TO EXPRESS MEANING

Following the evolution of communication one can distinguish four important modes of expressing meaning, which will be used in this AAI paradigm.

VISUAL ENCODING

A direct way to express the internal meaning structures of a brain is to use a ‘visual code’ which represents by some kinds of drawing the visual shapes of objects in the space, some attributes of  shapes, which are common for all people who can ‘see’. Thus a picture and then a sequence of pictures like a comic or a story board can communicate simple ideas of situations, participating objects, persons and animals, showing changes in the arrangement of the shapes in the space.

Figure 2: Pictorial expressions representing aspects of the visual and the auditory sense modes

Even with a simple visual code one can generate many sequences of situations which all together can ‘tell a story’. The basic elements are a presupposed ‘space’ with possible ‘objects’ in this space with different positions, sizes, relations and properties. One can even enhance these visual shapes with written expressions of a spoken language. The sequence of the pictures additionally represents some ‘temporal order’. ‘Changes’ can be encoded by ‘differences’ between consecutive pictures.

FROM SPOKEN TO WRITTEN LANGUAGE EXPRESSIONS

Later in the evolution of language, much later, homo sapiens learned to translate the spoken language L_s into a written format L_w, using signs for parts of words or even whole words. The possible meaning of these written expressions was no longer directly ‘visible’. The meaning was now only available to those people who had learned how these written expressions are associated with intended meanings encoded in the heads of all language participants. Thus merely hearing or reading a language expression would tell the reader either ‘nothing’, or some ‘possible meanings’, or a ‘definite meaning’.

Figure 3: A written textual version in parallel to a pictorial version

If one has only the written expressions, then one has to ‘know’ with which ‘meaning in the brain’ the expressions have to be associated. And what is very special about the written expressions compared to the pictorial expressions is the fact that the elements of the pictorial expressions are always very ‘concrete’ visual objects, while the written expressions are ‘general’ expressions allowing many different concrete interpretations. Thus the expression ‘person’ can be associated with many thousands of different concrete objects; the same holds for the expressions ‘road’, ‘moving’, ‘before’ and so on. Thus the written expressions are like ‘manufacturing instructions’ for searching possible meanings and configuring these meanings into a ‘reasonable’ complex matter. And because written expressions are in general rather ‘abstract’/ ‘general’, allowing numerous possible concrete realizations, they are very ‘economic’: they use minimal expressions to build many complex meanings. Nevertheless the daily experience with spoken and written expressions shows that they are continuously candidates for false interpretations.

FORMAL MATHEMATICAL WRITTEN EXPRESSIONS

Besides the written expressions of everyday languages one can observe later in the history of written languages the steady development of a specialized version called ‘formal languages’ L_f, with many different domains of application. Here I am focusing on the formal written languages which are used in mathematics, as well as some pictorial elements to ‘visualize’ the intended ‘meaning’ of these formal mathematical expressions.

Fig. 4: Properties of an acyclic directed graph with nodes (vertices) and edges (directed edges = arrows)

One prominent concept in mathematics is the concept of a ‘graph’. In the basic version there are only some ‘nodes’ (also called vertices) and some ‘edges’ connecting the nodes. Formally one can represent these edges as ‘pairs of nodes’. If N represents the set of nodes, then N × N represents the set of all pairs of these nodes.

In a more specialized version the edges are ‘directed’ (like a ‘one-way road’) and can also ‘loop back’ to a node occurring ‘earlier’ in the graph. If such back-looping arrows occur, the graph is called a ‘cyclic graph’.
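
A small Python sketch of this graph concept, with edges as pairs of nodes and a test for back-looping arrows (the names N, E and has_cycle are illustrative):

def has_cycle(nodes, edges):
    # Depth-first search: a directed graph is cyclic if some arrow
    # leads back to a node on the current search path.
    succ = {n: [b for a, b in edges if a == n] for n in nodes}
    visited, on_path = set(), set()
    def dfs(n):
        visited.add(n)
        on_path.add(n)
        for m in succ[n]:
            if m in on_path or (m not in visited and dfs(m)):
                return True
        on_path.discard(n)
        return False
    return any(dfs(n) for n in nodes if n not in visited)

N = {"n1", "n2", "n3"}
E = {("n1", "n2"), ("n2", "n3"), ("n3", "n1")}  # the last edge loops back
print(has_cycle(N, E))  # True: n1 -> n2 -> n3 -> n1 is a cycle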

Fig. 5: Directed cyclic graph extended to represent ‘states of affairs’

If one wants to use such a graph to describe some ‘states of affairs’ with their possible ‘changes’, one can ‘interpret’ a ‘node’ as a state of affairs and an arrow as a change which turns one state of affairs S into a new one S’ which is minimally different from the old one.

As a state of affairs I understand here a ‘situation’ embedded in some ‘context’ presupposing some common ‘space’. The possible ‘changes’ represented by arrows presuppose some dimension of ‘time’. Thus if a node n’ follows a node n as indicated by an arrow, then the state of affairs represented by n’ is to be interpreted as following the state of affairs represented by n with regard to the presupposed time T ‘later’, or n < n’, with ‘<‘ as a symbol for a temporal ordering relation.

Fig. 6: Example of a state of affairs with a 2-dimensional space configured as a grid with a black and a white token

The space can be any kind of space. If one assumes as an example a 2-dimensional space configured as a grid — as shown in figure 6 — with two tokens at certain positions, one can introduce a language to describe the ‘facts’ which constitute the state of affairs. In this example one needs ‘names for objects’, ‘properties of objects’ as well as ‘relations between objects’. A possible finite set of facts for situation 1 could be the following:

  1. TOKEN(T1), BLACK(T1), POSITION(T1,1,1)
  2. TOKEN(T2), WHITE(T2), POSITION(T2,2,1)
  3. NEIGHBOR(T1,T2)
  4. CELL(C1), POSITION(1,2), FREE(C1)

‘T1’, ‘T2’, as well as ‘C1’ are names of objects, ‘TOKEN’, ‘BLACK’ etc. are names of properties, and ‘NEIGHBOR’ is a relation between objects. This results in the equation:

S1 = {TOKEN(T1), BLACK(T1), POSITION(T1,1,1), TOKEN(T2), WHITE(T2), POSITION(T2,2,1), NEIGHBOR(T1,T2), CELL(C1), POSITION(1,2), FREE(C1)}

These facts describe the situation S1. If it is important to describe possible objects ‘external to the situation’ as factors which can cause some changes, then one can describe these objects as a set of facts in a separated ‘context’. In this example this could be two players which can move the black and white tokens and thereby cause a change of the situation. What is the situation and what belongs to a context is somewhat arbitrary. If one describes the agriculture of some region, one usually would not count the planets and the atmosphere as part of this region, but one knows that e.g. the sun can severely influence the situation in combination with the atmosphere.

Fig. 7: Change of a state of affairs given as a state which will be enhanced by a new object

Let us stay with a state of affairs with only a situation, without a context. The state of affairs is a ‘state’. In the example shown in figure 6 I assume a ‘change’ caused by the insertion of a new black token at position (2,2). Written in the language of facts L_fact we get:

  1. TOKEN(T3), BLACK(T3), POSITION(2,2), NEIGHBOR(T3,T2)

Thus the new state S2 is generated out of the old state S1 by unifying S1 with the set of new facts: S2 = S1 ∪ {TOKEN(T3), BLACK(T3), POSITION(2,2), NEIGHBOR(T3,T2)}. All the other facts of S1 are still ‘valid’. In a more general manner one can introduce a change expression with the following format:

<S1, S2, add(S1,{TOKEN(T3), BLACK(T3), POSITION(2,2), NEIGHBOR(T3,T2)})>

This can be read as follows: The follow-up state S2 is generated out of the state S1 by adding to the state S1 the set of facts { … }.

This layout of a change expression can also be used if some facts have to be modified or removed from a state. If for instance  by some reason the white token should be removed from the situation one could write:

<S1, S2, subtract(S1,{TOKEN(T2), WHITE(T2), POSITION(2,1)})>

Another notation for this is S2 = S1 – {TOKEN(T2), WHITE(T2), POSITION(2,1)}.

The resulting state S2 would then look like:

S2 = {TOKEN(T1), BLACK(T1), POSITION(T1,1,1), CELL(C1), POSITION(1,2), FREE(C1)}

And a combination of subtraction of facts and addition of facts would read as follows:

<S1, S2, subtract(S1,{TOKEN(T2), WHITE(T2), POSITION(2,1)}), add(S1,{TOKEN(T3), BLACK(T3), POSITION(2,2)})>

This would result in the final state S2:

S2 = {TOKEN(T1), BLACK(T1), POSITION(T1,1,1), CELL(C1), POSITION(1,2), FREE(C1),TOKEN(T3), BLACK(T3), POSITION(2,2)}
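
The whole S1 → S2 change can be reproduced with ordinary set operations in Python; a sketch in which facts are encoded as tuples (this encoding, including writing POSITION with its object name, is an illustrative assumption):

# The facts of state S1, encoded as tuples.
S1 = {("TOKEN","T1"), ("BLACK","T1"), ("POSITION","T1",1,1),
      ("TOKEN","T2"), ("WHITE","T2"), ("POSITION","T2",2,1),
      ("NEIGHBOR","T1","T2"),
      ("CELL","C1"), ("POSITION",1,2), ("FREE","C1")}

subtract = {("TOKEN","T2"), ("WHITE","T2"), ("POSITION","T2",2,1)}
add      = {("TOKEN","T3"), ("BLACK","T3"), ("POSITION",2,2)}

S2 = (S1 - subtract) | add
# Higher-order facts depend on their object facts (see the next paragraph):
# with T2 removed, NEIGHBOR(T1,T2) no longer holds and has to be re-computed.
S2 = {f for f in S2 if not (f[0] == "NEIGHBOR" and "T2" in f)}
print(len(S1), "->", len(S2))  # 10 -> 9 facts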

These simple examples demonstrate another fact: while facts about objects and their properties are independent of each other, relational facts depend on the state of their object facts. The relation of neighborhood, e.g., depends on the participating neighbors. If — as in the example above — the object token T2 disappears, then the relation ‘NEIGHBOR(T1,T2)’ no longer holds. This points to a hierarchy of dependencies with the ‘basic facts’ at the ‘root’ of a situation and all the other facts ‘above’ the basic facts, or ‘higher’, depending on the basic facts. Thus ‘higher-order’ facts should be added only for the actual state and have to be ‘re-computed’ anew for every follow-up state.

If one specified a context for state S1 saying that there are two players, and one allows for each player actions like ‘move’, ‘insert’ or ‘delete’, then one could make the change from state S1 to state S2 more precise. Assuming the following facts for the context:

  1. PLAYER(PB1), PLAYER(PW1), HAS-THE-TURN(PB1)

In that case one could enhance the change statement in the following way:

<S1, S2, PB1,insert(TOKEN(T3,2,2)),add(S1,{TOKEN(T3), BLACK(T3), POSITION(2,2)})>

This would read as follows: given state S1 the player PB1 inserts a  black token at position (2,2); this yields a new state S2.

With or without a specified context, but with regard to a set of possible change statements, it can be the case — and this is the usual case — that there is more than one option for what can be changed. Some of the main types of changes are the following ones:

  1. RANDOM
  2. NOT RANDOM, which can be specified as follows:
    1. With PROBABILITIES (classical, quantum probability, …)
    2. DETERMINISTIC

Furthermore, if the causing object is an actor which can adapt structurally or even learn locally, then this actor can appear in some time period like a deterministic system, in different collected time periods as an ‘oscillating system’ with different behavior, or even as a random system with changing probabilities. This makes the forecast of systems with adaptive and/or learning components rather difficult.

Another aspect results from the fact that there can be states either with one actor which can cause more than one action in parallel, or a state with multiple actors which can act simultaneously. In both cases the resulting total change eventually has to be ‘filtered’ through some additional rules telling what is ‘possible’ in a state and what is not. Thus if in the example of figure 6 both players want to insert a token at position (2,2) simultaneously, then either the rules of the game would forbid such a simultaneous action, or — as in a computer game — simultaneous actions are allowed but the ‘geometry of a 2-dimensional space’ would not allow two different tokens at the same position.

Another aspect of change is the dimension of time. If the time dimension is not explicitly specified, then a change from some state S_i to a state S_j only marks the follow-up state S_j as later. There is no specific ‘metric’ of time. If instead a certain ‘clock’ is specified, then all changes have to be aligned with this ‘overall clock’. Then one can specify at what ‘point of time t’ a change will begin and at what point of time t’ it will end. If there is more than one change specified, then these different changes can have different timings.

THIRD PERSON VIEW

Up until now the point of view describing a state and the possible changes of states has been the so-called 3rd-person view: what a person can perceive if it is part of a situation and is looking into the situation. It is explicitly assumed that such a person can perceive only the ‘surface’ of objects, including all kinds of actors. Thus if the driver of a car steers his car in a certain direction, then the ‘observing person’ can see what happens but cannot ‘look into’ the driver, ‘why’ he is steering in this way or ‘what he is planning next’.

A 3rd-person view is assumed to be the ‘normal mode of observation’ and it is the normal mode of empirical science.

Nevertheless there are situations where one wants to ‘understand’ a bit more of ‘what is going on in a system’. Thus a biologist can be interested in understanding which mechanisms ‘inside a plant’ are responsible for the growth of a plant or for some kinds of plant dysfunctions. There are similar cases for understanding the behavior of animals and men. For instance, it is an interesting question what kinds of ‘processes’ are available in an animal to ‘navigate’ in the environment across distances. Even if the biologist can look ‘into the body’, even ‘into the brain’, the cells as such do not tell a sufficient story. One has to understand the ‘functions’ which are enabled by the billions of cells; these functions are complex relations associated with certain ‘structures’ and certain ‘signals’. For this it is necessary to construct an explicit formal (mathematical) model/theory representing all the necessary signals and relations which can be used to ‘explain’ the observable behavior and which ‘explains’ how the billions of cells enable such a behavior.

In a simpler, ‘relaxed’ kind of modeling one would not take into account the properties and behavior of the ‘real cells’, but one would limit the scope to building a formal model which suffices to explain the observable behavior.

This kind of approach to set up models of possible ‘internal’ (as such hidden) processes of an actor can extend the 3rd-person view substantially. These models are called in this text ‘actor models (AM)’.

HIDDEN WORLD PROCESSES

In this text all reported 3rd-person observations are called ‘actor story’, independent whether they are done in a pictorial or a textual mode.

As has been pointed out such actor stories are somewhat ‘limited’ in what they can describe.

It is possible to extend such an actor story (AS)  by several actor models (AM).

An actor story defines the situations in which an actor can occur. This  includes all kinds of stimuli which can trigger the possible senses of the actor as well as all kinds of actions an actor can apply to a situation.

The actor model of such an actor has to enable the actor to handle all these assumed stimuli as well as all these actions in the expected way.

While the actor story can be checked as to whether it describes a process in an empirically ‘sound’ way, the actor models are either ‘purely theoretical’ but ‘behaviorally sound’, or they are also empirically sound with regard to the body of a biological or a technological system.

A serious challenge is the occurrence of adaptive and/or locally learning systems. While the actor story is a finite description of possible states and changes, adaptive and/or locally learning systems can change their behavior while ‘living’ in the actor story. These changes in behavior cannot be completely ‘foreseen’!

COGNITIVE EXPERT PROCESSES

According to the preceding considerations a homo sapiens as a biological system has, besides many other properties, at least a consciousness and the ability to talk and thereby to communicate with symbolic languages.

Looking at the basic modes of an actor story (AS) one can infer some basic concepts inherently present in the communication.

Without having an explicit model of the internal processes in a homo sapiens system one can infer some basic properties from the communicative acts:

  1. Speaker and hearer presuppose a space within which objects with properties can occur.
  2. Changes can happen which presuppose some temporal ordering.
  3. There is a distinction between concrete things and abstract concepts which correspond to many concrete things.
  4. There is an implicit hierarchy of concepts starting with concrete objects at the ‘root level’, given as occurrences in a concrete situation. Concepts of ‘higher levels’ refer to concepts of lower levels.
  5. There are different kinds of relations between objects on different conceptual levels.
  6. The usage of language expressions presupposes structures which can be associated with the expressions as their ‘meanings’. The mapping between expressions and their meaning has to be learned by each actor separately, but in cooperation with all the other actors with whom the actor wants to share his meanings.
  7. It is assumed that all the processes which enable the generation of concepts, concept hierarchies, relations, meaning relations etc. are unconscious! In consciousness one can use parts of the unconscious structures and processes only under strictly limited conditions.
  8. To ‘learn’ dedicated matters and to be ‘critical’ about the quality of what one is learning requires some discipline, some learning methods, and a ‘learning-friendly’ environment. There is no guaranteed method of success.
  9. There are lots of unconscious processes which can influence understanding, learning, planning, decisions etc. and which until today have not been sufficiently cleared up.