All posts by Gerd Doeben-Henisch


Author: Gerd Doeben-Henisch

Time: Nov 12, 2023 — Nov 12, 2023


–!! This is not yet finished !!–


This text belongs to the topic Philosophy (of Science).


The public arrival of a new type of ‘text generator’ in November 2022, called ‘chatGPT’ (it is not the only one around), caused an explosion of publications and usage around the world. The author has been working in this field for about 40 years, and nothing like this has ever happened before. What has happened? Is this the beginning of the end of humans as the main actors on this planet (yes, we know, not really the best ones so far), or is there something around and in between which we overlook, captivated by these new text generators?

Reading many papers since that event, talking with people, experimenting directly with chatGPT4, continuing to work with theories, and working with people in a city on new forms of ‘citizens at work in their community’, a picture slowly formed in my head of how it could perhaps be possible to ‘benchmark’ text generators directly against human activities.

After several first trials, everything came together when I gave a talk at the Goethe University in Frankfurt on Friday, Nov 10. [1] There was a wonderful audience of elderly people from the so-called University of the 3rd Age … a bit different from young students 🙂

One idea hit me like a bolt of lightning when I wrote it down afterwards: it is the fundamental role of literature for our understanding of the world and of people which will be completely eliminated by the use of text generators. The amount of written text will explode in the near future, but at the same time the meaning of the world will vanish more and more. You will see letters, but there will be no more meaning behind them. And with the meaning, the world of humans will disappear. You won’t even be able to know yourself anymore.

Clearly, this can only happen if we replace our own thinking and writing completely with text generators.

Is the author of this text a bit ‘ill’ to write down such ideas, or are there arguments which make it clear why this could be the fate of humans after the year 2023?


[1] See the text written down after the speech:

chatGPT4: Cognitive Science, HCI, Rationality (Stanovich 2016), Dual-Process Theory (Kahneman)

Last change: Nov 1, 2023


This text belongs to the theme ‘chats with chatGPT4’.

After attempting a review of the book by Stanovich et al. (2016), I checked what chatGPT4 could possibly contribute. As expected, chatGPT4 replicated all the common concepts used in the context of cognitive science and of Stanovich as well as Kahneman. None of the real questions about methodological details were answered. Nevertheless, chatGPT4 lists all the main concepts one should know when dealing with these subjects, but that’s it. This is not really science, and it has nothing to do with philosophy or philosophy of science.



What do you know about cognitive science?


Cognitive science is an interdisciplinary field that studies the mind, intelligence, and behavior from multiple perspectives. At its core, it seeks to understand how the human mind works and how it might be replicated or modeled, including through artificial intelligence. The field encompasses a broad range of topics, from the basic processes of cognition to complex interactions with the environment. Here are some key aspects of cognitive science:

1. **Interdisciplinary Nature**: Cognitive science integrates methods and theories from several disciplines, including psychology, neuroscience, linguistics, philosophy, computer science, anthropology, and biology.
2. **Research Areas**:
– Perception: Understanding how organisms interpret sensory data.
– Attention: Investigating how the mind allocates cognitive resources to various stimuli.
– Memory: Exploring the processes involved in storing and retrieving information.
– Language: Studying the cognitive processes underlying language acquisition, comprehension, and production.
– Problem Solving and Decision Making: Analyzing how humans and other animals think, make decisions, and solve problems.
– Consciousness: Examining the nature of subjective experience and self-awareness.
3. **Cognitive Neuroscience**: This branch of cognitive science looks at how brain structures and functions are related to cognitive processes. It makes use of technologies such as fMRI and EEG to study the brain in action.
4. **Computational Models**: Cognitive scientists often construct computational models to simulate aspects of human cognition. These models can range from symbolic representations that mimic logic and reasoning to connectionist models that emulate neural networks.
5. **Philosophy of Mind**: Philosophical questions about the nature of mind, the relationship between mind and body, and the nature of consciousness are central to cognitive science. Philosophy contributes questions about the nature of knowledge, perception, and the structure of language, influencing cognitive theories.
6. **Human-Computer Interaction (HCI)**: This field draws on cognitive science principles to improve the design of user interfaces and enhance user experience with technology.
7. **Education and Application**: Insights from cognitive science have been applied to education, informing teaching methods and instructional design to align with how the brain learns best.
8. **Artificial Intelligence**: Cognitive science and AI are deeply intertwined, with insights from the study of human cognition informing the development of algorithms and systems that aim to replicate aspects of human thought.

Cognitive science continues to evolve, with new findings and technologies perpetually reshaping our understanding of the human mind. As it progresses, it offers potential advancements not only in AI but also in how we approach mental health, education, design, and a multitude of other areas that require an understanding of how we think and learn.
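The contrast between symbolic and connectionist models mentioned above can be made concrete with a toy example. What follows is only an illustrative sketch, not a model from any of the works discussed here: a single perceptron unit, the simplest ‘connectionist’ building block, trained on the logical OR function. All names, the learning rate, and the number of epochs are assumptions chosen for the illustration.

```python
# Toy connectionist model: one perceptron unit learning logical OR.

def step(x):
    """Threshold activation: fire (1) if the weighted input is non-negative."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron learning rule: nudge weights by the prediction error."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# OR is linearly separable, so the perceptron converges on it.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(samples)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in samples]
print(predictions)  # → [0, 1, 1, 1]
```

The point of the toy is the contrast: nothing in `w` and `b` ‘states’ the OR rule symbolically; the behavior emerges from numerically adjusted connection weights.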


You are listing many disciplines engaged in cognitive science. How are all these disciplines interrelated?


The disciplines within cognitive science are interrelated through their shared goal of understanding the mind and cognition. Here’s how each contributes to the field and how they intersect with each other:

1. **Psychology**: Psychology contributes experimental methods and theories about behavior and mental processes. It often serves as the basis for hypotheses about cognitive mechanisms that other disciplines then further investigate.
2. **Neuroscience**: Neuroscience provides insights into the brain structures and neural circuits that underlie cognitive functions. It adds a biological perspective to theories of cognition that can confirm, refute, or refine psychological theories.
3. **Linguistics**: Linguistics studies the structure, use, and psychology of language, which is a complex cognitive ability. Theories of syntax, semantics, and language acquisition inform models of how the brain processes language.
4. **Philosophy**: Philosophy addresses fundamental questions about the nature of knowledge, mind, reality, and values. It offers frameworks for understanding consciousness, intentionality, and the mind-body problem, which can inform empirical research.
5. **Computer Science**: Computer science, particularly artificial intelligence, contributes algorithms and computational models that mimic cognitive processes. The feedback loop between AI and cognitive science is strong: cognitive science inspires AI development, and AI provides tools and metaphors for understanding cognition.
6. **Anthropology**: Anthropology brings in the cultural dimension, studying how cognition varies across different human societies. This helps to differentiate between universal cognitive processes and those that are culturally specific.
7. **Biology**: Biology, especially evolutionary biology, contributes to our understanding of the evolutionary pressures that shaped our cognitive abilities. It provides a framework for understanding why certain cognitive functions may have developed.
8. **Education**: Research in cognitive science directly impacts educational practices by providing insights into how learning and memory work, thus informing teaching strategies and curriculum development.
9. **Human-Computer Interaction (HCI)**: HCI draws on principles from psychology and computer science to design better interfaces. It uses cognitive models to predict how users will interact with technology and to create user experiences that complement human cognitive abilities.

The interrelation is often practical; for example:

– A cognitive psychologist might devise an experiment based on a philosophical question.
– A neuroscientist might use the experiment’s results to guide brain imaging studies.
– A linguist might analyze the language data from the experiment.
– A computer scientist might model the cognitive process observed in a computational framework.
– An anthropologist might compare these results with similar cognitive processes in different cultures.
– An educationalist might apply the findings to improve classroom teaching methods.
– An HCI researcher might use the findings to design better computer interfaces.

The intersection of these disciplines within cognitive science ensures a comprehensive approach to studying cognition, taking into account its complexity and richness by viewing it through multiple lenses.


HCI is an interesting example. HCI analyses the interaction between humans and machines with regard to certain tasks and within certain environments. Where do the cognitive models come from that are to be used?

Remark for the following: chatGPT4 offered two versions of an answer. I could select one and opted for version 2:


Human-Computer Interaction (HCI) relies heavily on cognitive models to predict and interpret how users interact with computer systems. These cognitive models are derived from various sources within the broader field of cognitive science:

1. **Psychological Theories**: Many cognitive models in HCI are based on psychological theories about how the human mind processes information. For instance, models like the Cognitive Load Theory help designers understand how to present information to users without overwhelming them.
2. **Cognitive Architectures**: These are comprehensive computational frameworks that aim to simulate human cognitive processes. Examples include the ACT-R (Adaptive Control of Thought-Rational) and Soar architectures. These are often used in HCI to predict how users will learn new systems and solve problems.
3. **Human Information Processing Models**: HCI often uses specific models that describe the stages of human information processing, such as the Model Human Processor, which includes perceptual, cognitive, and motor processors, each with their own cycle times and memories.
4. **Attention Models**: Models of visual attention, such as the See-Think-Do framework, explain how users allocate their attention when interacting with interfaces and can inform the design of systems to make important information more salient.
5. **Memory Models**: Understanding working memory and long-term memory is crucial for interface design. HCI uses models like the Working Memory model by Baddeley and Hitch to predict how users retain and process information while completing tasks.
6. **Error Prediction Models**: Models like Norman’s Seven Stages of Action are used to predict possible errors in human-computer interaction, allowing for the design of error-resistant systems.
7. **Task Models**: Task analysis models, such as GOMS (Goals, Operators, Methods, and Selection rules), are used to estimate how long it takes a user to accomplish a task and to identify ways to streamline the interaction.
8. **Ecological Models**: These models, like the distributed cognition approach, look at cognitive processes as distributed across people, time, and tools, considering the environment in which HCI takes place.
9. **User Mental Models**: HCI considers the mental models that users form of how systems work. Designers try to align interfaces with these mental models to make them more intuitive.
10. **Usability Heuristics**: While not models in the traditional sense, heuristics like those proposed by Jakob Nielsen are based on principles of cognitive psychology and are used to evaluate user interfaces.
11. **Empirical Studies**: Empirical studies, including both qualitative and quantitative user research, continue to inform and refine cognitive models. User testing and observation can lead to the development of new models or the refinement of existing ones.

These models are not mutually exclusive and can often be combined or used in parallel to tackle different aspects of HCI design and evaluation. The choice of model(s) typically depends on the particular aspect of human cognition that is most relevant to the interaction being studied or the problem being solved. By applying cognitive models, HCI professionals aim to create user-centered designs that are efficient, effective, and satisfying for the user.
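To make the task-model idea concrete: the Keystroke-Level Model (KLM), a simplified member of the GOMS family, estimates task completion time by summing per-operator durations. The sketch below uses approximate textbook operator times attributed to Card, Moran, and Newell; the exact values, the task string, and the rounding are illustrative assumptions, and real analyses calibrate such values empirically.

```python
# Minimal Keystroke-Level Model (KLM) estimate.
# Operator times are approximate published values (seconds).
OPERATOR_SECONDS = {
    "K": 0.28,  # keystroke or button press (average skilled typist)
    "P": 1.10,  # point with a mouse to a target on screen
    "H": 0.40,  # home the hands between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
}

def klm_estimate(sequence):
    """Sum the operator times for an operator sequence such as 'MHPKK'."""
    return round(sum(OPERATOR_SECONDS[op] for op in sequence), 2)

# Example task: think, move hand to mouse, point at a field, click,
# home back to the keyboard, think, then type five characters.
task = "MHPK" + "HM" + "KKKKK"
print(klm_estimate(task))  # → 6.28
```

The design point is that the model stays purely additive: it predicts expert, error-free execution time only, which is exactly why fuller GOMS variants add goals, methods, and selection rules on top.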


Your answer is helpful. But these different contributions are usually not written within a coherent conceptual framework. Is it possible to extract those concepts which are common to these different contributions?


Indeed, the diverse disciplines within cognitive science and HCI contribute a multitude of models and theories, and these contributions often lack a unifying framework. However, several core concepts are shared across these models and form the bedrock of cognitive science and HCI. Here are some of these shared concepts:

1. **Information Processing**: A key concept across many cognitive models is that humans process information in stages (e.g., encoding, storage, retrieval). This idea, originating from the computer metaphor of the mind, is a common thread in both psychological models and HCI designs.
2. **Attention**: The concept of attention, how it is directed, and how it is managed, is central to understanding human cognition and designing interfaces. Attentional models inform interface design by suggesting where to place important information and how to alert users to changes.
3. **Memory**: The functions of working memory and long-term memory and their limitations are critical concepts. They influence how much information should be presented at once and how interfaces can enhance recall through cues and design.
4. **Mental Models**: Across disciplines, there is recognition that users create mental models of how systems work, and the closeness of these models to the actual workings of the system can affect usability.
5. **Cognitive Load**: Cognitive load refers to the amount of mental effort being used in the working memory. Across HCI and cognitive science, there’s an emphasis on designing systems that manage and minimize cognitive load to prevent errors and improve efficiency.
6. **Problem Solving and Decision Making**: Understanding the cognitive processes behind problem solving and decision making is essential for creating interfaces that support these activities.
7. **Metacognition**: Awareness and understanding of one’s own thought processes are recognized as crucial for learning and interaction with systems. This is reflected in designs that support user control and flexibility.
8. **Perception**: Theories of visual, auditory, and haptic perception are fundamental in designing interfaces that are perceivable and that map onto the human sensory capabilities.
9. **User Experience**: The concept of creating positive interactions encompasses not just the usability but also the affective responses, such as satisfaction and enjoyment.
10. **Error Recognition and Recovery**: Understanding how and why errors occur in human-machine interaction guides the design of systems that help users avoid mistakes or recover from them easily.
11. **Adaptability and Learning**: The concept that users learn and adapt to systems, and that systems can adapt to users, is common across cognitive science and HCI. This is key in designing interfaces that are capable of accommodating users of varying skill levels and learning styles.
12. **Multimodality**: Recognizing that human interaction with the environment is multimodal has led to designs that utilize multiple modes of input and output (e.g., touch, voice, gesture).

These shared concepts help to form a bridge across the various contributions to cognitive science and HCI, providing a more cohesive understanding of how to design technology that works in harmony with human cognitive processes. Design principles and guidelines in HCI are often derived from these fundamental cognitive concepts, ensuring that despite the lack of a single, coherent framework, the discipline remains grounded in an understanding of human cognition.


I agree with you. Nevertheless, it shows that many important questions are still open. Last question: how would you locate the book ‘The Rationality Quotient’ by Stanovich and others in this framework of HCI?


“The Rationality Quotient” by Keith E. Stanovich and colleagues is actually more directly related to cognitive psychology and the study of rational thought than to Human-Computer Interaction (HCI) per se. However, its insights can certainly be located within the broader context of HCI and cognitive science. Stanovich’s work focuses on the nature of rationality, distinguishing between different types of cognitive processes: those that are algorithmic (akin to computational processes in the brain), those that relate to reflective thinking and decision-making, and those that pertain to the knowledge base that individuals have at their disposal. Here’s how this relates to HCI:

1. **User Decision Making**: HCI design often considers how users make decisions while interacting with interfaces. Stanovich’s discussion of rational thinking can inform the design of systems that support better decision-making processes.
2. **Error Mitigation**: Understanding why people make irrational decisions or errors can help HCI professionals design systems that reduce the likelihood of user errors.
3. **Cognitive Biases**: The work on rationality includes the study of cognitive biases, which has implications for HCI in terms of designing interfaces that help mitigate these biases or use them to guide users towards more productive behaviors.
4. **Personalization and Adaptability**: Rationality research may influence the development of adaptive systems that personalize the user experience based on an individual’s demonstrated rationality levels or decision-making styles.
5. **Instructional Design**: Insights from studies of rationality can impact how educational technologies are designed to enhance learning and critical thinking skills.
6. **Reflective Interfaces**: The concept of reflective thinking can lead to the creation of interfaces that encourage users to think more deeply about their choices, such as in systems that prompt users to confirm or reconsider actions.
While “The Rationality Quotient” doesn’t provide a direct commentary on HCI, the underlying cognitive principles it explores are highly relevant for designing systems that align with human cognitive strengths and compensate for weaknesses. Thus, within the framework of HCI, Stanovich’s work can be seen as contributing to the foundational cognitive understanding necessary to create user-centered designs that support effective, rational, and satisfying interactions with technology.


This sounds positive for the book. But I ask myself in which sense the concept of ‘rationality’ can be related to ‘cognitive processes’.


The concept of “rationality” is deeply intertwined with cognitive processes. Rationality, in the context of cognitive science and psychology, is generally understood as the quality of being reasonable, based on facts or reason. It refers to the way in which individuals think, make decisions, and act based on certain principles of logic and consistency. Here’s how rationality relates to cognitive processes:

1. **Dual-Process Theories**: Theories like those proposed by Daniel Kahneman, which distinguish between two types of thinking processes—fast, automatic, intuitive thought (System 1) and slower, more deliberate, analytical thought (System 2)—are directly related to the study of rationality. Rationality is often associated with the reflective, analytical processes of System 2.
2. **Decision-Making**: Rationality is central to decision-making, which is a complex cognitive process involving the evaluation of options, the assessment of probabilities and outcomes, and the selection of a course of action. Cognitive science studies the heuristics and biases that affect this process.
3. **Logic and Reasoning**: Cognitive processes related to understanding logical structures and applying rules of inference fall under the umbrella of rationality. This includes deductive reasoning, inductive reasoning, and abductive reasoning.
4. **Problem Solving**: Rational approaches to problem solving involve systematic analysis, the application of knowledge, and consideration of potential solutions. This relates to cognitive processes involving working memory, planning, and executive function.
5. **Belief Formation and Revision**: Rationality also pertains to how individuals form, maintain, and revise their beliefs in light of new evidence. This involves cognitive processes related to learning, memory, and critical thinking.
6. **Metacognition**: Rational thinking involves metacognition, which is the awareness and control of one’s own cognitive processes. Metacognitive skills enable individuals to recognize when a problem is beyond their current understanding and to adjust their approach accordingly.
7. **Epistemic Rationality**: This aspect of rationality is concerned with how beliefs are justified and how knowledge is constructed, which is central to cognitive processes involving perception, interpretation, and the assessment of truth claims.

Stanovich and others in the field of cognitive science study these aspects of cognition to understand how and why people think and act the way they do, especially when their actions deviate from what might be considered objectively rational. Their work seeks to explain cognitive biases, irrational preferences, and other anomalies in human thought and behavior.

In “The Rationality Quotient,” Stanovich et al. propose a framework for understanding and measuring rationality that goes beyond the traditional IQ tests, which typically focus on problem-solving abilities and cognitive capacity. They emphasize the importance of rational thought and behavior in everyday decision-making and the practical outcomes of such thought processes. The concept of rationality thus serves as a lens through which cognitive processes are evaluated in terms of their effectiveness and alignment with goals and reality.


You are listing important contributions to the question. You are mentioning the Dual-Process Theory by Kahneman. Why do you call this a ‘theory’?


The term “Dual-Process Theory” is often used to describe a range of theoretical frameworks in psychology that posit two distinct modes or systems of thinking that operate when we process information. Daniel Kahneman popularized one of the most well-known dual-process models with his characterization of System 1 and System 2 in his book “Thinking, Fast and Slow.” The reason it’s called a “theory” is because it offers a systematic explanation for observations and findings from numerous psychological studies. A theory in science is a well-substantiated explanation acquired through the scientific method and repeatedly tested and confirmed through observation and experimentation. Here’s why the Dual-Process Theory fits this definition:

1. **Empirical Support**: The Dual-Process Theory is supported by a wide range of experiments and observations in cognitive psychology, neuropsychology, and related fields.
2. **Predictive Power**: It provides predictions about human behavior that can be tested in experimental settings. For instance, it predicts that under certain conditions, people will rely more on fast, intuitive thinking (System 1) than on slow, deliberative thinking (System 2).
3. **Explanatory Scope**: The theory explains a variety of phenomena across different contexts, such as why people struggle with statistical reasoning or are susceptible to cognitive biases.
4. **Integrative Framework**: It integrates findings from various areas of cognitive science into a coherent framework, explaining how different types of thinking contribute to behavior and judgment.
5. **Heuristic Value**: The theory has heuristic value in that it generates new hypotheses and lines of research, driving the field forward.

In scientific parlance, a theory is not a mere guess or a tentative hypothesis; it is a comprehensive explanation for a range of phenomena. However, like all scientific theories, Dual-Process Theory is not beyond criticism or revision.
As new data emerge, theories are often refined or challenged. Some researchers question the dichotomy proposed by Dual-Process Theories or argue for more nuanced or integrated models of cognition. Despite this, the Dual-Process Theory as presented by Kahneman has been immensely influential in our understanding of human thought and has shaped research in cognitive psychology, economics, and other fields.


OK, I agree.


Great, I’m glad to hear that the information resonated with you! If you have any more questions or if there’s another topic you’re curious about, feel free to ask.

REVIEW: Keith E. Stanovich, Richard F. West, Maggie E. Toplak, “The Rationality Quotient: Toward a Test of Rational Thinking”, MIT Press, 2016

(Last change: Nov 1, 2023)


This text belongs to the overall theme REVIEWS.

In the last months I have been engaged with the topic of text-generating algorithms and their possible impact on scientific discourse (some first notes on this discussion can be found here). In this context it is important to clarify the role and structure of human actors as well as the concept of intelligence. Meanwhile I have abandoned the word ‘intelligence’ completely, because its inflationary use in today’s mainstream pulverises any meaning. Even within one discipline, like psychology, you can find many different concepts. In this context I have read the book by Stanovich to have a prominent example of the use of the concept of intelligence, there combined with the concept of rationality, which is no less vague.


The book “The Rationality Quotient” from 2016 does not represent the beginning of a discourse but is a kind of summary of a long-lasting discourse with many prior publications. This makes the book interesting, but also difficult to read at the beginning, because on nearly every page it uses theoretical terms which are assumed to be known to the reader, and it cites other publications without sufficiently explaining why exactly these cited publications are important. This is no argument against the book, but it places a burden on the reader, who has to learn a lot to understand the text.

A text that sums up its subject in this way is good, because it has a settled view of the subject, which enables a kind of clarity typical for such an elaborated point of view.

The goal of the following review is not to give a complete account of every detail of this book, but only to present the main thesis and then to analyze the methods used and the epistemological framework applied.

Main Thesis of the Book

The review starts with the basic assumptions and the main thesis.

FIGURE 1 : The beginning. Note: the number ‘2015’ has to be corrected to ‘2016’.

FIGURE 2 : First outline of cognition. Note: the number ‘2015’ has to be corrected to ‘2016’.

As mentioned in the introduction, you will not find in the book a real overview of the history of psychological research dealing with the concept of intelligence, nor an overview of the historical discourse on the concept of rationality, although the latter concept also has a rich tradition in philosophy. Thus, somehow you have to know this already.

There are some clear warnings with regard to the fuzziness of the concept of rationality (p.3) as well as of the concept of intelligence (p.15). From the point of view of philosophy of science it would be interesting to know what circumstances cause such fuzziness, but this is not a topic of the book. The book talks within its own selected conceptual paradigm. Facing the dilemma of which intelligence paradigm to use, the book has decided to work with the Cattell-Horn-Carroll (CHC) paradigm, which some call a theory. [1]

Right from the beginning it is explained that the discussion of intelligence lacks a clear account of the full human model of cognition (p.15), and that intelligence tests therefore mostly measure only parts of human cognitive functions (p.21).

Thus let us have a more detailed look at the scenario.

[1] For a first overview of the Cattell–Horn–Carroll theory see:

Which point of View?

The book starts with a first characterization of the concept of rationality from a point of view which is not really clear. From various remarks one gets some hints of modern cognitive science (4, 6), of decision theory (4), and of probability calculus (9), but a clear description is missing.

And it is declared right from the beginning that the main aim of the book is the construction of a rational thinking test (4), because for the authors the intelligence tests in use, later narrowed to the Cattell-Horn-Carroll (CHC) type of intelligence test (16), are too narrow in what they measure (15, 16, 21).

Related to the term rationality, the book states some requirements which the term should fulfill (e.g. ‘rationality as a continuum’ (4), ‘empirically based’ (4), ‘operationally grounded’ (4), a ‘strong definition’ (5), a ‘normative one’ (5), a ‘normative model of optimum judgment’ (5)), but it remains more or less open what these requirements imply and what tacit assumptions have to be fulfilled for this to work.

The two requirements ‘empirically based’ and ‘operationally grounded’ point in the direction of a tacitly assumed concept of an empirical theory, but exactly this concept, especially in association with the term cognitive science, isn’t really clear today.

Because the authors make many statements in the following pages which claim to be serious, it seems important for the discussion in this review to clarify the conditions under which language expressions have ‘meaning’ and can be classified as ‘being true’.

If we assume, tentatively, that the authors take a scientific theory to be primarily a text whose expressions have a meaning that can transparently be associated with empirical facts, and that, where this is the case, the expressions are understood as grounded and classified as true, then we have only characterized a normal text, as it is used in everyday life for the communication of meanings which can be demonstrated to be true.

Is there a difference between such a ‘normal text’ and a ‘scientific theory’? And, especially here, where the context should be a scientific theory within the discipline of cognitive science: what distinguishes a normal text from a ‘scientific theory within cognitive science’?

Because the authors do not explain their conceptual framework called cognitive science, we resort here to a most general characterization [2,3], which tells us that cognitive science is not a single discipline but an interdisciplinary study drawing on many different disciplines. It has not yet reached a state where all the methods and terms used are embedded in one general coherent framework. Thus the relationship between the conceptual frameworks used is mostly fuzzy and unclear. From this it follows directly that the relationship of the different terms to each other (e.g. ‘underlying preferences’ and ‘well ordered’) is, within such a blurred context, rather unclear.

Even the simple characterization of an expression as ‘having an empirical meaning’ is unclear: what are the kinds of empirical subject matter and the terms used for them? According to the list of involved disciplines, linguistics [4], psychology [5], and neuroscience [6] — among others — are mentioned. But each of these disciplines is today itself a broad, non-integrated field of methods dealing with a multifaceted subject.

Using an Auxiliary Construction as a Minimal Point of Reference

Instead of becoming paralyzed by these all-encompassing characterizations of the individual disciplines, one can step back and look at some basic assumptions about empirical perspectives.

If we take a group of human observers who are to investigate these subjects, we could make the following assumptions:

  1. Empirical Linguistics deals with languages, spoken as well as written by human persons within certain environments, and these can be observed as empirical entities.
  2. Empirical Psychology deals with the behavior of human persons (a kind of biological system) within certain environments, and this behavior can be observed.
  3. Empirical Neuroscience deals with the brain as part of a body located in some environment, and all of this can be observed.

The empirical observations of certain kinds of empirical phenomena can be used to define more abstract concepts, relations, and processes. These more abstract concepts, relations, and processes have ‘as such’ no empirical meaning! They constitute a formal framework which has to be correlated with empirical facts in order to acquire empirical meaning. As is known from the philosophy of science [7], the combination of empirical concepts within a formal framework of abstract terms can enable ‘abstract meanings’ from which logical inference can produce statements which are — at the moment of stating them — not empirically true, because the ‘real future’ has not yet happened. And on account of the ‘generality’ of abstract terms compared to the finiteness and concreteness of empirical facts, it can happen that the inferred statements will never become true. Therefore the mere use of abstract terms within a text called a scientific theory does not guarantee valid empirical statements.
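This dependence of empirical meaning on an explicit correlation with observable facts can be illustrated with a small, purely hypothetical sketch (all names and data structures here are illustrative assumptions of this review, not part of the book under discussion):

```python
# Toy sketch: an abstract term acquires empirical meaning only via an
# explicit correspondence mapping to an observation predicate.
# All names are illustrative assumptions, not actual cognitive-science terms.

def make_term(name, correspondence=None):
    """An abstract term, optionally tied to an observational test."""
    return {"name": name, "correspondence": correspondence}

def empirical_truth(term, observation):
    """A term yields a truth value only if a correspondence is defined;
    otherwise its empirical meaning is undefined (None)."""
    if term["correspondence"] is None:
        return None  # abstract term 'as such': no empirical meaning
    return term["correspondence"](observation)

# 'preference' is grounded via an observable choice test;
# 'rationality' is left purely abstract, without any correspondence.
preference = make_term("preference", lambda obs: obs.get("chose_A_over_B", False))
rationality = make_term("rationality")

obs = {"chose_A_over_B": True}
print(empirical_truth(preference, obs))   # True  -> empirically grounded
print(empirical_truth(rationality, obs))  # None  -> no empirical meaning
```

The sketch only makes the structural point of the paragraph above explicit: without a defined correspondence, no observation can make an abstract term true or false.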

In general, one has to state that a coherent scientific theory comprising, e.g., linguistics, psychology, and neuroscience does not yet exist.

To speak of cognitive science as if it represented a clearly defined, coherent discipline therefore seems misleading.

This raises questions about the project of constructing a Comprehensive Assessment of Rational Thinking (CART).

[2] See ‘cognitive science’ in wikipedia:

[3] See too ‘cognitive science’ in the Stanford Encyclopedia of Philosophy:

[4] See ‘linguistics’ in wikipedia:

[5] See ‘psychology’ in wikipedia:

[6] See ‘neuroscience’ in wikipedia:

[7] See ‘philosophy of science’ in wikipedia:

‘CART’ TEST FRAMEWORK – A Reconstruction from the point of View of Philosophy of Science

Before digging deeper into the theory, I try to understand its intended outcome as a point of reference. The following figure 3 gives some hints.

FIGURE 3 : Outline of the Test Framework based on the Appendix in Stanovich 2016. This Outline is a Reconstruction by the author of this review.

It seems to be important to distinguish at least three main parts of the whole scientific endeavor:

  1. The group of scientists who have decided to work on a certain problem.
  2. The generated scientific theory as a text.
  3. The description of a CART Test, which specifies a procedure for associating the abstract terms of the theory with real facts.

From the group of scientists (Stanovich et al.) we know that they understand themselves as cognitive scientists (without a clear characterization of what this means concretely).

The intended scientific theory as a text is here assumed to be realized in the book under review.

The description of a CART Test is here taken from the appendix of the book.

To understand the theory it is interesting to see that in the actual test the test system (assumed here to be a human person) has to read (or hear?) an instruction on how to proceed with a task form, and then has to process the test form according to its understanding of the instructions and of the form itself.

The result is a completed test form.

And it is then this completed test form which will be rated according to the assumed CART theory.

This paradigm raises a whole bunch of questions, which cannot all be answered here in full.

Mix-Up of Abstract Terms

Because the test scenario presupposes a CART theory, and within this theory some kind of model of the intended test users, it can be helpful to take a closer look at this assumed CART model, which is located in a person.

FIGURE 4 : General outline of the logic behind CART according to Stanovich et al. (2016).

The presented cognitive architecture is intended to provide a framework for the CART (Comprehensive Assessment of Rational Thinking), and this framework includes a model. The model is not only assumed to contextualize and classify heuristics and tasks; it also presents Rationality in such a way that one can deduce the mental characteristics included in rationality (cf. 37).

Because the term Rationality is not an individual empirical fact but an abstract term of a conceptual framework, it has as such no meaning. The meaning of this abstract term has to be established by relations to other abstract terms which are themselves sufficiently related to concrete empirical statements. And these relations between abstract terms and empirical facts (represented as language expressions) have to be represented in a manner that makes it transparent how the measured facts are related to the abstract terms.

Here Stanovich et al. use another abstract term, Mind, which is associated with characteristics called mental characteristics: Reflective Mind, Algorithmic Level, and Mindware.

The text then states that Rationality presents mental characteristics. What does this mean? Is rationality distinct from the mind, which has characteristics that rationality can somehow present by means of the mind, or is rationality itself part of the mind, manifesting itself in these mental characteristics? And what could it even mean for an abstract term like rationality to be part of the mind? Without an explicit model associated with the term Mind that situates the abstract term Rationality within it, there is no usable meaning here.

These considerations are the effect of a text which uses different abstract terms in a rather unclear way. In a scientific theory this should not be the case.

Measuring Degrees of Rationality

At the beginning of chapter 4, Stanovich et al. look back to chapter 1, where they built up a chain of arguments illustrating their general perspective (cf. 63):

  1. Rationality has degrees.
  2. These degrees of rationality can be measured.
  3. Measurement is realized by experimental methods of cognitive science.
  4. The measuring is based on the observable behavior of people.
  5. The observable behavior can manifest whether the individual actor (a human person) follows assumed preferences related to an assumed axiom of choice.
  6. Observable behavior which is classified as manifesting assumed internal preferences according to an assumed internal axiom of choice can show descriptive and procedural invariance.
  7. Based on this deduced descriptive and procedural invariance, it can further be inferred that these actors behave as if they were maximizing utility.
  8. It is difficult to assess utility maximization directly.
  9. It is much easier to assess whether one of the axioms of rational choice is being violated.
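Point 9 above, that it is easier to check for violations of an axiom of rational choice than to assess utility maximization directly, can be illustrated with a minimal sketch that searches observed pairwise choices for violations of the transitivity axiom (the choice data and function names are hypothetical examples, not taken from the CART):

```python
from itertools import permutations

def transitivity_violations(choices):
    """choices: a set of (x, y) pairs meaning 'x was chosen over y'.
    Returns all triples (a, b, c) where a>b and b>c were observed,
    but c>a was also observed -- a violation of transitivity."""
    items = {x for pair in choices for x in pair}
    violations = []
    for a, b, c in permutations(items, 3):
        if (a, b) in choices and (b, c) in choices and (c, a) in choices:
            violations.append((a, b, c))
    return violations

# Hypothetical observed choices of one test person:
observed = {("tea", "coffee"), ("coffee", "juice"), ("juice", "tea")}
print(transitivity_violations(observed))
# A non-empty list means at least one axiom of rational choice is violated.
```

Note that such a check only detects a violation of one formal axiom in recorded behavior; it says nothing by itself about internal preferences or utility maximization, which is exactly the gap discussed below.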

These statements characterize the Logic of the CART according to Stanovich et al. (cf. 64).

A major point in this argumentation is the assumption that observable behavior is such that one can deduce from its properties those attributes which point (i) to an internal model of an axiom of choice, (ii) to internal processes which manifest the effects of this internal model, and (iii) to certain characteristics of these internal processes which allow the deduction of whether utility is being maximized or not.

These are very strong assumptions.

If one takes further into account the explanations from the pages 7f about the required properties for an abstract term axiom of choice (cf. figure 1) then these assumptions appear to be very demanding.

Is it possible to extract the necessary meaning from observable behavior in a way that is clear enough, by empirical standards, to establish that this behavior shows property A and not property B?

As we know from the description of the CART in the appendix of the book (cf. figure 3), the actual behavior assumed for a CART is (i) reading (or hearing?) an instruction communicated in ordinary English, then (ii) behavior deduced from the understanding of the instruction, which (iii) manifests itself in reading a form with a text and filling out this form at predefined positions in a required language.

This procedure is quite common throughout psychology and similar disciplines. But it is well known that the understanding of language instructions is very error-prone. Furthermore, the presentation of a task as a text is inevitably highly biased and likewise very error-prone with regard to understanding (this is one reason why purely text-based tests are rather useless in usability testing).

The point is that the empirical basis is not given as a protocol of observations of language-free behavior, but of behavior which is almost completely embedded in the understanding and handling of texts. This points to the underlying processes of text understanding, which are completely internal to the actor. There is no prewired connection between the observable strings of signs constituting a text and the possible meanings which can be organized by the individual processes of text understanding.

Stopping Here

Having reached this point of reading and trying to understand, I decided to stop here: too many questions on all levels of a scientific discourse, and the relationships between the main concepts and terms in the book of Stanovich et al. appear not to be clear enough. I therefore feel confirmed in my working hypothesis from the beginning: the concept of intelligence is today far too vague, too ambiguous, to contain any useful kernel of meaning any more. And concepts like Rationality and Mind (and many others) seem to fare no better.

Chatting with chatGPT4

Since April 2023 I have been checking the ability of chatGPT4 to contribute to a philosophical and scientific discourse. The working hypothesis is that chatGPT4 is good at summarizing the common concepts used in public texts, but that it is not capable of critical evaluation, of genuinely new creative ideas, or of a systematic analysis of the methods and frameworks in use, their interrelations, their truth conditions, and much more. Nevertheless, it is a good ‘common sense check’. Until now I have not been able to learn anything new from these chats.

If you have read this review with all its details and open questions, you will perhaps be a little disappointed by the answers from chatGPT4. But keep calm: it is somewhat helpful.

Protocol with chatGPT4

Pain does not replace the truth …

Time: Oct 18, 2023 — Oct 24, 2023)
Author: Gerd Doeben-Henisch
Email: gerd@doeben-henisch.d


This post is part of the uffmm science blog. It is a translation from the German source: For the translation I have used chatGPT4 and . Because the word ‘hamas’ occurs in the text, chatGPT did not translate a long paragraph containing this word. The algorithm is thus somehow ‘biased’ by a certain kind of training. This is really bad, because the following text offers some reflections about a situation in which someone ‘hates’ others. This is one of our biggest ‘diseases’ today.


The Hamas terrorist attack on Israeli citizens on October 7, 2023, has shaken the world. For years, terrorist acts have been shaking our world. Before our eyes, an aggressor has been attempting since 2022 (actually since 2014) to brutally eradicate the entire Ukrainian population. Similar events have been and are taking place in many other regions of the world…

… Pain does not replace the truth [0]…

Truth is not automatic. Making truth available requires significantly more effort than remaining in a state of partial truth.

The probability that a person knows the truth or seeks the truth is smaller than the probability of remaining in a state of partial truth or outright falsehood.

Whether falsehood or truth predominates in a democracy depends on how that democracy shapes the process of finding truth and communicating it. There is no automatic path to truth.

In a dictatorship, the likelihood of truth being available is extremely dependent on those who exercise centralized power. Absolute power, however, has already fundamentally broken with the truth (which does not exclude the possibility that this power can have significant effects).

The course of human history on planet Earth thus far has shown that there is evidently no simple, quick path that uniformly leads all people to a state of happiness. This must have to do with humans themselves—with us.

The interest in seeking truth, in cultivating truth, in a collective process of truth, has never been strong enough to overcome the everyday exclusions, falsehoods, hostilities, atrocities…

One’s own pain is terrible, but it does not help us to move forward…

Who even wants a future for all of us?????

[0] There is an overview article by the author from 2018, in which he presents 15 major texts from the blog “Philosophie Jetzt” ( “Philosophy Now”) ( “INFORMAL COSMOLOGY. Part 3a. Evolution – Truth – Society. Synopsis of previous contributions to truth in this blog” ( )), in which the matter of truth is considered from many points of view. In the 5 years since, society’s treatment of truth has continued to deteriorate dramatically.

Hate cancels the truth

Truth is related to knowledge. However, in humans, knowledge most often is subservient to emotions. Whatever we may know or wish to know, when our emotions are against it, we tend to suppress that knowledge.

One form of emotion is hatred. The destructive impact of hatred has accompanied human history like a shadow, leaving a trail of devastation everywhere it goes: in the hater themselves and in their surroundings.

The event of the inhumane attack on October 7, 2023 in Israel, claimed by Hamas, is unthinkable without hatred.

If one traces the history of Hamas since its founding in 1987 [1,2], one can see that hatred was already laid down as an essential element at its founding. This hatred is joined by a religious interpretation which calls itself Islamic, but which represents a particular, highly radicalized and at the same time fundamentalist form of Islam.

The history of the state of Israel is complex, and the history of Judaism is no less so. The fact that today’s Judaism also contains strong components which are clearly fundamentalist, and to which hatred is not alien, leads, together with many other factors, to a core constellation of fundamentalist antagonisms on both sides which in themselves reveal no approach to a solution. The many other people ‘around’ in Israel and Palestine are part of these ‘fundamentalist force fields’, which simply evaporate humanity and truth in their vicinity. The trail of blood makes this reality visible.

Both Judaism and Islam have produced wonderful things, but what does all this mean in the face of a burning hatred that pushes everything aside, that sees only itself?

[1] Jeffrey Herf, Sie machen den Hass zum Weltbild, FAZ 20.Okt. 23, S.11 (Abriss der Geschichte der Hamas und ihr Weltbild, als Teil der größeren Geschichte) (Translation:They make hatred their worldview, FAZ Oct. 20, 23, p.11 (outlining the history of Hamas and its worldview, as part of the larger story)).

[2] Joachim Krause, Die Quellen des Arabischen Antisemitismus, FAZ, 23.10.2023,p.8 (This text “The Sources of Arab Anti-Semitism” complements the account by Jeffrey Herf. According to Krause, Arab anti-Semitism has been widely disseminated in the Arab world since the 1920s/ 30s via the Muslim Brotherhood, founded in 1928).

A society in decline

When truth diminishes and hatred grows (and, indirectly, trust evaporates), a society is in free fall. There is no remedy for this; the use of force cannot heal it, only worsen it.

The mere fact that we believe that lack of truth, dwindling trust, and above all manifest hatred can only be eradicated through violence shows how seriously we take these phenomena and, at the same time, how helpless we feel in the face of these attitudes.

In a world whose survival is linked to the availability of truth and trust, it is a piercing alarm signal to observe how difficult it is for us as humans to deal with the absence of truth and face hatred.

Is Hatred Incurable?

When we observe how tenaciously hatred persists in humanity, how unimaginably cruel actions driven by hatred can be, and how helpless we humans seem in the face of hatred, one might wonder if hatred is ultimately not a kind of disease—one that threatens the hater themselves and, particularly, those who are hated with severe harm, ultimately death.

With typical diseases, we have learned to search for remedies that can free us from the illness. But what about a disease like hatred? What helps here? Does anything help? Must we, like in earlier times with people afflicted by deadly diseases (like the plague), isolate, lock away, or send away those who are consumed by hatred to some no man’s land? … but everyone knows that this isn’t feasible… What is feasible? What can combat hatred?

After approximately 300,000 years of Homo sapiens on this planet, we seem strangely helpless in the face of the disease of hatred.

What’s even worse is that there are other people who see in every hater a potential tool to redirect that hatred toward goals they want to damage or destroy, using suitable manipulation. Thus, hatred does not disappear; on the contrary, it feels justified, and new injustices fuel the emergence of new hatred… the disease continues to spread.

One of the greatest events in the entire known universe—the emergence of mysterious life on this planet Earth—has a vulnerable point where this life appears strangely weak and helpless. Throughout history, humans have demonstrated their capability for actions that endure for many generations, that enable more people to live fulfilling lives, but in the face of hatred, they appear oddly helpless… and the one consumed by hatred is left incapacitated, incapable of anything else… plummeting into their dark inner abyss…

Instead of hatred, we need (minimally and in outline):

  1. Water: To sustain human life, along with the infrastructure to provide it, and individuals to maintain that infrastructure. These individuals also require everything they need for their own lives to fulfill this task.
  2. Food: To sustain human life, along with the infrastructure for its production, storage, processing, transportation, distribution, and provision. Individuals are needed to oversee this infrastructure, and they, too, require everything they need for their own lives to fulfill this task.
  3. Shelter: To provide a living environment, including the infrastructure for its creation, provisioning, maintenance, and distribution. Individuals are needed to manage this provision, and they, too, require everything they need for their own lives to fulfill this task.
  4. Energy: For heating, cooling, daily activities, and life itself, along with the infrastructure for its generation, provisioning, maintenance, and distribution. Individuals are needed to oversee this infrastructure, and they, too, require everything they need for their own lives to fulfill this task.
  5. Authorization and Participation: To access water, food, shelter, and energy. This requires an infrastructure of agreements, and individuals to manage these agreements. These individuals also require everything they need for their own lives to fulfill this task.
  6. Education: To be capable of undertaking and successfully completing tasks in real life. This necessitates individuals with enough experience and knowledge to offer and conduct such education. These individuals also require everything they need for their own lives to fulfill this task.
  7. Medical Care: To help with injuries, accidents, and illnesses. This requires individuals with sufficient experience and knowledge to offer and provide medical care, as well as the necessary facilities and equipment. These individuals also require everything they need for their own lives to fulfill this task.
  8. Communication Facilities: So that everyone can receive helpful information needed to navigate their world effectively. This requires suitable infrastructure and individuals with enough experience and knowledge to provide such information. These individuals also require everything they need for their own lives to fulfill this task.
  9. Transportation Facilities: So that people and goods can reach the places they need to go. This necessitates suitable infrastructure and individuals with enough experience and knowledge to offer such infrastructure. These individuals also require everything they need for their own lives to fulfill this task.
  10. Decision Structures: To mediate the diverse needs and necessary services in a way that ensures most people have access to what they need for their daily lives. This requires suitable infrastructure and individuals with enough experience and knowledge to offer such infrastructure. These individuals also require everything they need for their own lives to fulfill this task.
  11. Law Enforcement: To ensure disruptions and damage to the infrastructure necessary for daily life are resolved without creating new disruptions. This requires suitable infrastructure and individuals with enough experience and knowledge to offer such services. These individuals also require everything they need for their own lives to fulfill this task.
  12. Sufficient Land: To provide enough space for all these requirements, along with suitable soil (for water, food, shelter, transportation, storage, production, etc.).
  13. A suitable climate.
  14. A functioning ecosystem.
  15. A capable scientific community to explore and understand the world.
  16. Suitable technology to accomplish everyday tasks and support scientific endeavors.
  17. Knowledge in the minds of people to understand daily events and make responsible decisions.
  18. Goal orientations (preferences, values, etc.) in the minds of people to make informed decisions.
  19. Ample time and peace to allow these processes to occur and produce results.
  20. Strong and lasting relationships with other population groups pursuing the same goals.
  21. Sufficient commonality among all population groups on Earth to address their shared needs where they are affected.
  22. A sustained positive and constructive competition for those goal orientations that make life possible and viable for as many people on this planet (in this solar system, in this galaxy, etc.) as possible.
  23. The freedom present within the experiential world, included within every living being, especially within humans, should be given as much room as possible, as it is this freedom that can overcome false ideas from the past in the face of a constantly changing world, enabling us to potentially thrive in the world of the future.

Review of Nancy Leveson (2020), Are you sure your software will not kill anyone?

Author: Gerd Doeben-Henisch

Time: April 2, 2020 — Oct 20, 2023


This post is part of the REVIEW section of the uffmm blog.

Review of Nancy Leveson
Are you sure your software will not kill anyone?

A Review from the Point of View of the DAAI Paradigm


Three years after the paper reviewed above, Nancy Leveson, together with John P. Thomas, published a new paper on the subject of safety-critical systems: Inside Risks. Certification of Safety-Critical Systems. Seeking new approaches toward ensuring the safety of software-intensive systems. Communications of the ACM, October 2023, Vol. 66, No. 10, pp. 22-26.

On a first reading one can get the impression that the task of securing safety-critical systems is becoming more and more unsolvable. The complexity of the task seems to be beyond all paradigms we know today. Is this the end of modern technology? Leveson and Thomas explicitly exclude that a ‘solution’ could be based on ‘more AI’ alone; without human persons it will not work. What does this mean?

My personal judgment: We have to go back to ‘start’. We have to consider this challenge from scratch in a new way: What do we really need? What methods are known? What is still missing?

A personal guess: we have to think about the human factor more radically. The human factor is not just one more ‘factor’ among others in the scenario; the human is indeed an ‘object’, but at the same time also the ‘author’ who implicitly induces all the conditions of thinking, the medium of communication, and the decisions. Remaining silent about this hides many important factors that are in effect.

Collective human-machine intelligence and text generation. A transdisciplinary analysis.

Author: Gerd Doeben-Henisch


Time: Sept 25, 2023 – Oct 3, 2023

Translation: This text is a translation from the German version into English with the aid of the software as well as with chatGPT4, moderated by the author. The styles of the two translators differ. The author is not competent enough to judge which translator is ‘better’.


This text is the outcome of a conference held at the Technical University of Darmstadt (Germany) with the title: Discourses of disruptive digital technologies using the example of AI text generators ( ). A German version of this article will appear as open access in a book from de Gruyter at the beginning of 2024.

Collective human-machine intelligence and text generation. A transdisciplinary analysis.


Based on the conference theme “AI – Text and Validity. How do AI text generators change scientific discourse?” as well as the special topic “Collective human-machine intelligence using the example of text generation”, the possible interaction between text generators and scientific discourse is played out in a transdisciplinary analysis. For this purpose, the concept of scientific discourse is specified case by case using the text types empirical theory and sustained empirical theory, in such a way that the role of human and machine actors in these discourses can be sufficiently specified. The result shows a very clear limitation of current text generators compared to the requirements of scientific discourse. This leads to further fundamental analyses: on the example of the dimension of time, with the phenomenon of the qualitatively new, and on the example of the foundations of decision-making, with the problem of the inherent bias of the modern scientific disciplines. A solution to this inherent bias, as well as to the factual disconnectedness of the many individual disciplines, is located in a new service of transdisciplinary integration achieved by re-activating the philosophy of science as a genuine part of philosophy. This leaves open the question of whether supervision of the individual sciences by philosophy could be a viable path. Finally, the borderline case of a world in which humans no longer have a human counterpart is pointed out.

AUDIO: Keyword Sound


This text takes its starting point from the conference topic “AI – Text and Validity. How do AI text generators change scientific discourses?” and adds to this topic the perspective of a Collective Human-Machine Intelligence using the example of text generation. The concepts of text and validity, AI text generators, scientific discourse, and collective human-machine intelligence that are invoked in this constellation represent different fields of meaning that cannot automatically be interpreted as elements of a common conceptual framework.


In order to let the mentioned terms appear as elements in a common conceptual framework, a meta-level is needed from which one can talk about these terms and their possible relations to each other. This approach is usually located in the philosophy of science, which can take as its subject not only single terms or whole propositions, but even whole theories that are compared or possibly even unified. The term transdisciplinary [1], which is often used today, is understood here in this philosophy-of-science sense as an approach in which the integration of different concepts is achieved by introducing appropriate meta-levels. Such a meta-level ultimately always represents a structure in which all important elements and relations can be gathered.

[1] Jürgen Mittelstraß paraphrases the possible meaning of the term transdisciplinarity as a “research and knowledge principle … that becomes effective wherever a solely technical or disciplinary definition of problem situations and problem solutions is not possible…”. Article Methodological Transdisciplinarity, in LIFIS ONLINE, ISSN 1864-6972, p.1 (first published in: Technology Assessment – Theory and Practice No. 2, Vol. 14, June 2005, 18-23). In his text Mittelstraß distinguishes transdisciplinarity from the disciplinary and from the interdisciplinary. However, he uses only a general characterization of transdisciplinarity as a research-guiding principle and scientific form of organization, leaving its concrete conceptual formulation open. This is different in the present text: here the transdisciplinary theme is projected down to the concreteness of the related terms and – as is usual in the philosophy of science (and meta-logic) – realized by means of the construct of meta-levels.


Here the notion of scientific discourse is assumed as a basic situation in which different actors can be involved. The main types of actors considered here are humans, who represent a part of the biological systems on planet Earth as a kind of Homo sapiens, and text generators, which represent a technical product consisting of a combination of software and hardware.

It is assumed that humans perceive their environment and themselves in a species-typical way, that they can process and store what they perceive internally, that they can recall what they have stored to a limited extent in a species-typical way, and that they can change it in a species-typical way, so that internal structures can emerge that are available for action and communication. All these elements are attributed to human cognition. They are working partially consciously, but largely unconsciously. Cognition also includes the subsystem language, which represents a structure that on the one hand is largely species-typically fixed, but on the other hand can be flexibly mapped to different elements of cognition.

In the terminology of semiotics [2] the language system represents a symbolic level and those elements of cognition, on which the symbolic structures are mapped, form correlates of meaning, which, however, represent a meaning only insofar as they occur in a mapping relation – also called meaning relation. A cognitive element as such does not constitute meaning in the linguistic sense. In addition to cognition, there are a variety of emotional factors that can influence both cognitive processes and the process of decision-making. The latter in turn can influence thought processes as well as action processes, consciously as well as unconsciously. The exact meaning of these listed structural elements is revealed in a process model [3] complementary to this structure.

[2] See, for example, Winfried Nöth: Handbuch der Semiotik. 2nd, completely revised edition. Metzler, Stuttgart/Weimar, 2000

[3] Such a process model is presented here only in partial aspects.


What is important for human actors is that they can interact in the context of symbolic communication with the help of both spoken and written language. Here it is assumed – simplifying – that spoken language can be mapped sufficiently accurately into written language, which in the standard case is called text. It should be noted that texts represent meaning only if the text producers involved, as well as the text recipients, have sufficiently similar meaning functions.
For texts by human text producers it is generally true that, with respect to concrete situations, statements as part of texts can be qualified under agreed conditions as now matching the situation (true) or as now not matching the situation (false). However, a now-true can become a now-not-true again in the next moment, and vice versa.

This dynamic points to the fact that the punctual occurrence or non-occurrence of a statement must be distinguished from its structural occurrence or non-occurrence, which speaks about occurrence in context. This refers to relations that become apparent only indirectly, across a multitude of individual events, when one considers chains of events over many points in time. Finally, one must also consider that the correlates of meaning are primarily located within the human biological system. Meaning correlates are not automatically true as such, but only if there is an active correspondence between a remembered/thought/imagined meaning correlate and an active perceptual element, where an intersubjective fact must correspond to the perceptual element. Just because someone talks about a rabbit and the recipient understands what a rabbit is, this does not mean that there is also a real rabbit that the recipient can perceive.


When distinguishing between the two types of actors – biological systems of the type Homo sapiens on the one hand, technical systems of the type text generator on the other – a first fundamental asymmetry immediately strikes the eye: so-called text generators are entities invented and built by humans; furthermore, it is humans who use them; and the essential material used by text generators is texts, which count as human cultural property, created and used by humans for a variety of discourse types, here restricted to scientific discourse.

In the case of text generators, let us first note that we are dealing with machines that have input and output and a minimal learning capability, and that can accept text-like objects as input and produce text-like objects as output. Insofar as this is so, an exchange of texts between humans and text generators can in principle take place.

At the current state of development (September 2023), text generators do not yet have an independent perception of the real world as part of their input, and the text generator system as a whole does not yet have processes like those that enable species-typical cognition in humans. Furthermore, a text generator does not yet have a meaning function as humans do.

From this it follows directly that text generators cannot decide on the punctual or structural correctness of the statements of a text. In general, they do not have their own assignment of meaning as humans do. Texts generated by text generators have a meaning only because a human recipient automatically assigns a meaning to the text on the basis of his species-typical meaning relation; this is simply the learned behavior of a human. The text generator itself has never assigned any meaning to the generated text. Colloquially, one could say that a technical text generator works like a parasite: it collects texts that humans have generated, rearranges them combinatorially according to formal criteria for the output, and in the receiving human a meaning event is automatically triggered by the text – an event that does not exist anywhere in the text generator.
Whether this very restricted form of text generation is detrimental or advantageous for scientific discourse (with texts) is examined in what follows.


There is no clear definition of the term scientific discourse. This is not surprising, since an unambiguous definition presupposes a fully specified conceptual framework within which terms such as discourse and scientific can be clearly delimited. In the case of a scientific enterprise with a global reach, broken down into countless individual disciplines, this does not seem to be the case at present (September 2023). For the further procedure, we will therefore fall back on core ideas of the discussion in philosophy of science since the 20th century [4] and introduce working hypotheses on the concept of an empirical theory as well as of a sustainable empirical theory, so that a working hypothesis on the concept of scientific discourse with at least minimal precision becomes possible.

[4] A good place to start may be: F. Suppe (ed.), The Structure of Scientific Theories, University of Illinois Press, Urbana, 2nd edition, 1979.


The following assumptions are made for the notion of an empirical theory:

  1. an empirical theory is basically a text, written in a language that all participants understand.
  2. one part of the theory contains a description of an initial situation, the statements of which can be qualified by the theory users as now matching (true) or now not matching (false).
  3. another part of the theory contains a text that lists all changes that, to the knowledge of the participants, occur in the context of the initial situation and can change parts of the initial situation.
  4. changes in the initial situation are expressed by replacing certain statements of the initial situation with other statements. The resulting new text replaces the previous text.
  5. through the possibility of generating new initial situations there is the possibility of making predictions (forming expectations) by applying rules of change to an applicable initial situation several times (at least once) in succession. Each text generated in this way induces in the minds of the participants, on the basis of the available meaning functions, the idea of a situation that, should it occur, is required to be qualifiable as now matching intersubjective reality. In the case of occurrence, the perceived situation must correspond to the conception in the mind. Whether such a correspondence can be established, after how long a time and to what extent, is fundamentally open at the moment the prediction is made (the expectation is formed) (autonomy of the object!).
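The five numbered points can be read as a simple state-transition scheme: a situation is a set of statements, and rules of change rewrite that set. The following minimal Python sketch illustrates this reading; all names, the rule format, and the example statements are illustrative assumptions, since the text itself gives no formal notation.

```python
# Minimal sketch of an empirical theory as a state-transition system.
# All names and the rule format are illustrative assumptions; the text
# itself gives no formal notation.
from typing import Callable, FrozenSet

Situation = FrozenSet[str]               # a situation = a set of statements
Rule = Callable[[Situation], Situation]  # a rule of change rewrites the text

def make_rule(trigger: str, remove: str, add: str) -> Rule:
    """A rule fires if the trigger statement is present: it replaces
    one statement with another, leaving the rest untouched (point 4)."""
    def rule(s: Situation) -> Situation:
        if trigger in s:
            return frozenset(x for x in s if x != remove) | {add}
        return s
    return rule

def predict(start: Situation, rules: list, steps: int) -> Situation:
    """Apply the rules of change several times in succession (point 5);
    each resulting text is a prediction that reality may or may not match."""
    current = start
    for _ in range(steps):
        for rule in rules:
            current = rule(current)
    return current

initial = frozenset({"it is raining", "the street is dry"})
rules = [make_rule("it is raining", "the street is dry", "the street is wet")]
print(sorted(predict(initial, rules, 1)))  # → ['it is raining', 'the street is wet']
```

Whether the predicted text "now matches" reality remains, exactly as point 5 states, outside the formalism: only human perception can close that loop.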


Even before the introduction of the notion of a sustainable empirical theory, the notion of an empirical theory just introduced allows for initial statements about what a scientific discourse could or should be.
It is assumed here that the participants in a scientific discourse exchange their opinions about the world in the form of texts, all of which meet the requirements of an empirical theory.


Insofar as the participants in the discourse are human beings, we know that they have basic species-typical abilities to generate subjective meanings by means of language, which can be decided as now true or now not true by reference to a real situation.


Insofar as the discourse participants would be text generators lacking these elementary skills (meaning, reality reference), they drop out as independent actors for a scientific discourse.
To illustrate this, here is a self-assessment of chatGPT4, a current text generator:


Yes, your summary of our previous dialogue is accurate based on the points you’ve listed:

  1. I cannot read a book in the same way a human does, meaning I don’t interpret or comprehend in the same manner.
  2. My knowledge is up to September 2021, which means that I might not be aware of events, advancements, or publications after that point.
  3. My “learning” is different from human learning. I don’t learn from new experiences or information after my last update. Instead, I generate responses based on patterns in the data I was trained on.
  4. I don’t cite individual sources in my responses. I provide information based on the patterns recognized during my training.
  5. Responses I give can vary slightly even if the same question is posed multiple times.
  6. I can’t directly attribute my statements to specific, individual sources.
  7. The specifics about my training data and the processes that determine my responses are not transparently provided in each interaction.

You’ve captured the essence of our prior conversation well. If you have further questions or need clarifications on any of these points, please let me know. [5]

[5] This text is from a chat experiment conducted by the author with chatGPT4 on August 31, 2023. The citation can be found in: Gerd Doeben-Henisch, CONTINUE EXPERIMENT WITH CHATGPT4: Science, Empirical Science, Philosophy of Science, uffmm, Integrating Engineering and the Human Factor, eJournal ISSN 2567-6458,
8/31/2023 in [ ] (accessed 9/27/2023).

The question then arises whether (current) text generators, despite their severely limited capabilities, could nevertheless contribute to scientific discourse, and what this contribution would mean for human participants. Since text generators fail to meet the hard scientific criteria (decidable reference to reality, reproducible predictive behavior, separation of sources), a possible contribution can only be assumed within human behavior: since humans can understand and empirically verify texts, they would in principle be able, at least rudimentarily, to classify a text from a text generator within their considerations.

For hard theory work these texts would not be usable, but due to their literary-associative character across a very large body of texts, the texts of text generators could – in the positive case – at least introduce thoughts into the discourse as stimulators, via the detour of human understanding, prompting the human user to examine these additional aspects to see whether they might be important for the actual theory building after all. In this way, text generators would not participate independently in the scientific discourse, but they would indirectly support the knowledge process of the human actors as aids.[6]

[6] A detailed illustration of this associative role of a text generator can also be found in (Doeben-Henisch, 2023) on the example of the term philosophy of science and on the question of the role of philosophy of science.


The application of an empirical theory can – in the positive case – broaden the picture of everyday experience by bringing before one's eyes, relative to an initial situation, possible continuations (possible futures).
For people who have to shape their own individual processes in their respective everyday life, however, it is usually not enough to know only what one can do. Rather, everyday life requires deciding in each case which continuation to choose, given the many possible continuations. In order to be able to assert themselves in everyday life with as little effort as possible and with – at least imagined – as little risk as possible, people have adopted well-rehearsed behavior patterns for as many everyday situations as possible, which they follow spontaneously without questioning them anew each time. These well-rehearsed behavior patterns include decisions that have been made. Nevertheless, there are always situations in which the ingrained automatisms have to be interrupted in order to consciously clarify the question for which of several possibilities one wants to decide.

The example of an individual decision-maker can also be directly applied to the behavior of larger groups. Normally, even more individual factors play a role here, all of which have to be integrated in order to reach a decision. However, the characteristic feature of a decision situation remains the same: whatever knowledge one may have at the time of decision, when alternatives are available, one has to decide for one of many alternatives without any further, additional knowledge at this point. Empirical science cannot help here [7]: it is an indisputable basic ability of humans to be able to decide.

So far, however, what ultimately leads a person to decide for one option and not the other remains largely hidden in the darkness of not knowing oneself. Whether and to what extent the various cultural patterns of decision-making aids in the form of religious, moral, ethical or similar formats actually play, or have played, a helpful role in projecting a successful future appears more unclear than ever.[8]

[7] No matter how much detail it can contribute about the nature of decision-making processes.

[8] This topic is taken up again in the following in a different context and embedded there in a different solution context.


Through the newly flared-up discussion about sustainability in the context of the United Nations, the question of prioritizing action relevant to survival has received a specific global impulse. The multitude of aspects arising in this discourse context [9] is difficult, if not impossible, to classify into an overarching, consistent conceptual framework.

[9] For an example see the 17 development goals: [] (Accessed: September 27, 2023)

A rough classification of development goals into resource-oriented and actor-oriented can help to make an underlying asymmetry visible: a resource problem only exists if there are biological systems on this planet that require a certain configuration of resources (an ecosystem) for their physical existence. Since the physical resources that can be found on planet Earth are quantitatively limited, it is possible, in principle, to determine through thought and science under what conditions the available physical resources — given a prevailing behavior — are insufficient. Added to this is the factor that biological systems, by their very existence, also actively alter the resources that can be found.

So, if there should be a resource problem, it is exclusively because the behavior of the biological systems has led to such a biologically caused shortage. Resources as such are neither too much, nor too little, nor good, nor bad. If one accepts that the behavior of biological systems in the case of the species Homo sapiens can be controlled by internal states, then the resource problem is primarily a cognitive and emotional problem: Do we know enough? Do we want the right thing? And these questions point to motivations beyond what is currently knowable. Is there a dark spot in the human self-image here?

On the one hand, this questioning points to the driving forces behind a current decision that lie beyond the reach of the empirical sciences (trans-empirical, meta-physical, …); on the other hand, it also points to the very core of human competence. This motivates extending the notion of an empirical theory to the notion of a sustainable empirical theory. This does not automatically solve the question of the inner mechanism of a value decision, but it classifies the problem systematically; the problem thus has an official place. The following formulation is suggested as a characterization of the concept of a sustainable empirical theory:

  1. a sustainable empirical theory contains an empirical theory as its core.
  2. besides the parts initial situation, rules of change, and application of rules of change, a sustainable theory also contains a text listing those situations which are considered desirable for a possible future (goals, visions, …).
  3. given such goals, each current situation can be minimally compared with the available goals, thereby indicating the degree of goal achievement.
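The degree of goal achievement mentioned above can be made concrete by a simple comparison between the statements of a current situation and the statements describing a goal. The following sketch uses the fraction of goal statements already holding; this particular measure, and the example statements, are illustrative assumptions – the text only demands a minimal comparison.

```python
# Sketch: degree of goal achievement as the fraction of goal statements
# that already hold in the current situation. The concrete measure is an
# illustrative assumption; the text only demands a "minimal comparison".

def goal_achievement(current: set, goals: set) -> float:
    if not goals:
        return 1.0  # an empty list of goals is trivially achieved
    return len(goals & current) / len(goals)

current = {"CO2 emissions reduced", "waterways polluted"}
goals = {"CO2 emissions reduced", "waterways clean", "soils fertile"}
print(goal_achievement(current, goals))  # → 0.3333333333333333 (1 of 3 goals holds)
```

Any richer measure (weighted goals, partial satisfaction) would refine, but not change, the structural point: goals make situations comparable on a scale.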

Stating desired goals says nothing about how realistic or promising it is to pursue them. It only expresses that the authors of the theory know these goals and consider them optimal at the time of theory creation.[10] The irrationality of chosen goals is thereby officially included in the domain of thought of the theory creators, which facilitates the extension of the rational to the irrational without already providing a real solution. Nobody can exclude that the phenomenon of bringing forth something new, or of preferring a certain point of view over others, can be understood further and better in the future.

[10] Something can only be classified as optimal if it can be placed within an overarching framework, which allows for positioning on a scale. This refers to a minimal cognitive model as an expression of rationality. However, the decision itself takes place outside of such a rational model; in this sense, the decision as an independent process is pre-rational.


If one accepts the concept of a sustainable empirical theory, then one can extend the concept of scientific discourse so that not only texts representing empirical theories can be introduced, but also texts representing sustainable empirical theories with their own goals. Here too one can ask whether the current text generators (September 2023) can make a constructive contribution. Insofar as a sustainable empirical theory contains an empirical theory as its hard core, the preceding observations on the limitations of text generators apply. In the creative part of the development of an empirical theory, they can contribute text fragments through their associative-combinatorial character, based on a very large number of documents, which may inspire the active human theory authors to expand their view. But what about that part which manifests itself in the selection of possible goals? At this point one must realize that it is not about arbitrary formulations, but about formulations that represent possible solutions within a systematic framework; this presupposes knowledge of relevant and verifiable meaning structures that could be taken into account in the context of symbolic patterns. Text generators fundamentally lack these abilities. But it is – again – not to be excluded that their associative-combinatorial character, based on a very large number of documents, can still provide one or another suggestion.

Looking back at humanity's history of knowledge, research, and technology, it appears that the great advances were each triggered by something genuinely new, that is, by something that had never existed before in this form. The praise for Big Data, as often heard today, represents – colloquially speaking – exactly the opposite: the burial of the new by cementing the old.[11]

[11] A prominent example of the naive fixation on the old as a standard for what is right can be seen, for example, in the book by Seth Stephens-Davidowitz, Don’t Trust Your Gut. Using Data Instead of Instinct To Make Better Choices, London – Oxford New York et al., 2022.


The concept of an empirical theory inherently contains the element of change, and the extended concept of a sustainable empirical theory adds, to the fundamental concept of change, the aspect of a possible goal. A possible goal is itself not a change, but presupposes the reality of changes! The concept of change does not result from any objects but is the result of a brain performance through which a current present is transformed into partially memorable states (memory contents) by forming time slices in the context of perception processes – largely unconsciously. These memory contents have different abstract structures, are networked with each other in different ways, and are assessed in different ways. In addition, the brain automatically compares current perceptions with such stored contents and immediately reports when a current perception has changed compared to the last perceived contents. In this way the phenomenon of change is a fundamental cognitive achievement of the brain, which thereby makes the character of a feeling of time available in the form of a fundamental process structure. The weight of this property in the context of evolution can hardly be overestimated, since time as such is in no way perceptible.[12]

[12] The modern invention of machines that can generate periodic signals (oscillators, clocks) has been successfully integrated into people's everyday lives. However, this artificially (technically) producible time has nothing to do with the fundamental change found in reality. Technical time is a tool that we humans have invented to somehow structure the otherwise amorphous mass of the phenomenon stream. Since this amorphous mass itself shows structure in the form of repeating cycles of change that are obvious to all (e.g., sunrise and sunset, moonrise and moonset, the seasons, …), a correlation between technical time models and natural time phenomena suggested itself. From the correlations found here, however, one should not conclude that the amorphous mass of the world phenomenon stream actually behaves according to our technical time model. Einstein's theory of relativity at least makes us aware that there can be various – or only one? – asymmetries between technical time and the world phenomenon stream.

Assuming this fundamental sense of time in humans, one can in principle recognize whether a current phenomenon, compared to all preceding phenomena, is somehow similar or markedly different, and in this sense indicates something qualitatively new.[13]

[13] Ultimately, an individual human only has its individual memory contents available for comparison, while a collective of people can in principle consult the set of all records. However, as is known, only a minimal fraction of the experiential reality is symbolically transformed.

If one presupposes the concept of directed time, a qualitatively new event can be assigned an information value in the Shannonian sense; the phenomenon itself can additionally acquire linguistic meaning and possibly also cognitive relevance: relative to a spanned knowledge space, the occurrence of a qualitatively new event can significantly strengthen a theoretical assumption. In the latter case, the cognitive relevance may mutate into a sustainable relevance if the assumption marks a real option for action that could be important for further progress. This in turn would provoke the necessity of a decision: should we adopt this option for action or not? Humans can accomplish the finding of qualitatively new things; they are designed for it by evolution. But what about text generators?
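The information value "in the Shannonian sense" mentioned above can be stated precisely: the information content (surprisal) of an event with probability p is -log2(p), so the rarer an event is relative to prior expectation, the more bits it carries. A minimal sketch, with made-up illustration probabilities:

```python
# Shannon information content (surprisal) of an event: the rarer the
# event relative to prior expectation, the more bits it carries.
# The probabilities below are made-up illustration values.
import math

def surprisal_bits(p: float) -> float:
    """Information value -log2(p) of an event with probability p."""
    return -math.log2(p)

print(surprisal_bits(0.5))    # → 1.0  (an expected event carries little information)
print(surprisal_bits(0.001))  # ≈ 9.97 (a near-improbable, 'new' event carries much more)
```

This makes the connection in the text explicit: a qualitatively new event is, by construction, one assigned a very low prior probability, and hence a high information value.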

Text generators so far do not have a sense of time comparable to that of humans. Their starting point would be texts that are different, in such a way that there is at least one text that is the most recent on the timeline and describes real events in the real world of phenomena. Since a text generator (as of September 2023) does not yet have the ability to classify texts regarding their applicability/non-applicability in the real world, its use would normally end here. Assuming that there are people who manually perform this classification for a text generator [14] (which would greatly limit the number of possible texts), then a text generator could search the surface of these texts for similar patterns and, relative to them, for those that cannot be compared. Assuming that the text generator would find a set of non-comparable patterns in acceptable time despite a massive combinatorial explosion, the problem of semantic qualification would arise again: which of these patterns can be classified as an indication of something qualitatively new? Again, humans would have to become active.

[14] Such support of machines by humans in the field of so-called intelligent algorithms has often been applied (and is still being applied today, see: [] (Accessed: September 27, 2023)), and is known to be very prone to errors.
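Returning to the surface-pattern search described above: what a text generator could in principle do is compare the surface of a candidate text against a corpus and flag patterns with no near match. The following sketch uses character-trigram overlap (Jaccard similarity) as the comparison; this measure and the example texts are illustrative assumptions, since the text names no concrete procedure.

```python
# Sketch of a surface-pattern novelty search. Assumption for illustration:
# character-trigram overlap (Jaccard) as the similarity measure; the text
# names no concrete procedure.

def trigrams(text: str) -> set:
    t = text.lower()
    return {t[i:i + 3] for i in range(len(t) - 2)}

def max_similarity(candidate: str, corpus: list) -> float:
    """Highest Jaccard overlap between the candidate and any corpus text;
    a value near zero marks a pattern that 'cannot be compared'."""
    cg = trigrams(candidate)
    best = 0.0
    for doc in corpus:
        dg = trigrams(doc)
        if cg | dg:
            best = max(best, len(cg & dg) / len(cg | dg))
    return best

corpus = ["the sun rises in the east", "the moon sets in the west"]
print(max_similarity("the sun rises in the east today", corpus))  # high: a known pattern
print(max_similarity("quantized gravity loops", corpus))          # low: candidate for the qualitatively new
```

Note that this only detects surface dissimilarity; deciding whether a dissimilar pattern actually indicates something qualitatively new is exactly the semantic qualification that, as the text argues, falls back to humans.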

As before, the verdict is mixed: left to itself, a text generator will not be able to solve this task, but in cooperation with humans, it may possibly provide important auxiliary services, which could ultimately be of existential importance to humans in search of something qualitatively new despite all limitations.


A prejudice is, as is well known, the assessment of a situation as an instance of a certain pattern which the person judging assumes to apply, even though there are numerous indications that the assumed applicability is empirically false. Due to the permanent requirement of everyday life to make decisions, humans have, through their evolutionary development, acquired the fundamental ability to make many of their everyday decisions largely automatically. This offers many advantages, but can also lead to conflicts.

Daniel Kahneman introduced in this context in his book [15] the two terms System 1 and System 2 for a human actor. These terms describe in his concept of a human actor two behavioral complexes that can be distinguished on the basis of certain properties.[16] System 1 is given by the overall system of the human actor and is characterized by the fact that the actor can respond largely automatically to the requirements of everyday life. The human actor has automatic answers to certain stimuli from his environment without having to think much about them. In case of conflicts within System 1, or from the perspective of System 2, which in conscious mode exercises some control over the appropriateness of System 1 reactions in a given situation, System 2 becomes active. System 2 does not have automatic answers ready but has to laboriously work out an answer to a given situation step by step. However, there is also the phenomenon that complex processes which must be carried out frequently can be automated to a certain extent (cycling, swimming, playing a musical instrument, learning a language, doing mental arithmetic, …). All these processes rest on preceding decisions that encompass different forms of preferences. As long as these automated processes are appropriate in the light of a certain rational model, everything seems to be fine. But if the corresponding model is distorted in any sense, then these models can be said to carry a prejudice.

[15] Daniel Kahneman, Thinking, Fast and Slow, Penguin Books Random House, UK, 2012 (first published 2011)

[16] See Chapter 1 in Part 1 of (Kahneman, 2012, pages 19-30).

In addition to the countless examples that Kahneman himself cites in his book to show the susceptibility of System 1 to such prejudices, it should be pointed out here that the model of Kahneman himself (and many similar models) can carry a prejudice that is of a considerably more fundamental nature. The division of the behavioral space of a human actor into a System 1 and 2, as Kahneman does, obviously has great potential to classify many everyday events. But what about all the everyday phenomena that fit neither the scheme of System 1 nor the scheme of System 2?

In the case of making a decision, System 1 implies that people – if an answer is available – automatically call it up and execute it. Only in the case of conflict, under the control of System 2, can there be lengthy operations that lead to other, new answers.

In the case of decisions, however, it is not just about reacting at all, but there is also the problem of choosing between known possibilities or even finding something new because the known old is unsatisfactory.

Established scientific disciplines have their specific perspectives and methods that carve out areas of everyday life as their subject area. Phenomena that do not fit into this predefined space simply do not occur for the discipline in question – for methodological reasons. In the area of decision-making, and thus of typically human structures, there are quite a few areas that have so far found no official entry into a scientific discipline. At any given point in time there are, in the end, many large areas of phenomena that really exist but are methodologically absent from the view of the individual sciences. For a scientific investigation of the real world this means that the sciences, due to their immanent exclusions, are burdened with a massive reservation against the empirical world. For the task of selecting suitable sustainable goals within the framework of a sustainable science, this structurally conditioned fact can be fatal. Loosely formulated: under the banner of actual science, a central principle of science – openness to all phenomena – is simply excluded, so that the existing structure does not have to be changed.

For this question of a meta-reflection on science itself, text generators are again only reduced to possible abstract text delivery services under the direction of humans.


The just-described fatal dilemma of all modern sciences is to be taken seriously, as without an efficient science, sustainable reflection on the present and future cannot be realized in the long term. If one agrees that the fatal bias of science is caused by the fact that each discipline works intensively within its discipline boundaries, but does not systematically organize communication and reflection beyond its own boundaries with a view to other disciplines as meta-reflection, the question must be answered whether and how this deficit can be overcome.

There is only one known answer to this question: one must search for that conceptual framework within which these guiding concepts can meaningfully interact both in their own right and in their interaction with other guiding concepts, starting from those guiding concepts that are constitutive for the individual disciplines.

This is genuinely the task of philosophy, concretized by the example of the philosophy of science. However, this would mean that each individual science would have to use a sufficiently large part of its capacities to make the idea of the one science in maximum diversity available in a real process.

For the hard conceptual work hinted at here, text generators will hardly be able to play a central role.


Since so far no individual science has a concept of intelligence that reaches beyond a single discipline, it makes little sense at first glance to apply the term intelligence to collectives. However, looking at the cultural achievements of humanity as a whole, not least with a view to the language used, it is undeniable that a description of the performance of an individual person is incomplete without reference to the whole.

So, if one tries to assign an overarching meaning to the letter combination intelligence, one will not be able to avoid deciphering this phenomenon of the human collective – in the form of complex everyday processes in a no less complex dynamic world – at least to the extent that one can identify some corresponding empirical something for the letter combination intelligence with which a comprehensible meaning could be constituted.

Of course, this term should be scalable for all biological systems, and one would have to have a comprehensible procedure that allows the various technical systems to be related to this collective intelligence term in such a way that direct performance comparisons between biological and technical systems would be possible.[17]

[17] The often quoted and popular Turing Test (see: Alan M. Turing: Computing Machinery and Intelligence. In: Mind. Volume LIX, No. 236, 1950, 433–460, [doi:10.1093/mind/LIX.236.433] (Accessed: Sept 29, 2023)) in no way meets the methodological requirements one would have to adhere to if one actually wanted to arrive at a qualified performance comparison between humans and machines. Nevertheless, the basic idea of Turing in his meta-logical text from 1936, published in 1937 (see: A. M. Turing: On Computable Numbers, with an Application to the Entscheidungsproblem. In: Proceedings of the London Mathematical Society. s2-42. Volume, No. 1, 1937, 230–265, [doi:10.1112/plms/s2-42.1.230] (Accessed: Sept 29, 2023)) seems a promising starting point, since, in trying to present an alternative formulation of Kurt Gödel's (1931) proof of the undecidability of arithmetic, he conducts a meta-logical proof, and in this context introduces the concept of a machine that was later called the Universal Turing Machine.

Already in this proof approach one can see how Turing transforms the phenomenon of a human computer (a clerk carrying out calculations step by step) at a meta-level into a theoretical concept, by means of which he can then meta-logically examine the behavior of this clerk in a specific behavioral space. His meta-logical proof not only confirmed Gödel’s meta-logical proof, but also indirectly indicates how ultimately any complex of phenomena can be formalized on a meta-level in such a way that one can then make formally demanding arguments with it.
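Turing’s move of formalizing the step-by-step behavior of a human calculator can be made concrete with a small simulation. The following is only an illustrative sketch (the state names, the transition table, and the example machine are my own assumptions, not taken from Turing’s paper): a deterministic single-tape machine is nothing more than a transition table applied repeatedly to a tape until the machine halts.

```python
# Minimal sketch of a single-tape, deterministic Turing machine.
# Illustrative only; state names and the example machine are invented.

def run_turing_machine(transitions, tape, state="q0", accept="halt", max_steps=10_000):
    """Run a deterministic Turing machine.

    transitions: dict mapping (state, symbol) -> (new_state, new_symbol, move),
                 where move is -1 (left) or +1 (right).
    tape:        dict mapping integer positions to symbols ('_' = blank).
    """
    head = 0
    for _ in range(max_steps):
        if state == accept:
            return state, tape
        symbol = tape.get(head, "_")
        if (state, symbol) not in transitions:
            break  # no applicable rule: the machine halts implicitly
        state, tape[head], move = transitions[(state, symbol)]
        head += move
    return state, tape

# Example machine: flip a string of 0s and 1s bit by bit, then halt on blank.
flip = {
    ("q0", "0"): ("q0", "1", +1),
    ("q0", "1"): ("q0", "0", +1),
    ("q0", "_"): ("halt", "_", +1),
}
tape = {i: c for i, c in enumerate("0110")}
state, final = run_turing_machine(flip, tape)
result = "".join(final[i] for i in range(4))  # -> "1001"
```

Turing’s universal machine is then, in essence, such an interpreter whose own transition table reads another machine’s table from the tape, which is the conceptual step that makes the formal treatment of "any" computation possible.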


Based on the previous considerations, the idea of a philosophical supervision of the individual sciences, with the goal of a concrete integration of all disciplines into an overall conceptual structure, seems fundamentally possible from a philosophy of science perspective. From today’s point of view, the specific phenomena claimed by individual disciplines should no longer be a fundamental obstacle for a modern concept of theory. This would clarify the basics of the concept of Collective Intelligence, and it would surely become possible to identify more clearly the interactions between human collective intelligence and interactive machines. Subsequently, the probability would increase that the supporting machines could be further optimized so that they could also help with more demanding tasks.


Attempting to characterize the interactive role of text generators in a human-driven scientific discourse, assuming a certain model of science, appears fairly clear from a transdisciplinary (and thus structural) perspective. However, such scientific discourse represents only a sub-space of the general human discourse space. In the latter, the reception of texts inevitably also carries a subjective view [18]: people are used to suspecting a human author behind a text. With the appearance of technical aids, texts have increasingly become products, containing more and more formulations that are not written down by a human author alone, but produced by the technical aids themselves, mediated by a human author. With the appearance of text generators, the proportion of technically generated formulations rises dramatically, up to the case in which the entire text is the direct output of a technical aid. It becomes difficult or even impossible to recognize to what extent one can still speak of a controlling human share. The human author thus disappears behind the text: a reality of signs remains, which does not prevent the human reader from projecting an inner world onto a potential human author, but which threatens to lose itself, or actually loses itself, in the real absence of a human author behind a chimeric human counterpart. What happens in a world where people no longer have human counterparts?

[18] There is an excellent analysis on this topic by Hannes Bajohr titled “Artifizielle und postartifizielle Texte. Über Literatur und Künstliche Intelligenz” (Artificial and Post-Artificial Texts: On Literature and Artificial Intelligence). It was the Walter-Höllerer-Lecture 2022, delivered on December 8, 2022, at the Technical University of Berlin. The lecture can be accessed here [ ] (Accessed: September 29, 2023). The reference to this lecture was provided to me by Jennifer Becker.

CONTINUE EXPERIMENT WITH CHATGPT4: Science, Empirical Science, Philosophy of Science

Author: Gerd Doeben-Henisch


Time: Aug 31, 2023 – Aug 31, 2023 (10:03h CET)


This text belongs to a series of experiments with chatGPT4. While there meanwhile exist some experiments demonstrating the low quality of chatGPT4 in many qualitative tests [1], the author of this text wants to check the ability of the chatGPT4 software to reproduce human-like behavior associated with the production and reception of texts at a higher level. I started this series with a first post here[2] and a second post here.[3]



In our dialogue yesterday you gave me many answers. Here is a small selection of these: (1) You cannot read a book like a human person, (2) Your database is based on a subset of documents until the year 2021, (3) You don’t “learn” in the same way humans do, (4) You don’t directly cite individual sources in your responses. Instead, your answers are generated based on patterns in the data you have been trained on, (5) Individual responses from you aren’t precisely replicable, (6) You can’t directly attribute your statements to specific sources like a scholarly article or a book. This is indeed a limitation if you’re considering the output as a “scientific description”, (7) The specifics of the training data or exact processes that lead to a particular response from you aren’t transparently available in each interaction. Is this correct?


Yes, your summary of our previous dialogue is accurate based on the points you’ve listed:

  1. I cannot read a book in the same way a human does, meaning I don’t interpret or comprehend in the same manner.
  2. My knowledge is up to September 2021, which means that I might not be aware of events, advancements, or publications after that point.
  3. My “learning” is different from human learning. I don’t learn from new experiences or information after my last update. Instead, I generate responses based on patterns in the data I was trained on.
  4. I don’t cite individual sources in my responses. I provide information based on the patterns recognized during my training.
  5. Responses I give can vary slightly even if the same question is posed multiple times.
  6. I can’t directly attribute my statements to specific, individual sources.
  7. The specifics about my training data and the processes that determine my responses are not transparently provided in each interaction.

You’ve captured the essence of our prior conversation well. If you have further questions or need clarifications on any of these points, please let me know.


  1. Is there a difference in the meanings of the expressions ‘Scientific Text’ and ‘Empirical Scientific Text’?
  2. What distinguishes a ‘scientific text’ from a ‘non-scientific text’?
  3. You mention that it’s important to note that the boundaries between “scientific” and “non-scientific” can sometimes blur, especially when it comes to disciplines like philosophy of science. How do you characterize ‘philosophy of science’?
  4. Considering the aim of ‘Philosophy of Science’: What role does the language used play in the work of a philosopher of science?
  5. You are stating that ‘language’ plays a central role for science as well as for a philosopher of science. Thus, is it correct to state that ‘Philosophy of Science’ is talking ‘about’ Science?
  6. You state that “Philosophy of Science” stands in a meta-relation to science. Is this compatible with the statement, that a person A (a philosopher of science) is talking about another person B (a scientist) by looking to the behavior of the other person?
  7. You state that the relationship between the philosophy of science and science is multifaceted and cannot be entirely captured by comparing it to one person observing another’s behavior. From your statement I induce (1) that science is an activity realized by many individuals in parallel, a ‘population of scientists’. (2) This population of scientists is following certain commonly accepted ‘rules for science’. (3) The ‘written rules’ for science have some ‘meaning’ which is located ‘inside the scientists’. (4) It is this ‘shared meaning’ inside the scientists which enables a ‘behavior’ which commonly is qualified as ‘science work’. Is this correct?
  8. You are stating that philosophy of science doesn’t just observe the norms and practices of the population of scientists from a distance. It also critically examines, interrogates, and sometimes influences them. Would you agree with the following induced statements: (1) All members of the population of scientists are sharing a commonly accepted meaning associated with the written rules of science. (2) To understand the behavior of scientists one has to learn this shared meaning inside a scientist. (3) A philosopher of science talking ‘about’ the behavior of a population of scientists has to know this shared meaning inside the scientists to be able to describe the scientific behavior. (4) Before ‘reflecting about’ the scientific behavior one has to ‘understand’ it by knowing the shared meanings.

Continuing the Dialogue


Is there a difference in the meanings of the expressions ‘Scientific Text’ and ‘Empirical Scientific Text’?


Yes, there is a distinction between the terms “scientific text” and “empirical scientific text.”

  1. Scientific Text: This is a broad term that refers to any text discussing scientific topics, concepts, theories, and so on. It can encompass a wide variety of writings, from theoretical papers, reviews, and hypotheses to detailed experiments and observations. The main characteristic is that it pertains to or is relevant to science. It could be based on observations, mathematical models, theoretical principles, or any other scientific method.
  2. Empirical Scientific Text: This term is more specific. “Empirical” refers to knowledge derived from observation or experiment, i.e., it’s based on actual, direct observations or experiments in the real world. So, an empirical scientific text is a subset of scientific text that specifically deals with data collected from experiments or observations. It doesn’t rely solely on theoretical arguments or models but provides evidence from real-world data.

In simpler terms, while all empirical scientific texts are scientific texts, not all scientific texts are empirical. Some might be theoretical, for example.


What distinguishes a ‘scientific text’ from a ‘non-scientific text’?


The distinction between a “scientific text” and a “non-scientific text” lies in the content, purpose, methodology, and structure of the writing. Here are some distinguishing features:

  1. Content:
    • Scientific Text: Deals with topics related to natural or social sciences such as biology, chemistry, physics, psychology, sociology, etc.
    • Non-scientific Text: Can deal with any subject, from literature and history to personal opinions or fictional narratives.
  2. Purpose:
    • Scientific Text: Primarily to communicate research findings, theories, methodologies, or review existing literature. It seeks to expand knowledge in a field, challenge existing notions, or validate/invalidate hypotheses.
    • Non-scientific Text: Can have various purposes including entertaining, informing, expressing opinions, narrating stories, etc., without adhering to the scientific method.
  3. Methodology:
    • Scientific Text: Adheres to the scientific method, which involves formulating hypotheses, conducting experiments or observations, analyzing data, and drawing conclusions.
    • Non-scientific Text: Does not necessarily follow any specific methodology, especially not the scientific method.
  4. Structure and Style:
    • Scientific Text: Typically has a structured format, often including sections like introduction, methodology, results, discussion, and conclusion. It uses precise and objective language.
    • Non-scientific Text: Can have a variety of structures and styles depending on the type of writing (e.g., essay, poem, opinion piece, novel). The language can be subjective, emotive, or descriptive.
  5. Evidence and Verification:
    • Scientific Text: Relies on empirical evidence, reproducibility, and peer review to validate claims.
    • Non-scientific Text: May not rely on empirical evidence and is often not subjected to peer review or validation in the same rigorous manner as scientific texts.
  6. Audience:
    • Scientific Text: Often written for other scientists, researchers, or professionals in the field, although there are also scientific texts intended for broader audiences.
    • Non-scientific Text: Can be aimed at the general public, specific interest groups, or any other audience.

That being said, it’s important to note that the boundaries between “scientific” and “non-scientific” can sometimes blur, especially when it comes to disciplines like philosophy of science or when scientific concepts are discussed in popular literature. Moreover, some non-scientific texts, like historical analyses, may incorporate rigorous methodologies and provide evidence for their claims even if they don’t adhere strictly to the scientific method.


You mention that it’s important to note that the boundaries between “scientific” and “non-scientific” can sometimes blur, especially when it comes to disciplines like philosophy of science. How do you characterize ‘philosophy of science’?


Certainly, the philosophy of science is a fascinating discipline that delves into the foundational, methodological, and epistemological questions surrounding science. Here’s a more detailed characterization:

  1. Nature and Purpose:
    • Philosophy of science aims to understand the nature and purpose of scientific activity. It raises questions about what science is, what its objectives are, and what makes scientific activity distinctive from other human pursuits.
  2. Foundational Issues:
    • Philosophers of science explore the basic concepts and principles underlying scientific theories and practices. For example, they might investigate the nature of causation, the concept of a scientific law, or the interpretation of probability in statistical mechanics.
  3. Methodological Examination:
    • It studies the methods used in science, asking questions like: What is the scientific method? Are there universal methods that apply to all scientific disciplines, or do methods differ across fields? How do scientists confirm or falsify theories?
  4. Theory and Observation:
    • One central concern in the philosophy of science is the relationship between theory and observation. Philosophers examine how theories are tested, how they relate to observational data, and the problem of underdetermination (the idea that data can be explained by multiple, sometimes incompatible, theories).
  5. Scientific Explanation:
    • What does it mean to explain a phenomenon scientifically? Philosophers of science analyze different models of explanation, like deductive-nomological, probabilistic, or causal-mechanistic explanations.
  6. Scientific Change and Progress:
    • Philosophers investigate how scientific theories evolve over time, what constitutes scientific progress, and how paradigm shifts (a term popularized by Thomas Kuhn) occur in scientific disciplines.
  7. Realism and Anti-realism:
    • This debate concerns the nature of scientific theories and their relationship to the world. Realists believe that scientific theories aim to describe the world as it truly is, while anti-realists believe that theories are just tools for prediction and may not necessarily reveal the true nature of reality.
  8. Values in Science:
    • Philosophy of science also studies the role of values, both epistemic (related to knowledge) and ethical, in scientific practice. For instance, it may question how societal values influence scientific research or how objective science can truly be.
  9. Interdisciplinary Concerns:
    • The philosophy of science often intersects with other disciplines, such as the history of science, sociology of science, and cognitive science, to provide a holistic understanding of scientific practices.

Philosophy of science doesn’t just deal with abstract or purely theoretical concerns. Many of its insights have practical implications for how science is conducted, taught, funded, and communicated. By scrutinizing the foundational aspects of scientific activity, the philosophy of science enriches our understanding and appreciation of the scientific endeavor.


Considering the aim of ‘Philosophy of Science’: What role does the language used play in the work of a philosopher of science?


The role of language in the philosophy of science is crucial and multifaceted. Philosophers of science, like philosophers in other areas, are deeply concerned with the nuances, implications, and limitations of language. Here’s why language plays a central role:

  1. Precision in Definition: The clarity of philosophical arguments often hinges on the precise definition of terms. Philosophers of science seek to define terms like “causation,” “law,” “theory,” “model,” “proof,” and “evidence” in ways that capture their use in scientific practice and distinguish them from other, sometimes colloquial, meanings.
  2. Analysis of Theoretical Constructs: Many concepts in science, like “atom,” “gene,” or “black hole,” are theoretical constructs. Philosophers of science investigate the language that scientists use to describe these constructs and how that language relates to observable phenomena.
  3. Interpretation of Models and Theories: Philosophers analyze the language of scientific models and theories to understand their structure, implications, and limitations. For instance, they might scrutinize how particular terms are used metaphorically or analogically.
  4. Examining Paradigms and Frameworks: As mentioned with Thomas Kuhn’s concept of paradigms, the language and conceptual framework of one scientific paradigm might differ radically from another. Philosophers of science explore these differences and their implications.
  5. Exploring Linguistic Structures: Some philosophers, influenced by linguistic philosophy or semiotics, delve into the deeper structures of scientific language. They might ask how the language of a scientific discipline shapes and is shaped by its practices.
  6. Addressing Ambiguities: Philosophers of science highlight and address ambiguities in scientific language, clarifying potential misunderstandings or misinterpretations.
  7. Role of Metaphor and Analogy: Science frequently uses metaphors and analogies to describe complex phenomena (e.g., the “solar system model” of the atom or “genetic code”). Philosophers of science examine the strengths, limitations, and potential pitfalls of these linguistic tools.
  8. Ethical and Societal Implications: Language doesn’t just describe the world; it also shapes how we perceive and interact with it. Philosophers of science may critique the language of science for its ethical or societal implications, such as whether it inadvertently reinforces certain biases or worldviews.
  9. Communication and Pedagogy: Language is the primary medium through which scientific knowledge is communicated, both within the scientific community and to the public. Philosophers might investigate how language can facilitate or hinder scientific understanding, especially in educational contexts.

In summary, for philosophers of science, language is not just a neutral tool but a complex system that influences, and is influenced by, scientific thought and practice. Analyzing and critiquing the language of science helps illuminate the foundational assumptions, methodologies, and implications of scientific activity.


You are stating that ‘language’ plays a central role for science as well as for a philosopher of science. Thus, is it correct to state that ‘Philosophy of Science’ is talking ‘about’ Science?


Yes, that’s a concise and accurate way to put it. The “Philosophy of Science” talks ‘about’ science. It reflects on, analyzes, and critiques the foundational, methodological, epistemological, and sometimes ethical aspects of scientific activity. Philosophers of science are interested in understanding what scientists do, how they do it, why they do it in that way, and what it means to produce scientific knowledge.

In other words, while scientists are primarily concerned with investigating and understanding the natural world (or human societies, in the case of social sciences), philosophers of science are more concerned with investigating and understanding the processes, methodologies, and frameworks that scientists use in their investigations.

So, “Philosophy of Science” stands in a meta-relation to science. Just as meta-ethics in philosophy is concerned with the nature of ethical statements rather than the ethical statements themselves, the philosophy of science is concerned with the nature, methodology, and implications of science rather than the specific scientific facts or theories.


You state that “Philosophy of Science” stands in a meta-relation to science. Is this compatible with the statement, that a person A (a philosopher of science) is talking about another person B (a scientist) by looking to the behavior of the other person?


The analogy of a person A (a philosopher of science) talking about another person B (a scientist) by observing B’s behavior can capture some aspects of the meta-relation between philosophy of science and science, but with certain caveats.

In this analogy:

  • Observation of Behavior: The philosopher of science (Person A) observes, reflects upon, and tries to understand the methodologies, assumptions, and structures that underpin the work of the scientist (Person B). This can be likened to observing the “behavior” of the scientist.
  • Interpretation and Analysis: Just as someone might analyze and interpret a person’s behavior to understand underlying motivations, thoughts, or cultural influences, the philosopher of science analyzes the “behavior” of science to uncover its foundational assumptions, methodologies, and epistemological commitments.

However, the analogy has its limitations:

  1. Beyond Individual Behavior: While the analogy implies observing the actions of an individual (Person B), the philosophy of science often deals with broader structures, paradigms, and historical shifts in scientific thinking, not just the practices of individual scientists.
  2. Engagement with Content: Philosophers of science don’t just observe the “behavior” of scientists. They often engage deeply with the actual content of scientific theories and models, trying to understand and interpret them. It’s not merely an external observation.
  3. Reciprocal Influence: In some cases, the philosophy of science has influenced the way scientists think about their own work. For instance, philosophical discussions about the nature of time, space, or causality might impact theoretical physics. This reciprocal influence doesn’t fit neatly into the analogy.

In summary, while the analogy can serve as a starting point to grasp the meta-relation, the relationship between the philosophy of science and science is multifaceted and cannot be entirely captured by comparing it to one person observing another’s behavior.


You state that the relationship between the philosophy of science and science is multifaceted and cannot be entirely captured by comparing it to one person observing another’s behavior. From your statement I induce (1) that science is an activity realized by many individuals in parallel, a ‘population of scientists’. (2) This population of scientists is following certain commonly accepted ‘rules for science’. (3) The ‘written rules’ for science have some ‘meaning’ which is located ‘inside the scientists’. (4) It is this ‘shared meaning’ inside the scientists which enables a ‘behavior’ which commonly is qualified as ‘science work’. Is this correct?


Your inferences capture important aspects of the relationship between science, scientists, and the philosophy of science, with some qualifications. Let’s review your points:

  1. Population of Scientists: Yes, science is indeed an activity carried out by many individuals, often collaboratively. These individuals collectively form the scientific community or a “population of scientists.”
  2. Rules for Science: Scientists, especially within specific disciplines, generally adhere to a set of methodologies, standards, and norms—often referred to as the “scientific method” or the ethos of science. These can include practices such as peer review, empirical testing, reproducibility, and falsifiability, among others. These “rules” may vary across disciplines but serve to maintain the rigor and credibility of scientific research.
  3. Meaning Inside Scientists: It’s a nuanced point. While there are “written rules” or guidelines for conducting science, the internalization of these rules goes beyond just written directives. Scientists undergo extensive training and socialization, learning not just the overt rules but also the tacit knowledge, values, and norms of their discipline. This internalized understanding guides their scientific practice.
  4. Shared Meaning & Behavior: The shared understanding and internalization of the ethos of science by the scientific community do indeed result in behaviors characteristic of scientific inquiry. This shared foundation helps in ensuring consistency, rigor, and mutual intelligibility in scientific work.

However, it’s also important to note:

  • The evolution of scientific norms and practices: The “rules” and norms of science aren’t static; they evolve over time as science itself progresses and as the broader societal context changes.
  • Diversity within science: Different scientific disciplines might have variations in their methodologies, standards, and practices. For instance, methods in particle physics can differ considerably from ethnographic methods in cultural anthropology, though both are scientific.
  • Role of philosophy of science: The philosophy of science doesn’t just observe these norms and practices from a distance. It critically examines, interrogates, and sometimes influences them. Philosophers of science may raise questions about the nature of evidence, the interpretation of statistical methods, the structure of scientific theories, or the ethical implications of certain scientific practices.

In summary, your inferences offer a structured way of understanding the practice of science and its underlying shared ethos, with the aforementioned nuances and complexities taken into account.


You are stating that philosophy of science doesn’t just observe the norms and practices of the population of scientists from a distance. It also critically examines, interrogates, and sometimes influences them. Would you agree with the following induced statements: (1) All members of the population of scientists are sharing a commonly accepted meaning associated with the written rules of science. (2) To understand the behavior of scientists one has to learn this shared meaning inside a scientist. (3) A philosopher of science talking ‘about’ the behavior of a population of scientists has to know this shared meaning inside the scientists to be able to describe the scientific behavior. (4) Before ‘reflecting about’ the scientific behavior one has to ‘understand’ it by knowing the shared meanings.


Your inferences touch on core ideas related to the nature of scientific practices and the approach of philosophy of science. Let’s review each of your points:

  1. Shared Meaning Among Scientists: While it’s generally true that there’s a commonly accepted ethos or set of standards within scientific disciplines, it’s essential to remember that “shared meaning” isn’t necessarily uniform. Different scientists, based on their training, specific subfields, cultural backgrounds, and individual experiences, might have variations in their interpretations and applications of these rules. However, there’s indeed a foundational consensus that allows for mutual intelligibility and collaboration within the scientific community.
  2. Understanding the Shared Meaning: To understand the behavior of scientists fully, one would benefit from understanding this shared ethos and the underlying tacit knowledge. This includes both formal methodologies and the more subtle, unwritten norms and values of scientific practice.
  3. Philosopher’s Knowledge of Shared Meaning: Yes, for a philosopher of science to offer a meaningful critique, analysis, or description of scientific practices, they need to be familiar with the shared meanings, methods, and values of scientists. However, philosophers also bring an external perspective that can question assumptions and probe deeper into the foundations of these shared meanings.
  4. Understanding Before Reflecting: In general, understanding is a prerequisite for meaningful reflection. To critically analyze or reflect upon scientific practices, one first needs a foundational understanding of those practices. This is akin to the idea that one must understand a game’s rules and objectives before commenting on strategies or critiquing its design.

However, a few considerations:

  • Dynamism of Science: It’s important to recognize that the shared meanings and practices of science aren’t static. They evolve with new discoveries, technological advancements, and shifts in the broader societal context. Philosophers of science not only reflect on current practices but also on these evolutionary changes.
  • Interdisciplinary Nature: The philosophy of science often interacts with other disciplines, such as sociology of science, history of science, or cognitive science. These interdisciplinary engagements enrich the understanding and reflection on scientific practices.
  • Role of Outsider’s Perspective: While understanding from within is crucial, sometimes an outsider or a more distanced perspective can provide insights that might be less obvious to those deeply embedded within a discipline. Philosophers of science can provide this vantage point, revealing biases, assumptions, or unexplored avenues.

In summary, your inferences capture significant aspects of how philosophy of science engages with the practices and underlying meanings of science. The relationship is one of understanding, reflection, critique, and, sometimes, influence.


[1] There was a conference at the Technical University of Darmstadt with the title “KI – Text und Geltung. Wie verändern KI-Textgeneratoren wissenschaftliche Diskurse?” (Translated: AI – Text and Validity. How do AI text generators change scientific discourses?). One lecturer, Thomas Arnold, gave a speech entitled “Herausforderungen in der Forschung: Mangelnde Reproduzierbarkeit und Erklärbarkeit” (Translated: Challenges in research: lack of reproducibility and explainability), which will be published by de Gruyter at the end of 2023 or the beginning of 2024.


[3] Gerd Doeben-Henisch, Aug 30, 2023: READING A BOOK, LEARNING, BEING SCIENTIFIC, WIKIPEDIA. A dialogue with chatGPT4 bringing you ‘back to earth’:

READING A BOOK, LEARNING, BEING SCIENTIFIC, WIKIPEDIA. A dialogue with chatGPT4 bringing you ‘back to earth’

Author: Gerd Doeben-Henisch

Aug 30, 2023 – Aug 30, 2023



This text belongs to a series of experiments with chatGPT4. While there meanwhile exist some experiments demonstrating the low quality of chatGPT4 in many qualitative tests [1], the author of this text wants to check the ability of the chatGPT4 software to reproduce human-like behavior associated with the production and reception of texts at a higher level. I started this series with a first post here.[2]


In the following series of dialogues with chatGPT4, the software stated at the beginning that it cannot read a book like a human person. Asked about a particular book, “Das Experiment sind wir” by the author Christian Stöcker, the software answered that it doesn’t know the book. I then asked the software whether it can nevertheless ‘learn’ something based on the documents given to it up to the year 2021. The answer was quite revealing: “No, I don’t “learn” in the same way humans do. I don’t actively process new information, form new connections, or change my understanding over time. Instead, my responses are based on patterns in the data I was trained on. My knowledge is static and doesn’t update with new information unless a new version of me is trained by OpenAI with more recent data.” I didn’t give up trying to understand what the software is able to do, if not ‘reading a book’ or ‘learning’. I asked: “What can you extract from your text base related to the question what can happen in the head of people if they read a book?” I got a list of general properties summarized from different sources (without mentioning these sources). I continued by asking the software not about the learning inside itself, but about what it knows about the learning inside human persons reading a book: “How do you describe the process of learning in human persons while reading a book?” The software mentioned some aspects which have to be considered while a human person reads a book. But this gives only a ‘static view’ of the structure active in the process of reading. More interesting is perhaps a ‘dynamic aspect’ of reading a book, which can be circumscribed as ‘learning’: which kinds of change are typical for human persons while reading a book?
The software gives a detailed breakdown of the learning process in human persons while reading, but again these are statements without any backing: no sources, no contexts. In the everyday world, and especially in science, it is necessary that a reader can learn the sources of statements claiming to be serious: are the presented facts merely ‘fakes’, purely wrong, or are there serious people with serious methods of measurement who can provide some certainty that these facts are really ‘true’? The author then asked the software: “Your answer consist of many interesting facts. Is it possible to clarify the sources you are using for these statements?” The answer again was very simple: “I’m glad you found the information interesting. My design is based on information from a wide range of books, articles, and other educational resources up to my last update in September 2021. However, I don’t directly cite individual sources in my responses. Instead, my answers are generated based on patterns in the data I was trained on.” Because it is known that the Wikipedia encyclopedia always provides explicit sources for its texts, the author asked directly: “Do you know the textbase wikipedia?” Clearly, chatGPT4 knows Wikipedia; besides many positive remarks it answered: “The structure of Wikipedia articles, with their citations, can serve as a good starting point for research on a topic. One can follow the cited sources at the bottom of each article to get more detailed or primary information on a subject.” This sounds quite good for Wikipedia. Thus it is interesting how chatGPT4 compares itself to Wikipedia. Question: “Wikipedia has cited sources at the bottom of each article. Does this mean that wikipedia is more trustful as you?” chatGPT4 doesn’t give a clear answer; it summarizes points for Wikipedia and for itself, but leaves the question open.
Thus the author continued by asking for a more ‘general criterion’ which should be valid for both Wikipedia and chatGPT4: “Which criteria do you know for a good scientific description?” As usual, chatGPT4 gives a long list of criteria. The author asks back: “Your overview about criteria for a scientific description includes terms like ‘Replicability’, ‘Citing Sources’, and ‘Transparency’. From the preceding self-description of you it is not easily to see, whether these criteria can also applied to the output of chatGPT4.” The answer chatGPT4 gives renders it nearly unusable for any serious scientific work. Yes, you can use chatGPT4 for some first general information, but without knowing more by yourself, you are really lost. It is surely more fruitful to check Wikipedia directly and use the sources given there for further investigations.

Reading a book

I started spontaneously to talk with chatGPT4 about a book by Christian Stöcker with the title “Das Experiment sind wir”, published in 2020.

Here is the dialogue:


Can you read a book?


I don’t “read” books in the same way humans do. Instead, I was trained on vast amounts of text data, which includes information from books, articles, websites, and other written sources up to my last update in September 2021.

You can ask me about the content, summaries, themes, and other information from many books, especially those that are well-known or that were published before my last training cut-off. If the book was published after September 2021 or is particularly obscure, I might not have information about it.

If you have a particular book or topic in mind, feel free to ask, and I’ll do my best to provide relevant information!


What can you say about the book entitled “Das Experiment sind wir” by the author Christian Stöcker?


As of my last update in September 2021, “Das Experiment sind wir” by Christian Stöcker hasn’t been one of the primary sources I was trained on. However, Christian Stöcker is known as a journalist and professor in the field of digital communication.

If “Das Experiment sind wir” was published or gained significant attention after my last update, I may not have detailed information about its content, themes, or specific insights.

For a detailed overview or analysis of the book, I would recommend reading reviews, summaries, or author interviews, or accessing the book directly. If you have other questions or need information on a related topic, I’m here to help!


OK, chatGPT4 doesn’t know this book. This is OK; even a high-performance system cannot know everything.

But its confession that it cannot read a book as a human person would do is interesting. In this sense chatGPT4 is not a ‘typical reader’ but — as we indeed know — a piece of software, managed by human persons who decide which kinds of texts chatGPT4 has to ‘store’ in its memory and which form the basis for its ‘algorithmic processing’.

But the question remains whether chatGPT4 can at least learn on its own, starting with the text base given to it.


Thus if you have a text base as a point of reference, are you able to learn in some sense by your own?


No, I don’t “learn” in the same way humans do. I don’t actively process new information, form new connections, or change my understanding over time. Instead, my responses are based on patterns in the data I was trained on. My knowledge is static and doesn’t update with new information unless a new version of me is trained by OpenAI with more recent data.

Here’s an analogy: Imagine an extremely comprehensive book. This book can answer a vast array of questions based on its contents, but it can’t write new chapters or update its pages on its own. In a similar manner, I can generate answers based on the data I was trained on, but I can’t independently update that data or “learn” new things.

So, while I can generate responses based on a broad range of existing knowledge, I can’t assimilate or understand new information or events after my last training cut-off in September 2021.


As expected, chatGPT4 cannot learn as we humans can learn. Generating summaries can be of some help, but this cannot replace ‘true learning’.

But let us stay with the topic of ‘reading a book’:


What can you extract from your text base related to the question what can happen in the head of people if they read a book?


Reading a book can have various cognitive, emotional, and psychological effects on individuals. Here are some insights based on literature, cognitive science, and psychological studies up to my last training cut-off:

  1. Cognitive Stimulation: Engaging with the written word can stimulate the brain. Studies have suggested that mental stimulation can slow the progress (or possibly even prevent) Alzheimer’s and dementia, as keeping your brain active and engaged prevents it from losing power.
  2. Mental Escape: For many, reading a book can act as an escape from the real world. Immersing oneself in a different world, whether it’s a fantasy realm or an historical setting, provides a break from everyday stressors.
  3. Increased Knowledge: Naturally, books inform and educate. This builds a knowledge base that can be applied in real-life situations.
  4. Vocabulary Expansion: Reading regularly can expand one’s vocabulary more than direct teaching or verbal communications.
  5. Improved Focus and Concentration: Unlike skimming short web articles or watching quick videos, reading a book requires a longer attention span and can help improve one’s focus and concentration.
  6. Stress Reduction: A well-written novel or non-fiction book can transport the reader to other realms, distracting them and helping them to de-stress more effectively than other methods.
  7. Improved Analytical Skills: Engaging with complex narratives and plots can help readers improve their analytical skills as they ponder plot developments and character motivations.
  8. Empathy Development: Reading fiction, in particular, allows readers to live vicariously through characters, experiencing a vast range of emotions and situations. This can help in developing a deeper understanding and empathy for others.
  9. Mental Tranquility: Reading subjects like poetry or certain religious texts can provide tranquility, help lower blood pressure, and reduce stress.
  10. Memory Improvement: Reading involves remembering various characters, their backgrounds, histories, and nuances. This can create new brain pathways and strengthen existing ones.
  11. Entertainment: Of course, one of the primary reasons people read is for entertainment. A good story can be incredibly engaging, offering hours of enjoyment.

It’s worth noting that the impact of reading can vary based on the type of book (e.g., fiction vs. non-fiction), the style of writing, the personal experiences and preferences of the reader, and the context in which they’re reading.


chatGPT4 mentions some aspects which have to be considered while a human person reads a book. This gives a ‘static view’ of the structures active in the process of reading.

More interesting is perhaps a ‘dynamic aspect’ of reading a book, which can be circumscribed as ‘learning’: which kinds of change are typical for human persons while reading a book?


How do you describe the process of learning in human persons while reading a book?


Reading a book is a complex cognitive process that involves multiple levels of mental engagement and comprehension. When a person reads, several processes related to learning occur simultaneously. Here’s a detailed breakdown of the learning process while reading:

  1. Perceptual Processing: This is the basic recognition of words and letters. As your eyes scan a page, the visual cortex interprets the shapes of letters and groups of letters as words. This process becomes automatic with practice and training.
  2. Decoding: This involves understanding the phonetic value of words, which is especially crucial for unfamiliar terms or names. This skill is emphasized heavily in early reading education when children sound out words.
  3. Semantic Processing: Once the words are recognized, the brain works to understand their meaning. This involves linking the words to their associated concepts in the brain.
  4. Syntax and Grammar Processing: The brain understands the structure of sentences and how words relate to each other, determining the relationships among verbs, nouns, adjectives, etc.
  5. Working Memory: This is essential for maintaining an understanding of a sentence or paragraph. As you read, you keep the beginning of a sentence in mind as you progress to the end, allowing you to grasp the entire concept.
  6. Building Connections: Learning often involves connecting new information with prior knowledge. When reading, you constantly relate what you are consuming to what you already know, which can deepen comprehension.
  7. Critical Thinking: Beyond just understanding the content, readers often evaluate, analyze, and critique what they’re reading. They may ask themselves if they agree with the author, if the information is credible, or how it relates to other things they know.
  8. Visualization: Good readers often create mental images of what they are reading. This is especially true for narrative texts where imagining scenes, characters, or events can enhance comprehension.
  9. Emotional Engagement: The content in books can evoke emotions. Whether it’s a novel that tugs at the heartstrings or a non-fiction book that arouses anger or passion about a topic, emotional engagement can increase retention and internalization of content.
  10. Metacognition: This involves thinking about one’s own thinking. Effective readers often monitor their understanding as they read, recognizing when they’re confused and employing strategies to improve comprehension.
  11. Retention & Long-Term Memory: Once the book is done, the process of shifting information from working memory to long-term memory begins. Repetition, discussion, and application of knowledge can aid this transfer.
  12. Application: The ultimate goal of learning is often to apply knowledge. After reading, a person may take actions, make decisions, or change beliefs based on what they’ve learned.

It’s important to note that everyone’s reading process might be a bit different. Some people naturally visualize more, while others might focus on critical analysis. Additionally, background knowledge, context, and the reader’s purpose (e.g., reading for pleasure versus studying for an exam) can influence how these processes unfold.


chatGPT4 presents some facts in response to the question which are — as such — really interesting. But in the everyday world, and especially in science, it is necessary that a reader can learn the sources of statements claiming to be serious: are the presented facts merely ‘fakes’, purely wrong, or are there serious people with serious methods of measurement who can provide some certainty that these facts are really ‘true’?

Let us ask chatGPT4:


Your answer consist of many interesting facts. Is it possible to clarify the sources you are using for these statements?


I’m glad you found the information interesting. My design is based on information from a wide range of books, articles, and other educational resources up to my last update in September 2021. However, I don’t directly cite individual sources in my responses. Instead, my answers are generated based on patterns in the data I was trained on.

The steps and processes I outlined for learning while reading a book are based on widely accepted theories and research in the fields of cognitive psychology, education, and linguistics. If you’re interested in diving deeper into these topics, you might consider exploring:

  1. Cognitive psychology textbooks: These delve into the processes of perception, working memory, long-term memory, and more.
  2. Reading and literacy research: Scholars in this field have conducted in-depth studies on the processes of decoding, comprehension, and the strategies good readers use.
  3. Educational psychology: This field explores topics related to learning, including metacognition, critical thinking, and the application of knowledge.

If you’d like specific book or article recommendations, or more in-depth explanations on any of these topics, I can certainly provide them.


Do you know the textbase wikipedia?


Yes, I’m familiar with Wikipedia. Wikipedia is a free online encyclopedia that is collaboratively edited by volunteers from around the world. It’s one of the most visited websites globally and provides information on a vast array of topics.

Wikipedia is known for its “wiki” nature, which means that almost any article can be edited by any user with internet access. This collaborative approach has made Wikipedia a rich and ever-evolving source of information. However, because of this open-editing system, it’s always a good idea to verify critical information from Wikipedia with other reliable sources.

The structure of Wikipedia articles, with their citations, can serve as a good starting point for research on a topic. One can follow the cited sources at the bottom of each article to get more detailed or primary information on a subject.

Would you like more information or have a specific question related to Wikipedia?


chatGPT4 knows Wikipedia and gives a quite fair description of it (probably of the English version). Of interest is how chatGPT4 describes the characteristic differences between itself and Wikipedia.


Wikipedia has cited sources at the bottom of each article. Does this mean that wikipedia is more trustful as you?


Wikipedia and I serve different purposes, and the trustworthiness of information from either source depends on the context and specific use-case.

Advantages of Wikipedia:

  1. Citations: Wikipedia articles often have extensive lists of references, which allow readers to trace back the origin of specific pieces of information. This provides a level of transparency.
  2. Collaboration: Wikipedia is collectively edited by a vast number of individuals, which can mean many eyes reviewing and refining content.
  3. Current Events: Wikipedia is often updated rapidly in the face of current events, so for very recent information, it can sometimes be more up-to-date than my last training data.

Advantages of ChatGPT (me):

  1. Broad Training Data: I’ve been trained on diverse datasets, including books, articles, and other content. This allows me to provide a wide range of information.
  2. Consistent and Neutral: While Wikipedia aims for a neutral point of view, individual articles might sometimes reflect biases of the contributors. My design aims to minimize bias and provide consistent responses.
  3. Conversational Interface: Users can have a back-and-forth dialogue with me, making it easier to ask follow-up questions or dive deeper into topics.


Limitations:

  1. Vandalism: Wikipedia can be edited by anyone, which sometimes leads to misinformation due to vandalism. However, popular pages often have vigilant editors and protective measures in place.
  2. Knowledge Cutoff: My last training data is from September 2021, so events or advancements after that date are not within my current knowledge.

Ultimately, neither Wikipedia nor I should be seen as infallible. Both can serve as starting points for research, but critical information should be cross-referenced with primary sources or authoritative references. It’s always a good practice to be discerning and critical when interpreting information from any source.


The characterization of Wikipedia is still quite fair. But in some sense chatGPT4 ‘downplays’ its lack of real sources, which are the basis of any serious argument.


Which criteria do you know for a good scientific description?


Scientific descriptions are meant to be precise, clear, and replicable. To ensure a high-quality scientific description, several criteria should be met:

  1. Accuracy: The description should correctly represent observations, data, or results without exaggeration or misrepresentation.
  2. Clarity: The language used should be clear and free from ambiguity. It should be easily understood by those with appropriate background knowledge in the field.
  3. Completeness: All relevant details should be included, especially those that would be necessary for replication or to understand the context of the study.
  4. Consistency: Descriptions should be consistent in terms of units of measurement, terminology, and methodology.
  5. Objectivity: The description should be free from personal bias. Instead of using subjective terms, rely on objective measurements and observations.
  6. Replicability: Enough detail should be provided so that another scientist with the same background knowledge and resources can replicate the study or experiment.
  7. Relevance: Only pertinent details should be included. While completeness is important, overloading a description with irrelevant information can detract from its usefulness.
  8. Use of Standardized Terminology: Using accepted and standardized terms in the field ensures that the description is understood universally among experts.
  9. Citing Sources: Any claim, idea, or method that isn’t original to the current study should be appropriately cited, giving credit to the original sources and allowing readers to delve deeper into the background if they wish.
  10. Avoiding Jargon (when possible): While specialized terms can be necessary, over-reliance on jargon can alienate those slightly outside the immediate field. When jargon is used, it should be defined or explained.
  11. Providing Context: A good scientific description often includes background information or a literature review to give context to the study’s significance and position within the broader field.
  12. Transparency: Limitations, potential sources of error, and uncertainties should be clearly stated. This honesty allows readers to gauge the reliability of the findings and understand any potential caveats.

A good scientific description is essentially a balance between precision and clarity. It should be detailed enough to be of value to experts in the field while still being accessible to those with a general background in the discipline.


The overview of criteria for a scientific description includes terms like ‘Replicability’, ‘Citing Sources’, and ‘Transparency’. From the preceding self-description of chatGPT4 it is not easy to see whether these criteria can also be applied to the output of chatGPT4.

Let’s transform this into a question:


Your overview about criteria for a scientific description includes terms like ‘Replicability’, ‘Citing Sources’, and ‘Transparency’. From the preceding self-description of you it is not easily to see, whether these criteria can also applied to the output of chatGPT4.


That’s an insightful observation. Let’s discuss the applicability of some of these criteria to ChatGPT (like me) and its outputs:

  1. Replicability: While the underlying research and development process of models like mine can be replicated in theory, the specific outputs for each query might vary slightly, even with similar inputs. Therefore, individual responses from me aren’t precisely replicable.
  2. Citing Sources: I don’t provide direct citations for specific statements in my responses. My training data comes from diverse sources, and I generate responses based on patterns in that data. Thus, I can’t directly attribute my statements to specific sources like a scholarly article or a book. This is indeed a limitation if you’re considering the output as a “scientific description.”
  3. Transparency: OpenAI, the organization behind me, has released details about the architecture, training methodology, and intentions behind models like mine. However, the specifics of the training data or exact processes that lead to a particular response aren’t transparently available in each interaction.

In light of this:

  • For scholarly or academic purposes, my responses should be considered as starting points or general overviews. Critical information should always be cross-referenced with authoritative and primary sources.
  • For general inquiries or casual information, my design is to provide accurate and coherent answers based on the vast amount of data I’ve been trained on.

It’s crucial to recognize the intended use and context when considering the applicability of scientific criteria to any tool or source, including ChatGPT.


[1] There was a conference at the Technical University of Darmstadt with the title “KI – Text und Geltung. Wie verändern KI-Textgeneratoren wissenschaftliche Diskurse?” (Translated: AI – Text and Validity. How do AI text generators change scientific discourses?). One lecturer, Thomas Arnold, gave a speech entitled “Herausforderungen in der Forschung: Mangelnde Reproduzierbarkeit und Erklärbarkeit” (Translated: Challenges in research: lack of reproducibility and explainability), which will be published by de Gruyter at the end of 2023 or the beginning of 2024.


State Change from Non-Writing to Writing. Working with chatGPT4 in parallel. An Exploration

Author: Gerd Doeben-Henisch


Aug 28, 2023 – Aug 28, 2023 (18:10h CET)

CONTEXT: Man and Machine. One head against Billions of documents …

The author has started an experiment, writing two tracks in parallel: the first track is his own writing without chatGPT4, and the second track is a chat with chatGPT4. The motivation behind this experiment is to get a better estimate of how the writing of chatGPT4 differs. While chatGPT4 works only with statistical patterns on the ‘surface’ of language communications, the author exploits his individual ‘meaning knowledge’, built up in his inner states over many years of individual learning as part of certain ‘cultures’ and ‘everyday situations’. Clearly, the individual author’s knowledge of available texts is extremely small compared with the knowledge base of chatGPT4. Thus it is an interesting question whether an individual’s knowledge is generally too poor compared to the text generation capacity of the chatGPT4 software.

While the premises of the original article cannot be easily transferred to the dialog with chatGPT4, one can still get some sense of where chatGPT4’s strengths lie on the one hand, and its weaknesses on the other. For ‘authentic’ writing a replacement by chatGPT4 is not an option, but a ‘cooperation’ between ‘authentic’ writing and chatGPT seems possible and in some contexts certainly even fruitful.

What the Author did Write:

CONTINUOUS REBIRTH – Now. Silence does not help …Exploration

(This text is a translation from a German source, using the deepL software for nearly 90-95% of the text without the need to make any changes afterwards. [*])


As written in the previous blog entry, the last blog entry with the topic “Homo Sapiens: empirical and sustainable-empirical theories, emotions, and machines. A Sketch” [1] served as a draft speech for the conference “AI – Text and Validity. How do AI text generators change scientific discourse?” [2]. Due to the tight time frame (20 min for the talk), this text was very condensed. However, despite its brevity, the condensed text contains many references to fundamental issues. One of them will be briefly taken up here.

Localization in the text of the lecture

A first hint is found in the section headed “FUTURE AND EMOTIONS” and then in the section “SUSTAINABLE EMPIRICAL THEORY”.

In these sections of the text, attention is drawn to the fact that every explicit worldview is preceded by a phase of ’emergence’ of that worldview, in which there is a characteristic ‘transition’ between the ‘result’ in the form of a possible text and a preceding state in which that text is not yet ‘there’. Of course, in this ‘pre-text phase’ – according to the assumptions in the section “MEANING” – there are many facts of knowledge and emotions in the brain of the ‘author’, all of which are ‘not conscious’, but from which the possible effects in the form of a text emanate. How exactly ‘impact’ is supposed to emanate from this ‘pre-conscious’ knowledge and emotions is largely unclear.

We know from everyday life that external events can trigger ‘perceptions’, which in turn can ‘stimulate’ a wide variety of reactions as ‘triggers’. If we disregard such reactions, which are ‘pre-trained’ in our brain due to frequent practice and which are then almost always ‘automatically’ evoked, it is as a rule hardly predictable whether and how we react.

From non-action to action

This potential transition from non-action to action is omnipresent in everyday life. As such, we normally do not notice it. In special situations, however, where we are explicitly challenged to make decisions, we then suddenly find ourselves in a quasi ‘undefined state’: the situation appears to us as if we are supposed to decide. Should we do anything at all (eat something or not)? Which option is more important (go to the planned meeting or rather finish a certain task)? We want to implement a plan, but which of the many options should we ‘choose’ (… there are at least three alternatives; all have pros and cons)? Choosing one option as highly favored would involve significant changes in one’s life situation; do I really want this? Would it be worth it? The nature of the challenges is as varied as everyday life in many different areas.

What is important, more important?

As soon as one or more than one option appears before the ‘mind’s eye’, one involuntarily asks oneself how ‘seriously’ one should take the options? Are there arguments for or against? Are the reasons ‘credible’? What ‘risks’ are associated with them? What can I expect as ‘positive added value’? What changes for my ‘personal situation’ are associated with it? What does this mean for my ‘environment’? …

What do I do if the ‘recognizable’ (‘rational’) options do not provide a clear result, it ‘contradicts’ my ‘feeling for life’, ‘me’, and I ‘spontaneously’ reject it, spontaneously ‘don’t want’ it, it triggers various ‘strong feelings’ that may ‘overwhelm’ me (anger, fear, disappointment, sadness, …)? …

The transition from non-action to action can ‘activate’ a multitude of ‘rational’ and ’emotional’ aspects, which can – and mostly do – influence each other, to the point that one experiences a ‘helplessness’ entangled in a ‘tangle’ of considerations and emotions: one has the feeling of being ‘stuck’, a ‘clear solution’ seems distant. …

If one can ‘wait’ (hours, days, weeks, …) and/or one has the opportunity to talk about it with other people, then this can often – not always – clarify the situation to the point that one thinks one knows what one ‘wants now’. But such ‘clarifications’ usually do not get the challenge of a ‘decision’ out of the way. Besides, the ‘escape’ into ‘repressing’ the situation is always an option; sometimes it helps; if it doesn’t help, then the ‘everyday’ situation becomes worse through the ‘repressing’, quite apart from the fact that ‘unsolved problems’ do not ‘disappear’, but live on ‘inside’ and can unfold ‘special effects’ there in many ways, which are very ‘destructive’.


The variety of possible – emotional as well as rational – aspects of the transition from non-action to action are fascinating, but possibly even more fascinating is the basic process itself: at any given moment, every human being lives in a certain everyday situation with a variety of experiences already made, a mixture of ‘rational explanations’ and ’emotional states’. And, depending on the nature of everyday life, one can ‘drift along’ or one has to perceive daily ‘strenuous activities’ in order to maintain the situation of everyday life; in other cases, one has to constantly ‘fight’ in real terms for the continuance in everyday life. One experiences here that the ‘conditions of everyday life’ are largely given to the individual and one can only change them to a limited extent and with a corresponding ‘effort’.

External events (fire, water, violence, war, drought, accident, life forms of a community, workplace, company policy, diseases, aging, …) can of course strongly influence one’s personal everyday life ‘against one’s will’, but in the end a fundamental autonomy of decision-making remains in the individual: one can decide differently in the next moment than before, even without it being completely clear to oneself at that moment ‘why’ one does so. Whether one calls this ‘not being able to explain’ a ‘good feeling’ or ‘intuition’ or …, all these words only circumscribe a ‘not knowing’ about the processes ‘in us’ that take place and ‘motivate’ us. And even if one has a ‘linguistic explanation’ available, this does not have to mean that a ‘clear insight’ is also connected with these words. The proportion of ‘preconscious/unconscious’ ‘inner processes’ is high, and ultimately these are not explainable ‘out of themselves’: they take place, move us, and we ‘implement’ them.

Yes, in every moment we can behave ‘as usual’ and thereby ‘preserve’ the world (or thereby prevent it from becoming different?). But we can also sometimes ‘deviate’ from the usual (and thereby ‘destroy’ something or make something ‘new’ possible?). All in all, the ‘space of the inside’ remains largely ‘unilluminated’. We hardly, if at all, understand why we do this; it requires trust in ourselves and the situation. Is the difference to the ‘original birth’ only gradual? Are we in a continuing process of birth that constantly enables the construction of something ‘new’, which can support ‘life’ or also ‘weaken’ it?


[Exploration] The format ‘exploration’ is newly introduced in the blog from now on, since there is often the situation that the author, at the transition point from ‘non-action to action’, from ‘non-writing to writing’, tries out which emotional and rational ‘reactions’ from the ‘inside’ are ‘spontaneously’ available for a certain question/idea. This is to be distinguished from a ‘systematic elaboration’ of a thought, although the influence of the ‘inside’ is of course never completely excluded. The ‘difference’ between ‘exploratory’ and ‘systematic’ writing has to do with the fact that systematic writing tries to bring the ‘rational dimension’ of the thought to the fore.

[*] German source: ,

[deepL translator]:



Dealing with the topic using chatGPT4

Author: What happens inside a person when it changes from the state of non-writing to the state of writing? Which kinds of factors are involved in this change?


chatGPT4: The shift from a state of non-writing to writing is a complex interplay of cognitive, emotional, and physical processes. Several factors contribute to this transformation:

  1. Cognitive Factors:
    • Intent and Purpose: The most immediate factor that leads to writing is the intent. The individual decides there’s a need to communicate or record something. This purpose could be to share knowledge, express emotions, or any other myriad reasons.
    • Idea Formation: The brain begins to formulate ideas, organize thoughts, and consider the message it wants to convey.
    • Recall: The brain accesses stored knowledge about language, grammar, and the topic at hand.
    • Decision Making: Choices are made about wording, tone, and structure.
  2. Emotional Factors:
    • Motivation: An internal or external motivation prompts the writer. This could be intrinsic (like the joy of writing) or extrinsic (like a job requirement).
    • Confidence: A person’s self-efficacy regarding writing can influence their willingness and approach. A confident writer might start without hesitation, while someone less confident might procrastinate or struggle with where to begin.
    • Mood and Emotion: A person’s emotional state can shape the content and tone of their writing. For instance, someone in a cheerful mood might use more positive and lively language.
  3. Physiological and Neuromotor Factors:
    • Fine Motor Skills: The act of writing requires coordination between the brain and muscles, whether it’s holding a pen or typing on a keyboard.
    • Visual Processing: Reading what’s written, planning spatially on the page or screen, and adjusting based on visual feedback are integral to the writing process.
    • Brain Activation: Certain areas of the brain, like Broca’s area and Wernicke’s area, are particularly active during language production.
  4. Environmental Factors:
    • Physical Environment: The availability of tools (pen, paper, computer), a conducive atmosphere, and a comfortable workspace can influence the transition to writing.
    • Cultural and Social Environment: The societal context, intended audience, and cultural norms can shape what and how a person writes.
  5. Educational and Experiential Factors:
    • Training and Practice: A person’s prior experience with writing, including their education, can influence their ease and style of writing.
    • Feedback and Iteration: Previous feedback on one’s writing can affect future attempts. Someone who has been praised might approach writing with more enthusiasm, while critical feedback might make another more cautious or reflective.
  6. External Stimuli:
    • Inspiration: Often, external stimuli—like reading a book, a conversation, a personal experience, or even a sunset—can spark the urge to write.
    • Prompts: In some situations, a specific prompt or question might initiate the writing process.

Understanding these factors and their interplay can be beneficial for educators, writers, and anyone looking to refine or understand their writing process.


Author: Can you describe more the subjective experience of a person who changes from the state of non-writing to writing?


chatGPT4: Certainly! The transition from non-writing to writing, when viewed from a subjective lens, involves a cascade of internal experiences. Here’s a more in-depth dive into the intimate, often ineffable stages a writer might traverse:

  1. Initial Restlessness: Before the act of writing commences, there’s often a sense of unease or urgency. It’s akin to a pressure building up, a compulsion to get something out, or a feeling that there’s something “there” that needs to be expressed.
  2. Incubation and Percolation: Many writers describe a period where ideas are “brewing” or “stewing” inside them. They might not be actively thinking about these ideas, but somewhere in the backdrop, the mind is processing and refining them. This can feel like daydreaming or being lost in thought.
  3. Flash of Insight: This is the ‘Eureka!’ moment when clarity emerges. It’s the point when the idea crystallizes, and the writer feels an impetus to put pen to paper. It can be a sudden realization or a slow dawning.
  4. Overcoming the Inertia: This is the mental “deep breath” before diving in. There’s a brief struggle, often described as “writer’s block” or resistance, where the weight of the blank page feels daunting.
  5. Entering the Flow: As words start pouring out, many writers describe a state known as “flow” – a term coined by Mihaly Csikszentmihalyi. In this state, writers lose track of time, their surroundings, and sometimes even themselves. They feel as if they are merely a conduit for the words, with the writing almost writing itself.
  6. Bouts of Doubt and Reassurance: Especially in longer writing stints, waves of self-doubt might wash over the writer. Questions like “Is this good enough?” or “Does this make sense?” arise. These periods might be interspersed with moments of confidence and reassurance, where the writer feels they’re on the right track.
  7. Physical Sensations: There might be heightened awareness of the tactile sensation of the pen moving on paper or fingers dancing on the keyboard. Some writers describe a kind of rhythm they get into, almost like a meditative state.
  8. Emotional Catharsis: Particularly for personal or emotive subjects, the act of writing can be cathartic. It’s a release, a way to process feelings, and often brings relief or deeper understanding.
  9. Completion and Exhaustion: As the writing draws to a close, there’s a sense of accomplishment, but also, often, a feeling of emptiness or exhaustion. The mental and emotional energy expended in writing can leave one drained.
  10. Reflection and Revision: Once the initial draft is out, there’s a period of reflection. Reading one’s own words can be an enlightening experience. This phase might involve editing, but it’s also a moment of self-confrontation, seeing one’s thoughts and feelings laid out bare.

The subjective experience of writing is deeply personal and can vary greatly from person to person. Some find it therapeutic, others see it as work, and for many, it’s a passionate craft. Regardless of the perspective, the journey from non-writing to writing is an intimate dance between the conscious and subconscious, emotion and reason, art and discipline.

End of this first small experiment.

Homo Sapiens: empirical and sustained-empirical theories, emotions, and machines. A sketch

Author: Gerd Doeben-Henisch


Aug 24, 2023 — Aug 29, 2023 (10:48h CET)

Attention: This text has been translated from a German source by using the software deepL for about 97–99% of the text! The diagrams of the German version have been left out.


This text represents the outline of a talk given at the conference “AI – Text and Validity. How do AI text generators change scientific discourse?” (August 25/26, 2023, TU Darmstadt). [1] A publication of all lectures is planned by the publisher Walter de Gruyter by the end of 2023/beginning of 2024. This publication will be announced here then.

Start of the Lecture

Dear Auditorium,

This conference entitled “AI – Text and Validity. How do AI text generators change scientific discourses?” is centrally devoted to scientific discourses and the possible influence of AI text generators on these. However, the hot core ultimately remains the phenomenon of text itself, its validity.

In this conference many different views are presented that are possible on this topic.


My contribution to the topic tries to define the role of the so-called AI text generators by embedding the properties of ‘AI text generators’ in a ‘structural conceptual framework’ within a ‘transdisciplinary view’. This helps to highlight the specifics of scientific discourses. It can then further yield better ‘criteria for an extended assessment’ of AI text generators in their role for scientific discourses.

An additional aspect is the question of the structure of ‘collective intelligence’ using humans as an example, and how this can possibly unite with an ‘artificial intelligence’ in the context of scientific discourses.

‘Transdisciplinary’ in this context means to span a ‘meta-level’ from which it should be possible to describe today’s ‘diversity of text productions’ in a way that is expressive enough to distinguish ‘AI-based’ text production from ‘human’ text production.


The formulation ‘scientific discourse’ is a special case of the more general concept ‘human text generation’.

This change of perspective is meta-theoretically necessary, since at first sight it is not the ‘text as such’ that decides about ‘validity and non-validity’, but the ‘actors’ who ‘produce and understand texts’. And with the occurrence of ‘different kinds of actors’ – here ‘humans’, there ‘machines’ – one cannot avoid addressing exactly those differences – if there are any – that play a weighty role in the ‘validity of texts’.


With the distinction between two different kinds of actors – here ‘humans’, there ‘machines’ – a first ‘fundamental asymmetry’ immediately strikes the eye: so-called ‘AI text generators’ are entities that have been ‘invented’ and ‘built’ by humans; it is furthermore humans who ‘use’ them, and the essential material used by so-called AI generators is again ‘texts’ that are considered a ‘human cultural property’.

In the case of so-called ‘AI-text-generators’, we shall first state only this much, that we are dealing with ‘machines’, which have ‘input’ and ‘output’, plus a minimal ‘learning ability’, and whose input and output can process ‘text-like objects’.


On the meta-level, then, we are assumed to have, on the one hand, such actors which are minimally ‘text-capable machines’ – completely human products – and, on the other hand, actors we call ‘humans’. Humans, as a ‘homo-sapiens population’, belong to the set of ‘biological systems’, while ‘text-capable machines’ belong to the set of ‘non-biological systems’.


The transformation of the term ‘AI text generator’ into the term ‘text-capable machine’ undertaken here is intended to additionally illustrate that the widespread use of the term ‘AI’ for ‘artificial intelligence’ is rather misleading. So far, there exists no general concept of ‘intelligence’ in any scientific discipline that can be applied and accepted beyond individual disciplines. There is no real justification for the almost inflationary use of the term AI today other than that the term has been so drained of meaning that it can be used anytime, anywhere, without saying anything wrong. Something that has no meaning can be neither ‘true’ nor ‘false’.


If now the homo-sapiens population is identified as the original actor for ‘text generation’ and ‘text comprehension’, it shall now first be examined which are ‘those special characteristics’ that enable a homo-sapiens population to generate and comprehend texts and to ‘use them successfully in the everyday life process’.


A connecting point for the investigation of the special characteristics of a homo-sapiens text generation and a text understanding is the term ‘validity’, which occurs in the conference topic.

In the primary arena of biological life, in everyday processes, in everyday life, the ‘validity’ of a text has to do with ‘being correct’, being ‘applicable’. If a text is not planned from the beginning with a ‘fictional character’, but with a ‘reference to everyday events’, which everyone can ‘check’ in the context of his ‘perception of the world’, then ‘validity in everyday life’ has to do with the fact that the ‘correctness of a text’ can be checked. If the ‘statement of a text’ is ‘applicable’ in everyday life, if it is ‘correct’, then one also says that this statement is ‘valid’, one grants it ‘validity’, one also calls it ‘true’. Against this background, one might be inclined to continue and say: ‘if’ the statement of a text ‘does not apply’, then it has ‘no validity’; simplified to the formulation that the statement is ‘not true’ or simply ‘false’.

In ‘real everyday life’, however, the world is rarely ‘black’ and ‘white’: it is not uncommon that we are confronted with texts to which we are inclined to ascribe ‘a possible validity’ because of their ‘learned meaning’, although it may not be at all clear whether there is – or will be – a situation in everyday life in which the statement of the text actually applies. In such a case, the validity would then be ‘indeterminate’; the statement would be ‘neither true nor false’.


One can recognize a certain asymmetry here: The ‘applicability’ of a statement, its actual validity, is comparatively clear. The ‘not being applicable’, i.e. a ‘merely possible’ validity, on the other hand, is difficult to decide.

With this phenomenon of the ‘current non-decidability’ of a statement we touch both the problem of the ‘meaning’ of a statement — how far is at all clear what is meant? — as well as the problem of the ‘unfinishedness of our everyday life’, better known as ‘future’: whether a ‘current present’ continues as such, whether exactly like this, or whether completely different, depends on how we understand and estimate ‘future’ in general; what some take for granted as a possible future, can be simply ‘nonsense’ for others.


This tension between ‘currently decidable’ and ‘currently not yet decidable’ additionally clarifies an ‘autonomous’ aspect of the phenomenon of meaning: if a certain knowledge has been formed in the brain and has been made usable as ‘meaning’ for a ‘language system’, then this ‘associated’ meaning gains its own ‘reality’ for the scope of knowledge: it is not the ‘reality beyond the brain’, but the ‘reality of one’s own thinking’, whereby this reality of thinking ‘seen from outside’ has something like ‘being virtual’.

If one wants to talk about this ‘special reality of meaning’ in the context of the ‘whole system’, then one has to resort to far-reaching assumptions in order to be able to install a ‘conceptual framework’ on the meta-level which is able to sufficiently describe the structure and function of meaning. For this, the following components are minimally assumed (‘knowledge’, ‘language’ as well as ‘meaning relation’):

KNOWLEDGE: There is the totality of ‘knowledge’ that ‘builds up’ in the homo-sapiens actor in the course of time in the brain: both due to continuous interactions of the ‘brain’ with the ‘environment of the body’, as well as due to interactions ‘with the body itself’, as well as due to interactions ‘of the brain with itself’.

LANGUAGE: To be distinguished from knowledge is the dynamic system of ‘potential means of expression’, here simplistically called ‘language’, which can unfold over time in interaction with ‘knowledge’.

MEANING RELATIONSHIP: Finally, there is the dynamic ‘meaning relation’, an interaction mechanism that can link any knowledge elements to any language means of expression at any time.

Each of these mentioned components ‘knowledge’, ‘language’ as well as ‘meaning relation’ is extremely complex; no less complex is their interaction.
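To make the three components a little more concrete, here is a deliberately crude Python toy: named knowledge elements, linguistic expressions, and a changeable mapping between them stand in for what the text calls ‘knowledge’, ‘language’ and the ‘meaning relation’. Everything in this sketch (the names, the set-based representation, the `understand` function) is an illustrative assumption; the real components are, as the text says, extremely complex.

```python
# Toy model of the three components named above. All structures are
# illustrative assumptions, far simpler than the inner processes the
# text describes.

from collections import defaultdict

knowledge = {"k_red_round_fruit", "k_barking_animal"}   # inner 'knowledge'
language = {"apple", "Apfel", "dog"}                    # 'potential means of expression'

# The 'meaning relation': a dynamic mapping that can link any expression
# to any knowledge element, and can be changed at any time.
meaning = defaultdict(set)
meaning["apple"].add("k_red_round_fruit")
meaning["Apfel"].add("k_red_round_fruit")   # two expressions, one knowledge element
meaning["dog"].add("k_barking_animal")

def understand(expression: str) -> set:
    """'Understanding' as lookup: retrieve the knowledge linked to an
    expression. An unknown expression yields the empty set: letters
    without meaning."""
    return meaning.get(expression, set())
```

The point of the toy is only the asymmetry it makes visible: an expression without an entry in the meaning relation is still a ‘text-like object’, but it carries no meaning for this system.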


In addition to the phenomenon of meaning, it also became apparent that the decision about applicability depends on an ‘available everyday situation’ in which a current correspondence can be ‘concretely shown’ or not.

If, in addition to a ‘conceivable meaning’ in the mind, we do not currently have any everyday situation that sufficiently corresponds to this meaning in the mind, then there are always two possibilities: We can give the ‘status of a possible future’ to this imagined construct despite the lack of reality reference, or not.

If we decided to assign the status of a possible future to a ‘meaning in the head’, then two requirements usually arise: (i) Can it be made sufficiently plausible, in the light of the available knowledge, that the ‘imagined possible situation’ can be ‘transformed into a new real situation’ in the ‘foreseeable future’, starting from the current real situation? And (ii) are there ‘sustainable reasons’ why one should ‘want and affirm’ this possible future?

The first requirement calls for a powerful ‘science’ that sheds light on whether it can work at all. The second demand goes beyond this and brings the seemingly ‘irrational’ aspect of ’emotionality’ into play under the garb of ‘sustainability’: it is not simply about ‘knowledge as such’, it is also not only about a ‘so-called sustainable knowledge’ that is supposed to contribute to supporting the survival of life on planet Earth — and beyond –, it is rather also about ‘finding something good, affirming something, and then also wanting to decide it’. These last aspects are so far rather located beyond ‘rationality’; they are assigned to the diffuse area of ’emotions’; which is strange, since any form of ‘usual rationality’ is exactly based on these ’emotions’.[2]


In the context of ‘rationality’ and ’emotionality’ just indicated, it is not uninteresting that in the conference topic ‘scientific discourse’ is thematized as a point of reference to clarify the status of text-capable machines.

The question is: to what extent can a ‘scientific discourse’ serve as a reference point for a successful text at all?

For this purpose it can help to be aware of the fact that life on this planet earth takes place at every moment in an inconceivably large amount of ‘everyday situations’, which all take place simultaneously. Each ‘everyday situation’ represents a ‘present’ for the actors. And in the heads of the actors there is an individually different knowledge about how a present ‘can change’ or will change in a possible future.

This ‘knowledge in the heads’ of the actors involved can generally be ‘transformed into texts’ which in different ways ‘linguistically represent’ some of the aspects of everyday life.

The crucial point is that it is not enough for everyone to produce a text ‘for himself’ alone, quite ‘individually’, but that everyone must produce a ‘common text’ together ‘with everyone else’ who is also affected by the everyday situation. A ‘collective’ performance is required.

Nor is it a question of ‘any’ text, but one that is such that it allows for the ‘generation of possible continuations in the future’, that is, what is traditionally expected of a ‘scientific text’.

From the extensive discussion — since the times of Aristotle — of what ‘scientific’ should mean, what a ‘theory’ is, what an ’empirical theory’ should be, I sketch what I call here the ‘minimal concept of an empirical theory’.

  1. The starting point is a ‘group of people’ (the ‘authors’) who want to create a ‘common text’.
  2. This text is supposed to have the property that it allows ‘justifiable predictions’ for possible ‘future situations’, to which then ‘sometime’ in the future a ‘validity can be assigned’.
  3. The authors are able to agree on a ‘starting situation’ which they transform by means of a ‘common language’ into a ‘source text’ [A].
  4. It is agreed that this initial text may contain only ‘such linguistic expressions’ which can be shown to be ‘true’ ‘in the initial situation’.
  5. In another text, the authors compile a set of ‘rules of change’ [V] that put into words ‘forms of change’ for a given situation.
  6. Also in this case it is considered as agreed that only ‘such rules of change’ may be written down, of which all authors know that they have proved to be ‘true’ in ‘preceding everyday situations’.
  7. The text with the rules of change V is on a ‘meta-level’ compared to the text A about the initial situation, which is on an ‘object-level’ relative to the text V.
  8. The ‘interaction’ between the text V with the change rules and the text A with the initial situation is described in a separate ‘application text’ [F]: Here it is described when and how one may apply a change rule (in V) to a source text A and how this changes the ‘source text A’ to a ‘subsequent text A*’.
  9. The application text F is thus on a next higher meta-level to the two texts A and V and can cause the application text to change the source text A.
  10. The moment a new subsequent text A* exists, the subsequent text A* becomes the new initial text A.
  11. If the new initial text A is such that a change rule from V can be applied again, then the generation of a new subsequent text A* is repeated.
  12. This ‘repeatability’ of the application can lead to the generation of many subsequent texts <A*1, …, A*n>.
  13. A series of many subsequent texts <A*1, …, A*n> is usually called a ‘simulation’.
  14. Depending on the nature of the source text A and the nature of the change rules in V, it may be that possible simulations ‘can go quite differently’. The set of possible scientific simulations thus represents ‘future’ not as a single, definite course, but as an ‘arbitrarily large set of possible courses’.
  15. The factors on which different courses depend are manifold. One factor is the authors themselves. Every author is, after all, with his corporeality completely himself part of that very empirical world which is to be described in a scientific theory. And, as is well known, any human actor can change his mind at any moment. He can literally in the next moment do exactly the opposite of what he thought before. And thus the world is already no longer the same as previously assumed in the scientific description.
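The scheme just described (a source text A of true statements, change rules V, and an application text F that repeatedly rewrites A into successor texts, yielding a simulation <A*1, …, A*n>) can be illustrated as a tiny rewriting system. The following Python sketch is purely illustrative: the names (State, Rule, simulate) and the toy ‘tree planting’ rule are assumptions of this sketch, not the author’s formalism.

```python
# Minimal toy version of the 'empirical theory' scheme: A = a set of
# statements, V = a list of change rules, F = the application regime
# implemented by simulate(), which collects the simulation trace.

from dataclasses import dataclass
from typing import Callable, FrozenSet, List

State = FrozenSet[str]   # a 'source text' A, represented as a set of statements

@dataclass(frozen=True)
class Rule:
    """One 'change rule' from V."""
    condition: Callable[[State], bool]   # when the rule may be applied
    effect: Callable[[State], State]     # how it rewrites A into A*

def simulate(a: State, v: List[Rule], max_steps: int = 10) -> List[State]:
    """The application text F: repeatedly apply the first applicable rule,
    collecting the sequence of texts <A, A*1, ..., A*n> (a 'simulation')."""
    trace = [a]
    for _ in range(max_steps):
        applicable = [r for r in v if r.condition(trace[-1])]
        if not applicable:
            break                       # no rule applies: the simulation ends
        trace.append(applicable[0].effect(trace[-1]))
    return trace

# Toy example: a single rule that 'plants a tree' while fewer than 3 exist.
def has_room(a: State) -> bool:
    return sum(1 for s in a if s.startswith("tree")) < 3

def plant(a: State) -> State:
    n = sum(1 for s in a if s.startswith("tree"))
    return a | {f"tree_{n + 1} planted"}

start: State = frozenset({"meadow exists"})
run = simulate(start, [Rule(has_room, plant)])
# run now contains the initial text and three successor texts
```

Choosing always the first applicable rule is itself a decision; with several applicable rules and a different selection strategy, the same A and V would yield different courses, which is exactly the ‘arbitrarily large set of possible courses’ the scheme speaks of.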

Even this simple example shows that the emotionality of ‘finding good, wanting, and deciding’ precedes the rationality of scientific theories. This continues in the so-called ‘sustainability discussion’.


With the ‘minimal concept of an empirical theory (ET)’ just introduced, a ‘minimal concept of a sustainable empirical theory (NET)’ can also be introduced directly.

While an empirical theory can span an arbitrarily large space of grounded simulations that make visible the space of many possible futures, everyday actors are left with the question of what, out of all this, they want to have as ‘their future’. In the present we experience a situation in which humanity gives the impression of having agreed to destroy, ever more lastingly, the life beyond the human population, with the expected effect of ‘self-destruction’.

However, this self-destruction effect, which can be predicted in outline, is only one variant in the space of possible futures. Empirical science can indicate it in outline. To distinguish this variant before others, to accept it as ‘good’, to ‘want’ it, to ‘decide’ for this variant, lies in that so far hardly explored area of emotionality as root of all rationality.[2]

If everyday actors have decided in favor of a certain rationally illuminated variant of a possible future, then they can evaluate at any time, with a suitable ‘evaluation procedure (EVAL)’, how many ‘percent (%) of the properties of the target state Z’ have been achieved so far, provided that the favored target state has been transformed into a suitable text Z.
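Read this way, EVAL is a simple matching procedure. The following Python sketch assumes, purely for illustration, that both the current situation A and the target text Z are represented as sets of named properties; the function name and the example properties are invented for this sketch.

```python
# Minimal sketch of an evaluation procedure EVAL: given the current
# situation and a target text Z (both as sets of required properties),
# return how many percent of Z's properties already hold.

def eval_progress(current: set, target: set) -> float:
    """Percent of the target state's properties achieved in the current state."""
    if not target:
        return 100.0                 # an empty target is trivially fulfilled
    achieved = current & target      # properties of Z that already hold
    return 100.0 * len(achieved) / len(target)

# Example: 2 of 4 target properties are realized so far.
z = {"car-free center", "solar roofs", "green corridors", "bike lanes"}
a = {"solar roofs", "bike lanes", "old parking lots"}
print(eval_progress(a, z))   # 50.0
```

The hard part, as the surrounding text argues, is not this computation but everything before it: which properties are written into the target text Z at all, and why this target is ‘wanted’, lies outside the procedure.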

In other words, the moment we have transformed everyday scenarios into a rationally tangible state via suitable texts, things take on a certain clarity and thereby become — in a sense — simple. That we make such transformations and on which aspects of a real or possible state we then focus is, however, antecedent to text-based rationality as an emotional dimension.[2]


After these preliminary considerations, the final question is whether and how the main question of this conference, “How do AI text generators change scientific discourse?”, can be answered at all.

My previous remarks have attempted to show what it means for humans to collectively generate texts that meet the criteria of a scientific discourse which also satisfies the requirements for empirical or even sustainable-empirical theories.

In doing so, it becomes apparent that both in the generation of a collective scientific text and in its application in everyday life, a close interrelation with the shared experiential world as well as with the dynamic knowledge and meaning components in each actor plays a role.

The aspect of ‘validity’ is part of a dynamic world reference whose assessment as ‘true’ is constantly in flux; while one actor may tend to say “Yes, can be true”, another actor may just tend to the opposite. While some may tend to favor possible future option X, others may prefer future option Y. Rational arguments are absent; emotions speak. While one group has just decided to ‘believe’ and ‘implement’ plan Z, the others turn away, reject plan Z, and do something completely different.

This unsteady, uncertain character of future-interpretation and future-action accompanies the Homo Sapiens population from the very beginning. The not understood emotional complex constantly accompanies everyday life like a shadow.

Where and how can ‘text-enabled machines’ make a constructive contribution in this situation?

Assuming that there is a source text A, a change text V and an instruction F, today’s algorithms could calculate all possible simulations faster than humans could.

Assuming that there is also a target text Z, today’s algorithms could also compute an evaluation of the relationship between a current situation as A and the target text Z.

In other words: if an empirical or a sustainable-empirical theory would be formulated with its necessary texts, then a present algorithm could automatically compute all possible simulations and the degree of target fulfillment faster than any human alone.

But what about (i) the elaboration of a theory, or (ii) the pre-rational decision for a certain empirical or even sustainable-empirical theory?

A clear answer to both questions seems hardly possible to me at the present time, since we humans still understand too little how we ourselves collectively form, select, check, compare and also reject theories in everyday life.

My working hypothesis on the subject is: that we will very well need machines capable of learning in order to be able to fulfill the task of developing useful sustainable empirical theories for our common everyday life in the future. But when this will happen in reality and to what extent seems largely unclear to me at this point in time.[2]



[2] Talking about ‘emotions’ in the sense of ‘factors in us’ that move us to go from the state ‘before the text’ to the state ‘written text’ hints at very many aspects. In a small exploratory text “State Change from Non-Writing to Writing. Working with chatGPT4 in parallel” ( ) the author has tried to address some of these aspects. While writing, it becomes clear that very many ‘individually subjective’ aspects play a role here, which of course do not appear ‘isolated’, but always flash up a reference to concrete contexts linked to the topic. Nevertheless, it is not the ‘objective context’ that forms the core statement, but the ‘individually subjective’ component that appears in the process of ‘putting into words’. This individually subjective component is tentatively used here as a criterion for ‘authentic texts’ in comparison to ‘automated texts’ like those that can be generated by all kinds of bots. In order to make this difference more tangible, the author decided to create an ‘automated text’ on the same topic at the same time as the quoted authentic text. For this purpose he used chatGPT4 from OpenAI. This is the beginning of a philosophical-literary experiment, perhaps to make the possible difference more visible in this way. For purely theoretical reasons, it is clear that chatGPT4 can never generate ‘authentic texts’ in origin, unless it uses as a template an authentic text that it can modify. But then this is a clear ‘fake document’. To prevent such an abuse, the author writes the authentic text first and then asks chatGPT4 to write something about the given topic, without chatGPT4 knowing the authentic text, because it has not yet found its way into the database of chatGPT4 via the Internet.

Eva Jablonka, Marion J. Lamb, “Traditions and Cumulative Evolution: How a New Lifestyle is Evolving”, 2017 (Review 2022, 2023)

(Last change: July 13, 2023)

(The following text was created from a German text with the support of the software deepL.)


This is a short review of an article by Eva Jablonka and Marion J. Lamb from 2017, connected with their book „Evolution in vier Dimensionen. Wie Genetik, Epigenetik, Verhalten und Symbole die Geschichte des Lebens prägen“ (Evolution in Four Dimensions: How Genetics, Epigenetics, Behavior, and Symbols Shape the History of Life), Stuttgart, S. Hirzel Verlag, published in 2017. There was an earlier English edition (2005) with the title „Evolution in Four Dimensions: Genetic, Epigenetic, Behavioral, and Symbolic Variation in the History of Life“, MIT Press. MIT Press comments on the 2005 English edition as follows: „Ideas about heredity and evolution are undergoing a revolutionary change. New findings in molecular biology challenge the gene-centered version of Darwinian theory according to which adaptation occurs only through natural selection of chance DNA variations. In Evolution in Four Dimensions, Eva Jablonka and Marion Lamb argue that there is more to heredity than genes. They trace four „dimensions“ in evolution—four inheritance systems that play a role in evolution: genetic, epigenetic (or non-DNA cellular transmission of traits), behavioral, and symbolic (transmission through language and other forms of symbolic communication). These systems, they argue, can all provide variations on which natural selection can act. Evolution in Four Dimensions offers a richer, more complex view of evolution than the gene-based, one-dimensional view held by many today. The new synthesis advanced by Jablonka and Lamb makes clear that induced and acquired changes also play a role in evolution.“

The article (in German) was published in pp. 141-146 in: Regina Oehler, Petra Gehring, Volker Mosbrugger (eds.), 2017, Series: Senckenberg Book 78, “Biologie und Ethik: Life as a Project. Ein Funkkolleg Lesebuch”, Stuttgart, E. Schweizerbart’sche Verlagsbuchhandlung (Nägele u. Obermiller) and Senckenberg Nature Research Society.

Main Positions extracted from the Text

To prepare an understanding of the larger text of the book, the author has tried to extract the most important assumptions/hypotheses from the short article:

  1. There is an existing ‘nature’ as a variable quantity with ‘nature-specific’ properties, and
  2. in this nature there are biological populations as a ‘component of nature’, which appear as ‘environment’ for their own members as well as for other populations themselves.
  3. Populations are themselves changeable.
  4. The members of a biological population are able to respond to properties of the surrounding nature – with the other members of the population as a component of the environment (self-reference of a population) – by a specific behavior.
  5. A behavior can be changed in its form as well as related to a specific occasion.
  6. Due to the self-referentiality of a population, a population can therefore interactively change its own behavior, and
  7. it can interact variably with the environment through the changed behavior (and thereby change the environment itself to a certain extent).
  8. It turns out that members of a population can recall certain behaviors over longer periods of time depending on environmental characteristics.
  9. Due to differences in lifespan as well as memory, new behaviors can be transferred between generations, allowing for transmission beyond one generation.
  10. Furthermore, it is observed that the effect of genetic information can be meta-genetically (epigenetically) different in the context of reproduction, with these meta-genetic (epigenetic) changes occurring during lifetime. The combination of genetic and epigenetic factors can affect offspring. The effect of such epigenetically influenced changes in actual behavior (phenotype) is not linear.


For the further discussion, it is helpful to clarify at this point which are the basic (guiding) terms that will shape the further discourse. This will be done in the form of ‘tentative’ definitions. If these should prove to be ‘inappropriate’ in the further course, then one can modify them accordingly.

Three terms seem to play a role as such guiding terms at this point: ‘population’, ‘culture’ and – anticipating the discussion – ‘society’.

Def1: Population

‘Population’ here minimally denotes a grouping of biological individuals that forms a biological reproductive community (cf. [1]).

Def2: Culture

In common usage, the term ‘culture’ is restricted to the population of ‘people’. [2] Here the proposal is made to let ‘culture’ begin where biological populations are capable of minimal tradition formation based on their behavioral space. This expands the scope of the concept of ‘culture’ beyond the population of humans to many other biological populations, but not all.

Def3: Society

The term ‘society’ takes on quite different meanings depending on the point of view (of a discipline). Here the term shall be defined minimalistically: biologically, a ‘society’ is minimally present if there is a biological population in which ‘culture’ occurs in a minimal way.

It will be further considered how these processes are to be understood in detail and what this may mean from a philosophical point of view.



[1] Population in wkp-en:

[2] Culture in wkp-en:

[3] Society in wkp-en:


(July 12, 2023 – August 24, 2023)

(The following text was created with the support of the software deepL from a German text)

–!! To be continued !!–

–!! See new comment at the end of the text (Aug 24, 2023)!!–


We live in a time in which – depending on the perspective one takes – very many different partial world views can be perceived, world views which are not easily ‘compatible’ with each other. Thus, so far, the ‘physical’ and the ‘biological’ worldviews do not necessarily seem to be ‘aligned’ with each other. In addition, there are different directions within each of these worldviews. Where do the social sciences stand here? Not physics, not biology, but then what? The economic sciences, too, seem to ‘surf across’ everything else … this list could easily be extended. Within this assembly of worldviews, a quite fresh new worldview emerges, that of so-called ‘artificial intelligence’; it, too, is almost completely unmediated with everything else, yet makes heavy use of terminology borrowed from psychology and biology without adopting the usual conceptual contexts of these terms.

This diversity can be seen as positive if it stimulates thinking and thus perhaps enables new, exciting ‘syntheses’. However, there are no ‘syntheses’ to be seen far and wide. The terms ‘interdisciplinary’, ‘multidisciplinary’ or even ‘transdisciplinary’ are used more and more often in texts, as a reminder, as a call to integrate diversity in a fruitful way, but not much of this is to be seen yet. The average university teacher at an average university still tends to be ‘punished’ for venturing out of his disciplinary niche. The ‘curricular norms’ that determine what time may be spent on what content with how many students do not normally provide for multidisciplinary or even transdisciplinary teaching. And when the single-science-trained researcher throws himself into a multidisciplinary research project (if he does so at all), then in the end usually only single science comes out again …

Against this panorama of many worldviews, the following text will attempt to interpret how the concept of ‘intelligence’ could be grasped and classified today – taking into account the whole range of worldviews. A special accent will be placed on the phenomenon ‘Homo sapiens’: although Homo sapiens is only a very small sub-population within the whole of the biological world, in the course of evolution it has so far occupied a special position in multiple senses, which shall be considered in this attempt at interpretation.

‘INTELLIGENCE’ – An interpretive hypothesis

This text intends a conceptual clarification of the concept ‘intelligence’ in the larger context of ‘biological systems’. ‘Machine systems’ with specific ‘behavioral properties’ that show ‘similarities’ to such properties as are ‘usually’ called ‘intelligent’ in the case of biological systems are then called ‘machine forms of intelligence’ in this text. However, ‘similarities’ in ‘behavior’ do not allow one to infer ‘similarities in enabling structures’: a ‘behavior X’ can be produced by a variety of ‘enabling structures’ which may differ among themselves. Statements about the ‘intelligence of a system’ therefore refer specifically to ‘behavioral properties’ of that system that can be observed within a particular ‘action environment’ within a particular ‘time interval’. Explicit talk about ‘intelligence’ further presupposes a ‘virtual concept intelligence’, represented in the form of a text, which can classify the many individual ‘empirical observations’ into ‘virtual contexts/relationships’ in such a way that (i) when certain behaviors ‘occur’, one can use the ‘virtual concept intelligence’ to assign (classify) the occurring phenomena to the area of ‘intelligent behavior’, and that (ii) with the ‘virtual concept intelligence’, starting from a ‘given situation’, one can make conditional ‘predictions’ about ‘possible behaviors’ of the target system that can be ‘expected’ and ‘empirically verified’.

New Comment

While I was preparing a public lecture for a conference at the Technical University of Darmstadt (Germany), I decided to abandon the concepts of ‘intelligence’ as well as ‘artificial intelligence’ for the near future. The meaning of these concepts has meanwhile become completely ‘blurred’/‘fuzzy’; whether one uses them or not no longer changes anything.


(July 7, 2023 – July 7, 2023)

(This text was translated from the German source with the deepL software.)


Following the basic considerations on the possibility/impossibility of a generally valid morality in this finite-dynamic world, a small look at the ‘phenomenon of life’ shall be suggested here, based on the currently popular concept of ‘sustainability’.


In the year 2023, the term ‘sustainability’ is on – almost – everyone’s lips; not only positively (That’s it; we have to do that, …) but also quite negatively, rejecting it (What nonsense; we don’t need it, …). In addition, there are the many billions of people who have never heard of sustainability … Since the fundamental ‘Brundtland Report’ of 1987 [1], the United Nations has been trying, in ever new conferences with ever new emphases and possible recommendations for implementation, to raise the awareness of all governments for the topic of ‘sustainability’. How far this has been successful so far can be judged by everyone who looks at the course of world events.

At this point, we would like to focus on one particular aspect of sustainability, an aspect that seems to be somehow ‘invisible’ until today, although it is fundamental for the understanding and success of the project ‘sustainability’. Without this aspect, there will be no effective sustainability.

Simple example: In a certain place on the planet Earth, there is a well from which one can draw about 180 liters of water per day so far. In itself, it is neither much nor little. But if plants, animals or humans have to live from this water, then this water can become ‘too little’ very quickly. In addition, there is the ‘ambient temperature’: do we have 10 °C, 20 °C, …, 50 °C …? Also, it is not unimportant ‘from where’ the well gets its water: does it come from (i) near-surface water from a nearby stream? or from (ii) deeper renewable groundwater (iii) or from …

If this well is in a village with 20 families, then the water becomes a ‘scarce resource’ in view of the ‘need’. For the daily needs of the families, the plants and possibly the animals, this water will not be enough. Whatever happens or will happen now in this village with this scarce resource depends on the ‘knowledge’/‘experience’ available in the minds of its inhabitants, plus the kind of ‘emotions’ that are ‘operative’ in the same minds, and – more or less consciously/unconsciously – certain ‘values’ (what to do in a certain situation). A ‘borderline case’ would be (i) that people have a great ‘fear’ of dying, so that they would not shy away from ‘killing’ the others if they ‘do not know’ of any alternatives; another case (ii) would be that besides the emotion ‘fear’ they also have an ‘emotion’ of ‘connectedness with the others’, supplemented by the value ‘one does not kill relatives/friends’ – then perhaps rather the attempt to face ‘death by thirst’ together. Another case (iii) could be that at least one member of the village ‘knows’ when and how there could be a solution to the problem (with varying certainty), that the majority of the village ‘trusts’ him, and that ‘concrete behaviors are available’ to implement the solution.

If in case (iii) a solution is known ‘in principle’, but it is not known by which ‘measures’ this solution can be achieved, then case (i) or (ii) applies again. If in case (iii) the majority ‘does not trust’ the one person who says he has the ‘knowledge/experience’ to find a solution, then ‘dejection’/‘despondency’ may arise. It would be very bad if no one in the village had the slightest bit of knowledge from which a useful action could be derived. Or, no less badly, individuals ‘believe’ they have knowledge which promises a remedy, but this ‘believed solution’ turns out to be a ‘mistake’.
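The quantitative side of the village example is easy to check with simple arithmetic. The following is only an illustrative sketch: the well yield (180 L/day) and the number of families (20) come from the example above, while the average family size of 4 persons and the rough per-person thresholds (about 7.5 L/day as a bare survival minimum, about 50 L/day for basic needs including hygiene) are assumptions added here for illustration, not figures from the text.

```python
# Back-of-the-envelope check of the well example.
WELL_YIELD_L_PER_DAY = 180   # given in the example
FAMILIES = 20                # given in the example
PERSONS_PER_FAMILY = 4       # assumption for illustration

SURVIVAL_MIN_L = 7.5         # assumed rough survival minimum per person/day
BASIC_NEEDS_L = 50.0         # assumed rough basic-needs level per person/day

persons = FAMILIES * PERSONS_PER_FAMILY
per_person = WELL_YIELD_L_PER_DAY / persons

print(f"{persons} persons share {WELL_YIELD_L_PER_DAY} L/day "
      f"-> {per_person:.2f} L per person per day")
print("survival minimum met:", per_person >= SURVIVAL_MIN_L)
print("basic needs met:", per_person >= BASIC_NEEDS_L)
```

Under these assumptions each person would get about 2.25 liters per day, far below even a bare survival minimum – which is exactly why the example calls the water a ‘scarce resource’ relative to the ‘need’.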

What this simple example can clarify is that a ‘resource’ as such is neither good nor bad, neither little nor much. What is decisive is the existence of a ‘need’, and a need is ultimately always coupled to the ‘existence of biological life forms’! ‘Biological life forms’ – thus ‘life’ – represent that phenomenon on the planet Earth to whose basic characteristics it belongs to have a ‘need’ for resources which are necessary so that life ‘can realize itself’, so that ‘life can live’.

If one looks at the 17 development goals of the United Nations, valid from January 2016 and addressed to nation states [2], then one can recognize many partial goals which seem helpful for promoting the ‘life of life’, but one will miss a clear classification of the human population as a partial population within the total phenomenon ‘life’. All ‘non-human’ life is granted significance within the 17 development goals only insofar as it appears helpful for the ‘life of the human sub-population’.

What is missing is a fundamental determination of what the phenomenon of life on the planet Earth represents as part of the entire universe: is it a random phenomenon that currently exists but to which no further significance is to be attached, one that can also disappear again? Or must the phenomenon of life, as a part of the universe, be classified as an ‘extraordinary phenomenon’, indicating fundamental properties of the universe and pointing far beyond anything we have been accustomed to think of as reality, as possible future?

If we classify the ‘phenomenon of life’ as an ‘extraordinary phenomenon of global importance’ – and indeed the ‘whole of life’! – then the question of the ‘preservation’ of this whole of life, together with its manifold interactions, must be at the center of our considerations, and one must pursue, in a correspondingly ‘knowing-learning-questioning’ everyday life, the question of what this means; at the same time one must work ‘actively’ on a lasting shaping of the ‘conditions for a global life’.

Against this background, a culture that puts ‘unimportant things’ in first place and at the same time marginalizes what is fundamentally important for life appears as the perfect recipe for a quick common death. This common death, fragmented into many millions of individual deaths, is not simply ‘a death’; it destroys the ‘heart of the universe’ itself.


[1] Brundtland Report of 1987:

[2] The 17 Sustainable Development Goals see:

COLLECTIVE (man-machine) INTELLIGENCE and SUSTAINABILITY. An investigation

(June 21, 2023 – June 22, 2023)

–!! Not yet finished !!–


The steady progress of science has defeated many familiar ideas from the past, and this change of concepts continues. This applies to concepts like ‘intelligence’, ‘collective intelligence’, ‘man’, ‘machine’, ‘artificial intelligence’, ‘life’, ‘matter’ and many more.

Such conceptual changes are always difficult to describe. Ideally one would be an ‘external observer’ with a ‘full view’ of everything going on, additionally possessing ‘full knowledge’ of all the features and dynamics of the field of phenomena.

But we aren’t. We are part of the process ourselves. Our understanding is interspersed with familiar images and at the same time with new questions and new partial views. Under these conditions, a ‘consistent new view’ of the whole process can only be worked out stepwise, accompanied by experiments to check the viability of each new aspect of the new view.

And, one should not forget, the ‘reader’ of a text lives under the same conditions: a mixture of everything is possible; therefore an understanding can fail not because a certain text is ‘wrong’ or ‘bad’ or whatever, but because at the moment of reading the ‘models in the heads’ of reader and writer do not ‘overlap enough’. Then there is no chance of understanding, because we depend completely on the ‘models in our heads’.

Accepting this, the following text is an undertaking to describe a special view of life in this universe by laying out some possible principles of how this new view could be constructed.


Because at the beginning of this writing the final outcome is open, and the ‘way to reach the result’ is as such difficult, the author decided to make the research process itself the content of a ‘process article’.

The following parts of the process article seem to be important:

  1. Describe a ‘working hypothesis’ at the start.
  2. Look for ‘arguments pro or contra’.
  3. Look for ‘other texts’ related to these arguments (always pro & contra).
  4. Make decisions after every step, whether an argument (and possibly different texts) supports or criticizes or modifies the working hypothesis.
  5. Give a new version of the working hypothesis, if necessary.

Moreover it has to be ‘monitored’ (meta-level) whether this procedure works satisfactorily.


To begin, a first version of the working hypothesis has to be formulated. What is ‘given’ as an ‘assumption’ are the concepts of ‘COLLECTIVE INTELLIGENCE’, with a special focus on the role of the intelligence of ‘man’ and ‘machines’ as part of a — possibly larger — concept of ‘INTELLIGENCE’. Furthermore it is assumed that these concepts shall be investigated in the context of the question of a possible ‘SUSTAINABILITY’ of the hybrid ‘man-machine’ cooperation as part of the ‘whole of life (the ‘biosphere’)’ on this planet, even extended to the whole known universe.

To elaborate these concepts more concretely, and as a ‘hypothesis’ which can be ‘tested’ in the future as to whether it ‘works’ or not, one needs a ‘minimal vision’ of what shall be assumed as the ‘wishful future’ for a biosphere with a man-machine pair as part of it.

A ‘wishful future’ which can be ‘tested’ has to be (i) a ‘description of a state’ located some time ahead, and (ii) the ‘way into this future’ must be describable in such a way that we have a clear ‘starting point’ — e.g. the year 2023 — and (iii) we must have sufficient knowledge about all possible changes which can ‘transform/change’ the actual situation stepwise, so that it is highly probable that we will finally reach the ‘envisioned future state’. Highly important here are especially those changes which can be triggered by our own actions as humankind. And it has to be mentioned (iv) that we would need clear instructions on how to apply the changes in order to be successful.

To (i): Wishful State

What would a citizen somewhere on this planet answer if asked: “What do you think is a ‘wishful state’ in the future?”

It does not take much imagination to see that we would get nearly as many different answers as there are citizens living on this planet.

To (ii): The ‘way into this future’

To (iii): ‘Knowledge about all possible changes’

To (iv): ‘Clear instructions how to apply’


wkp-en :=

[2023] Raymond Noble (University College London), Denis Noble (University of Oxford), Understanding Living Systems, Cambridge University Press. (Expected Online Publication June 23). Words by the publisher: “Life is definitively purposive and creative. Organisms use genes in controlling their destiny. This book presents a paradigm shift in understanding living systems. The genome is not a code, blueprint or set of instructions. It is a tool orchestrated by the system. This book shows that gene-centrism misrepresents what genes are and how they are used by living systems. It demonstrates how organisms make choices, influencing their behaviour, their development and evolution, and act as agents of natural selection. It presents a novel approach to fundamental philosophical and cultural issues, such as free-will. Reading this book will make you see life in a new light, as a marvellous phenomenon, and in some sense a triumph of evolution. We are not in our genes, our genes are in us.”

[2023] Benedict Rattigan, Denis Noble, Afiq Hatta (Eds.), The Language of Symmetry, CRC Press

[2022] Raymond Noble and Denis Noble, Physiology restores purpose to evolutionary biology, Biological Journal of the Linnean Society, 2022, XX, 1–13. With 3 figures. Abstract: “Life is purposefully creative in a continuous process of maintaining integrity; it adapts to counteract change. This is an ongoing, iterative process. Its actions are essentially directed to this purpose. Life exists to exist. Physiology is the study of purposeful living function. Function necessarily implies purpose. This was accepted all the way from William Harvey in the 17th century, who identified the purpose of the heart to pump blood and so feed the organs and tissues of the body, through many 19th and early 20th century examples. But late 20th century physiology was obliged to hide these ideas in shame. Teleology became the ‘lady who no physiologist could do without, but who could not be acknowledged in public.’ This emasculation of the discipline accelerated once the Central Dogma of molecular biology was formulated, and once physiology had become sidelined as concerned only with the disposable vehicle of evolution. This development has to be reversed. Even on the practical criterion of relevance to health care, gene-centrism has been a disaster, since prediction from elements to the whole system only rarely succeeds, whereas identifying whole system functions invariably makes testable predictions at an elemental level.”

[2017] Manuel Vogel, Review: From matter to life: information and causality, edited by S. I. Walker, P. C. W. Davies and G. F. R. Ellis: Scope: edited book. Level: general readership, review in Contemporary Physics · June 2017

[2017] S. I. Walker, P. C. W. Davies and G. F. R. Ellis (Eds.), From MATTER to LIFE. Information and Causality, Cambridge University Press

[2017] Denis Noble, Dance to the Tune of Life. Biological Relativity, Cambridge University Press

[2007] Denis Noble, Video Lecture, 2007, “Principle of Systems Biology illustrated using the Virtual Heart”, URL:

[2006] Denis Noble, The Music of Life. Biology beyond the genome, Oxford University Press Inc., New York

[] Denis Noble in wkp-en:


(June 20, 2023 – June 22, 2023)

(This text is a translation from the German blog of the author. The translation is supported by the deepL Software)


The meaning of and adherence to moral values in the context of everyday actions has always been a source of tension, debate, and tangible conflict.

This text will briefly illuminate why this is so, and why it will probably never be different as long as we humans are the way we are.


In this text it is assumed that the reality in which we ‘find’ ourselves from childhood is a ‘finite’ world. By this is meant that no phenomenon we encounter in this world – ourselves included – is ‘infinite’. In other words, all resources we encounter are ‘finite’. Even ‘solar energy’, which is considered ‘renewable’ in today’s parlance, is ‘finite’, although this finiteness outlasts the lifetimes of many generations of humans.

But this ‘finiteness’ is no contradiction to the fact that our finite world is continuously in a ‘process of change’ fed from many sides. It is a ‘self-changing finiteness’, a something which in and of itself somehow ‘points beyond itself’! The ‘roots’ of this ‘immanent changeability’ are perhaps still largely unclear, but its ‘effects’ indicate that the respective ‘concrete finite’ is not the decisive thing; the ‘respective concrete finite’ is rather a kind of ‘indicator’ of an ‘immanent cause of change’ which ‘manifests itself’ by means of concrete finites in change. The ‘forms of concrete manifestations of change’ may therefore be a kind of ‘expression’ of something that ‘works immanently behind’ them.

In physics there is the pair of terms ‘energy’ and ‘mass’, the latter as a synonym for ‘matter’. Atomic physics and quantum mechanics have taught us that the different ‘manifestations of mass/matter’ can only be a ‘state form of energy’. The everywhere and always assumed ‘energy’ is that ‘enabling factor’ which can ‘manifest’ itself in all the known forms of matter. ‘Changing matter’ can then be understood as a form of ‘information’ about the ‘enabling energy’.

If one takes what physics has found out so far about ‘energy’ as that form of ‘infinity’ which is accessible to us via the experiential world, then the various ‘manifestations of energy’ in diverse ‘forms of matter’ are forms of concrete finites, which, however, are ultimately not really finite in the context of infinite energy. All known material finites are only ‘transitions’ in a nearly infinite space of possible finites, which is ultimately grounded in ‘infinite energy’. Whether there is another ‘infinity’ ‘beside’ or ‘behind’ or ‘qualitatively quite different from’ the ‘experienceable infinity’ thus remains completely open. [1]


Our normal life context is what we now call ‘everyday life’: a bundle of regular processes, often associated with characteristic behavioral roles. This includes the experience of having a ‘finite body’; that ‘processes take time in real terms’; that each process is characterized by its own ‘typical resource consumption’; that ‘all resources are finite’ (although there can be different time scales here (see the example with solar energy)).

But here too, the ‘embeddedness’ of all resources and their consumption in a comprehensive variability turns all data into ‘snapshots’, which have their ‘truth’ not only ‘in the moment’ but in the ‘totality of the sequence’! In themselves ‘small changes’ in everyday life can, if they persist, assume sizes and achieve effects which change a ‘known everyday life’ to such an extent that long-held ‘views’ and ‘long-practiced behaviors’ are at some point ‘no longer correct’: in that case the format of one’s own thinking and behavior can come into increasing contradiction with the experiential world. Then the point has come where the immanent infinity ‘manifests itself’ in everyday finiteness and ‘demonstrates’ to us that the ‘imagined cosmos in our head’ is just not the ‘true cosmos’. In the end this immanent infinity is ‘truer’ than the ‘apparent finiteness’.


Besides the life-free material processes in this finite world, there have existed for approx. 3.5 billion years the manifestations which we call ‘life’, and very late – quasi ‘just now’ – there appeared among the billions of life forms one which we call ‘Homo sapiens’. That is us.

Today’s knowledge of the ‘way’ which life has ‘taken’ in these 3.5 billion years was and is possible only because science has learned to understand the ‘seemingly finite’ as a ‘snapshot’ of an ongoing process of change, which shows its ‘truth’ only in the ‘totality of the individual moments’. That we as human beings, as the ‘latecomers’ in this life-creation process, have the ability to ‘recognize’ successive ‘moments’ both ‘individually’ and ‘in sequence’ is due to the special nature of the ‘brain’ in the ‘body’ and the way in which our body ‘interacts’ with the surrounding world. So we do not know about the ‘existence of an immanent infinity’ ‘directly’, but only ‘indirectly’, through the ‘processes in the brain’ that can identify, store, process and ‘arrange’ moments in possible sequences in a ‘neuronally programmed way’. In short: our brain enables us, on the basis of a given neuronal and physical structure, to ‘construct’ an ‘image/model’ of a possible immanent infinity, which we assume to ‘represent’ the ‘events around us’ reasonably well.


One characteristic attributed to Homo sapiens is called ‘thinking’, a term which until today has been described only vaguely and very variously by the different sciences. From another Homo sapiens we learn about his thinking only through his way of ‘behaving’, a special case of which is ‘linguistic communication’.

Linguistic communication is characterized by the fact that it basically works with ‘abstract concepts’, to which as such no single object in the real world directly corresponds (‘cup’, ‘house’, ‘dog’, ‘tree’, ‘water’ etc.). Instead, the human brain assigns ‘completely automatically’ (‘unconsciously’!) most different concrete perceptions to one or the other abstract concept in such a way that a human A can agree with a human B whether one assigns this concrete phenomenon there in front to the abstract concept ‘cup’, ‘house’, ‘dog’, ‘tree’, or ‘water’. At some point in everyday life, person A knows which concrete phenomena can be meant when person B asks him whether he has a ‘cup of tea’, or whether the ‘tree’ carries apples etc.

This empirically proven ‘automatic formation’ of abstract concepts by our brain is not based on a single moment alone; these automatic construction processes work with the ‘perceptual sequences’ of finite moments ‘embedded in changes’, which the brain itself also automatically ‘creates’. ‘Change as such’ is thus not a ‘typical object’ of perception but the ‘result of a process’ taking place in the brain, which constructs ‘sequences of single perceptions’; and these ‘calculated sequences’ enter as ‘elements’ into the formation of ‘abstract concepts’: a ‘house’ is, from this point of view, not a ‘static concept’ but one which can comprise many single properties and which is ‘dynamically generated’ as a ‘concept’, so that ‘new elements’ can be added or ‘existing elements’ ‘taken away’ again.


(The words are from the German text)

Although there is no universally accepted comprehensive theory of human thought to date, there are many different models (everyday term for the more correct term ‘theories’) that attempt to approximate important aspects of human thought.

The preceding image shows the outlines of a minimally simple model of our thinking.

This model assumes that the surrounding world – with ourselves as components of that world – is to be understood as a ‘process’ in which, at a chosen ‘point in time’, one can describe in an idealized way all the ‘observable phenomena’ that are important to the observer at that point in time. This description of a ‘section of the world’ is here called ‘situation description’ at time t or simply ‘situation’ at t.

Then one needs ‘knowledge about possible changes’ of elements of the situation description, of the form (simplified): ‘If X is an element of the situation description at t, then for a subsequent situation at t+1 either X is deleted or replaced by a new X*’. There may be several alternatives for deletion or replacement, with different probabilities. Such ‘descriptions of changes’ are here, in simplified form, called ‘change rules’.

Additionally, as part of the model, there is a ‘game instruction’ (classically: an ‘inference rule’), which explains when and how to apply a change rule to a given situation Sit at t in such a way that at the subsequent time t+1 there is a situation Sit* in which the changes the change rule describes have been made.

Normally, there is more than one change rule that can be applied simultaneously with the others. This is also part of the game instructions.
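The minimal model just described – situations, change rules with probabilistic alternatives, and a ‘game instruction’ for applying them – can be sketched in a few lines of code. This is only an illustrative toy under assumptions of my own (the rule format, the fact names, and the function name are invented for illustration), not the author’s formalism:

```python
import random

# A 'situation' at time t: a set of observable facts (strings).
situation = {"well_yields_water", "village_has_20_families"}

# 'Change rules' (simplified, as in the text): if a fact X is in the
# situation, it is deleted or replaced by a new X*, possibly with several
# alternatives carrying different probabilities.
# Assumed format: (required_fact, [(replacement_or_None, probability), ...])
rules = [
    ("well_yields_water", [("well_runs_dry", 0.3),        # replacement
                           ("well_yields_water", 0.7)]),  # unchanged
]

def apply_rules(sit, rules, rng):
    """The 'game instruction': produce the situation at t+1 from t by
    applying every rule whose required fact is present."""
    nxt = set(sit)
    for fact, alternatives in rules:
        if fact in nxt:
            nxt.discard(fact)
            outcomes, weights = zip(*alternatives)
            chosen = rng.choices(outcomes, weights=weights)[0]
            if chosen is not None:   # None would mean: simply deleted
                nxt.add(chosen)
    return nxt

rng = random.Random(0)
print(apply_rules(situation, rules, rng))
```

Iterating `apply_rules` yields a sequence of situations – a toy version of the ‘process’ view in which each situation is only a ‘snapshot’ within an ongoing change.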

This minimal model can and must be seen against the background of continuous change.

For this structure of knowledge it is assumed that one can describe ‘situations’, possible changes of such a situation, and that one can have a concept how to apply descriptions of recognized possible changes to a given situation.

With the recognition of an immanent infinity manifested in many concrete finite situations, it is immediately clear that the set of assumed descriptions of change should correspond with the observable changes, otherwise the theory has little practical use. Likewise, of course, it is important that the assumed situation descriptions correspond with the observable world. Fulfilling the correspondence requirements or checking that they are true is anything but trivial.


To these ‘correspondence requirements’ here some additional considerations, in which the view of the everyday perspective comes up.

It is to be noted that a ‘model’ is not the environment itself, but only a ‘symbolic description’ of a section of the environment, from the point of view and with the understanding of a human ‘author’! Which properties of the environment a description refers to is known only to the author himself, who ‘links’ the chosen ‘symbols’ (text or speech) ‘in his head’ with certain properties of the environment – whereby these properties of the environment must also be represented ‘in the head’, as quasi ‘knowledge images’ of ‘perception events’ which were triggered by the environmental properties. These ‘knowledge images in the head’ are ‘real’ for the respective head; compared with the environment, however, they are basically only ‘fictitious’ – unless there is currently a connection between the present fictitious ‘images in the head’ and the ‘current perceptions’ of ‘environmental events’ which makes the ‘concrete elements of perception’ appear as ‘elements of the fictitious images’. Then the ‘fictitious’ images would be ‘fictitious and real’.

Due to the ‘memory’, whose ‘contents’ are more or less ‘unconscious’ in the ‘normal state’, we can however ‘remember’ that certain ‘fictitious pictures’ were once ‘fictitious and real’ in the past. This can lead to a tendency in everyday life to ascribe a ‘presumed reality’ to fictional images that were once ‘real’ in the past, even in the current present. This tendency is probably of high practical importance in everyday life. In many cases these ‘assumptions’ also work. However, this ‘spontaneous-for-real-holding’ can often be off the mark; a common source of error.

The ‘spontaneous-for-real-holding’ can be disadvantageous for many reasons. For example, the fictional images (as inescapably abstract images) may in themselves be only ‘partially appropriate’. The context of the application may have changed. In general, the environment is ‘in flux’: facts that were given yesterday may be different today.

The reasons for the persistent changes are different. Besides such changes, which we could recognize by our experience as an ‘identifiable pattern’, there are also changes, which we could not assign to a pattern yet; these can have a ‘random character’ for us. Finally there are also the different ‘forms of life’, which are basically ‘not determined’ by their system structure in spite of all ‘partial determinateness’ (one can also call this ‘immanent freedom’). The behavior of these life forms can be contrary to all other recognized patterns. Furthermore, life forms behave only partially ‘uniformly’, although everyday structures with their ‘rules of behavior’ – and many other factors – can ‘push’ life forms with their behavior into a certain direction.

If one recalls at this point the preceding thoughts about ‘immanent infinity’ and the view that the single, finite moments are only understandable as ‘part of a process’ whose ‘logic’ has to a large extent not been decoded to this day, then it is clear that any kind of ‘modeling’ within these comprehensive change processes can only have the character of a preliminary approximation. This is aggravated by the fact that the human actors are not only ‘passively receiving’ but always also ‘actively acting’, and thereby influence the change process by their own actions! These human influences arise from the same immanent infinity as everything else that causes change. People (like life as a whole) are thus inevitably and really ‘co-creative’ — with all the responsibility that results from this.


What exactly is to be understood by ‘morality’ must be read out of many hundreds — or even more — of different texts. Every epoch, and indeed every region of this world, has developed its own versions.

In this text it is assumed that ‘morality’ refers to those ‘views’ which are meant to give an individual person (or a group, or …) ‘hints’ for answering, as well as possible, questions of ‘decision’ of the kind: “Should I rather do A or B?”

If one recalls what was said before about the form of thinking that allows ‘prognoses’ (thinking in explicit ‘models’ or ‘theories’), then there should be an ‘evaluation’ of the ‘possible continuations’ that is independent of a current ‘situation description’ and independent of the available ‘knowledge of change’. So ‘besides’ the description of a situation as it ‘is’, there must be at least a ‘second level’ (a ‘meta-level’) which can ‘talk about’ the elements of the ‘object-level’, so that, for example, it can be said at the meta-level that an ‘element A’ from the object-level is ‘good’, ‘bad’, or ‘neutral’, possibly with gradations. This can also concern several elements or whole subsets of the object-level. All this can be done. But for it to be ‘rationally acceptable’, these valuations would have to be linked to ‘some form of motivation’ as to ‘why’ the valuation should be accepted. Without such a ‘motivation of evaluations’, an evaluation would appear as ‘pure arbitrariness’.

At this point the ‘air’ becomes quite ‘thin’: in history so far, no convincing model of moral justification has become known that is not, in the end, dependent on the decision of humans to set certain rules as ‘valid for all’ (family, village, tribe, …). Often the justifications can still be located in concrete ‘circumstances of life’; just as often, those concrete circumstances ‘recede into the background’ over time, and abstract concepts are introduced instead, endowed with a ‘normative power’ that eludes any more concrete analysis. Rational access is then hardly possible, if at all.

In a time like the year 2023, in which the available knowledge suffices to recognize the interdependencies of literally everybody on everybody, and in which change dynamics such as ‘global warming’ can substantially threaten the ‘sustainable existence of life on earth’, ‘abstractly set normative terms’ appear not only ‘out of time’; they are highly dangerous, since they can substantially hinder the preservation of life in the further future.

META-MORAL (Philosophy)

The question then arises whether this ‘rational black hole’ of ‘justification-free normative concepts’ marks the end of human thinking, or whether thinking should instead only begin here.

Traditionally, ‘philosophy’ understands itself as that attitude of thinking in which everything ‘given’ — including any kind of normative concept — can be made an ‘object of thinking’. And it is precisely philosophical thinking that has produced, in millennia of struggle, exactly this result: there is no point of thinking from which all ought, all evaluating, can be derived ‘just like that’.

In the space of philosophical thinking, on the meta-moral level, it is possible to ‘thematize’ more and more aspects of our situation as ‘mankind’ in a dynamic environment (with man himself as part of this environment), to ‘name’ them, to place them in ‘potential relations’, and to conduct ‘thought experiments’ about ‘possible developments’. This philosophical, meta-moral knowledge is completely transparent and always identifiable. The inferences about why something seems ‘better’ than something else are always ’embedded’, ‘related’. Against this background, demands for an ‘autonomous morality’, an ‘absolute morality’ apart from philosophical thinking, appear ‘groundless’, ‘arbitrary’, ‘alien’ to the ‘matter’. A rational justification for them is not possible.

Something ‘rationally unknowable’ may exist — indeed it exists inescapably: it is our sheer existence, the actual real occurrence, for which so far there is no rational ‘explanation’; more precisely, not yet. But this is no ‘free pass’ for irrationality. In ‘irrationality’ everything disappears, even the ‘rationally unknowable’, and this belongs to the most important ‘facts’ of the world of life.


[1] The different forms of ‘infinity’ that were introduced into mathematics with the works of Georg Cantor, and have since been intensively investigated, have nothing to do with the experienceable finiteness/infinity described in the text. However, if one wants to ‘describe’ the ‘experience’ of real finiteness/infinity, one will possibly want to fall back on the descriptive means of mathematics. But it is not a foregone conclusion that the mathematical concepts ‘harmonize’ with the empirical experience at hand.